diff --git a/doc/architecture/blueprints/ai_gateway/index.md b/doc/architecture/blueprints/ai_gateway/index.md
index 9313b692b5d55e246e37e25f9cf4f05d07e38b72..f88d89c8e83b00b2018c5e9b8319571d61664e1c 100644
--- a/doc/architecture/blueprints/ai_gateway/index.md
+++ b/doc/architecture/blueprints/ai_gateway/index.md
@@ -162,12 +162,12 @@ The AI-Gateway protocol defines each request in the following way:
 Each JSON envelope contains 3 elements:
 
 1. `type`: A string identifier specifying a type of information that is being presented in the envelopes
-  `payload`. The AI-gateway single-purpose endpoint may ignore any types it does not know about.
+   `payload`. The AI-Gateway single-purpose endpoint may ignore any types it does not know about.
 1. `payload`: The actual information that can be used by the AI-Gateway single-purpose endpoint to send requests to 3rd party AI services providers. The data inside the `payload` element can differ depending on the `type`, and the version of
-  the client providing the `payload`. This means that the AI-Gateway
- single-purpose endpoint must consider the structure and the type of data present inside the `payload` optional, and gracefully handle missing or malformed information.
+   the client providing the `payload`. This means that the AI-Gateway
+   single-purpose endpoint must treat the structure and type of data inside the `payload` as optional, and gracefully handle missing or malformed information.
 1. `metadata`: This field contains information about a client that built this `prompt_components` envelope. Information from the `metadata` field may, or may not be used by GitLab for
-  telemetry. The same as with the `payload` all fields inside the `metadata` shall be considered optional.
+   telemetry. As with the `payload`, all fields inside the `metadata` shall be considered optional.
 
 The only envelope field that is expected to likely change often is the
 `payload` one. There we need to make sure that all fields are
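
The envelope contract described in this hunk can be sketched as follows. This is an illustrative Ruby sketch, not GitLab code: the field values and the `read_envelope` helper are hypothetical, and it only demonstrates the tolerant handling the text requires (unknown `type` values ignored, every `payload` and `metadata` field optional).

```ruby
require 'json'

# Hypothetical envelope; the field names inside `payload` and `metadata`
# are illustrative, not part of any published schema.
envelope = {
  'type' => 'prompt',
  'payload' => { 'content' => 'Explain this diff', 'provider' => 'anthropic' },
  'metadata' => { 'source' => 'gitlab-rails', 'version' => '16.3' }
}

# A tolerant reader: envelopes with unknown types are ignored, and every
# field inside `payload` and `metadata` is treated as optional.
def read_envelope(raw, known_types: ['prompt'])
  data = JSON.parse(raw)
  return nil unless known_types.include?(data['type'])

  {
    content: data.dig('payload', 'content'),        # may be nil
    provider: data.dig('payload', 'provider') || 'default',
    client: data.dig('metadata', 'source')          # optional telemetry hint
  }
end

parsed = read_envelope(JSON.generate(envelope))
# A sparse envelope (missing payload) degrades gracefully instead of raising:
sparse = read_envelope('{"type":"prompt"}')
```

Note how a missing `payload` falls back to defaults rather than failing, which is the "gracefully handle missing or malformed information" requirement above.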
diff --git a/doc/architecture/blueprints/ci_pipeline_components/index.md b/doc/architecture/blueprints/ci_pipeline_components/index.md
index 90bdcdf46bde5a34b50545b1801a4ffd8b3be587..6053db250cb574d515805c362149f605a219079f 100644
--- a/doc/architecture/blueprints/ci_pipeline_components/index.md
+++ b/doc/architecture/blueprints/ci_pipeline_components/index.md
@@ -459,9 +459,9 @@ Today we have different use cases where using explicit input parameters would be
 
 1. `Run Pipeline` UI form.
     - **Problem today**: We are using top-level variables with `variables:*:description` to surface environment variables to the UI.
-    The problem with this is the mix of responsibilities as well as the jump in [precedence](../../../ci/variables/index.md#cicd-variable-precedence)
-    that a variable gets (from a YAML variable to a pipeline variable).
-    Building validation and features on top of this solution is challenging and complex.
+      The problem with this is the mix of responsibilities as well as the jump in [precedence](../../../ci/variables/index.md#cicd-variable-precedence)
+      that a variable gets (from a YAML variable to a pipeline variable).
+      Building validation and features on top of this solution is challenging and complex.
 1. Trigger a pipeline via API. For example `POST /projects/:id/pipelines/trigger` with `{ inputs: { provider: 'aws' } }`
 1. Trigger a pipeline via `trigger:` syntax.
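
The API trigger case above might look as follows in Ruby. This is a hedged sketch that only builds the request without sending it; the host, project ID, and token are hypothetical placeholders, and the endpoint path is the one proposed in the list above.

```ruby
require 'net/http'
require 'uri'
require 'json'

# Hypothetical instance URL and project ID, matching the proposed
# `POST /projects/:id/pipelines/trigger` endpoint above.
uri = URI('https://gitlab.example.com/api/v4/projects/42/pipelines/trigger')

request = Net::HTTP::Post.new(uri)
request['Content-Type'] = 'application/json'
request.body = JSON.generate({
  token: 'glptt-example',          # hypothetical trigger token
  ref: 'main',
  inputs: { provider: 'aws' }      # explicit input parameters, as proposed
})
# The request would be sent with Net::HTTP.start(uri.host, uri.port, use_ssl: true)
```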
 
diff --git a/doc/architecture/blueprints/ci_pipeline_processing/index.md b/doc/architecture/blueprints/ci_pipeline_processing/index.md
index a1e3092905c9c21a4234ef1eec22b74539f5cc5e..da1a8b1321848a0f11c7c0bc8d5b05bacb53c984 100644
--- a/doc/architecture/blueprints/ci_pipeline_processing/index.md
+++ b/doc/architecture/blueprints/ci_pipeline_processing/index.md
@@ -347,7 +347,7 @@ Let's define their differences first;
 - A failed job;
   - It is a machine response of the CI system to executing the job content. It indicates that execution failed for some reason.
   - It is equal answer of the system to success. The fact that something is failed is relative,
-  and might be desired outcome of CI execution, like in when executing tests that some are failing.
+    and might be a desired outcome of CI execution, such as when executing tests where some fail.
   - We know the result and [there can be artifacts](../../../ci/yaml/index.md#artifactswhen).
   - `after_script` is run.
   - Its eventual state is "failed" so subsequent jobs can run depending on their `when` values.
diff --git a/doc/architecture/blueprints/gitlab_ci_events/index.md b/doc/architecture/blueprints/gitlab_ci_events/index.md
index afa7f32411196406461822742cb2a4d9cc72b7cb..e4fe3e34d1f53f696e59438253eac24d5ac59831 100644
--- a/doc/architecture/blueprints/gitlab_ci_events/index.md
+++ b/doc/architecture/blueprints/gitlab_ci_events/index.md
@@ -56,10 +56,10 @@ Any accepted proposal should take in consideration the following requirements an
 
 1. Defining events should be done in separate files.
     - If we define all events in a single file, then the single file gets too complicated and hard to
-    maintain for users. Then, users need to separate their configs with the `include` keyword again and we end up
-    with the same solution.
+      maintain for users. Then, users need to separate their configs with the `include` keyword again and we end up
+      with the same solution.
     - The structure of the pipelines, the personas and the jobs will be different depending on the events being
-    subscribed to and the goals of the subscription.
+      subscribed to and the goals of the subscription.
 1. A single subscription configuration file should define a single pipeline that is created when an event is triggered.
     - The pipeline config can include other files with the `include` keyword.
     - The pipeline can have many jobs and trigger child pipelines or multi-project pipelines.
@@ -68,14 +68,14 @@ Any accepted proposal should take in consideration the following requirements an
 1. The event subscription and emitting events should be performant, scalable, and non-blocking.
     - Reading from the database is usually faster than reading from files.
     - A CI event can potentially have many subscriptions.
-    This also includes evaluating the right YAML files to create pipelines.
+      This also includes evaluating the right YAML files to create pipelines.
     - The main business logic (e.g. creating an issue) should not be affected
-    by any subscriptions to the given CI event (e.g. issue created).
+      by any subscriptions to the given CI event (e.g. issue created).
 1. The CI events design should be implemented in a maintainable and extensible way.
     - If there is a `issues/create` event, then any new event (`merge_request/created`) can be added without
-    much effort.
+      much effort.
     - We expect that many events will be added. It should be trivial for developers to
-    register domain events (e.g. 'issue closed') as GitLab-defined CI events.
+      register domain events (e.g. 'issue closed') as GitLab-defined CI events.
     - Also, we should consider the opportunity of supporting user-defined CI events long term (e.g. 'order shipped').
 
 ### Options
diff --git a/doc/architecture/blueprints/modular_monolith/bounded_contexts.md b/doc/architecture/blueprints/modular_monolith/bounded_contexts.md
index 877baba9bbddbc084b0532911d3b4b0bf148ca23..7f76d67d332b4198db9024c16a9226069a4a27e1 100644
--- a/doc/architecture/blueprints/modular_monolith/bounded_contexts.md
+++ b/doc/architecture/blueprints/modular_monolith/bounded_contexts.md
@@ -29,9 +29,9 @@ The majority of the code is not properly namespaced and organized:
 1. Define a list of characteristics that bounded contexts should have. For example: must relate to at least 1 product category.
 1. Have a list of top-level bounded contexts where all domain code is broken down into.
 1. Engineers can clearly see the list of available bounded contexts and can make an easy decision where to add
-  new classes and modules.
+   new classes and modules.
 1. Define a process for adding a new bounded context to the application. This should occur quite infrequently
-  and new bounded contexts need to adhere to the characteristics defined previously.
+   and new bounded contexts need to adhere to the characteristics defined previously.
 1. Enforce the list of bounded contexts so that no new top-level namespaces can be used aside from the authorized ones.
 
 ## Iterations
diff --git a/doc/architecture/blueprints/modular_monolith/packages_extraction.md b/doc/architecture/blueprints/modular_monolith/packages_extraction.md
index 2b9a64e06313082b27150fc486ee792d70f7f2d7..60a7156ac95ea9925f1266211de42085ace756b7 100644
--- a/doc/architecture/blueprints/modular_monolith/packages_extraction.md
+++ b/doc/architecture/blueprints/modular_monolith/packages_extraction.md
@@ -13,40 +13,40 @@ The general steps of refactoring existing code to modularization could be:
 
 1. Use the same namespace for all classes and modules related to the same [bounded context](bounded_contexts.md).
 
-    - **Why?** Without even a rough understanding of the domains at play in the codebase it is difficult to draw a plan.
-      Having well namespaced code that everyone else can follow is also the pre-requisite for modularization.
-    - If a domain is already well namespaced and no similar or related namespaces exist, we can move directly to the
-      next step.
+   - **Why?** Without even a rough understanding of the domains at play in the codebase it is difficult to draw a plan.
+     Having well-namespaced code that everyone else can follow is also a prerequisite for modularization.
+   - If a domain is already well namespaced and no similar or related namespaces exist, we can move directly to the
+     next step.
 1. Prepare Rails development for Packwerk packages. This is a **once off step** with maybe some improvements
-  added over time.
+   added over time.
 
-    - We will have the Rails autoloader to work with Packwerk's directory structure, as demonstrated in
-      [this PoC](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/129254/diffs#note_1512982957).
-    - We will have [Danger-Packwerk](https://github.com/rubyatscale/danger-packwerk) running in CI for merge requests.
-    - We will possibly have Packer check running in Lefthook on pre-commit or pre-push.
+   - We will have the Rails autoloader to work with Packwerk's directory structure, as demonstrated in
+     [this PoC](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/129254/diffs#note_1512982957).
+   - We will have [Danger-Packwerk](https://github.com/rubyatscale/danger-packwerk) running in CI for merge requests.
+   - We will possibly have Packwerk checks running in Lefthook on pre-commit or pre-push.
 1. Move file into a Packwerk package.
 
-    - This should consist in creating a Packwerk package and iteratively move files into the package.
-    - Constants are auto-loaded correctly whether they are in `app/` or `lib/` inside a Packwerk package.
-    - This is a phase where the domain code will be split between the package directory and the Rails directory structure.
-    **We must move quickly here**.
+   - This should consist of creating a Packwerk package and iteratively moving files into it.
+   - Constants are auto-loaded correctly whether they are in `app/` or `lib/` inside a Packwerk package.
+   - This is a phase where the domain code will be split between the package directory and the Rails directory structure.
+     **We must move quickly here**.
 1. Enforce namespace boundaries by requiring packages declare their [dependencies explicitly](https://github.com/Shopify/packwerk/blob/main/USAGE.md#enforcing-dependency-boundary)
    and only depend on other packages' [public interface](https://github.com/rubyatscale/packwerk-extensions#privacy-checker).
 
-    - **Why?** Up until now all constants would be public since we have not enforced privacy. By moving existing files
-      into packages without enforcing boundaries we can focus on wrapping a namespace in a package without being distracted
-      by Packwer privacy violations. By enforcing privacy afterwards we gain an understanding of coupling between various
-      constants and domains.
-    - This way we know what constants need to be made public (as they are used by other packages) and what can
-      remain private (taking the benefit of encapsulation). We will use Packwerk's recorded violations (like Rubocop TODOs)
-      to refactor the code over time.
-    - We can update the dependency graph to see where it fit in the overall architecture.
+   - **Why?** Up until now all constants would be public since we have not enforced privacy. By moving existing files
+     into packages without enforcing boundaries we can focus on wrapping a namespace in a package without being distracted
+     by Packwerk privacy violations. By enforcing privacy afterwards we gain an understanding of coupling between various
+     constants and domains.
+   - This way we know what constants need to be made public (as they are used by other packages) and what can
+     remain private (taking the benefit of encapsulation). We will use Packwerk's recorded violations (like Rubocop TODOs)
+     to refactor the code over time.
+   - We can update the dependency graph to see where it fits in the overall architecture.
 1. Work off Packwerk's recorded violations to make refactorings. **This is a long term phase** that the DRIs of the
-  domain need to nurture over time. We will use Packwerk failures and the dependency diagram to influence the modular design.
+   domain need to nurture over time. We will use Packwerk failures and the dependency diagram to influence the modular design.
 
-    - Revisit wheteher a class should be private instead of public, and crate a better interface.
-    - Move constants to different package if too coupled with that.
-    - Join packages if they are too coupled to each other.
+   - Revisit whether a class should be private instead of public, and create a better interface.
+   - Move constants to a different package if they are too coupled with it.
+   - Join packages if they are too coupled to each other.
 
 Once we have Packwerk configured for the Rails application (step 2 above), emerging domains could be directly implemented
 as Packwerk packages, benefiting from isolation and clear interface immediately.
diff --git a/doc/architecture/blueprints/runner_tokens/index.md b/doc/architecture/blueprints/runner_tokens/index.md
index 156ea0312c01ea93018389988af9beeec7745c87..3097eaa1f65904afc0a87decb56f6836456003cf 100644
--- a/doc/architecture/blueprints/runner_tokens/index.md
+++ b/doc/architecture/blueprints/runner_tokens/index.md
@@ -233,25 +233,25 @@ The new workflow looks as follows:
 
   1. The user opens the Runners settings page (instance, group, or project level);
   1. The user fills in the details regarding the new desired runner, namely description,
-  tags, protected, locked, etc.;
+     tags, protected, locked, etc.;
   1. The user clicks `Create`. That results in the following:
 
-      1. Creates a new runner in the `ci_runners` table (and corresponding `glrt-` prefixed authentication token);
-      1. Presents the user with instructions on how to configure this new runner on a machine,
-         with possibilities for different supported deployment scenarios (for example, shell, `docker-compose`, Helm chart, etc.)
-         This information contains a token which is available to the user only once, and the UI
-         makes it clear to the user that the value shall not be shown again, as registering the same runner multiple times
-         is discouraged (though not impossible).
+     1. Creates a new runner in the `ci_runners` table (and corresponding `glrt-` prefixed authentication token);
+     1. Presents the user with instructions on how to configure this new runner on a machine,
+        with possibilities for different supported deployment scenarios (for example shell, `docker-compose`, or a Helm chart).
+        This information contains a token which is available to the user only once, and the UI
+        makes it clear to the user that the value shall not be shown again, as registering the same runner multiple times
+        is discouraged (though not impossible).
 
   1. The user copies and pastes the instructions for the intended deployment scenario (a `register` command), leading to the following actions:
 
-      1. Upon executing the new `gitlab-runner register` command in the instructions, `gitlab-runner` performs
-      a call to the `POST /api/v4/runners/verify` with the given runner token;
-      1. If the `POST /api/v4/runners/verify` GitLab endpoint validates the token, the `config.toml`
-      file is populated with the configuration;
-      1. Whenever a runner pings for a job, the respective `ci_runner_machines` record is
-         ["upserted"](https://en.wiktionary.org/wiki/upsert) with the latest information about the
-         runner (with Redis cache in front of it like we do for Runner heartbeats).
+     1. Upon executing the new `gitlab-runner register` command in the instructions, `gitlab-runner` performs
+        a call to the `POST /api/v4/runners/verify` with the given runner token;
+     1. If the `POST /api/v4/runners/verify` GitLab endpoint validates the token, the `config.toml`
+        file is populated with the configuration;
+     1. Whenever a runner pings for a job, the respective `ci_runner_machines` record is
+        ["upserted"](https://en.wiktionary.org/wiki/upsert) with the latest information about the
+        runner (with Redis cache in front of it like we do for Runner heartbeats).
 
 As part of the transition period, we provide admins and top-level group owners with an
 instance/group-level setting (`allow_runner_registration_token`) to disable the legacy registration
diff --git a/doc/development/ai_features/duo_chat.md b/doc/development/ai_features/duo_chat.md
index 63fdd5d6ec08b0d4ed16ce42bacdb028f21cbda8..e4059788829959450913c0bdc383243642a2a04e 100644
--- a/doc/development/ai_features/duo_chat.md
+++ b/doc/development/ai_features/duo_chat.md
@@ -153,14 +153,14 @@ commit it together with the change.
 The following CI jobs for GitLab project run the tests tagged with `real_ai_request`:
 
 - `rspec-ee unit gitlab-duo-chat-zeroshot`:
-   the job runs `ee/spec/lib/gitlab/llm/completions/chat_real_requests_spec.rb`.
-   The job must be manually triggered and is allowed to fail.
+  the job runs `ee/spec/lib/gitlab/llm/completions/chat_real_requests_spec.rb`.
+  The job must be manually triggered and is allowed to fail.
 
 - `rspec-ee unit gitlab-duo-chat-qa`:
-   The job runs the QA evaluation tests in
-   `ee/spec/lib/gitlab/llm/chain/agents/zero_shot/qa_evaluation_spec.rb`.
-   The job must be manually triggered and is allowed to fail.
-   Read about [GitLab Duo Chat QA Evaluation Test](#gitlab-duo-chat-qa-evaluation-test).
+  The job runs the QA evaluation tests in
+  `ee/spec/lib/gitlab/llm/chain/agents/zero_shot/qa_evaluation_spec.rb`.
+  The job must be manually triggered and is allowed to fail.
+  Read about [GitLab Duo Chat QA Evaluation Test](#gitlab-duo-chat-qa-evaluation-test).
 
 - `rspec-ee unit gitlab-duo-chat-qa-fast`:
   The job runs a single QA evaluation test from `ee/spec/lib/gitlab/llm/chain/agents/zero_shot/qa_evaluation_spec.rb`.
@@ -207,12 +207,12 @@ See [the snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/3613745) used
 #### RSpec and helpers
 
 1. [The RSpec file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/spec/lib/gitlab/llm/chain/agents/zero_shot/qa_evaluation_spec.rb)
-  and the included helpers invoke the Chat service, an internal interface with the question.
+   and the included helpers invoke the Chat service, an internal interface with the question.
 
 1. After collecting the Chat service's answer,
-  the answer is injected into a prompt, also known as an "evaluation prompt", that instructs
-  a LLM to grade the correctness of the answer based on the question and a context.
-  The context is simply a JSON serialization of the issue or epic being asked about in each question.
+   the answer is injected into a prompt, also known as an "evaluation prompt", that instructs
+   an LLM to grade the correctness of the answer based on the question and a context.
+   The context is simply a JSON serialization of the issue or epic being asked about in each question.
 
 1. The evaluation prompt is sent to two LLMs, Claude and Vertex.
 
diff --git a/doc/development/application_limits.md b/doc/development/application_limits.md
index b0422517daa2e2bd5166054fb5753a614907e5c5..f30506a16cdfcf79a38bf62e060e18c155e836f3 100644
--- a/doc/development/application_limits.md
+++ b/doc/development/application_limits.md
@@ -171,7 +171,7 @@ The process for adding a new throttle is loosely:
 in the `ApplicationSetting` model.
 1. Update the JSON schema validator for the [rate_limits column](https://gitlab.com/gitlab-org/gitlab/-/blob/63b37287ae028842fcdcf56d311e6bb0c7e09e79/app/validators/json_schemas/application_setting_rate_limits.json).
 1. Extend `Gitlab::RackAttack` and `Gitlab::RackAttack::Request` to configure the new rate limit,
-  and apply it to the desired requests.
+   and apply it to the desired requests.
 1. Add the new settings to the Admin Area form in `app/views/admin/application_settings/_ip_limits.html.haml`.
 1. Document the new settings in [User and IP rate limits](../administration/settings/user_and_ip_rate_limits.md) and [Application settings API](../api/settings.md).
 1. Configure the rate limit for GitLab.com and document it in [GitLab.com-specific rate limits](../user/gitlab_com/index.md#gitlabcom-specific-rate-limits).
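
Conceptually, each throttle configured through `Gitlab::RackAttack` is a counter per key and time window. A minimal self-contained Ruby sketch of that idea (this is an illustration of the fixed-window technique, not the actual `Gitlab::RackAttack` or Rack::Attack implementation):

```ruby
# Fixed-window rate limiter sketch: counts hits per (key, window) pair
# and allows a request only while the count is within the limit.
class Throttle
  def initialize(limit:, period:)
    @limit = limit      # maximum requests per window
    @period = period    # window length in seconds
    @hits = Hash.new(0)
  end

  # `key` identifies the client (for example an IP or user ID).
  def allow?(key, now: Time.now.to_i)
    window = now / @period
    count = (@hits[[key, window]] += 1)
    count <= @limit
  end
end

limiter = Throttle.new(limit: 3, period: 60)
# The fourth request in the same window is rejected:
results = 4.times.map { limiter.allow?('203.0.113.7', now: 100) }
```

In production this state lives in a shared store such as Redis rather than an in-process hash, so the limit holds across application servers.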
diff --git a/doc/development/application_slis/index.md b/doc/development/application_slis/index.md
index 5c18a4d611e432da8a19fc90987e069c2c759385..4d4da57a44abef65240020cb5ace7ae2d9111e44 100644
--- a/doc/development/application_slis/index.md
+++ b/doc/development/application_slis/index.md
@@ -34,7 +34,7 @@ The following metrics are defined:
 - `Gitlab::Metrics::Sli::Apdex.new('foo')` defines:
   - `gitlab_sli_foo_apdex_total` for the total number of measurements.
   - `gitlab_sli_foo_apdex_success_total` for the number of successful
-       measurements.
+    measurements.
 - `Gitlab::Metrics::Sli::ErrorRate.new('foo')` defines:
   - `gitlab_sli_foo_total` for the total number of measurements.
   - `gitlab_sli_foo_error_total` for the number of error
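
The apdex score derived from the two counters above is `success_total / total`. A self-contained Ruby sketch of that relationship (`ApdexSli` is a hypothetical stand-in, not the real `Gitlab::Metrics::Sli` class):

```ruby
# Sketch of how the two apdex counters relate to the reported score.
class ApdexSli
  attr_reader :total, :success_total

  def initialize
    @total = 0          # gitlab_sli_foo_apdex_total
    @success_total = 0  # gitlab_sli_foo_apdex_success_total
  end

  def increment(success:)
    @total += 1
    @success_total += 1 if success
  end

  # Apdex score: fraction of measurements that met the target.
  def score
    return nil if @total.zero?

    @success_total.to_f / @total
  end
end

sli = ApdexSli.new
3.times { sli.increment(success: true) }
sli.increment(success: false)
```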
diff --git a/doc/development/application_slis/sidekiq_execution.md b/doc/development/application_slis/sidekiq_execution.md
index cf6ff5b28d7eceb5edd8e6e860114039155e4345..3821e3c8f0e9d0e548d9c76a0217b4e57050901d 100644
--- a/doc/development/application_slis/sidekiq_execution.md
+++ b/doc/development/application_slis/sidekiq_execution.md
 The error rate measures unsuccessful job completion when an exception occurs, as an indication of
 server misbehavior.
 
 - `gitlab_sli_sidekiq_execution_apdex_total`: This counter gets
-   incremented for every successful job execution that does not result in an exception. It ensures slow jobs are not
-   counted twice, because the job is already counted in the error SLI.
+  incremented for every successful job execution that does not result in an exception. It ensures slow jobs are not
+  counted twice, because the job is already counted in the error SLI.
 
 - `gitlab_sli_sidekiq_execution_apdex_success_total`: This counter gets
-   incremented for every successful job that performed faster than
-   the [defined target duration depending on the job urgency](../sidekiq/worker_attributes.md#job-urgency).
+  incremented for every successful job that performed faster than
+  the [defined target duration depending on the job urgency](../sidekiq/worker_attributes.md#job-urgency).
 
 - `gitlab_sli_sidekiq_execution_error_total`: This counter gets
-   incremented for every job that encountered an exception.
+  incremented for every job that encountered an exception.
 
 - `gitlab_sli_sidekiq_execution_total`: This counter gets
-   incremented for every job execution.
+  incremented for every job execution.
 
 These counters are labeled with:
 
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index 9346ec5b7ea5411f049a4a5fe15780a23d153f55..921023aa94ce54afe36d5a49ae7c0d17c4c6f921 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -1115,7 +1115,7 @@ Many other settings are better placed in the app itself, in `ApplicationSetting`
 When adding a setting to `gitlab.yml`:
 
 1. Ensure that it is also
-  [added to Omnibus](https://docs.gitlab.com/omnibus/settings/gitlab.yml#adding-a-new-setting-to-gitlabyml).
+   [added to Omnibus](https://docs.gitlab.com/omnibus/settings/gitlab.yml#adding-a-new-setting-to-gitlabyml).
 1. Ensure that it is also [added to Charts](https://docs.gitlab.com/charts/development/style_guide.html), if needed.
 1. Ensure that it is also [added to GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/support/templates/gitlab/config/gitlab.yml.erb).
 
diff --git a/doc/development/cicd/index.md b/doc/development/cicd/index.md
index f96ce20b2d33b2baaaaeada04c442adddbc443e8..1162ff7e0c8875db738a9da962e31b1e896c6e72 100644
--- a/doc/development/cicd/index.md
+++ b/doc/development/cicd/index.md
@@ -169,9 +169,9 @@ There are two ways of marking builds as "stuck" and drop them.
     - [In the future](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121761), if the project is not on the plan that available runners for the build require via `allowed_plans`, then the build is immediately dropped with `no_matching_runner`.
 1. If there is no available Runner to pick up a build, it is dropped after 1 hour by [`Ci::StuckBuilds::DropPendingService`](https://gitlab.com/gitlab-org/gitlab/-/blob/v16.0.4-ee/app/services/ci/stuck_builds/drop_pending_service.rb).
     - If a job is not picked up by a runner in 24 hours it is automatically removed from
-    the processing queue after that time.
+      the processing queue after that time.
     - If a pending job is **stuck**, when there is no
-    runner available that can process it, it is removed from the queue after 1 hour.
+      runner available that can process it, it is removed from the queue after 1 hour.
     - In both cases the job's status is changed to `failed` with an appropriate failure reason.
 
 #### The reason behind this difference
diff --git a/doc/development/cicd/templates.md b/doc/development/cicd/templates.md
index 9f62de9bbdcdeb8076e2416d0a551be886ebc85e..2cc1994b3e14bbc8d9c66fcc5d825fdcd0664e35 100644
--- a/doc/development/cicd/templates.md
+++ b/doc/development/cicd/templates.md
@@ -439,7 +439,7 @@ To add a metric definition for a new template:
    - [`config/metrics/counts_28d/20210216184559_ci_templates_total_unique_counts_monthly.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/counts_28d/20210216184559_ci_templates_total_unique_counts_monthly.yml)
 
 1. Use the same event name as above as the last argument in the following command to
-  [add new metric definitions](../internal_analytics/metrics/metrics_instrumentation.md#create-a-new-metric-instrumentation-class):
+   [add new metric definitions](../internal_analytics/metrics/metrics_instrumentation.md#create-a-new-metric-instrumentation-class):
 
    ```shell
    bundle exec rails generate gitlab:usage_metric_definition:redis_hll ci_templates <template_metric_event_name>
diff --git a/doc/development/contributing/merge_request_workflow.md b/doc/development/contributing/merge_request_workflow.md
index f713fe62826b85a9703900a725c93f9a4f2e7304..a280d1dde3bc7b18f25200f8028941210ca8e0ce 100644
--- a/doc/development/contributing/merge_request_workflow.md
+++ b/doc/development/contributing/merge_request_workflow.md
@@ -301,7 +301,7 @@ requirements.
 1. Your merge request has at least 1 approval, but depending on your changes
    you might need additional approvals. Refer to the [Approval guidelines](../code_review.md#approval-guidelines).
    - You don't have to select any specific approvers, but you can if you really want
-      specific people to approve your merge request.
+     specific people to approve your merge request.
 1. Merged by a project maintainer.
 
 ### Production use
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index 2a2d677e95ab8adad4e9669364ec6833b0728411..dbefbb738d0c0fa4aba76d4b66f9f4a9156fc66f 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -194,10 +194,10 @@ where Danger is already configured.
 Contributors can configure Danger for their forks with the following steps:
 
 1. Create a [personal API token](https://gitlab.com/-/user_settings/personal_access_tokens?name=GitLab+Dangerbot&scopes=api)
-  that has the `api` scope set (don't forget to copy it to the clipboard).
+   that has the `api` scope set (don't forget to copy it to the clipboard).
 1. In your fork, add a [project CI/CD variable](../ci/variables/index.md#for-a-project)
-  called `DANGER_GITLAB_API_TOKEN` with the token copied in the previous step.
+   called `DANGER_GITLAB_API_TOKEN` with the token copied in the previous step.
 1. Make the variable [masked](../ci/variables/index.md#mask-a-cicd-variable) so it
-  doesn't show up in the job logs. The variable cannot be
-  [protected](../ci/variables/index.md#protect-a-cicd-variable), because it needs
-  to be present for all branches.
+   doesn't show up in the job logs. The variable cannot be
+   [protected](../ci/variables/index.md#protect-a-cicd-variable), because it needs
+   to be present for all branches.
diff --git a/doc/development/database/batched_background_migrations.md b/doc/development/database/batched_background_migrations.md
index ec1a71eb4f4ddd2640f936ea589d61f7cc5e0f3f..da58d8563a6a5921e26d60a789d1708241bae47d 100644
--- a/doc/development/database/batched_background_migrations.md
+++ b/doc/development/database/batched_background_migrations.md
@@ -279,10 +279,10 @@ the migration that was used to enqueue it. Pay careful attention to:
 
 - The job arguments: Needs to exactly match or it will not find the queued migration
 - The `gitlab_schema`: Needs to exactly match or it will not find the queued
-   migration. Even if the `gitlab_schema` of the table has changed from
-   `gitlab_main` to `gitlab_main_cell` in the meantime you must finalize it
-   with `gitlab_main` if that's what was used when queueing the batched
-   background migration.
+  migration. Even if the `gitlab_schema` of the table has changed from
+  `gitlab_main` to `gitlab_main_cell` in the meantime you must finalize it
+  with `gitlab_main` if that's what was used when queueing the batched
+  background migration.
 
 When finalizing a batched background migration you also need to update the
 `finalized_by` in the corresponding `db/docs/batched_background_migrations`
diff --git a/doc/development/database/deleting_migrations.md b/doc/development/database/deleting_migrations.md
index 829477bc755cd640052dcb81e180067667321606..72a69a696f5412514fe34b53082faee6129a9977 100644
--- a/doc/development/database/deleting_migrations.md
+++ b/doc/development/database/deleting_migrations.md
@@ -29,7 +29,7 @@ Migrations can be disabled if:
 In order to disable a migration, the following steps apply to all types of migrations:
 
 1. Turn the migration into a no-op by removing the code inside `#up`, `#down`
+   or `#perform` methods, and adding a `# no-op` comment instead.
+   or `#perform` methods, and adding `# no-op` comment instead.
 1. Add a comment explaining why the code is gone.
 
 Disabling migrations requires explicit approval of Database Maintainer.
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index 0f286bbd8822cac80d3961003d134f6ad711b9db..79e6534f9e5cc5ba7e5d5241c100fa78a107a63f 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -174,7 +174,7 @@ Include in the MR description:
 
 - The query plan for each raw SQL query included in the merge request along with the link to the query plan following each raw SQL snippet.
 - Provide a link to the plan generated using the `explain` command in the [postgres.ai](database/database_lab.md) chatbot. The `explain` command runs
-    `EXPLAIN ANALYZE`.
+  `EXPLAIN ANALYZE`.
   - If it's not possible to get an accurate picture in Database Lab, you may need to
     seed a development environment, and instead provide output
     from `EXPLAIN ANALYZE`. Create links to the plan using [explain.depesz.com](https://explain.depesz.com) or [explain.dalibo.com](https://explain.dalibo.com). Be sure to paste both the plan and the query used in the form.
@@ -263,7 +263,7 @@ to add the raw SQL query and query plan to the Merge Request description, and re
     This can be the number of expected batches times the delay interval.
   - Manually trigger the [database testing](database/database_migration_pipeline.md) job (`db:gitlabcom-database-testing`) in the `test` stage.
   - If a single `update` is below than `1s` the query can be placed
-      directly in a regular migration (inside `db/migrate`).
+    directly in a regular migration (inside `db/migrate`).
   - Background migrations are usually used, but not limited to:
     - Migrating data in larger tables.
     - Making numerous SQL queries per record in a dataset.
diff --git a/doc/development/documentation/review_apps.md b/doc/development/documentation/review_apps.md
index 7c1f0173eab04e521fc800800baf5f6111101214..0042fcfcf31740882b7cf270c0b94963b06cc062 100644
--- a/doc/development/documentation/review_apps.md
+++ b/doc/development/documentation/review_apps.md
@@ -76,13 +76,13 @@ projects, you can use the following CI/CD template to add a manually triggered r
   image: ruby:3.1-alpine
   needs: []
   before_script:
-  - gem install gitlab --no-doc
-  # We need to download the script rather than clone the repo since the
-  # review-docs-cleanup job will not be able to run when the branch gets
-  # deleted (when merging the MR).
-  - apk add --update openssl
-  - wget https://gitlab.com/gitlab-org/gitlab/-/raw/master/scripts/trigger-build.rb
-  - chmod 755 trigger-build.rb
+    - gem install gitlab --no-doc
+    # We need to download the script rather than clone the repo since the
+    # review-docs-cleanup job will not be able to run when the branch gets
+    # deleted (when merging the MR).
+    - apk add --update openssl
+    - wget https://gitlab.com/gitlab-org/gitlab/-/raw/master/scripts/trigger-build.rb
+    - chmod 755 trigger-build.rb
   variables:
     GIT_STRATEGY: none
     DOCS_REVIEW_APPS_DOMAIN: docs.gitlab-review.app
@@ -96,7 +96,7 @@ projects, you can use the following CI/CD template to add a manually triggered r
 # https://docs.gitlab.com/ee/development/documentation/index.html#previewing-the-changes-live
 review-docs-deploy:
   extends:
-  - .review-docs
+    - .review-docs
   environment:
     name: review-docs/mr-${CI_MERGE_REQUEST_IID}
     # DOCS_REVIEW_APPS_DOMAIN and DOCS_GITLAB_REPO_SUFFIX are CI variables
@@ -110,7 +110,7 @@ review-docs-deploy:
 # Cleanup remote environment of gitlab-docs
 review-docs-cleanup:
   extends:
-  - .review-docs
+    - .review-docs
   environment:
     name: review-docs/mr-${CI_MERGE_REQUEST_IID}
     action: stop
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 37505c1e254abe9d78bc24b657d43c101d28efb0..8cb55dde95324c0d13697ad714c09489ab2ad5b3 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -189,9 +189,9 @@ Use the following questions to guide you:
      or `ULTIMATE_FEATURES`.
 1. Will this feature be available globally (system-wide at the GitLab instance level)?
     - Features such as [Geo](../administration/geo/index.md) and
-    [Database Load Balancing](../administration/postgresql/database_load_balancing.md) are used by the entire instance
-    and cannot be restricted to individual user namespaces. These features are defined in the instance license.
-    Add these features to `GLOBAL_FEATURES`.
+      [Database Load Balancing](../administration/postgresql/database_load_balancing.md) are used by the entire instance
+      and cannot be restricted to individual user namespaces. These features are defined in the instance license.
+      Add these features to `GLOBAL_FEATURES`.
 
 ### Guard your EE feature
 
diff --git a/doc/development/fe_guide/emojis.md b/doc/development/fe_guide/emojis.md
index f1e4c55f98567eba3243ab1196ff0a61239effe4..d76ecdbecfde7868a04835526caaa06013b31cdd 100644
--- a/doc/development/fe_guide/emojis.md
+++ b/doc/development/fe_guide/emojis.md
@@ -11,27 +11,27 @@ when your platform does not support it.
 
 ## How to update Emojis
 
- 1. Update the `gemojione` gem
- 1. Update `fixtures/emojis/index.json` from [Gemojione](https://github.com/bonusly/gemojione/blob/master/config/index.json).
-    In the future, we could grab the file directly from the gem.
-    We should probably make a PR on the Gemojione project to get access to
-    all emojis after being parsed or just a raw path to the `json` file itself.
- 1. Ensure [`emoji-unicode-version`](https://www.npmjs.com/package/emoji-unicode-version)
-    is up to date with the latest version.
- 1. Run `bundle exec rake gemojione:aliases`
- 1. Run `bundle exec rake gemojione:digests`
- 1. Run `bundle exec rake gemojione:sprite`
- 1. Ensure new sprite sheets generated for 1x and 2x
-    - `app/assets/images/emoji.png`
-    - `app/assets/images/emoji@2x.png`
- 1. Update `fixtures/emojis/intents.json` with any new emoji that we would like to highlight as having positive or negative intent.
-    - Positive intent should be set to `0.5`.
-    - Neutral intent can be set to `1`. This is applied to all emoji automatically so there is no need to set this explicitly.
-    - Negative intent should be set to `1.5`.
- 1. Ensure you see new individual images copied into `app/assets/images/emoji/`
- 1. Ensure you can see the new emojis and their aliases in the GitLab Flavored Markdown (GLFM) Autocomplete
- 1. Ensure you can see the new emojis and their aliases in the emoji reactions menu
- 1. You might need to add new emoji Unicode support checks and rules for platforms
-    that do not support a certain emoji and we need to fallback to an image.
-    See `app/assets/javascripts/emoji/support/is_emoji_unicode_supported.js`
-    and `app/assets/javascripts/emoji/support/unicode_support_map.js`
+1. Update the `gemojione` gem
+1. Update `fixtures/emojis/index.json` from [Gemojione](https://github.com/bonusly/gemojione/blob/master/config/index.json).
+   In the future, we could grab the file directly from the gem.
+   We should probably make a PR on the Gemojione project to get access to
+   all emojis after they are parsed, or just a raw path to the `json` file itself.
+1. Ensure [`emoji-unicode-version`](https://www.npmjs.com/package/emoji-unicode-version)
+   is up to date with the latest version.
+1. Run `bundle exec rake gemojione:aliases`
+1. Run `bundle exec rake gemojione:digests`
+1. Run `bundle exec rake gemojione:sprite`
+1. Ensure new sprite sheets are generated for 1x and 2x
+   - `app/assets/images/emoji.png`
+   - `app/assets/images/emoji@2x.png`
+1. Update `fixtures/emojis/intents.json` with any new emoji that we would like to highlight as having positive or negative intent.
+   - Positive intent should be set to `0.5`.
+   - Neutral intent can be set to `1`. This is applied to all emoji automatically so there is no need to set this explicitly.
+   - Negative intent should be set to `1.5`.
+1. Ensure you see new individual images copied into `app/assets/images/emoji/`
+1. Ensure you can see the new emojis and their aliases in the GitLab Flavored Markdown (GLFM) Autocomplete
+1. Ensure you can see the new emojis and their aliases in the emoji reactions menu
+1. You might need to add new emoji Unicode support checks and rules for platforms
+   that do not support a certain emoji, so we can fall back to an image.
+   See `app/assets/javascripts/emoji/support/is_emoji_unicode_supported.js`
+   and `app/assets/javascripts/emoji/support/unicode_support_map.js`
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index a391daf8962c5eb43fe6b0d39a8f3c63cdc4317f..d7f1e388e0e6adafe15497e279ac5cb1918b1e02 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -368,7 +368,7 @@ These GitLab-specific references are used exclusively by GitLab (through Gitaly)
 
 - `refs/keep-around/<object-id>`. References to commits that have pipeline jobs or merge requests. The `object-id` points to the commit the pipeline was run on.
 - `refs/merge-requests/<merge-request-iid>/`. [Merges](https://git-scm.com/docs/git-merge) merge two histories together. This ref namespace tracks information about a
-   merge using the following refs under it:
+  merge using the following refs under it:
   - `head`. Current `HEAD` of the merge request.
   - `merge`. Commit for the merge request. Every merge request creates a commit object under `refs/keep-around`.
   - If [merge trains are enabled](../ci/pipelines/merge_trains.md): `train`. Commit for the merge train.
diff --git a/doc/development/github_importer.md b/doc/development/github_importer.md
index c3fad50f2a62d271212d4cceaaf41c4fd2a9f8db..0db36674ab822d451111ddfe30045e04c317f015 100644
--- a/doc/development/github_importer.md
+++ b/doc/development/github_importer.md
@@ -228,7 +228,7 @@ Advancing stages is done in one of two ways:
 
 - Scheduling the worker for the next stage directly.
 - Scheduling a job for `Gitlab::GithubImport::AdvanceStageWorker` which will
-   advance the stage when all work of the current stage has been completed.
+  advance the stage when all work of the current stage has been completed.
 
 The first approach should only be used by workers that perform all their work in
 a single thread, while `AdvanceStageWorker` should be used for everything else.
@@ -309,8 +309,8 @@ We cache two types of lookups:
 
 - A positive lookup, meaning we found a GitLab user ID.
 - A negative lookup, meaning we didn't find a GitLab user ID. Caching this
-   prevents us from performing the same work for users that we know don't exist
-   in our GitLab database.
+  prevents us from performing the same work for users that we know don't exist
+  in our GitLab database.
 
 The expiration time of these keys is 24 hours. When retrieving the cache of a
 positive lookup, we refresh the TTL automatically. The TTL of false lookups is
diff --git a/doc/development/gitlab_flavored_markdown/specification_guide/index.md b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
index 3ee2adb02419f0e28725f99b2b5335877cd543e0..17cc9aaaa82c9bf0e196da9b66ad4a7bd30d0c12 100644
--- a/doc/development/gitlab_flavored_markdown/specification_guide/index.md
+++ b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
@@ -858,7 +858,7 @@ consists of the manually updated Markdown+HTML examples for the
 - It should consist of `H1` header sections, with all examples nested either 2 or 3 levels deep
   within `H2` or `H3` header sections.
 - `H3` header sections must be nested within `H2` header sections. They cannot be
-   nested directly within `H1` header sections.
+  nested directly within `H1` header sections.
 
 It _may_ contain additional prose-only header sections which do not contain any examples.
 
diff --git a/doc/development/graphql_guide/monitoring.md b/doc/development/graphql_guide/monitoring.md
index 6e5c8e730ebd7bb8aef6ff9611107119ae0ff721..57d55791b5bfab6ea7c5e8f2f50c9238045f8c3b 100644
--- a/doc/development/graphql_guide/monitoring.md
+++ b/doc/development/graphql_guide/monitoring.md
@@ -13,7 +13,7 @@ In Kibana we can inspect two kinds of GraphQL logs:
 
 - Logs of each GraphQL query executed within the request.
 - Logs of the full request, which due to [query multiplexing](https://graphql-ruby.org/queries/multiplex.html)
-   may have executed multiple queries.
+  may have executed multiple queries.
 
 ## Logs of each GraphQL query
 
diff --git a/doc/development/i18n/translation.md b/doc/development/i18n/translation.md
index 8bcddcfdddeac623eefc3b73152d420f2507c1e4..cf12a75e355288a43cf20ece313dbf122a5d0732 100644
--- a/doc/development/i18n/translation.md
+++ b/doc/development/i18n/translation.md
@@ -20,7 +20,7 @@ GitLab is being translated into many languages. To select a language to contribu
 
    - If the language you want is available, proceed to the next step.
    - If the language you want is not available,
-      [open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&utf8=✓&state=all&label_name[]=Category%3AInternationalization).
+     [open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&utf8=✓&state=all&label_name[]=Category%3AInternationalization).
       Notify our Crowdin administrators by including `@gitlab-org/manage/import` in your issue.
    - After the issue and any merge requests are complete, restart this procedure.
 
diff --git a/doc/development/performance.md b/doc/development/performance.md
index 204c887e67f45f0e8f1de3090e5707e3649a1e85..e441ff857839482c975313ac166f8483d7cbc45a 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -136,13 +136,13 @@ In short:
 
 - Don't trust benchmarks you find on the internet.
 - Never make claims based on just benchmarks, always measure in production to
-   confirm your findings.
+  confirm your findings.
 - X being N times faster than Y is meaningless if you don't know what impact it
-   has on your production environment.
+  has on your production environment.
 - A production environment is the _only_ benchmark that always tells the truth
-   (unless your performance monitoring systems are not set up correctly).
+  (unless your performance monitoring systems are not set up correctly).
 - If you must write a benchmark use the benchmark-ips Gem instead of Ruby's
-   `Benchmark` module.
+  `Benchmark` module.
 
 ## Profiling with Stackprof
 
@@ -443,7 +443,7 @@ There are two ways of measuring your own code:
 The `mem_*` values represent different aspects of how objects and memory are allocated in Ruby:
 
 - The following example will create around of `1000` of `mem_objects` since strings
-   can be frozen, and while the underlying string object remains the same, we still need to allocate 1000 references to this string:
+  can be frozen, and while the underlying string object remains the same, we still need to allocate 1000 references to this string:
 
   ```ruby
   Gitlab::Memory::Instrumentation.with_memory_allocations do
@@ -454,7 +454,7 @@ The `mem_*` values represent different aspects of how objects and memory are all
   ```
 
 - The following example will create around of `1000` of `mem_objects`, as strings are created dynamically.
-   Each of them will not allocate additional memory, as they fit into Ruby slot of 40 bytes:
+  Each of them will not allocate additional memory, as they fit into a Ruby slot of 40 bytes:
 
   ```ruby
   Gitlab::Memory::Instrumentation.with_memory_allocations do
@@ -466,7 +466,7 @@ The `mem_*` values represent different aspects of how objects and memory are all
   ```
 
 - The following example will create around of `1000` of `mem_objects`, as strings are created dynamically.
-   Each of them will allocate additional memory as strings are larger than Ruby slot of 40 bytes:
+  Each of them will allocate additional memory, as the strings are larger than a Ruby slot of 40 bytes:
 
   ```ruby
   Gitlab::Memory::Instrumentation.with_memory_allocations do
@@ -478,7 +478,7 @@ The `mem_*` values represent different aspects of how objects and memory are all
   ```
 
 - The following example will allocate over 40 kB of data, and perform only a single memory allocation.
-   The existing object will be reallocated/resized on subsequent iterations:
+  The existing object will be reallocated/resized on subsequent iterations:
 
   ```ruby
   Gitlab::Memory::Instrumentation.with_memory_allocations do
@@ -490,7 +490,7 @@ The `mem_*` values represent different aspects of how objects and memory are all
   ```
 
 - The following example will create over 1k of objects, perform over 1k of allocations, each time mutating the object.
-   This does result in copying a lot of data and perform a lot of memory allocations
+  This does result in copying a lot of data and performing a lot of memory allocations
   (as represented by `mem_bytes` counter) indicating very inefficient method of appending string:
 
   ```ruby
diff --git a/doc/development/pipelines/index.md b/doc/development/pipelines/index.md
index d20ac65fb6631376a211583b686f68a3cc3a84ac..67867a941358fef76315208685aee9b34007dcea 100644
--- a/doc/development/pipelines/index.md
+++ b/doc/development/pipelines/index.md
@@ -612,9 +612,9 @@ Our current RSpec tests parallelization setup is as follows:
 1. The `update-tests-metadata` job (which only runs on scheduled pipelines for
    [the canonical project](https://gitlab.com/gitlab-org/gitlab) and updates the `knapsack/report-master.json` in 2 ways:
    1. By default, it takes all the `knapsack/rspec*.json` files and merge them all together into a single
-   `knapsack/report-master.json` file that is saved as artifact.
+      `knapsack/report-master.json` file that is saved as an artifact.
    1. (Experimental) When the `AVERAGE_KNAPSACK_REPORT` environment variable is set to `true`, instead of merging the reports, the job will calculate the average of the test duration between `knapsack/report-master.json` and `knapsack/rspec*.json` to reduce the performance impact from potentially random factors such as spec ordering, runner hardware differences, flaky tests, etc.
-   This experimental approach is aimed to better predict the duration for each spec files to distribute load among parallel jobs more evenly so the jobs can finish around the same time.
+      This experimental approach aims to better predict the duration of each spec file to distribute load among parallel jobs more evenly, so the jobs can finish around the same time.
 
 After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
 
diff --git a/doc/development/pipelines/performance.md b/doc/development/pipelines/performance.md
index a7983971dfe093baf15c71d5640a7486c2fdd8a3..2513ce546f922990570184308a20d970da819326 100644
--- a/doc/development/pipelines/performance.md
+++ b/doc/development/pipelines/performance.md
@@ -116,8 +116,8 @@ we introduced a new `cache-workhorse` job that:
 This job tries to download a generic package that contains GitLab Workhorse binaries needed in the GitLab test suite (under `tmp/tests/gitlab-workhorse`).
 
 - If the package URL returns a 404:
-   1. It runs `scripts/setup-test-env`, so that the GitLab Workhorse binaries are built.
-   1. It then creates an archive which contains the binaries and upload it [as a generic package](https://gitlab.com/gitlab-org/gitlab/-/packages/).
+  1. It runs `scripts/setup-test-env`, so that the GitLab Workhorse binaries are built.
+  1. It then creates an archive that contains the binaries and uploads it [as a generic package](https://gitlab.com/gitlab-org/gitlab/-/packages/).
 - Otherwise, if the package already exists, it exits the job successfully.
 
 We also changed the `setup-test-env` job to:
diff --git a/doc/development/real_time.md b/doc/development/real_time.md
index a9bfc84760d0e4bf2ebbb290c54f1957434064c4..50eb90bf89e7122127ffc91fe53d9d26f051d029 100644
--- a/doc/development/real_time.md
+++ b/doc/development/real_time.md
@@ -552,9 +552,9 @@ as Action Cable broadcastings, which as mentioned above represent Redis PubSub c
 This means that for each subscriber, two PubSub channels are used:
 
 - One `graphql-event:<namespace>:<topic>` channel per each topic. This channel is used to track which client is subscribed
-   to which event and is shared among all potential clients. The use of a `namespace` is optional and it can be blank.
+  to which event and is shared among all potential clients. The use of a `namespace` is optional and it can be blank.
 - One `graphql-subscription:<subscription-id>` channel per each client. This channel is used to transmit the query result
-   back to the respective client and hence cannot be shared between different clients.
+  back to the respective client and hence cannot be shared between different clients.
 
 The next section describes how the GitLab frontend uses GraphQL subscriptions to implement real-time updates.
 
diff --git a/doc/development/secure_coding_guidelines.md b/doc/development/secure_coding_guidelines.md
index cc213c1bba39ee5349b088040e3aa50eb5d8ca7b..610bf3781c17c0bee7cf8c154ce87c21e7f64976 100644
--- a/doc/development/secure_coding_guidelines.md
+++ b/doc/development/secure_coding_guidelines.md
@@ -224,10 +224,10 @@ have been reported to GitLab include:
 
 - Network mapping of internal services
   - This can help an attacker gather information about internal services
-  that could be used in further attacks. [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51327).
+    that could be used in further attacks. [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51327).
 - Reading internal services, including cloud service metadata.
   - The latter can be a serious problem, because an attacker can obtain keys that allow control of the victim's cloud infrastructure. (This is also a good reason
-  to give only necessary privileges to the token.). [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51490).
+    to give only the necessary privileges to the token.) [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51490).
 - When combined with CRLF vulnerability, remote code execution. [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/41293).
 
 ### When to Consider
diff --git a/doc/development/testing_guide/end_to_end/feature_flags.md b/doc/development/testing_guide/end_to_end/feature_flags.md
index 1ec5c7130ac770d93ec6c38e4bd3a8a6e7586398..e96bc24a443b15a9b4c3e8c6db205ce8b2316005 100644
--- a/doc/development/testing_guide/end_to_end/feature_flags.md
+++ b/doc/development/testing_guide/end_to_end/feature_flags.md
@@ -38,8 +38,8 @@ or [feature group](../../feature_flags/index.md#feature-groups).
 
 - If a global feature flag must be used, it is strongly recommended to apply `scope: :global` to the `feature_flag` metadata. This is, however, left up to the SET's discretion to determine the level of risk.
   - For example, a test uses a global feature flag that only affects a small area of the application and is also needed to check for critical issues on live environments.
-  In such a scenario, it would be riskier to skip running the test. For cases like this, `scope` can be left out of the metadata so that it can still run in live environments
-  with administrator access, such as staging.
+    In such a scenario, it would be riskier to skip running the test. For cases like this, `scope` can be left out of the metadata so that it can still run in live environments
+    with administrator access, such as staging.
 
 **Note on `requires_admin`:** This tag should still be applied if there are other actions within the test that require administrator access that are unrelated to updating a
 feature flag (like creating a user via the API).
diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md
index 6858253a2961d42f13e94eb632749dc249abed3b..b9619a45426531d65116ab9a5ac6d58d2ab5bfd6 100644
--- a/doc/development/testing_guide/end_to_end/page_objects.md
+++ b/doc/development/testing_guide/end_to_end/page_objects.md
@@ -248,7 +248,7 @@ These modules must:
 
 1. Extend from the `QA::Page::PageConcern` module, with `extend QA::Page::PageConcern`.
 1. Override the `self.prepended` method if they need to `include`/`prepend` other modules themselves, and/or define
-  `view` or `elements`.
+   `view` or `elements`.
 1. Call `super` as the first thing in `self.prepended`.
 1. Include/prepend other modules and define their `view`/`elements` in a `base.class_eval` block to ensure they're
    defined in the class that prepends the module.
diff --git a/doc/development/value_stream_analytics.md b/doc/development/value_stream_analytics.md
index 83cfb3fb3857ef27c571eff24986160ae02c1c17..5150c790410d766070162df63c4757aa6072db33 100644
--- a/doc/development/value_stream_analytics.md
+++ b/doc/development/value_stream_analytics.md
@@ -62,7 +62,7 @@ of the stage. Stages are configurable by the user within the pairing rules defin
 - End event column: uses the `merge_request_metrics.merged_at` timestamp column.
 - Stage event hash ID: a calculated hash for the pair of start and end event identifiers.
   - If two stages have the same configuration of start and end events, then their stage event hash.
-  IDs are identical.
+    IDs are identical.
   - The stage event hash ID is later used to store the aggregated data in partitioned database tables.
 
 Historically, value stream analytics defined [six stages](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/analytics/cycle_analytics/default_stages.rb)
diff --git a/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md b/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md
index 8794870f7b046e195fe91d9730e000cef3b43829..fcde6b7d23cd4c173c22c0d3c40937ff3500d73b 100644
--- a/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md
+++ b/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md
@@ -49,7 +49,7 @@ Benefits of the aggregated VSA backend:
 - Ready for keyset pagination which can be useful for exporting the data.
 - Possibility to implement more complex event definitions.
   - For example, the start event can be two timestamp columns where the earliest value would be
-  used by the system.
+    used by the system.
   - Example: `MIN(issues.created_at, issues.updated_at)`
 
 ### Example configuration