diff --git a/doc/administration/gitaly/troubleshooting.md b/doc/administration/gitaly/troubleshooting.md
index 9b41717bd0bf4daa1b273ccd9f787f01cc0c7bea..164f2d655942e7b80b4babde0cf23f65ec24f06c 100644
--- a/doc/administration/gitaly/troubleshooting.md
+++ b/doc/administration/gitaly/troubleshooting.md
@@ -630,7 +630,7 @@ In [some cases](index.md#known-issues) the Praefect database can get out of sync
 a given repository is fully synced on all nodes, run the [`gitlab:praefect:replicas` Rake task](../raketasks/praefect.md#replica-checksums)
 that checksums the repository on all Gitaly nodes.
 
-The [Praefect dataloss](recovery.md#check-for-data-loss) command only checks the state of the repository in the Praefect database, and cannot
+The [Praefect `dataloss`](recovery.md#check-for-data-loss) command only checks the state of the repository in the Praefect database, and cannot
 be relied on to detect sync problems in this scenario.
 
 ### Relation does not exist errors
diff --git a/doc/architecture/blueprints/pods/pods-feature-ci-runners.md b/doc/architecture/blueprints/pods/pods-feature-ci-runners.md
index 2c6746ab2e171be47aa4b7abee9336cf42566f90..b204ab3455f27a8530663df071528a41bf06c348 100644
--- a/doc/architecture/blueprints/pods/pods-feature-ci-runners.md
+++ b/doc/architecture/blueprints/pods/pods-feature-ci-runners.md
@@ -105,8 +105,8 @@ We can pick a design where all runners are always registered and local to a give
   database to rather prefer to use JWT tokens that would encode
 - The Admin Area showing registered Runners would have to be scoped to a Pod
 
-This model might be desired since it provides strong isolation guarnatees.
-This model does significantly increase maintanance overhead since each Pod is managed
+This model might be desired since it provides strong isolation guarantees.
+This model does significantly increase maintenance overhead since each Pod is managed
 separately.
 
 This model may require adjustments to the runner tags feature so that projects have a consistent runner experience across pods.
@@ -123,7 +123,7 @@ However, this requires significant overhaul of system and to change the followin
 - build queuing would have to be reworked to be two phase where each Pod would know of all pending
   and running builds, but the actual claim of a build would happen against a Pod containing data
 - likely `ci_pending_builds` and `ci_running_builds` would have to be made `cluster-wide` tables
-  increasing likelity of creating hotspots in a system related to CI queueing
+  increasing the likelihood of creating hotspots in a system related to CI queueing
 
 This model is complex to implement from the engineering side. It makes some data shared
 between Pods and creates hotspots / scalability issues in the system (for example, during abuse) that
@@ -132,14 +132,14 @@ might impact experience of organizations on other Pods.
 ### 3.5. GitLab CI Daemon
 
 Another potential solution to explore is to have a dedicated service responsible for builds queueing
-owning it's database and working in a model of either shareded or podded service. There were prior
+owning its database and working in a model of either a sharded or podded service. There were prior
 discussions about [CI/CD Daemon](https://gitlab.com/gitlab-org/gitlab/-/issues/19435).
 
 If the service would be sharded:
 
 - depending on a model if runners are cluster-wide or pod-local this service would have to fetch
   data from all Pods
-- if the sharded service would be used we could adapt a model of either sharing database containig
+- if the sharded service would be used we could adopt a model of sharing a database containing
   `ci_pending_builds/ci_running_builds` with the service
 - if the sharded service would be used we could consider a push model where each Pod pushes to CI/CD Daemon
   builds that should be picked by Runner
diff --git a/doc/architecture/blueprints/pods/pods-feature-container-registry.md b/doc/architecture/blueprints/pods/pods-feature-container-registry.md
index 4a2e885a8c7295cec288ceb44f22a1a688aa7a9c..954a9c294177266a479bf001e3bed1bb30cc70b5 100644
--- a/doc/architecture/blueprints/pods/pods-feature-container-registry.md
+++ b/doc/architecture/blueprints/pods/pods-feature-container-registry.md
@@ -50,7 +50,7 @@ via `https://registry.gitlab.com`.
 
 The main identifiable problems are:
 
-- the authentication reqest (`https://gitlab.com/jwt/auth`) that is processed by GitLab.com
+- the authentication request (`https://gitlab.com/jwt/auth`) that is processed by GitLab.com
 - the `https://registry.gitlab.com` registry that is run by an external service and uses its own data store
 - the data deduplication, the Pods architecture with registry run in a Pod would reduce
   efficiency of data storage
@@ -100,7 +100,7 @@ curl \
 
 Due to its extensive and, in general, highly scalable horizontal architecture, it should be
 evaluated if the GitLab Container Registry
-should be run not in Pod, but in a Cluster and be scaled indepdently.
+should be run not in a Pod, but in a Cluster, and be scaled independently.
 
 This might be easier, but would definitely not offer the same amount of data isolation.
 
diff --git a/doc/architecture/blueprints/pods/pods-feature-data-migration.md b/doc/architecture/blueprints/pods/pods-feature-data-migration.md
index edd3e24a7ba70536dd1302923a1f2740f0e0917b..fbe97316dcc47a9c7db84ccddd365f1762d9b533 100644
--- a/doc/architecture/blueprints/pods/pods-feature-data-migration.md
+++ b/doc/architecture/blueprints/pods/pods-feature-data-migration.md
@@ -29,7 +29,7 @@ into smaller ones. This describes various approaches to provide this type of spl
 We also need to handle cases where data already violates the expected
 isolation constraints of Pods (that is, references cannot span multiple
 organizations). We know that existing features like linked issues allowed users
-to link issues across any projects regardless of their hierachy. There are many
+to link issues across any projects regardless of their hierarchy. There are many
 similar features. All of this data will need to be migrated in some way before
 it can be split across different pods. This may mean some data needs to be
 deleted, or the feature changed and modelled slightly differently before we can
@@ -66,7 +66,7 @@ physical replication, etc.
 1. The data of Pod 0 is live replicated to as many Pods as it needs to be split into.
 1. Once consensus is achieved between Pod 0 and N-Pods the organizations to be migrated away
    are marked as read-only cluster-wide.
-1. The `routes` is updated on for all organizations to be split to indicate an authorative
+1. The `routes` entry is updated for all organizations to be split to indicate an authoritative
    Pod holding the most recent data, like `gitlab-org` on `pod-100`.
 1. The data for `gitlab-org` on Pod 0, and on other non-authoritative N-Pods are dormant
    and will be removed in the future.
diff --git a/doc/architecture/blueprints/pods/pods-feature-git-access.md b/doc/architecture/blueprints/pods/pods-feature-git-access.md
index ae996281d46d966ef08c5f125c8b52547e6b4ba2..9bda2d1de9c1ba680b81131b0740100c78347be7 100644
--- a/doc/architecture/blueprints/pods/pods-feature-git-access.md
+++ b/doc/architecture/blueprints/pods/pods-feature-git-access.md
@@ -15,7 +15,7 @@ we can document the reasons for not choosing this approach.
 # Pods: Git Access
 
 This document describes impact of Pods architecture on all Git access (over HTTPS and SSH)
-patterns providing explanantion of how potentially those features should be changed
+patterns, explaining how those features might need to change
 to work well with Pods.
 
 ## 1. Definition
@@ -130,7 +130,7 @@ sequenceDiagram
 
 ## 3. Proposal
 
-The Pods stateless router proposal requires that any ambigious path (that is not routable)
+The Pods stateless router proposal requires that any ambiguous path (that is not routable)
 will be made routable. It means that at least the following paths will have to be updated
 to introduce a routable entity (project, group, or organization).
 
diff --git a/doc/architecture/blueprints/pods/pods-feature-graphql.md b/doc/architecture/blueprints/pods/pods-feature-graphql.md
index 5f8a39c0b3ff69bc16ac35b3952fea889ff0e768..87c8391fbb3f68e9462c91188a17b0a4c2ab12dc 100644
--- a/doc/architecture/blueprints/pods/pods-feature-graphql.md
+++ b/doc/architecture/blueprints/pods/pods-feature-graphql.md
@@ -23,7 +23,7 @@ we can document the reasons for not choosing this approach.
 
 # Pods: GraphQL
 
-GitLab exensively uses GraphQL to perform efficient data query operations.
+GitLab extensively uses GraphQL to perform efficient data query operations.
 Due to its nature, GraphQL is not directly routable. GitLab calls the single
 `/api/graphql` endpoint, and only the query or mutation in the request body
 might define where the data can be accessed.
diff --git a/doc/architecture/blueprints/pods/pods-feature-router-endpoints-classification.md b/doc/architecture/blueprints/pods/pods-feature-router-endpoints-classification.md
index c672342fff93085b87326d5b7fa900c62514361e..bf0969fcb385cde2721592139ea6c6b5daf0dcae 100644
--- a/doc/architecture/blueprints/pods/pods-feature-router-endpoints-classification.md
+++ b/doc/architecture/blueprints/pods/pods-feature-router-endpoints-classification.md
@@ -29,7 +29,7 @@ hitting load balancer of a GitLab installation to a Pod that can serve it.
 Each Pod should be able to decode each request and classify which Pod
 it belongs to.
 
-GitLab currently implements houndreds of endpoints. This document tries
+GitLab currently implements hundreds of endpoints. This document tries
 to describe various techniques that can be implemented to allow the Rails
 to provide this information efficiently.
 
diff --git a/doc/architecture/blueprints/rate_limiting/index.md b/doc/architecture/blueprints/rate_limiting/index.md
index 7ecd3bc1469ab1b89950eca2316c8e97564f4fe6..22eb9a16824afcbb282210589a15b75ece585926 100644
--- a/doc/architecture/blueprints/rate_limiting/index.md
+++ b/doc/architecture/blueprints/rate_limiting/index.md
@@ -286,7 +286,7 @@ The GitLab Policy Service might be used in two different ways:
 1. The policy service feature will be used as a backend to store policies defined by users.
 
 These are two slightly different use-cases: the first one is about using
-internally-defined policies to ensure the stability / availably of a GitLab
+internally-defined policies to ensure the stability / availability of a GitLab
 instance (GitLab.com or self-managed instance). The second use-case is about
 making GitLab Policy Service a feature that users will be able to build on top
 of.
@@ -303,7 +303,7 @@ the sections of this document above.
 It is possible that GitLab Policy Service and Decoupled Limits Service can
 actually be the same thing. It, however, depends on the implementation details
 that we can't predict yet, and the decision about merging these services
-together will need to be informed by subsequent interations' feedback.
+together will need to be informed by subsequent iterations' feedback.
 
 ## Hierarchical limits
 
diff --git a/doc/architecture/blueprints/runner_scaling/index.md b/doc/architecture/blueprints/runner_scaling/index.md
index 24c6820f94a03ae35eaa81eb9ee164aca0ca0d67..9d69f97841a45e4acf4d76b3d6768cc5698bd2c6 100644
--- a/doc/architecture/blueprints/runner_scaling/index.md
+++ b/doc/architecture/blueprints/runner_scaling/index.md
@@ -273,7 +273,7 @@ interfaces.
 
 Within the `docker+autoscaling` executor the [`machineExecutor`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/267f40d871cd260dd063f7fbd36a921fedc62241/executors/docker/machine/machine.go#L19)
 type has a [`Machine`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/267f40d871cd260dd063f7fbd36a921fedc62241/helpers/docker/machine.go#L7)
-interface which it uses to aquire a VM during the common [`Prepare`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/267f40d871cd260dd063f7fbd36a921fedc62241/executors/docker/machine/machine.go#L71)
+interface which it uses to acquire a VM during the common [`Prepare`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/267f40d871cd260dd063f7fbd36a921fedc62241/executors/docker/machine/machine.go#L71)
 phase. This abstraction primarily creates, accesses and deletes VMs.
 
 There is no current abstraction for the VM autoscaling logic. It is tightly
@@ -401,7 +401,7 @@ In order to make use of the new interface, the autoscaling logic is pulled
 out of the Docker Executor and placed into a new Taskscaler library.
 
 This places the concerns of VM lifecycle, VM shape and job routing within
-the plugin. It also places the conern of VM autoscaling into a separate
+the plugin. It also places the concern of VM autoscaling into a separate
 component so it can be used by multiple Runner Executors (not just `docker+autoscaling`).
 
 Rationale: [Description of the InstanceGroup / Fleeting proposal](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28848#note_823430883)
diff --git a/doc/ci/yaml/index.md b/doc/ci/yaml/index.md
index dfd96e7d7f10cfd4ea6f7671b407547cece199b0..5d6f4e965ea3c28c2c5dc1429e5f8afeaba31808 100644
--- a/doc/ci/yaml/index.md
+++ b/doc/ci/yaml/index.md
@@ -1767,8 +1767,8 @@ deploy:
 
 **Additional details**:
 
-- Enviroments created from this job definition are assigned a [tier](../environments/index.md#deployment-tier-of-environments) based on this value.
-- Existing environments don't have their tier updated if this value is added later. Existing enviroments must have their tier updated via the [Environments API](../../api/environments.md#update-an-existing-environment).
+- Environments created from this job definition are assigned a [tier](../environments/index.md#deployment-tier-of-environments) based on this value.
+- Existing environments don't have their tier updated if this value is added later. Existing environments must have their tier updated via the [Environments API](../../api/environments.md#update-an-existing-environment).
 
 **Related topics**:
 
@@ -3008,7 +3008,7 @@ job:
 
 > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/363024) in GitLab 15.3. Supported by `release-cli` v0.12.0 or later.
 
-If the tag does not exist, the newly created tag is annotated with the message specifed by `tag_message`.
+If the tag does not exist, the newly created tag is annotated with the message specified by `tag_message`.
 If omitted, a lightweight tag is created.
 
 **Keyword type**: Job keyword. You can use it only as part of a job.
diff --git a/doc/development/application_slis/rails_request_apdex.md b/doc/development/application_slis/rails_request_apdex.md
index 2fa9f5f48690bb734ccad069b10cd0baddf5c466..8fcd725f74d2ff6031950667435ad7c9fe2c20c7 100644
--- a/doc/development/application_slis/rails_request_apdex.md
+++ b/doc/development/application_slis/rails_request_apdex.md
@@ -108,7 +108,7 @@ a case-by-case basis. Take the following into account:
    should try to keep as short as possible.
 
 1. Traffic characteristics should also be taken into account. If the
-   traffic to the endpoint is bursty, like CI traffic spinning up a
+   traffic to the endpoint sometimes bursts, like CI traffic spinning up a
    big batch of jobs hitting the same endpoint, then having these
    endpoints take five seconds is unacceptable from an infrastructure point of
    view. We cannot scale up the fleet fast enough to accommodate for
diff --git a/doc/development/caching.md b/doc/development/caching.md
index 4c91e8eba6e4c282fa07ce089fe354813b88f2c0..58ec7a775917098b9f95214caba32d599e6fd09f 100644
--- a/doc/development/caching.md
+++ b/doc/development/caching.md
@@ -80,7 +80,7 @@ indicates we have plenty of headroom.
    - Generic data can be cached for everyone.
    - You must keep this in mind when building new features.
 1. Try to preserve cache data as much as possible:
-   - Use nested caches to maintain as much cached data as possible across expiries.
+   - Use nested caches to maintain as much cached data as possible across expirations.
 1. Perform as few requests to the cache as possible:
    - This reduces variable latency caused by network issues.
    - Lower overhead for each read on the cache.
diff --git a/doc/development/database/database_debugging.md b/doc/development/database/database_debugging.md
index 0d6e9955a190f1e33860d077b784db933e38e8b9..31355ef9707b8ded944733347ef8c69829797bf7 100644
--- a/doc/development/database/database_debugging.md
+++ b/doc/development/database/database_debugging.md
@@ -53,7 +53,7 @@ bundle exec rake db:reset RAILS_ENV=test
 - `bundle exec rake db:migrate:up:main VERSION=20170926203418 RAILS_ENV=development`: Set up a migration
 - `bundle exec rake db:migrate:redo:main VERSION=20170926203418 RAILS_ENV=development`: Re-run a specific migration
 
-Replace `main` in the above commands to execute agains the `ci` database instead of `main`.
+Replace `main` in the above commands to execute against the `ci` database instead of `main`.
 
 ## Manually access the database
 
diff --git a/doc/development/gemfile.md b/doc/development/gemfile.md
index 3c7dc19da8e8bbfa665ea893ece6890cb1e4466a..562b4e105707ca735cf8bb66fedc839378ae9bc7 100644
--- a/doc/development/gemfile.md
+++ b/doc/development/gemfile.md
@@ -56,7 +56,7 @@ This means that new dependencies should, at a minimum, meet the following criter
 
 - They have an active developer community. At a minimum, a maintainer should still be active
   to merge change requests in case of emergencies.
-- There are no issues open that we know may impact the availablity or performance of GitLab.
+- There are no issues open that we know may impact the availability or performance of GitLab.
 - The project is tested using some form of test automation. The test suite must be passing
   using the Ruby version currently used by GitLab.
 - If the project uses a C extension, consider requesting an additional review from a C or MRI
diff --git a/doc/development/geo.md b/doc/development/geo.md
index 10747ea170e8d42cee1b3a92e2a94e5b36aa123d..f147574eaf5cdd373f3a5968041bb281b7f4634c 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -308,7 +308,7 @@ sequenceDiagram
   - Sidekiq queries `job_artifact_registry` in the [PostgreSQL Geo Tracking Database](#tracking-database) for the number of rows marked "pending verification" or "failed verification and ready to retry"
   - Sidekiq enqueues one or more `Geo::VerificationBatchWorker` jobs, limited by the "maximum verification concurrency" setting
 - Sidekiq picks up `Geo::VerificationBatchWorker` job
-  - Sidekiq queries `job_artifact_registry` in the PostgreSQL Geo Tracking Databasef for rows marked "pending verification"
+  - Sidekiq queries `job_artifact_registry` in the PostgreSQL Geo Tracking Database for rows marked "pending verification"
   - If the previous step yielded less than 10 rows, then Sidekiq queries `job_artifact_registry` for rows marked "failed verification and ready to retry"
   - For each row
     - Sidekiq marks it "started verification"
diff --git a/doc/development/gitlab_flavored_markdown/specification_guide/index.md b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
index 17afebcf6ee7f2c728e76581df62a584df23b532..b72f3c143508f02c7d715f611275011b600bbbbb 100644
--- a/doc/development/gitlab_flavored_markdown/specification_guide/index.md
+++ b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
@@ -458,7 +458,7 @@ You can see the RSpec shared context containing these fixtures in
 
 In some cases, fixtures may not be usable, because they do not provide control over the varying
 values. In these cases, we can introduce support for an environment variable into the production
-code, which allows us to override the randommness in our test environment when we are
+code, which allows us to override the randomness in our test environment when we are
 generating the HTML for footnote examples. Even though it is in the production code path, it has
 no effect unless it is explicitly set, therefore it is innocuous. It allows us to avoid
 the more-complex regex-based normalization described below.
@@ -1056,7 +1056,7 @@ allows control over other aspects of the snapshot example generation process.
   the example will only be run by `ee/spec/requests/api/markdown_snapshot_spec.rb`, not by
   `spec/requests/api/markdown_snapshot_spec.rb`.
 - The `api_request_override_path` field overrides the API endpoint path which is used to
-  generate the `static` HTML for the specifed example. Different endpoints can generate different
+  generate the `static` HTML for the specified example. Different endpoints can generate different
   HTML in some cases, so we want to be able to exercise different API endpoints for the same
   Markdown. By default, the `/markdown` endpoint is used.
 
diff --git a/doc/development/pipelines/internals.md b/doc/development/pipelines/internals.md
index 133d11986d552abd8e2b6bc8498283a9935ed28d..47e737df2a07b6d39bf50c59c52e73752309ec6f 100644
--- a/doc/development/pipelines/internals.md
+++ b/doc/development/pipelines/internals.md
@@ -107,7 +107,7 @@ automatically updates the Gitaly version used in the main project),
 [the Dependency proxy isn't accessible](https://gitlab.com/gitlab-org/gitlab/-/issues/332411#note_1130388163)
 and the job fails at the `Preparing the "docker+machine" executor` step.
 To work around that, we have a special workflow rule, that overrides the
-`${GITLAB_DEPENDENCY_PROXY_ADDRESS}` variable so that Depdendency proxy isn't used in that case:
+`${GITLAB_DEPENDENCY_PROXY_ADDRESS}` variable so that Dependency proxy isn't used in that case:
 
 ```yaml
 - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $GITLAB_USER_LOGIN =~ /project_\d+_bot\d*/'
diff --git a/doc/development/service_ping/troubleshooting.md b/doc/development/service_ping/troubleshooting.md
index f8fd45e6062f64e1686295d9e72f577d5b19043b..3b7cd092d9744f0e3485065cf9449ce09478ee64 100644
--- a/doc/development/service_ping/troubleshooting.md
+++ b/doc/development/service_ping/troubleshooting.md
@@ -30,7 +30,7 @@ For results about an investigation conducted into an unexpected drop in Service
 
 Check if the [export jobs](https://gitlab.com/gitlab-services/version-gitlab-com#data-export-using-pipeline-schedules) are successful.
 
-Check [Service Ping errors](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health?widget=14609989&udv=0) in the [Service Ping Health Dahsboard](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health).
+Check [Service Ping errors](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health?widget=14609989&udv=0) in the [Service Ping Health Dashboard](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health).
 
 ### Troubleshoot Google Storage layer
 
diff --git a/doc/development/spam_protection_and_captcha/exploratory_testing.md b/doc/development/spam_protection_and_captcha/exploratory_testing.md
index f6e3e6814a8e5242218116cc5de4bbdd1de83b95..1bcd336ce933bd87e6bb649190dff12dc584901b 100644
--- a/doc/development/spam_protection_and_captcha/exploratory_testing.md
+++ b/doc/development/spam_protection_and_captcha/exploratory_testing.md
@@ -353,7 +353,7 @@ GraphQL response:
 }
 ```
 
-### Scenario: allow_possible_spam feature flag enabled
+### Scenario: `allow_possible_spam` feature flag enabled
 
 With the `allow_possible_spam` feature flag enabled, the API returns a 200 response. Any
 valid request is successful and no CAPTCHA is presented, even if the request is considered
diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md
index cc62a0ebf03a91f39347d35d65810df656b289d6..72172c4b570411c9f7bd4c889b1447ef7464fb76 100644
--- a/doc/development/testing_guide/flaky_tests.md
+++ b/doc/development/testing_guide/flaky_tests.md
@@ -75,7 +75,7 @@ difficult to achieve locally.
   any table has more than 500 columns. It could pass in the merge request, but fail later in
   `master` if the order of tests changes.
 - [Example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/91016/diffs): A test asserts
-  that trying to find a record with an unexisting ID retuns an error message. The test uses an
+  that trying to find a record with a nonexistent ID returns an error message. The test uses a
   hardcoded ID that's supposed to not exist (e.g. `42`). If the test is run early in the test
   suite, it might pass as not enough records were created before it, but as soon as it would run
   later in the suite, there could be a record that actually has the ID `42`, hence the test would
@@ -207,10 +207,10 @@ The `rspec/flaky/report-suite.json` report is:
 - [Sporadic RSpec failures due to `PG::UniqueViolation`](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/28307#note_24958837): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9846>
   - Follow-up: <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10688>
   - [Capybara.reset_session! should be called before requests are blocked](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/33779): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12224>
-- FFaker generates funky data that tests are not ready to handle (and tests should be predictable so that's bad!):
+- ffaker generates funky data that tests are not ready to handle (and tests should be predictable so that's bad!):
   - [Make `spec/mailers/notify_spec.rb` more robust](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/20121): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10015>
   - [Transient failure in `spec/requests/api/commits_spec.rb`](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/27988#note_25342521): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9944>
-  - [Replace FFaker factory data with sequences](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/29643): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10184>
+  - [Replace ffaker factory data with sequences](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/29643): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10184>
   - [Transient failure in spec/finders/issues_finder_spec.rb](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/30211#note_26707685): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10404>
 
 ### Order-dependent flaky tests
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index 87d8d4935125e94d2058c76ee9cceba641e7e1dc..647fb94c4f6ac2e99413fede0f6b28b84978d69e 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -534,7 +534,7 @@ Example
   });
 ```
 
-With [enableAutoDestroy](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/100389), it is no longer neccessary to manually call `wrapper.destroy()`.
+With [enableAutoDestroy](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/100389), it is no longer necessary to manually call `wrapper.destroy()`.
 However, some mocks, spies, and fixtures do need to be torn down, and we can leverage the `afterEach` hook.
 
 Example
diff --git a/doc/integration/advanced_search/elasticsearch.md b/doc/integration/advanced_search/elasticsearch.md
index f11a649e45090a92163932e92e5551aa19419b51..a55a56b30d69d926443fcd276ad5f32710baadbd 100644
--- a/doc/integration/advanced_search/elasticsearch.md
+++ b/doc/integration/advanced_search/elasticsearch.md
@@ -559,7 +559,7 @@ The following are some available Rake tasks:
 
 | Task                                                                                                                                                    | Description                                                                                                                                                                               |
 |:--------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [`sudo gitlab-rake gitlab:elastic:info`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/tasks/gitlab/elastic.rake)                            | Outputs debugging information for the Advanced Search intergation. |
+| [`sudo gitlab-rake gitlab:elastic:info`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/tasks/gitlab/elastic.rake)                            | Outputs debugging information for the Advanced Search integration. |
 | [`sudo gitlab-rake gitlab:elastic:index`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/tasks/gitlab/elastic.rake)                            | Enables Elasticsearch indexing and run `gitlab:elastic:create_empty_index`, `gitlab:elastic:clear_index_status`, `gitlab:elastic:index_projects`, and `gitlab:elastic:index_snippets`.                          |
 | [`sudo gitlab-rake gitlab:elastic:pause_indexing`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/tasks/gitlab/elastic.rake)                            | Pauses Elasticsearch indexing. Changes are still tracked. Useful for cluster/index migrations. |
 | [`sudo gitlab-rake gitlab:elastic:resume_indexing`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/tasks/gitlab/elastic.rake)                            | Resumes Elasticsearch indexing. |
diff --git a/doc/subscriptions/gitlab_com/index.md b/doc/subscriptions/gitlab_com/index.md
index bf4c3231ac0e8477716976903225f507fe1e498d..75fbd5b7b55378ebb124e425a8b72325636239e9 100644
--- a/doc/subscriptions/gitlab_com/index.md
+++ b/doc/subscriptions/gitlab_com/index.md
@@ -465,7 +465,7 @@ If your credit card is declined when purchasing a GitLab subscription, possible
 Check with your financial institution to confirm if any of these reasons apply. If they don't
 apply, contact [GitLab Support](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=360000071293).
 
-### Unable to link subcription to namespace
+### Unable to link subscription to namespace
 
 If you cannot link a subscription to your namespace, ensure that you have the Owner role
 for that namespace.
diff --git a/doc/user/application_security/dependency_scanning/index.md b/doc/user/application_security/dependency_scanning/index.md
index 2c0b762345d576737ca579a4be48adcfadf98dd2..64f5c7d02bd00457b6b50d0e93b8ab040918af90 100644
--- a/doc/user/application_security/dependency_scanning/index.md
+++ b/doc/user/application_security/dependency_scanning/index.md
@@ -109,7 +109,7 @@ maximum of two directory levels from the repository's root. For example, the
 `gemnasium-dependency_scanning` job is enabled if a repository contains either `Gemfile`,
 `api/Gemfile`, or `api/client/Gemfile`, but not if the only supported dependency file is `api/v1/client/Gemfile`.
 
-For Java and Python, when a supported depedency file is detected, Dependency Scanning attempts to build the project and execute some Java or Python commands to get the list of dependencies. For all other projects, the lock file is parsed to obtain the list of dependencies without needing to build the project first.
+For Java and Python, when a supported dependency file is detected, Dependency Scanning attempts to build the project and execute some Java or Python commands to get the list of dependencies. For all other projects, the lock file is parsed to obtain the list of dependencies without needing to build the project first.
 
 When a supported dependency file is detected, all dependencies, including transitive dependencies are analyzed. There is no limit to the depth of nested or transitive dependencies that are analyzed.
 
diff --git a/doc/user/application_security/index.md b/doc/user/application_security/index.md
index 22965fb6bbbf2cd6e440eef052bb7118355a3944..2bb481d9ecfa04e9e865c88f8c1775ac85755051 100644
--- a/doc/user/application_security/index.md
+++ b/doc/user/application_security/index.md
@@ -242,7 +242,7 @@ reports are available to download. To download a report, select
 
 A merge request contains a security widget which displays a summary of the new results. New results are determined by comparing the findings of the merge request against the findings of the most recent completed pipeline (`success`, `failed`, `canceled` or `skipped`) for the latest commit in the target branch.
 
-If security scans have not run for the most recent completed pipeline in the target branch there is no base for comparison. The vulnerabilties from the merge request findings will be listed as new in the merge request security widget. We recommend you run a scan of the `default` (target) branch before enabling feature branch scans for your developers.
+If security scans have not run for the most recent completed pipeline in the target branch there is no base for comparison. The vulnerabilities from the merge request findings will be listed as new in the merge request security widget. We recommend you run a scan of the `default` (target) branch before enabling feature branch scans for your developers.
 
 The merge request security widget displays only a subset of the vulnerabilities in the generated JSON artifact because it contains both new and existing findings.
 
diff --git a/doc/user/clusters/agent/vulnerabilities.md b/doc/user/clusters/agent/vulnerabilities.md
index 9aaa70d477d2c14e716e5519e7d5f55339aedbaa..d9a9981d2116868c1282b09afe66ad485f4bfe53 100644
--- a/doc/user/clusters/agent/vulnerabilities.md
+++ b/doc/user/clusters/agent/vulnerabilities.md
@@ -58,7 +58,7 @@ container_scanning:
 
 ## Enable via scan execution policies
 
-To enable scanning of all images within your Kubernetes cluster via scan execution poilicies, we can use the
+To enable scanning of all images within your Kubernetes cluster via scan execution policies, we can use the
 [scan execution policy editor](../../application_security/policies/scan-execution-policies.md#scan-execution-policy-editor)
 in order to create a new schedule rule.
 
diff --git a/doc/user/group/compliance_frameworks.md b/doc/user/group/compliance_frameworks.md
index 8b5cf4ba9358cfb856d80e782278d57398ce9ac9..0e976cec86643335f5d9beb219fd677446687b8c 100644
--- a/doc/user/group/compliance_frameworks.md
+++ b/doc/user/group/compliance_frameworks.md
@@ -205,7 +205,7 @@ When creating such an MR against a project with CF pipelines, the above snippet
 This is because in the context of the target project, `$CI_COMMIT_REF_NAME` evaluates to a non-existing branch name.
 
 To get the correct context, use `$CI_MERGE_REQUEST_SOURCE_PROJECT_PATH` instead of `$CI_PROJECT_PATH`.
-This variable is only availabe in
+This variable is only available in
 [merge request pipelines](../../ci/pipelines/merge_request_pipelines.md).
 
 For example, for a configuration that supports both merge request pipelines originating in project forks and branch pipelines,