diff --git a/doc/administration/admin_area.md b/doc/administration/admin_area.md
index 98d6028966336a6f30760ad33616c708a1810d9e..ac19cde8152881045175473303fa74403717b849 100644
--- a/doc/administration/admin_area.md
+++ b/doc/administration/admin_area.md
@@ -520,7 +520,7 @@ The Sidekiq dashboard consists of the following elements:
 
 ### Logs
 
-**Log** view has been removed from the **Admin** area dashboard since the logging does not work in multi-node setups and could cause confusion for administrators by displaying partial information.
+**Log** view has been removed from the **Admin** area dashboard because logging does not work in multi-node setups and could confuse administrators by displaying partial information.
 
 For multi-node systems we recommend ingesting the logs into services like Elasticsearch and Splunk.
 
diff --git a/doc/administration/backup_restore/backup_large_reference_architectures.md b/doc/administration/backup_restore/backup_large_reference_architectures.md
index 7927f0f4159e51cdef3662e72e901725bc54f00e..89b59fef21cf1c03084a7d6e7b33ccd4370cafd2 100644
--- a/doc/administration/backup_restore/backup_large_reference_architectures.md
+++ b/doc/administration/backup_restore/backup_large_reference_architectures.md
@@ -65,7 +65,7 @@ Configure AWS Backup to back up S3 data. This can be done at the same time when
 {{< tab title="Google" >}}
 
 1. [Create a backup bucket in GCS](https://cloud.google.com/storage/docs/creating-buckets).
-1. [Create Storage Transfer Service jobs](https://cloud.google.com/storage-transfer/docs/create-transfers) which copy each GitLab object storage bucket to a backup bucket. You can create these jobs once, and [schedule them to run daily](https://cloud.google.com/storage-transfer/docs/schedule-transfer-jobs). However this mixes new and old object storage data, so files that were deleted in GitLab will still exist in the backup. This wastes storage after restore, but it is otherwise not a problem. These files would be inaccessible to GitLab users since they do not exist in the GitLab database. You can delete [some of these orphaned files](../../raketasks/cleanup.md#clean-up-project-upload-files-from-object-storage) after restore, but this clean up Rake task only operates on a subset of files.
+1. [Create Storage Transfer Service jobs](https://cloud.google.com/storage-transfer/docs/create-transfers) that copy each GitLab object storage bucket to a backup bucket. You can create these jobs once and [schedule them to run daily](https://cloud.google.com/storage-transfer/docs/schedule-transfer-jobs). However, this mixes new and old object storage data, so files that were deleted in GitLab will still exist in the backup. This wastes storage after restore, but it is otherwise not a problem. These files would be inaccessible to GitLab users because they do not exist in the GitLab database. You can delete [some of these orphaned files](../../raketasks/cleanup.md#clean-up-project-upload-files-from-object-storage) after restore, but this cleanup Rake task only operates on a subset of files.
    1. For `When to overwrite`, choose `Never`. GitLab object stored files are intended to be immutable. This selection could be helpful if a malicious actor succeeded at mutating GitLab files.
    1. For `When to delete`, choose `Never`. If you sync the backup bucket to source, then you cannot recover if files are accidentally or maliciously deleted from source.
 1. Alternatively, it is possible to backup object storage into buckets or subdirectories segregated by day. This avoids the problem of orphaned files after restore, and supports backup of file versions if needed. But it greatly increases backup storage costs. This can be done with [a Cloud Function triggered by Cloud Scheduler](https://cloud.google.com/scheduler/docs/tut-gcf-pub-sub), or with a script run by a cronjob. A partial example:
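
A minimal sketch of the cronjob approach, assuming the `gsutil` CLI is available; the bucket names below are placeholders, not the example from the linked documentation:

```shell
#!/usr/bin/env bash
# Sketch only: copy each GitLab object storage bucket into a date-stamped
# prefix of a backup bucket. All bucket names are placeholders.
set -euo pipefail

backup_bucket="gs://example-gitlab-backup"
date_prefix="$(date +%F)"

for bucket in example-gitlab-artifacts example-gitlab-lfs example-gitlab-uploads; do
  gsutil -m rsync -r "gs://${bucket}" "${backup_bucket}/${date_prefix}/${bucket}"
done
```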
@@ -456,7 +456,7 @@ First, as part of [Restore object storage data](#restore-object-storage-data), y
 1. For added assurance, you can perform
    [an integrity check on the uploaded files](../raketasks/check.md#uploaded-files-integrity):
 
-   Since these commands can take a long time because they iterate over all rows, run the following commands the GitLab Rails node,
+   These commands can take a long time because they iterate over all rows. Run the following commands on the GitLab Rails node,
    rather than a Toolbox pod:
 
    ```shell
diff --git a/doc/administration/geo/disaster_recovery/bring_primary_back.md b/doc/administration/geo/disaster_recovery/bring_primary_back.md
index 496fb96459fc2a882139b9ea591559b498bef439..17ff0b6e02465b9041f27f7cbb4ef0e79d8f3662 100644
--- a/doc/administration/geo/disaster_recovery/bring_primary_back.md
+++ b/doc/administration/geo/disaster_recovery/bring_primary_back.md
@@ -26,7 +26,7 @@ If you have any doubts about the consistency of the data on this site, we recomm
 
 ## Configure the former **primary** site to be a **secondary** site
 
-Since the former **primary** site is out of sync with the current **primary** site, the first step is to bring the former **primary** site up to date. Note, deletion of data stored on disk like
+Because the former **primary** site is out of sync with the current **primary** site, the first step is to bring the former **primary** site up to date. Note that deletion of data stored on disk like
 repositories and uploads is not replayed when bringing the former **primary** site back
 into sync, which may result in increased disk usage.
 Alternatively, you can [set up a new **secondary** GitLab instance](../setup/_index.md) to avoid this.
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
index bdf7c4a91236c70bbfed73b96bd520c9684cf9af..6b7022e64883d6a5afb3a11bef0673781bfd8620 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
@@ -60,7 +60,7 @@ What is not covered:
 {{< alert type="note" >}}
 
 Before following any of those steps, make sure you have `root` access to the
-**secondary** to promote it, since there isn't provided an automated way to
+**secondary** to promote it, because there is no automated way to
 promote a Geo replica and perform a failover.
 
 {{< /alert >}}
@@ -223,7 +223,7 @@ follow these steps to avoid unnecessary data loss:
      {{< /alert >}}
 
    - If you do not have SSH access to the **primary** site, take the machine offline and
-     prevent it from rebooting. Since there are many ways you may prefer to accomplish
+     prevent it from rebooting. Because there are many ways you may prefer to accomplish
      this, we avoid a single recommendation. You may need to:
 
      - Reconfigure the load balancers.
diff --git a/doc/administration/geo/replication/selective_synchronization.md b/doc/administration/geo/replication/selective_synchronization.md
index 491b016362c6c6653b1ee7ad9359b08809cb270c..ed0f5902234ab7dbd157c93352cb985969078b65 100644
--- a/doc/administration/geo/replication/selective_synchronization.md
+++ b/doc/administration/geo/replication/selective_synchronization.md
@@ -31,7 +31,7 @@ Selective synchronization:
 1. Does not prevent users from viewing, interacting with, cloning, and pushing to project repositories that are not included in the selective sync.
    - For more details, see [Geo proxying for secondary sites](../secondary_proxy/_index.md).
 1. Does not hide project metadata from **secondary** sites.
-   - Since Geo relies on PostgreSQL replication, all project metadata
+   - Because Geo relies on PostgreSQL replication, all project metadata
      gets replicated to **secondary** sites, but repositories that have not been
      selected will not exist on the secondary site.
 1. Does not reduce the number of events generated for the Geo event log.
diff --git a/doc/administration/geo/replication/troubleshooting/common.md b/doc/administration/geo/replication/troubleshooting/common.md
index 6ea42cade8a9f195f09c9cca6294d02053d58af7..f7b20eaf489845f3109377e0ce1483d8c3bc0ba2 100644
--- a/doc/administration/geo/replication/troubleshooting/common.md
+++ b/doc/administration/geo/replication/troubleshooting/common.md
@@ -512,7 +512,7 @@ Geo cannot reuse an existing tracking database.
 It is safest to use a fresh secondary, or reset the whole secondary by following
 [Resetting Geo secondary site replication](synchronization_verification.md#resetting-geo-secondary-site-replication).
 
-It is risky to reuse a secondary site without resetting it because the secondary site may have missed some Geo events. For example, missed deletion events lead to the secondary site permanently having data that should be deleted. Similarly, losing an event which physically moves the location of data leads to data permanently orphaned in one location, and missing in the other location until it is re-verified. This is why GitLab switched to hashed storage, since it makes moving data unnecessary. There may be other unknown problems due to lost events.
+It is risky to reuse a secondary site without resetting it because the secondary site may have missed some Geo events. For example, missed deletion events lead to the secondary site permanently having data that should be deleted. Similarly, losing an event that physically moves the location of data leads to data permanently orphaned in one location, and missing in the other location until it is re-verified. This is why GitLab switched to hashed storage, which makes moving data unnecessary. There may be other unknown problems due to lost events.
 
 If these kinds of risks do not apply, for example in a test environment, or if you know that the main Postgres database still contains all Geo events since the Geo site was added, then you can bypass this health check:
 
diff --git a/doc/administration/geo/secondary_proxy/_index.md b/doc/administration/geo/secondary_proxy/_index.md
index 7eb2f452f598a9d3246dce14a659243be12163f9..e1d70d959844cc1b8670b0ace394681531dfc424 100644
--- a/doc/administration/geo/secondary_proxy/_index.md
+++ b/doc/administration/geo/secondary_proxy/_index.md
@@ -202,12 +202,12 @@ If your secondary site uses the same external URL as the primary site:
 Considering that web traffic is proxied to the primary, the behavior of the secondary sites differs when the primary
 site is inaccessible:
 
-- UI and API traffic return the same errors as the primary (or fail if the primary is not accessible at all), since they are proxied.
+- UI and API traffic return the same errors as the primary (or fail if the primary is not accessible at all) because they are proxied.
 - For repositories that are fully up-to-date on the specific secondary site being accessed, Git read operations still work as expected,
   including authentication through HTTP(s) or SSH. However, Git reads performed by GitLab Runners will fail.
 - Git operations for repositories that are not replicated to the secondary site return the same errors
-  as the primary site, since they are proxied.
-- All Git write operations return the same errors as the primary site, since they are proxied.
+  as the primary site because they are proxied.
+- All Git write operations return the same errors as the primary site because they are proxied.
 
 ## Features accelerated by secondary Geo sites
 
diff --git a/doc/administration/geo/setup/external_database.md b/doc/administration/geo/setup/external_database.md
index a73c0d9dcfece04e90afcd94a8b760e9e58828fc..5df88777cb59faebd7746f633dbfe45f43200dc8 100644
--- a/doc/administration/geo/setup/external_database.md
+++ b/doc/administration/geo/setup/external_database.md
@@ -193,7 +193,7 @@ To configure the connection to the external read-replica database and enable Log
    gitlab_rails['db_username'] = 'gitlab'
    gitlab_rails['db_host'] = '<database_read_replica_host>'
 
-   # Disable the bundled Omnibus PostgreSQL, since we are
+   # Disable the bundled Omnibus PostgreSQL because we are
    # using an external PostgreSQL
    postgresql['enable'] = false
    ```
diff --git a/doc/administration/geo/setup/two_single_node_external_services.md b/doc/administration/geo/setup/two_single_node_external_services.md
index 9551ff4b3bf75b76d6029c15325ce4ee5df64042..f8b148f2f94ad343843b46dbff5b0e49c3514f09 100644
--- a/doc/administration/geo/setup/two_single_node_external_services.md
+++ b/doc/administration/geo/setup/two_single_node_external_services.md
@@ -119,7 +119,7 @@ To configure the connection to the external read-replica database:
    gitlab_rails['db_username'] = 'gitlab'
    gitlab_rails['db_host'] = '<database_read_replica_host>'
 
-   # Disable the bundled Omnibus PostgreSQL, since we are
+   # Disable the bundled Omnibus PostgreSQL because we are
    # using an external PostgreSQL
    postgresql['enable'] = false
    ```
diff --git a/doc/administration/incoming_email.md b/doc/administration/incoming_email.md
index cda15f44bfbd67d7cbbd19cf777249afdee3b799..42a3f020ad74a393b044711ae8df0482f81247e4 100644
--- a/doc/administration/incoming_email.md
+++ b/doc/administration/incoming_email.md
@@ -555,7 +555,7 @@ incoming_email:
     ssl: true
 
     # If you are using Microsoft Graph instead of IMAP, set this to false to retain
-    # messages in the inbox since deleted messages are auto-expunged after some time.
+    # messages in the inbox because deleted messages are auto-expunged after some time.
     delete_after_delivery: true
 
     # Whether to expunge (permanently remove) messages from the mailbox when they are marked as deleted after delivery
@@ -622,7 +622,7 @@ incoming_email:
     ssl: true
 
     # If you are using Microsoft Graph instead of IMAP, set this to false to retain
-    # messages in the inbox since deleted messages are auto-expunged after some time.
+    # messages in the inbox because deleted messages are auto-expunged after some time.
     delete_after_delivery: true
 
     # Whether to expunge (permanently remove) messages from the mailbox when they are marked as deleted after delivery
diff --git a/doc/administration/object_storage.md b/doc/administration/object_storage.md
index 8d09dda53bdc915c134334829a982e1c35b8b136..366a6c146beed2770ca9f2263fcaa2e14f4bfc94 100644
--- a/doc/administration/object_storage.md
+++ b/doc/administration/object_storage.md
@@ -63,7 +63,7 @@ For GitLab Helm Charts, see how to [configure the consolidated form](https://doc
 
 Configuring the object storage using the consolidated form has a number of advantages:
 
-- It can simplify your GitLab configuration since the connection details are shared
+- It can simplify your GitLab configuration because the connection details are shared
   across object types.
 - It enables the use of [encrypted S3 buckets](#encrypted-s3-buckets).
 - It [uploads files to S3 with proper `Content-MD5` headers](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/222).
@@ -267,7 +267,7 @@ When configured either with an instance profile or with the consolidated
 form, GitLab Workhorse properly uploads files to S3
 buckets that have [SSE-S3 or SSE-KMS encryption enabled by default](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html).
 AWS KMS keys and SSE-C encryption are
-[not supported since this requires sending the encryption keys in every request](https://gitlab.com/gitlab-org/gitlab/-/issues/226006).
+[not supported because this requires sending the encryption keys in every request](https://gitlab.com/gitlab-org/gitlab/-/issues/226006).
 
 #### Server-side encryption headers
 
@@ -998,7 +998,7 @@ gitlab_rails['uploads_object_store_connection'] = { 'provider' => 'AWS', 'aws_ac
 
 Although this provides flexibility in that it makes it possible for GitLab
 to store objects across different cloud providers, it also creates
-additional complexity and unnecessary redundancy. Since both GitLab
+additional complexity and unnecessary redundancy. Because both GitLab
 Rails and Workhorse components need access to object storage, the
 consolidated form avoids excessive duplication of credentials.
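
For contrast, a minimal sketch of the consolidated form in `/etc/gitlab/gitlab.rb`, with placeholder credentials, region, and bucket names; only the bucket differs per object type:

```ruby
# Sketch only: one shared connection for all object types.
# Credentials, region, and bucket names are placeholders.
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
}
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'example-artifacts'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'example-uploads'
```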