diff --git a/doc/administration/geo/replication/troubleshooting/common.md b/doc/administration/geo/replication/troubleshooting/common.md
index 4b2fe6dfe0a940aff12922d083d0c34cd5451725..56aeac4d6d4f2aae0664c56041bcd6b6b7390318 100644
--- a/doc/administration/geo/replication/troubleshooting/common.md
+++ b/doc/administration/geo/replication/troubleshooting/common.md
@@ -366,7 +366,7 @@ generate an error because containers in Kubernetes do not have access to the hos
 Machine clock is synchronized ... Exception: getaddrinfo: Servname not supported for ai_socktype
 ```
 
-##### Message: `ActiveRecord::StatementInvalid: PG::ReadOnlySqlTransaction: ERROR:  cannot execute INSERT in a read-only transaction`
+##### Message: `cannot execute INSERT in a read-only transaction`
 
 When this error is encountered on a secondary site, it likely affects all usages of GitLab Rails such as `gitlab-rails` or `gitlab-rake` commands, as well the Puma, Sidekiq, and Geo Log Cursor services.
 
@@ -517,10 +517,11 @@ If these kinds of risks do not apply, for example in a test environment, or if y
 1. Under the secondary site select **Replication Details**.
 1. Select **Reverify all** for every data type.
 
-### Geo site has a database that is writable which is an indication it is not configured for replication with the primary site
+### Geo site has a database that is writable
 
 This error message refers to a problem with the database replica on a **secondary** site,
-which Geo expects to have access to. It usually means, either:
+which Geo expects to have access to. A secondary site database that is writable
+is an indication that the database is not configured for replication with the primary site. It usually means either:
 
 - An unsupported replication method was used (for example, logical replication).
 - The instructions to set up a [Geo database replication](../../setup/database.md) were not followed correctly.
diff --git a/doc/administration/geo/replication/troubleshooting/replication.md b/doc/administration/geo/replication/troubleshooting/replication.md
index e39d1dcbfac93487c49758a2ef5cb0edca360db5..2585c70eb32e4c2bf581b5310db9b7076a643ef7 100644
--- a/doc/administration/geo/replication/troubleshooting/replication.md
+++ b/doc/administration/geo/replication/troubleshooting/replication.md
@@ -77,10 +77,14 @@ increase this value if you have more **secondary** sites.
 Be sure to restart PostgreSQL for this to take effect. See the
 [PostgreSQL replication setup](../../setup/database.md#postgresql-replication) guide for more details.
 
-### Message: `FATAL:  could not start WAL streaming: ERROR:  replication slot "geo_secondary_my_domain_com" does not exist`?
+### Message: `replication slot "geo_secondary_my_domain_com" does not exist`
 
-This occurs when PostgreSQL does not have a replication slot for the
-**secondary** site by that name.
+This error occurs when PostgreSQL does not have a replication slot for the
+**secondary** site by that name:
+
+```plaintext
+FATAL:  could not start WAL streaming: ERROR:  replication slot "geo_secondary_my_domain_com" does not exist
+```
 
 You may want to rerun the [replication process](../../setup/database.md) on the **secondary** site .
 
@@ -143,7 +147,13 @@ sudo gitlab-ctl reconfigure
 To help us resolve this problem, consider commenting on
 [the issue](https://gitlab.com/gitlab-org/gitlab/-/issues/4489).
 
-### Message: `FATAL:  could not connect to the primary server: server certificate for "PostgreSQL" does not match host name`
+### Message: `server certificate for "PostgreSQL" does not match host name`
+
+You might see this error:
+
+```plaintext
+FATAL:  could not connect to the primary server: server certificate for "PostgreSQL" does not match host name
+```
 
 This happens because the PostgreSQL certificate that the Linux package automatically creates contains
 the Common Name `PostgreSQL`, but the replication is connecting to a different host and GitLab attempts to use
@@ -182,9 +192,10 @@ This happens when you have added IP addresses without a subnet mask in `postgres
 To fix this, add the subnet mask in `/etc/gitlab/gitlab.rb` under `postgresql['md5_auth_cidr_addresses']`
 to respect the CIDR format (for example, `10.0.0.1/32`).
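+
+For example, the setting in `/etc/gitlab/gitlab.rb` might look like this (the addresses shown are illustrative):
+
+```ruby
+# Each address must include a subnet mask in CIDR notation.
+postgresql['md5_auth_cidr_addresses'] = ['10.0.0.1/32', '10.0.0.2/32']
+```
+
+Run `sudo gitlab-ctl reconfigure` after changing this setting.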
 
-### Message: `Found data in the gitlabhq_production database!` when running `gitlab-ctl replicate-geo-database`
+### Message: `Found data in the gitlabhq_production database`
 
-This happens if data is detected in the `projects` table. When one or more projects are detected, the operation
+If you receive the error `Found data in the gitlabhq_production database!` when running
+`gitlab-ctl replicate-geo-database`, data was detected in the `projects` table. When one or more projects are detected, the operation
 is aborted to prevent accidental data loss. To bypass this message, pass the `--force` option to the command.
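+
+For example, if the original replication command used a slot name and primary host (the values below are placeholders), the bypassed command might look like:
+
+```shell
+# Re-run the replication command you used during setup, adding --force to skip the data check.
+sudo gitlab-ctl replicate-geo-database --slot-name=geo_secondary --host=primary.example.com --force
+```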
 
 ### Message: `FATAL:  could not map anonymous shared memory: Cannot allocate memory`
diff --git a/doc/administration/geo/replication/troubleshooting/synchronization.md b/doc/administration/geo/replication/troubleshooting/synchronization.md
index 98052d9976e67ff18a45e3bef0e4b4847bacf82a..23c891fe15d6ada87411cb3738bfa3b26984d9e4 100644
--- a/doc/administration/geo/replication/troubleshooting/synchronization.md
+++ b/doc/administration/geo/replication/troubleshooting/synchronization.md
@@ -181,10 +181,17 @@ To solve this:
 During a [backfill](../../index.md#backfill), failures are scheduled to be retried at the end
 of the backfill queue, therefore these failures only clear up **after** the backfill completes.
 
-## Message: curl 18 transfer closed with outstanding read data remaining & fetch-pack: unexpected disconnect while reading sideband packet
+## Message: `unexpected disconnect while reading sideband packet`
 
 Unstable networking conditions can cause Gitaly to fail when trying to fetch large repository
-data from the primary site. This is more likely to happen if a repository has to be
+data from the primary site. Those conditions can result in this error:
+
+```plaintext
+curl 18 transfer closed with outstanding read data remaining & fetch-pack:
+unexpected disconnect while reading sideband packet
+```
+
+This error is more likely to happen if a repository has to be
 replicated from scratch between sites.
 
 Geo retries several times, but if the transmission is consistently interrupted
diff --git a/doc/administration/gitaly/troubleshooting.md b/doc/administration/gitaly/troubleshooting.md
index ee1a7e74f1d2e133cb2eaf46e7b54e47515695c9..0d3c7ddabc08571eba63075f4d537b08c24321a0 100644
--- a/doc/administration/gitaly/troubleshooting.md
+++ b/doc/administration/gitaly/troubleshooting.md
@@ -344,9 +344,11 @@ continue to listen on the old address after a `sudo gitlab-ctl reconfigure`.
 When this occurs, run `sudo gitlab-ctl restart` to resolve the issue. This should no longer be
 necessary because [this issue](https://gitlab.com/gitlab-org/gitaly/-/issues/2521) is resolved.
 
-## Permission denied errors appearing in Gitaly logs when accessing repositories from a standalone Gitaly node
+## Errors in Gitaly logs when accessing repositories from a standalone Gitaly node
 
-If this error occurs even though file permissions are correct, it's likely that the Gitaly node is
+You might see permission-denied errors in the Gitaly logs when you access a repository
+from a standalone Gitaly node. If these errors occur even though file permissions are correct,
+it's likely that the Gitaly node is
 experiencing [clock drift](https://en.wikipedia.org/wiki/Clock_drift).
 
 Ensure that the GitLab and Gitaly nodes are synchronized and use an NTP time
@@ -438,9 +440,14 @@ To resolve this, remove the `noexec` option from the file system mount. An alter
 1. Add `gitaly['runtime_dir'] = '<PATH_WITH_EXEC_PERM>'` to `/etc/gitlab/gitlab.rb` and specify a location without `noexec` set.
 1. Run `sudo gitlab-ctl reconfigure`.
 
-## Commit signing fails with `invalid argument: signing key is encrypted` or `invalid data: tag byte does not have MSB set.`
+## Commit signing fails with `invalid argument` or `invalid data`
 
-Because Gitaly commit signing is headless and not associated with a specific user, the GPG signing key must be created without a passphrase, or the passphrase must be removed before export.
+Commit signing might fail with either of these errors:
+
+- `invalid argument: signing key is encrypted`
+- `invalid data: tag byte does not have MSB set`
+
+These errors happen because Gitaly commit signing is headless and not associated with a specific user. The GPG signing key must be created without a passphrase, or the passphrase must be removed before export.
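+
+For example, to remove the passphrase from an existing key and then export it (the key ID `ABCD1234` and file name are placeholders):
+
+```shell
+# Open the key for editing, run the "passwd" subcommand at the gpg prompt, and set an empty passphrase.
+gpg --edit-key ABCD1234
+
+# Export the secret key without a passphrase for use by Gitaly.
+gpg --export-secret-keys ABCD1234 > signing_key.gpg
+```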
 
 ## Gitaly logs show errors in `info` messages