diff --git a/doc/development/database/client_side_connection_pool.md b/doc/development/database/client_side_connection_pool.md
index dc52a5514070aff5fe86e7036395ce1dc02f6713..3cd0e836a8dc9e0b7076bd1aaf47b638842b52e8 100644
--- a/doc/development/database/client_side_connection_pool.md
+++ b/doc/development/database/client_side_connection_pool.md
@@ -10,8 +10,8 @@ Ruby processes accessing the database through
 ActiveRecord, automatically calculate the connection-pool size for the
 process based on the concurrency.
 
-Because of the way [Ruby on Rails manages database
-connections](#connection-lifecycle), it is important that we have at
+Because of the way [Ruby on Rails manages database connections](#connection-lifecycle),
+it is important that we have at
 least as many connections as we have threads. While there is a 'pool'
 setting in [`database.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/database.yml.postgresql), it is not very practical because you need to
 maintain it in tandem with the number of application threads. For this
@@ -28,9 +28,8 @@ because connections are instantiated lazily.
 
 ## Troubleshooting connection-pool issues
 
-The connection-pool usage can be seen per environment in the [connection-pool
-saturation
-dashboard](https://dashboards.gitlab.net/d/alerts-sat_rails_db_connection_pool/alerts-rails_db_connection_pool-saturation-detail?orgId=1).
+The connection-pool usage can be seen per environment in the
+[connection-pool saturation dashboard](https://dashboards.gitlab.net/d/alerts-sat_rails_db_connection_pool/alerts-rails_db_connection_pool-saturation-detail?orgId=1).
 
 If the connection-pool is too small, this would manifest in
 `ActiveRecord::ConnectionTimeoutError`s from the application. Because we alert
@@ -41,8 +40,8 @@ hardcoded value (10).
 
 At this point, we need to investigate what is using more connections
 than we anticipated. To do that, we can use the
-`gitlab_ruby_threads_running_threads` metric. For example, [this
-graph](https://thanos.gitlab.net/graph?g0.range_input=1h&g0.max_source_resolution=0s&g0.expr=sum%20by%20(thread_name)%20(%20gitlab_ruby_threads_running_threads%7Buses_db_connection%3D%22yes%22%7D%20)&g0.tab=0)
+`gitlab_ruby_threads_running_threads` metric. For example,
+[this graph](https://thanos.gitlab.net/graph?g0.range_input=1h&g0.max_source_resolution=0s&g0.expr=sum%20by%20(thread_name)%20(%20gitlab_ruby_threads_running_threads%7Buses_db_connection%3D%22yes%22%7D%20)&g0.tab=0)
 shows all running threads that connect to the database by their
 name. Threads labeled `puma worker` or `sidekiq_worker_thread` are
 the threads that define `Gitlab::Runtime.max_threads` so those are
diff --git a/doc/development/database/loose_foreign_keys.md b/doc/development/database/loose_foreign_keys.md
index e28237ca5fca0c8180f6a21829f182cfce6dcfc9..8dbccf048d77614cd26a5053da8036f853db78ac 100644
--- a/doc/development/database/loose_foreign_keys.md
+++ b/doc/development/database/loose_foreign_keys.md
@@ -221,8 +221,8 @@ ON DELETE CASCADE;
 ```
 
 The migration must run after the `DELETE` trigger is installed and the loose
-foreign key definition is deployed. As such, it must be a [post-deployment
-migration](post_deployment_migrations.md) dated after the migration for the
+foreign key definition is deployed. As such, it must be a
+[post-deployment migration](post_deployment_migrations.md) dated after the migration for the
 trigger. If the foreign key is deleted earlier, there is a good chance of
 introducing data inconsistency which needs manual cleanup:
 
diff --git a/doc/development/database/rename_database_tables.md b/doc/development/database/rename_database_tables.md
index cbcbd507204fd0abcc48588662763998f1c07b4e..4a3b9df0c33e7b0f6dbfa49383774642b7b8eb64 100644
--- a/doc/development/database/rename_database_tables.md
+++ b/doc/development/database/rename_database_tables.md
@@ -81,8 +81,8 @@ Execute a standard migration (not a post-migration):
 when naming indexes, so there is a possibility that not all indexes are properly renamed. After running
 the migration locally, check if there are inconsistently named indexes (`db/structure.sql`). Those can be
 renamed manually in a separate migration, which can be also part of the release M.N+1.
-- Foreign key columns might still contain the old table name. For smaller tables, follow our [standard column
-rename process](avoiding_downtime_in_migrations.md#renaming-columns)
+- Foreign key columns might still contain the old table name. For smaller tables, follow our
+  [standard column rename process](avoiding_downtime_in_migrations.md#renaming-columns).
 - Avoid renaming database tables which are using with triggers.
 - Table modifications (add or remove columns) are not allowed during the rename process, please make sure that all changes to the table happen before the rename migration is started (or in the next release).
 - As the index names might change, verify that the model does not use bulk insert
diff --git a/doc/development/database/strings_and_the_text_data_type.md b/doc/development/database/strings_and_the_text_data_type.md
index 73e023f8d454a2d065b37c0cbe1e1ef07873ef81..ee74e57ed32acb5ff53ae21b6181ccc1b27b2750 100644
--- a/doc/development/database/strings_and_the_text_data_type.md
+++ b/doc/development/database/strings_and_the_text_data_type.md
@@ -148,8 +148,9 @@ to update the `title_html` with a title that has more than 1024 characters, the
 a database error.
 
 Adding or removing a constraint to an existing attribute requires that any application changes are
-deployed _first_, [otherwise servers still in the old version of the application may try to update the
-attribute with invalid values](../multi_version_compatibility.md#ci-artifact-uploads-were-failing).
+deployed _first_,
+otherwise servers still in the old version of the application
+[may try to update the attribute with invalid values](../multi_version_compatibility.md#ci-artifact-uploads-were-failing).
 For these reasons, `add_text_limit` should run in a post-deployment migration.
 
 Still in our example, for the 13.0 milestone (current), consider that the following validation
diff --git a/doc/development/database/understanding_explain_plans.md b/doc/development/database/understanding_explain_plans.md
index 49babde737a8daeedc0efc9b63499819ebedee42..446a84d5232f2f1df42573a225aeec92d33c647c 100644
--- a/doc/development/database/understanding_explain_plans.md
+++ b/doc/development/database/understanding_explain_plans.md
@@ -252,8 +252,8 @@ A scan on an index that required retrieving some data from the table.
 
 Bitmap scans fall between sequential scans and index scans. These are typically
 used when we would read too much data from an index scan, but too little to
-perform a sequential scan. A bitmap scan uses what is known as a [bitmap
-index](https://en.wikipedia.org/wiki/Bitmap_index) to perform its work.
+perform a sequential scan. A bitmap scan uses what is known as a
+[bitmap index](https://en.wikipedia.org/wiki/Bitmap_index) to perform its work.
 
 The [source code of PostgreSQL](https://gitlab.com/postgres/postgres/blob/REL_11_STABLE/src/include/nodes/plannodes.h#L441)
 states the following on bitmap scans:
@@ -794,8 +794,8 @@ Execution time: 0.113 ms
 
 ### ChatOps
 
-[GitLab team members can also use our ChatOps solution, available in Slack using the
-`/chatops` slash command](../chatops_on_gitlabcom.md).
+GitLab team members can also use our ChatOps solution, available in Slack
+using the [`/chatops` slash command](../chatops_on_gitlabcom.md).
 
 NOTE:
 While ChatOps is still available, the recommended way to generate execution plans is to use [Database Lab Engine](#database-lab-engine).
diff --git a/doc/development/sql.md b/doc/development/sql.md
index 8553e2a55009a31f92c3712c2f27afb0492f9e57..7101bf7fb4b3dc55dc9170a81e298071c7407767 100644
--- a/doc/development/sql.md
+++ b/doc/development/sql.md
@@ -79,8 +79,9 @@ ON table_name
 USING GIN(column_name gin_trgm_ops);
 ```
 
-The key here is the `GIN(column_name gin_trgm_ops)` part. This creates a [GIN
-index](https://www.postgresql.org/docs/current/gin.html) with the operator class set to `gin_trgm_ops`. These indexes
+The key here is the `GIN(column_name gin_trgm_ops)` part. This creates a
+[GIN index](https://www.postgresql.org/docs/current/gin.html)
+with the operator class set to `gin_trgm_ops`. These indexes
 _can_ be used by `ILIKE` / `LIKE` and can lead to greatly improved performance.
 One downside of these indexes is that they can easily get quite large (depending
 on the amount of data indexed).