diff --git a/doc/administration/backup_restore/backup_gitlab.md b/doc/administration/backup_restore/backup_gitlab.md
index af00831ca5487ab9a2d176781341c2f4c98c93ef..52f4a855ec49e694dda1269433a9281a0758aca4 100644
--- a/doc/administration/backup_restore/backup_gitlab.md
+++ b/doc/administration/backup_restore/backup_gitlab.md
@@ -168,10 +168,10 @@ including:
 - CI/CD job output logs
 - CI/CD job artifacts
 - LFS objects
-- Terraform states ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/331806) in GitLab 14.7)
+- Terraform states
 - Container registry images
 - GitLab Pages content
-- Packages ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/332006) in GitLab 14.7)
+- Packages
 - Snippets
 - [Group wikis](../../user/project/wiki/group.md)
 - Project-level Secure Files ([introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121142) in GitLab 16.1)
@@ -563,21 +563,16 @@ sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_SERVER_SIDE=tr
 
 #### Back up Git repositories concurrently
 
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37158) in GitLab 13.3.
-> - [Concurrent restore introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/69330) in GitLab 14.3
-
 When using [multiple repository storages](../repository_storage_paths.md),
 repositories can be backed up or restored concurrently to help fully use CPU time. The
 following variables are available to modify the default behavior of the Rake
 task:
 
 - `GITLAB_BACKUP_MAX_CONCURRENCY`: The maximum number of projects to back up at
-  the same time. Defaults to the number of logical CPUs (in GitLab 14.1 and
-  earlier, defaults to `1`).
+  the same time. Defaults to the number of logical CPUs.
 - `GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY`: The maximum number of projects to
   back up at the same time on each storage. This allows the repository backups
-  to be spread across storages. Defaults to `2` (in GitLab 14.1 and earlier,
-  defaults to `1`).
+  to be spread across storages. Defaults to `2`.
 
 For example, with 4 repository storages:
 
@@ -599,8 +594,6 @@ sudo -u git -H bundle exec rake gitlab:backup:create GITLAB_BACKUP_MAX_CONCURREN
 
 #### Incremental repository backups
 
-> - Introduced in GitLab 14.9 [with a flag](../feature_flags.md) named `incremental_repository_backup`. Disabled by default.
-> - [Enabled on self-managed](https://gitlab.com/gitlab-org/gitlab/-/issues/355945) in GitLab 14.10.
 > - `PREVIOUS_BACKUP` option [introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4184) in GitLab 15.0.
 > - Server-side support for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
 
@@ -617,26 +610,17 @@ support incremental backups for all subtasks.
 
 Incremental repository backups can be faster than full repository backups because they only pack changes since the last backup into the backup bundle for each repository.
 The incremental backup archives are not linked to each other: each archive is a self-contained backup of the instance. There must be an existing backup
-to create an incremental backup from:
+to create an incremental backup from.
 
-- In GitLab 14.9 and 14.10, use the `BACKUP=<backup-id>` option to choose the backup to use. The chosen previous backup is overwritten.
-- In GitLab 15.0 and later, use the `PREVIOUS_BACKUP=<backup-id>` option to choose the backup to use. By default, a backup file is created
-  as documented in the [Backup ID](index.md#backup-id) section. You can override the `<backup-id>` portion of the filename by setting the
-  [`BACKUP` environment variable](#backup-filename).
+Use the `PREVIOUS_BACKUP=<backup-id>` option to choose the backup to use. By default, a backup file is created
+as documented in the [Backup ID](index.md#backup-id) section. You can override the `<backup-id>` portion of the filename by setting the
+[`BACKUP` environment variable](#backup-filename).
 
 To create an incremental backup, run:
 
-- In GitLab 15.0 or later:
-
-  ```shell
-  sudo gitlab-backup create INCREMENTAL=yes PREVIOUS_BACKUP=<backup-id>
-  ```
-
-- In GitLab 14.9 and 14.10:
-
-  ```shell
-  sudo gitlab-backup create INCREMENTAL=yes BACKUP=<backup-id>
-  ```
+```shell
+sudo gitlab-backup create INCREMENTAL=yes PREVIOUS_BACKUP=<backup-id>
+```
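+
+For example, to also override the `<backup-id>` portion of the new backup's filename as described above, you would typically combine `PREVIOUS_BACKUP` with the `BACKUP` variable (the values shown here are placeholders):
+
+```shell
+sudo gitlab-backup create INCREMENTAL=yes PREVIOUS_BACKUP=<previous-backup-id> BACKUP=<new-backup-id>
+```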
 
 To create an [untarred](#skipping-tar-creation) incremental backup from a tarred backup, use `SKIP=tar`:
 
@@ -740,8 +724,6 @@ For Linux package (Omnibus):
 
 ##### S3 Encrypted Buckets
 
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/64765) in GitLab 14.3.
-
 AWS supports these [modes for server side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html):
 
 - Amazon S3-Managed Keys (SSE-S3)
@@ -982,8 +964,6 @@ For self-compiled installations:
 
 ##### Using Azure Blob storage
 
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/25877) in GitLab 13.4.
-
 ::Tabs
 
 :::TabTitle Linux package (Omnibus)
@@ -1334,10 +1314,6 @@ for more details on what these parameters do.
 
 #### `gitaly-backup` for repository backup and restore
 
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/333034) in GitLab 14.2.
-> - [Deployed behind a feature flag](../../user/feature_flags.md), enabled by default.
-> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/333034) in GitLab 14.10. [Feature flag `gitaly_backup`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/83254) removed.
-
 The `gitaly-backup` binary is used by the backup Rake task to create and restore repository backups from Gitaly.
 `gitaly-backup` replaces the previous backup method that directly calls RPCs on Gitaly from GitLab.
 
diff --git a/doc/administration/backup_restore/restore_gitlab.md b/doc/administration/backup_restore/restore_gitlab.md
index bd1e87b532ccea15a2670af48d0371ad84292321..e17330b183bbe4ece6b1130fb13da8b6a26223d8 100644
--- a/doc/administration/backup_restore/restore_gitlab.md
+++ b/doc/administration/backup_restore/restore_gitlab.md
@@ -137,7 +137,7 @@ sudo gitlab-ctl restart
 sudo gitlab-rake gitlab:check SANITIZE=true
 ```
 
-In GitLab 13.1 and later, check [database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets)
+Verify that the [database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets),
 especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is
 the target for the restore.
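+
+For example, on a Linux package installation, the linked check is typically run with:
+
+```shell
+sudo gitlab-rake gitlab:doctor:secrets
+```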
 
@@ -334,8 +334,6 @@ The `force=yes` environment variable also disables these prompts.
 
 ### Excluding tasks on restore
 
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/19347) in GitLab 14.10.
-
 You can exclude specific tasks on restore by adding the environment variable `SKIP`, whose values are a comma-separated list of the following options:
 
 - `db` (database)
diff --git a/doc/administration/backup_restore/troubleshooting_backup_gitlab.md b/doc/administration/backup_restore/troubleshooting_backup_gitlab.md
index 6fc7ee4838f1993310bfb6318bf078c2fc21272a..3add449d9ed8eb9d1c9300a3e2865ece9602a561 100644
--- a/doc/administration/backup_restore/troubleshooting_backup_gitlab.md
+++ b/doc/administration/backup_restore/troubleshooting_backup_gitlab.md
@@ -63,25 +63,13 @@ after which users must reactivate 2FA.
 
 1. Enter the database console:
 
-   For the Linux package (Omnibus) GitLab 14.1 and earlier:
-
-   ```shell
-   sudo gitlab-rails dbconsole
-   ```
-
-   For the Linux package (Omnibus) GitLab 14.2 and later:
+   For the Linux package (Omnibus):
 
    ```shell
    sudo gitlab-rails dbconsole --database main
    ```
 
-   For self-compiled installations, GitLab 14.1 and earlier:
-
-   ```shell
-   sudo -u git -H bundle exec rails dbconsole -e production
-   ```
-
-   For self-compiled installations, GitLab 14.2 and later:
+   For self-compiled installations:
 
    ```shell
    sudo -u git -H bundle exec rails dbconsole -e production --database main
@@ -116,25 +104,13 @@ You may need to reconfigure or restart GitLab for the changes to take effect.
 
 1. Enter the database console:
 
-   For the Linux package (Omnibus) GitLab 14.1 and earlier:
-
-   ```shell
-   sudo gitlab-rails dbconsole
-   ```
-
-   For the Linux package (Omnibus) GitLab 14.2 and later:
+   For the Linux package (Omnibus):
 
    ```shell
    sudo gitlab-rails dbconsole --database main
    ```
 
-   For self-compiled installations, GitLab 14.1 and earlier:
-
-   ```shell
-   sudo -u git -H bundle exec rails dbconsole -e production
-   ```
-
-   For self-compiled installations, GitLab 14.2 and later:
+   For self-compiled installations:
 
    ```shell
    sudo -u git -H bundle exec rails dbconsole -e production --database main
@@ -165,25 +141,13 @@ You may need to reconfigure or restart GitLab for the changes to take effect.
 
 1. Enter the database console:
 
-   For the Linux package (Omnibus) GitLab 14.1 and earlier:
-
-   ```shell
-   sudo gitlab-rails dbconsole
-   ```
-
-   For the Linux package (Omnibus) GitLab 14.2 and later:
+   For the Linux package (Omnibus):
 
    ```shell
    sudo gitlab-rails dbconsole --database main
    ```
 
-   For self-compiled installations, GitLab 14.1 and earlier:
-
-   ```shell
-   sudo -u git -H bundle exec rails dbconsole -e production
-   ```
-
-   For self-compiled installations, GitLab 14.2 and later:
+   For self-compiled installations:
 
    ```shell
    sudo -u git -H bundle exec rails dbconsole -e production --database main
@@ -220,25 +184,13 @@ You should verify that the secrets are the root cause before deleting any data.
 
 1. Enter the database console:
 
-   For the Linux package (Omnibus) GitLab 14.1 and earlier:
-
-   ```shell
-   sudo gitlab-rails dbconsole
-   ```
-
-   For the Linux package (Omnibus) GitLab 14.2 and later:
+   For the Linux package (Omnibus):
 
    ```shell
    sudo gitlab-rails dbconsole --database main
    ```
 
-   For self-compiled installations, GitLab 14.1 and earlier:
-
-   ```shell
-   sudo -u git -H bundle exec rails dbconsole -e production
-   ```
-
-   For self-compiled installations, GitLab 14.2 and later:
+   For self-compiled installations:
 
    ```shell
    sudo -u git -H bundle exec rails dbconsole -e production --database main
@@ -355,30 +307,18 @@ Truncate the filenames in the `uploads` table:
 
 1. Enter the database console:
 
-   For the Linux package (Omnibus) GitLab 14.2 and later:
+   For the Linux package (Omnibus):
 
    ```shell
    sudo gitlab-rails dbconsole --database main
    ```
 
-   For the Linux package (Omnibus) GitLab 14.1 and earlier:
-
-   ```shell
-   sudo gitlab-rails dbconsole
-   ```
-
-   For self-compiled installations, GitLab 14.2 and later:
+   For self-compiled installations:
 
    ```shell
    sudo -u git -H bundle exec rails dbconsole -e production --database main
    ```
 
-   For self-compiled installations, GitLab 14.1 and earlier:
-
-   ```shell
-   sudo -u git -H bundle exec rails dbconsole -e production
-   ```
-
 1. Search the `uploads` table for filenames longer than 246 characters:
 
    The following query selects the `uploads` records with filenames longer than 246 characters in batches of 0 to 10000. This improves the performance on large GitLab instances with tables having thousands of records.
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index 0d05f48fde8335c5a9b58d29eeaaff5900d6d019..57360f6a6e261a581beb97c755bd90b4edbf9f74 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -295,25 +295,6 @@ changing Git remotes and API URLs.
    This command uses the changed `external_url` configuration defined
    in `/etc/gitlab/gitlab.rb`.
 
-1. For GitLab 12.0 through 12.7, you may need to update the **primary**
-   site's name in the database. This bug has been fixed in GitLab 12.8.
-
-   To determine if you need to do this, search for the
-   `gitlab_rails["geo_node_name"]` setting in your `/etc/gitlab/gitlab.rb`
-   file. If it is commented out with `#` or not found at all, then you
-   need to update the **primary** site's name in the database. You can search for it
-   like so:
-
-   ```shell
-   grep "geo_node_name" /etc/gitlab/gitlab.rb
-   ```
-
-   To update the **primary** site's name in the database:
-
-   ```shell
-   gitlab-rails runner 'Gitlab::Geo.primary_node.update!(name: GeoNode.current_node_name)'
-   ```
-
 1. Verify you can connect to the newly promoted **primary** using its URL.
    If you updated the DNS records for the primary domain, these changes may
    not have yet propagated depending on the previous DNS records TTL.
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
index 5412a487cc5068a46a31e66feaaac7951f22b1a1..1574546e293b5eb6481291bcac0660f1c3608b9c 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
@@ -182,7 +182,7 @@ follow these steps to avoid unnecessary data loss:
      - Revoke object storage permissions from the **primary** site.
      - Physically disconnect a machine.
 
-### Promoting the **secondary** site running GitLab 14.5 and later
+### Promoting the **secondary** site
 
 1. SSH to every Sidekiq, PostgreSQL, and Gitaly node in the **secondary** site and run one of the following commands:
 
@@ -214,72 +214,8 @@ follow these steps to avoid unnecessary data loss:
 
 1. Verify you can connect to the newly promoted **primary** site using the URL used
    previously for the **secondary** site.
-1. If successful, the **secondary** site is now promoted to the **primary** site.
-
-### Promoting the **secondary** site running GitLab 14.4 and earlier
-
-WARNING:
-The `gitlab-ctl promote-to-primary-node` and `gitlab-ctl promoted-db` commands are
-deprecated in GitLab 14.5 and later, and [removed in GitLab 15.0](https://gitlab.com/gitlab-org/gitlab/-/issues/345207).
-Use `gitlab-ctl geo promote` instead.
-
-NOTE:
-A new **secondary** should not be added at this time. If you want to add a new
-**secondary**, do this after you have completed the entire process of promoting
-the **secondary** to the **primary**.
 
-WARNING:
-If you encounter an `ActiveRecord::RecordInvalid: Validation failed: Name has already been taken` error during this process, read
-[the troubleshooting advice](../../replication/troubleshooting/failover.md#fixing-errors-during-a-failover-or-when-promoting-a-secondary-to-a-primary-site).
-
-The `gitlab-ctl promote-to-primary-node` command cannot be used in
-conjunction with multiple servers, as it can only
-perform changes on a **secondary** with only a single machine. Instead, you must
-do this manually.
-
-WARNING:
-In GitLab 13.2 and 13.3, promoting a secondary site to a primary while the
-secondary is paused fails. Do not pause replication before promoting a
-secondary. If the site is paused, be sure to resume before promoting. This
-issue has been fixed in GitLab 13.4 and later.
-
-WARNING:
-If the secondary site [has been paused](../../../geo/index.md#pausing-and-resuming-replication), this performs
-a point-in-time recovery to the last known state.
-Data that was created on the primary while the secondary was paused is lost.
-
-1. SSH in to the PostgreSQL node in the **secondary** and promote PostgreSQL separately:
-
-   ```shell
-   sudo gitlab-ctl promote-db
-   ```
-
-1. Edit `/etc/gitlab/gitlab.rb` on every machine in the **secondary** to
-   reflect its new status as **primary** by removing any lines that enabled the
-   `geo_secondary_role`:
-
-   ```ruby
-   ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
-   geo_secondary_role['enable'] = true
-
-   ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
-   roles ['geo_secondary_role']
-   ```
-
-   After making these changes, [reconfigure GitLab](../../../restart_gitlab.md#reconfigure-a-linux-package-installation) each
-   machine so the changes take effect.
-
-1. Promote the **secondary** to **primary**. SSH into a single Rails node
-   server and execute:
-
-   ```shell
-   sudo gitlab-rake geo:set_secondary_as_primary
-   ```
-
-1. Verify you can connect to the newly promoted **primary** using the URL used
-   previously for the **secondary**.
-
-1. Success! The **secondary** has now been promoted to **primary**.
+1. If successful, the **secondary** site is now promoted to the **primary** site.
 
 ### Next steps
 
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
index 4510a410b5a05f0d4740be693a783616d1ffe62d..b93e6013988980ece64b404d23fd8a3186cf89fb 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
@@ -222,7 +222,7 @@ Note the following when promoting a secondary:
   error during this process, read
   [the troubleshooting advice](../../replication/troubleshooting/failover.md#fixing-errors-during-a-failover-or-when-promoting-a-secondary-to-a-primary-site).
 
-To promote the secondary site running GitLab 14.5 and later:
+To promote the secondary site:
 
 1. SSH in to your **secondary** site and run one of the following commands:
 
@@ -243,75 +243,6 @@ To promote the secondary site running GitLab 14.5 and later:
 
    If successful, the **secondary** site is now promoted to the **primary** site.
 
-To promote the secondary site running GitLab 14.4 and earlier:
-
-WARNING:
-The `gitlab-ctl promote-to-primary-node` and `gitlab-ctl promoted-db` commands are
-deprecated in GitLab 14.5 and later, and [removed in GitLab 15.0](https://gitlab.com/gitlab-org/gitlab/-/issues/345207).
-Use `gitlab-ctl geo promote` instead.
-
-1. SSH in to your **secondary** site and login as root:
-
-   ```shell
-   sudo -i
-   ```
-
-1. Edit `/etc/gitlab/gitlab.rb` to reflect its new status as **primary** by
-   removing any lines that enabled the `geo_secondary_role`:
-
-   ```ruby
-   ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
-   geo_secondary_role['enable'] = true
-
-   ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
-   roles ['geo_secondary_role']
-   ```
-
-1. Run the following command to list out all preflight checks and automatically
-   check if replication and verification are complete before scheduling a planned
-   failover to ensure the process goes smoothly:
-
-   NOTE:
-   In GitLab 13.7 and earlier, if you have a data type with zero items to sync,
-   this command reports `ERROR - Replication is not up-to-date` even if
-   replication is actually up-to-date. This bug was fixed in GitLab 13.8 and
-   later.
-
-   ```shell
-   gitlab-ctl promotion-preflight-checks
-   ```
-
-1. Promote the **secondary**:
-
-   NOTE:
-   In GitLab 13.7 and earlier, if you have a data type with zero items to sync,
-   this command reports `ERROR - Replication is not up-to-date` even if
-   replication is actually up-to-date. If replication and verification output
-   shows that it is complete, you can add `--skip-preflight-checks` to make the
-   command complete promotion. This bug was fixed in GitLab 13.8 and later.
-
-   ```shell
-   gitlab-ctl promote-to-primary-node
-   ```
-
-   If you have already run the [preflight checks](../planned_failover.md#preflight-checks)
-   or don't want to run them, you can skip them:
-
-   ```shell
-   gitlab-ctl promote-to-primary-node --skip-preflight-check
-   ```
-
-   You can also promote the secondary site to primary **without any further confirmation**, even when preflight checks fail:
-
-   ```shell
-   sudo gitlab-ctl promote-to-primary-node --force
-   ```
-
-1. Verify you can connect to the newly promoted **primary** site using the URL used
-   previously for the **secondary** site.
-
-   If successful, the **secondary** site is now promoted to the **primary** site.
-
 ### Next steps
 
 To regain geographic redundancy as quickly as possible, you should
diff --git a/doc/administration/geo/index.md b/doc/administration/geo/index.md
index 49bf05b76247d052627042abf114ffde3a66187d..0a95b9b163050f85e1aee5ee39a8fa12ddbb31f6 100644
--- a/doc/administration/geo/index.md
+++ b/doc/administration/geo/index.md
@@ -25,7 +25,7 @@ to clone and fetch large repositories, speeding up development and increasing th
 
 Geo secondary sites transparently proxy write requests to the primary site. All Geo sites can be configured to respond to a single GitLab URL, to deliver a consistent, seamless, and comprehensive experience whichever site the user lands on.
 
-To make sure you're using the right version of the documentation, go to [the Geo page on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/administration/geo/index.md) and choose the appropriate release from the **Switch branch/tag** dropdown list. For example, [`v13.7.6-ee`](https://gitlab.com/gitlab-org/gitlab/-/blob/v13.7.6-ee/doc/administration/geo/index.md).
+To make sure you're using the right version of the documentation, go to [the Geo page on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/administration/geo/index.md) and choose the appropriate release from the **Switch branch/tag** dropdown list. For example, [`v15.7.6-ee`](https://gitlab.com/gitlab-org/gitlab/-/blob/v15.7.6-ee/doc/administration/geo/index.md).
 
 Geo uses a set of defined terms that are described in the [Geo Glossary](glossary.md).
 Be sure to familiarize yourself with those terms.
@@ -237,8 +237,6 @@ For information on how to update your Geo sites to the latest GitLab version, se
 
 ### Pausing and resuming replication
 
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/35913) in GitLab 13.2.
-
 WARNING:
 Pausing and resuming of replication is only supported for Geo installations using a
 Linux package-managed database. External databases are not supported.
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 20766fb73197346e26dae6a506097c442ce9a560..d55ff142b104bada25d831cbb57f48d6e3eac39a 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -405,8 +405,6 @@ Selective synchronization:
 
 ### Git operations on unreplicated repositories
 
-> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/2562) in GitLab 12.10 for HTTP(S) and in GitLab 13.0 for SSH.
-
 Git clone, pull, and push operations over HTTP(S) and SSH are supported for repositories that
 exist on the **primary** site but not on **secondary** sites. This situation can occur
 when:
diff --git a/doc/administration/geo/replication/location_aware_git_url.md b/doc/administration/geo/replication/location_aware_git_url.md
index 66a195a8b17c19f6a42d8dae6a40c54b4aba7c5f..4255699af130e54e1ba58714de622a42337f4c87 100644
--- a/doc/administration/geo/replication/location_aware_git_url.md
+++ b/doc/administration/geo/replication/location_aware_git_url.md
@@ -11,7 +11,6 @@ DETAILS:
 **Offering:** Self-managed
 
 NOTE:
-Since GitLab 14.6,
 [GitLab Geo supports a location-aware URL including web UI and API traffic.](../secondary_proxy/location_aware_external_url.md)
 This configuration is recommended over the location-aware Git remote URL
 described in this document.
diff --git a/doc/administration/geo/replication/troubleshooting/common.md b/doc/administration/geo/replication/troubleshooting/common.md
index 33d5a0ce31178a5f9884f8e2738afd4eb6710147..74e2cde3f906377853570ff43a6f0d7e63f59aec 100644
--- a/doc/administration/geo/replication/troubleshooting/common.md
+++ b/doc/administration/geo/replication/troubleshooting/common.md
@@ -281,8 +281,6 @@ sudo gitlab-rake gitlab:geo:check
   Ensure you have added the secondary site in the Admin Area under **Geo > Sites** on the web interface for the **primary** site.
   Also ensure you entered the `gitlab_rails['geo_node_name']`
   when adding the secondary site in the Admin Area of the **primary** site.
-  In GitLab 12.3 and earlier, edit the secondary site in the Admin Area of the **primary**
-  site and ensure that there is a trailing `/` in the `Name` field.
 
 - Check returns `Exception: PG::UndefinedTable: ERROR:  relation "geo_nodes" does not exist`.
 
diff --git a/doc/administration/geo/replication/troubleshooting/failover.md b/doc/administration/geo/replication/troubleshooting/failover.md
index 8e3ff44a16cdd53d4753a2cfa3e2a084a8e4af5d..2e9c7c6252ebe8fbf937312f1195d35240e0eb1f 100644
--- a/doc/administration/geo/replication/troubleshooting/failover.md
+++ b/doc/administration/geo/replication/troubleshooting/failover.md
@@ -36,9 +36,7 @@ You successfully promoted this node!
 ```
 
 If you encounter this message when running `gitlab-rake geo:set_secondary_as_primary`
-or `gitlab-ctl promote-to-primary-node`, either:
-
-- Enter a Rails console and run:
+or `gitlab-ctl promote-to-primary-node`, enter a Rails console and run:
 
   ```ruby
   Rails.application.load_tasks; nil
@@ -46,10 +44,6 @@ or `gitlab-ctl promote-to-primary-node`, either:
   Rake::Task['geo:set_secondary_as_primary'].invoke
   ```
 
-- Upgrade to GitLab 12.6.3 or later if it is safe to do so. For example,
-  if the failover was just a test. A
-  [caching-related bug](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/22021) was fixed.
-
 ### Message: ``NoMethodError: undefined method `secondary?' for nil:NilClass``
 
 When [promoting a **secondary** site](../../disaster_recovery/index.md#step-3-promoting-a-secondary-site),
diff --git a/doc/administration/geo/replication/troubleshooting/replication.md b/doc/administration/geo/replication/troubleshooting/replication.md
index 8a96238b04f8a6cc393e4d845a1bfa42dc9f5ef6..691d8c2e5c7f6ea41df7a7159dee0aaf2c8c055a 100644
--- a/doc/administration/geo/replication/troubleshooting/replication.md
+++ b/doc/administration/geo/replication/troubleshooting/replication.md
@@ -187,10 +187,6 @@ to respect the CIDR format (for example, `10.0.0.1/32`).
 This happens if data is detected in the `projects` table. When one or more projects are detected, the operation
 is aborted to prevent accidental data loss. To bypass this message, pass the `--force` option to the command.
 
-In GitLab 13.4, a seed project is added when GitLab is first installed. This makes it necessary to pass `--force` even
-on a new Geo secondary site. There is an [issue to account for seed projects](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5618)
-when checking the database.
-
 ### Message: `FATAL:  could not map anonymous shared memory: Cannot allocate memory`
 
 If you see this message, it means that the secondary site's PostgreSQL tries to request memory that is higher than the available memory. There is an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/381585) that tracks this problem.
@@ -380,8 +376,6 @@ This iterates over all package files on the secondary, looking at the
 and then calculate this value on the secondary to check if they match. This
 does not change anything in the UI.
 
-For GitLab 14.4 and later:
-
 ```ruby
 # Run on secondary
 status = {}
@@ -402,28 +396,6 @@ status.keys.each {|key| puts "#{key} count: #{status[key].count}"}
 status
 ```
 
-For GitLab 14.3 and earlier:
-
-```ruby
-# Run on secondary
-status = {}
-
-Packages::PackageFile.find_each do |package_file|
-  primary_checksum = package_file.verification_checksum
-  secondary_checksum = Packages::PackageFile.hexdigest(package_file.file.path)
-  verification_status = (primary_checksum == secondary_checksum)
-
-  status[verification_status.to_s] ||= []
-  status[verification_status.to_s] << package_file.id
-end
-
-# Count how many of each value we get
-status.keys.each {|key| puts "#{key} count: #{status[key].count}"}
-
-# See the output in its entirety
-status
-```
-
 ### Failed verification of Uploads on the primary Geo site
 
 If verification of some uploads is failing on the primary Geo site with `verification_checksum = nil` and with the ``verification_failure = Error during verification: undefined method `underscore' for NilClass:Class``, this can be due to orphaned Uploads. The parent record owning the Upload (the upload's model) has somehow been deleted, but the Upload record still exists. These verification failures are false.
diff --git a/doc/administration/geo/replication/troubleshooting/synchronization.md b/doc/administration/geo/replication/troubleshooting/synchronization.md
index 823480c5661cb5d85d22963dbe35bf2b1ad40706..98052d9976e67ff18a45e3bef0e4b4847bacf82a 100644
--- a/doc/administration/geo/replication/troubleshooting/synchronization.md
+++ b/doc/administration/geo/replication/troubleshooting/synchronization.md
@@ -181,89 +181,6 @@ To solve this:
 During a [backfill](../../index.md#backfill), failures are scheduled to be retried at the end
 of the backfill queue, therefore these failures only clear up **after** the backfill completes.
 
-## Sync failure message: "Verification failed with: Error during verification: File is not checksummable"
-
-### Missing files on the Geo primary site
-
-In GitLab 14.5 and earlier, certain data types which were missing on the Geo primary site were marked as "synced" on Geo secondary sites. This was because from the perspective of Geo secondary sites, the state matched the primary site and nothing more could be done on secondary sites.
-
-Secondaries would regularly try to sync these files again by using the "verification" feature:
-
-- Verification fails since the file doesn't exist.
-- The file is marked "sync failed".
-- Sync is retried.
-- The file is marked "sync succeeded".
-- The file is marked "needs verification".
-- Repeat until the file is available again on the primary site.
-
-This can be confusing to troubleshoot, since the registry entries are moved through a logical loop by various background jobs. Also, `last_sync_failure` and `verification_failure` are empty after "sync succeeded" but before verification is retried.
-
-If you see sync failures repeatedly and alternately increase, while successes decrease and vice versa, this is likely to be caused by missing files on the primary site. You can confirm this by searching `geo.log` on secondary sites for `File is not checksummable` affecting the same files over and over.
-
-After confirming this is the problem, the files on the primary site need to be fixed. Some possible causes:
-
-- An NFS share became unmounted.
-- A disk died or became corrupted.
-- Someone unintentionally deleted a file or directory.
-- Bugs in GitLab application:
-  - A file was moved when it shouldn't have been moved.
-  - A file wasn't moved when it should have been moved.
-  - A wrong path was generated in the code.
-- A non-atomic backup was restored.
-- Services or servers or network infrastructure was interrupted/restarted during use.
-
-The appropriate action sometimes depends on the cause. For example, you can remount an NFS share. Often, a root cause may not be apparent or not useful to discover. If you have regular backups, it may be expedient to look through them and pull files from there.
-
-In some cases, a file may be determined to be of low value, and so it may be worth deleting the record.
-
-Geo itself is an excellent mitigation for files missing on the primary. If a file disappears on the primary but it was already synced to the secondary, you can grab the secondary's file. In cases like this, the `File is not checksummable` error message does not occur on Geo secondary sites, and only the primary logs this error message.
-
-This problem is more likely to show up in Geo secondary sites which were set up long after the original GitLab site. In this case, Geo is only surfacing an existing problem.
-
-This behavior affects only the following data types through GitLab 14.6:
-
-| Data type                | From version |
-| ------------------------ | ------------ |
-| Package registry         | 13.10        |
-| CI Pipeline Artifacts    | 13.11        |
-| Terraform State Versions | 13.12        |
-| Infrastructure Registry (renamed to Terraform Module Registry in GitLab 15.11) | 14.0 |
-| External MR diffs        | 14.6         |
-| LFS Objects              | 14.6         |
-| Pages Deployments        | 14.6         |
-| Uploads                  | 14.6         |
-| CI Job Artifacts         | 14.6         |
-
-[Since GitLab 14.7, files that are missing on the primary site are now treated as sync failures](https://gitlab.com/gitlab-org/gitlab/-/issues/348745)
-to make Geo visibly surface data loss risks. The sync/verification loop is
-therefore short-circuited. `last_sync_failure` is now set to `The file is missing on the Geo primary site`.
-
-### Failed syncs with GitLab-managed object storage replication
-
-There is [an issue in GitLab 14.2 through 14.7](https://gitlab.com/gitlab-org/gitlab/-/issues/299819#note_822629467)
-that affects Geo when the GitLab-managed object storage replication is used, causing blob object types to fail synchronization.
-
-Since GitLab 14.2, verification failures result in synchronization failures and cause
-a re-synchronization of these objects.
-
-As verification is not implemented for files stored in object storage (see
-[issue 13845](https://gitlab.com/gitlab-org/gitlab/-/issues/13845) for more details), this
-results in a loop that consistently fails for all objects stored in object storage.
-
-You can work around this by marking the objects as synced and succeeded verification, however
-be aware that can also mark objects that may be
-[missing from the primary](#missing-files-on-the-geo-primary-site).
-
-To do that, enter the [Rails console](../../../operations/rails_console.md)
-and run:
-
-```ruby
-Gitlab::Geo.verification_enabled_replicator_classes.each do |klass|
-  updated = klass.registry_class.failed.where(last_sync_failure: "Verification failed with: Error during verification: File is not checksummable").update_all(verification_checksum: '0000000000000000000000000000000000000000', verification_state: 2, verification_failure: nil, verification_retry_at: nil, state: 2, last_sync_failure: nil, retry_at: nil, verification_retry_count: 0, retry_count: 0)
-  pp "Updated #{updated} #{klass.replicable_name_plural}"
-end
-```
-
 ## Message: curl 18 transfer closed with outstanding read data remaining & fetch-pack: unexpected disconnect while reading sideband packet
 
 Unstable networking conditions can cause Gitaly to fail when trying to fetch large repository
diff --git a/doc/administration/geo/secondary_proxy/index.md b/doc/administration/geo/secondary_proxy/index.md
index 5dd20db041b70542b71ae23aeaec13c05b6c657b..cc4d90cc586a75dc5bc912214fb07d9e2d4ba1c4 100644
--- a/doc/administration/geo/secondary_proxy/index.md
+++ b/doc/administration/geo/secondary_proxy/index.md
@@ -10,9 +10,6 @@ DETAILS:
 **Tier:** Premium, Ultimate
 **Offering:** Self-managed
 
-> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/5914) in GitLab 14.4 [with a flag](../../feature_flags.md) named `geo_secondary_proxy`. Disabled by default.
-> - [Enabled by default for unified URLs](https://gitlab.com/gitlab-org/gitlab/-/issues/325732) in GitLab 14.6.
-> - [Disabled by default for different URLs](https://gitlab.com/gitlab-org/gitlab/-/issues/325732) in GitLab 14.6 [with a flag](../../feature_flags.md) named `geo_secondary_proxy_separate_urls`.
 > - [Enabled by default for different URLs](https://gitlab.com/gitlab-org/gitlab/-/issues/346112) in GitLab 15.1.
 
 Use Geo proxying to:
diff --git a/doc/administration/geo/setup/database.md b/doc/administration/geo/setup/database.md
index 5c9957d7a72496d8f5851aebd46c20e146add805..724970d23e6e0206cc51a8acb98210ee1427d949 100644
--- a/doc/administration/geo/setup/database.md
+++ b/doc/administration/geo/setup/database.md
@@ -616,15 +616,13 @@ On all GitLab Geo **secondary** sites:
 
 ## Multi-node database replication
 
-In GitLab 14.0, Patroni replaced `repmgr` as the supported
-[highly available PostgreSQL solution](../../postgresql/replication_and_failover.md).
-
 NOTE:
-If you still haven't [migrated from repmgr to Patroni](#migrating-from-repmgr-to-patroni) you're highly advised to do so.
+Patroni is the supported [highly available PostgreSQL solution](../../postgresql/replication_and_failover.md).
+If you have not yet [migrated from repmgr to Patroni](#migrating-from-repmgr-to-patroni), you should do so.
 
 ### Migrating from repmgr to Patroni
 
-1. Before migrating, you should ensure there is no replication lag between the **primary** and **secondary** sites and that replication is paused. In GitLab 13.2 and later, you can pause and resume replication with `gitlab-ctl geo-replication-pause` and `gitlab-ctl geo-replication-resume` on a Geo secondary database node.
+1. Before migrating, you should ensure there is no replication lag between the **primary** and **secondary** sites and that replication is paused. You can pause and resume replication with `gitlab-ctl geo-replication-pause` and `gitlab-ctl geo-replication-resume` on a Geo secondary database node.
 1. Follow the [instructions to migrate repmgr to Patroni](../../postgresql/replication_and_failover.md#switching-from-repmgr-to-patroni). When configuring Patroni on each **primary** site database node, add `patroni['replication_slots'] = { '<slot_name>' => 'physical' }`
    to `gitlab.rb` where `<slot_name>` is the name of the replication slot for your **secondary** site. This ensures that Patroni recognizes the replication slot as permanent and doesn't drop it upon restarting.
 1. If database replication to the **secondary** site was paused before migration, resume replication after Patroni is confirmed as working on the **primary** site.
diff --git a/doc/administration/geo_sites.md b/doc/administration/geo_sites.md
index efdba56d253052404c4b8acf2ff1df9509716689..e9f0a81557dac7946ed34e7886422336f46a7f33 100644
--- a/doc/administration/geo_sites.md
+++ b/doc/administration/geo_sites.md
@@ -61,8 +61,6 @@ you can decrease them.
 
 ## Set up the internal URLs
 
-> - Setting up internal URLs in secondary sites was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/77179) in GitLab 14.7.
-
+You can set up a different URL for synchronization between the primary and secondary sites.
 
 The **primary** site's Internal URL is used by **secondary** sites to contact it
@@ -90,12 +88,6 @@ breaking communication between **primary** and **secondary** sites when using
 HTTPS, customize your Internal URL to point to a load balancer with TLS
 terminated at the load balancer.
 
-WARNING:
-Starting with GitLab 13.3 and [until 13.11](https://gitlab.com/gitlab-org/gitlab/-/issues/325522),
-if you use an internal URL that is not accessible to the users, the
-OAuth authorization flow does not work properly, because users are redirected
-to the internal URL instead of the external one.
-
 ## Multiple secondary sites behind a load balancer
 
 **Secondary** sites can use identical external URLs if
diff --git a/doc/administration/maintenance_mode/index.md b/doc/administration/maintenance_mode/index.md
index 31afe3a6a4090e137566101f8c3f9be1af5ecd9d..6d95ddcc421a44898c4ae4936b02c865301dabff 100644
--- a/doc/administration/maintenance_mode/index.md
+++ b/doc/administration/maintenance_mode/index.md
@@ -10,8 +10,6 @@ DETAILS:
 **Tier:** Premium, Ultimate
 **Offering:** Self-managed
 
-> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/2149) in GitLab 13.9.
-
 Maintenance Mode allows administrators to reduce write operations to a minimum while maintenance tasks are performed. The main goal is to block all external actions that change the internal state. The internal state includes the PostgreSQL database, but especially files, Git repositories, and Container repositories.
 
 When Maintenance Mode is enabled, in-progress actions finish relatively quickly because no new actions are coming in, and internal state changes are minimal.
diff --git a/doc/development/geo.md b/doc/development/geo.md
index 81390ffc492e797bd85560ef37d9ebda81bc899f..f196e78a4176d359666b64f5d5759b675e4125c0 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -634,44 +634,6 @@ If a new feature introduces a new kind of data which is not a Git repository, or
 
 As an example, container registry data does not easily fit into the above categories. It is backed by a registry service which owns the data, and GitLab interacts with the registry service's API. So a one off approach is required for Geo support of container registry. Still, we are able to reuse much of the glue code of [the Geo self-service framework](geo/framework.md#repository-replicator-strategy).
 
-## History of communication channel
-
-The communication channel has changed since first iteration, you can
-check here historic decisions and why we moved to new implementations.
-
-### Custom code (GitLab 8.6 and earlier)
-
-In GitLab versions before 8.6, custom code is used to handle
-notification from **primary** site to **secondary** sites by HTTP
-requests.
-
-### System hooks (GitLab 8.7 to 9.5)
-
-Later, it was decided to move away from custom code and begin using
-system hooks. More people were using them, so
-many would benefit from improvements made to this communication layer.
-
-There is a specific **internal** endpoint in our API code (Grape),
-that receives all requests from this System Hooks:
-`/api/v4/geo/receive_events`.
-
-We switch and filter from each event by the `event_name` field.
-
-### Geo Log Cursor (GitLab 10.0 and up)
-
-In GitLab 10.0 and later, [System Webhooks](#system-hooks-gitlab-87-to-95) are no longer
-used and [Geo Log Cursor](#geo-log-cursor-daemon) is used instead. The Log Cursor traverses the
-`Geo::EventLog` rows to see if there are changes since the last time
-the log was checked and will handle repository updates, deletes,
-changes, and renames.
-
-The table is within the replicated database. This has two advantages over the
-old method:
-
-- Replication is synchronous and we preserve the order of events.
-- Replication of the events happen at the same time as the changes in the
-  database.
-
 ## Self-service framework
 
 If you want to add easy Geo replication of a resource you're working
diff --git a/doc/update/versions/gitlab_14_changes.md b/doc/update/versions/gitlab_14_changes.md
index ec8d886c5c4be7372b4e88ff999a81dfb4dc21a6..3b00a5c7ee85f0f8d550923803f2050f6b82ebfb 100644
--- a/doc/update/versions/gitlab_14_changes.md
+++ b/doc/update/versions/gitlab_14_changes.md
@@ -229,7 +229,7 @@ DETAILS:
   results in a loop that consistently fails for all objects stored in object storage.
 
   For information on how to fix this, see
-  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](../../administration/geo/replication/troubleshooting/synchronization.md#failed-syncs-with-gitlab-managed-object-storage-replication).
+  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](https://archives.docs.gitlab.com/14.10/ee/administration/geo/replication/troubleshooting#failed-syncs-with-gitlab-managed-object-storage-replication).
 
 ## 14.6.0
 
@@ -255,7 +255,7 @@ DETAILS:
   results in a loop that consistently fails for all objects stored in object storage.
 
   For information on how to fix this, see
-  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](../../administration/geo/replication/troubleshooting/synchronization.md#failed-syncs-with-gitlab-managed-object-storage-replication).
+  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](https://archives.docs.gitlab.com/14.10/ee/administration/geo/replication/troubleshooting#failed-syncs-with-gitlab-managed-object-storage-replication).
 
 ## 14.5.0
 
@@ -340,7 +340,7 @@ DETAILS:
   results in a loop that consistently fails for all objects stored in object storage.
 
   For information on how to fix this, see
-  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](../../administration/geo/replication/troubleshooting/synchronization.md#failed-syncs-with-gitlab-managed-object-storage-replication).
+  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](https://archives.docs.gitlab.com/14.10/ee/administration/geo/replication/troubleshooting#failed-syncs-with-gitlab-managed-object-storage-replication).
 
 ## 14.4.4
 
@@ -424,7 +424,7 @@ DETAILS:
   results in a loop that consistently fails for all objects stored in object storage.
 
   For information on how to fix this, see
-  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](../../administration/geo/replication/troubleshooting/synchronization.md#failed-syncs-with-gitlab-managed-object-storage-replication).
+  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](https://archives.docs.gitlab.com/14.10/ee/administration/geo/replication/troubleshooting#failed-syncs-with-gitlab-managed-object-storage-replication).
 
 - There is [an issue in GitLab 14.4.0 through 14.4.2](#1440) that can affect
   Geo and other features that rely on cronjobs. We recommend upgrading to GitLab 14.4.3 or later.
@@ -594,7 +594,7 @@ DETAILS:
   results in a loop that consistently fails for all objects stored in object storage.
 
   For information on how to fix this, see
-  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](../../administration/geo/replication/troubleshooting/synchronization.md#failed-syncs-with-gitlab-managed-object-storage-replication).
+  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](https://archives.docs.gitlab.com/14.10/ee/administration/geo/replication/troubleshooting#failed-syncs-with-gitlab-managed-object-storage-replication).
 
 - We found an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/336013) where the container registry replication
   wasn't fully working if you used multi-arch images. In case of a multi-arch image, only the primary architecture
@@ -699,7 +699,7 @@ DETAILS:
   results in a loop that consistently fails for all objects stored in object storage.
 
   For information on how to fix this, see
-  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](../../administration/geo/replication/troubleshooting/synchronization.md#failed-syncs-with-gitlab-managed-object-storage-replication).
+  [Troubleshooting - Failed syncs with GitLab-managed object storage replication](https://archives.docs.gitlab.com/14.10/ee/administration/geo/replication/troubleshooting#failed-syncs-with-gitlab-managed-object-storage-replication).
 
 - We found an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/336013) where the container registry replication
   wasn't fully working if you used multi-arch images. In case of a multi-arch image, only the primary architecture