diff --git a/doc/administration/auditor_users.md b/doc/administration/auditor_users.md
index 96bfbd88ddf5c809f4ffb357e216a447dfd50a63..5f31ed709f284fcd705d99500d2f6617eb137910 100644
--- a/doc/administration/auditor_users.md
+++ b/doc/administration/auditor_users.md
@@ -53,17 +53,16 @@ helpful:
   you can create an Auditor user and then share the credentials with those users
   to which you want to grant access.
 
-## Adding an Auditor user
+## Add an Auditor user
 
-To create a new Auditor user:
+To create an Auditor user:
 
-1. Create a new user or edit an existing one by navigating to
-   **Admin Area > Users**. The option of the access level is located in
-   the 'Access' section.
-
-   ![Admin Area Form](img/auditor_access_form.png)
-
-1. Select **Save changes** or **Create user** for the changes to take effect.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Overview > Users**.
+1. Create a new user or edit an existing one, and in the **Access** section,
+   select **Auditor**.
+1. Select **Create user** (for a new user) or **Save changes** (for an existing
+   user).
 
 To revoke Auditor permissions from a user, make them a regular user by
 following the previous steps.
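+
+If you prefer to create Auditor users from a script, the Users API may offer an
+equivalent. The following is a sketch only: it assumes your GitLab version accepts
+an `auditor` attribute on `POST /users`, so check the API reference for your
+version before relying on it.
+
+```shell
+# Hypothetical sketch: create an Auditor user through the API (requires an administrator token).
+curl --request POST --header "PRIVATE-TOKEN: <your_admin_token>" \
+     --data "email=audit@example.com&username=auditor&name=Auditor&password=<password>&auditor=true" \
+     "https://gitlab.example.com/api/v4/users"
+```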
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index c09daeec824c8036bebb49b75ffd125b331ce800..f03cd64c14e829dd716ea0cb3adcd55260aff24d 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -58,19 +58,25 @@ Feature.enable('geo_repository_verification')
 
 ## Repository verification
 
-Go to the **Admin Area > Geo** dashboard on the **primary** node and expand
-the **Verification information** section for that node to view automatic checksumming
-status for each data type. Successes are shown in green, pending work
-in gray, and failures in red.
+On the **primary** node:
 
-![Verification status](img/verification_status_primary_v14_0.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Expand the **Verification information** tab for that node to view automatic checksumming
+   status for repositories and wikis. Successes are shown in green, pending work
+   in gray, and failures in red.
 
-Go to the **Admin Area > Geo** dashboard on the **secondary** node and expand
-the **Verification information** section for that node to view automatic verification
-status for each data type. As with checksumming, successes are shown in
-green, pending work in gray, and failures in red.
+   ![Verification status](img/verification_status_primary_v14_0.png)
 
-![Verification status](img/verification_status_secondary_v14_0.png)
+On the **secondary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Expand the **Verification information** tab for that node to view automatic checksumming
+   status for repositories and wikis. Successes are shown in green, pending work
+   in gray, and failures in red.
+
+   ![Verification status](img/verification_status_secondary_v14_0.png)
 
 ## Using checksums to compare Geo nodes
 
@@ -92,11 +98,14 @@ data. The default and recommended re-verification interval is 7 days, though
 an interval as short as 1 day can be set. Shorter intervals reduce risk but
 increase load and vice versa.
 
-Go to the **Admin Area > Geo** dashboard on the **primary** node, and
-click the **Edit** button for the **primary** node to customize the minimum
-re-verification interval:
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** for the **primary** node to customize the minimum
+   re-verification interval:
 
-![Re-verification interval](img/reverification-interval.png)
+   ![Re-verification interval](img/reverification-interval.png)
 
 The automatic background re-verification is enabled by default, but you can
 disable if you need. Run the following commands in a Rails console on the
@@ -141,17 +150,19 @@ sudo gitlab-rake geo:verification:wiki:reset
 
 If the **primary** and **secondary** nodes have a checksum verification mismatch, the cause may not be apparent. To find the cause of a checksum mismatch:
 
-1. Go to the **Admin Area > Overview > Projects** dashboard on the **primary** node, find the
-   project that you want to check the checksum differences and click on the
-   **Edit** button:
-   ![Projects dashboard](img/checksum-differences-admin-projects.png)
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Overview > Projects**.
+   1. Find the project for which you want to check the checksum differences, and
+      select its name.
+   1. On the project administration page, get the **Gitaly storage name** and
+      **Gitaly relative path**.
 
-1. On the project administration page get the **Gitaly storage name**, and **Gitaly relative path**:
-   ![Project administration page](img/checksum-differences-admin-project-page.png)
+      ![Project administration page](img/checksum-differences-admin-project-page.png)
 
 1. Go to the project's repository directory on both **primary** and **secondary** nodes
    (the path is usually `/var/opt/gitlab/git-data/repositories`). Note that if `git_data_dirs`
-   is customized, check the directory layout on your server to be sure.
+   is customized, check the directory layout on your server to be sure:
 
    ```shell
    cd /var/opt/gitlab/git-data/repositories
diff --git a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png b/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png
deleted file mode 100644
index 85759d903a43cffa83b48181ec5b02b0a2ecd357..0000000000000000000000000000000000000000
Binary files a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png and /dev/null differ
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 633d787473e678d2d75b7e176f88b225ff6cd39c..5c15523ac78e5da23effdcee14cefb52c36dcd2e 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -109,13 +109,16 @@ The maintenance window won't end until Geo replication and verification is
 completely finished. To keep the window as short as possible, you should
 ensure these processes are close to 100% as possible during active use.
 
-Go to the **Admin Area > Geo** dashboard on the **secondary** node to
-review status. Replicated objects (shown in green) should be close to 100%,
-and there should be no failures (shown in red). If a large proportion of
-objects aren't yet replicated (shown in gray), consider giving the node more
-time to complete
+On the **secondary** node:
 
-![Replication status](../replication/img/geo_node_dashboard_v14_0.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes** to review the replication status.
+   Replicated objects (shown in green) should be close to 100%,
+   and there should be no failures (shown in red). If a large proportion of
+   objects aren't yet replicated (shown in gray), consider giving the node more
+   time to complete.
+
+   ![Replication status](../replication/img/geo_node_dashboard_v14_0.png)
 
 If any objects are failing to replicate, this should be investigated before
 scheduling the maintenance window. Following a planned failover, anything that
@@ -134,23 +137,26 @@ This [content was moved to another location](background_verification.md).
 
 ### Notify users of scheduled maintenance
 
-On the **primary** node, navigate to **Admin Area > Messages**, add a broadcast
-message. You can check under **Admin Area > Geo** to estimate how long it
-takes to finish syncing. An example message would be:
+On the **primary** node:
 
-> A scheduled maintenance takes place at XX:XX UTC. We expect it to take
-> less than 1 hour.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Messages**.
+1. Add a message notifying users about the maintenance window.
+   You can check under **Geo > Nodes** to estimate how long it
+   takes to finish syncing.
+1. Select **Add broadcast message**.
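+
+If you would rather announce the window from a script, the broadcast messages API
+can be used instead of the UI. The following is a sketch only; adjust the wording
+and timing to your maintenance plan:
+
+```shell
+# Sketch: post a broadcast message announcing the maintenance window (administrator token required).
+curl --request POST --header "PRIVATE-TOKEN: <your_admin_token>" \
+     --data "message=A scheduled maintenance takes place at XX:XX UTC. We expect it to take less than 1 hour." \
+     "https://gitlab.example.com/api/v4/broadcast_messages"
+```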
 
 ## Prevent updates to the **primary** node
 
 To ensure that all data is replicated to a secondary site, updates (write requests) need to
-be disabled on the primary site:
-
-1. Enable [maintenance mode](../../maintenance_mode/index.md).
-
-1. Disable non-Geo periodic background jobs on the **primary** node by navigating
-   to **Admin Area > Monitoring > Background Jobs > Cron**, pressing `Disable All`,
-   and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+be disabled on the **primary** site:
+
+1. Enable [maintenance mode](../../maintenance_mode/index.md) on the **primary** node.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
+1. On the Sidekiq dashboard, select **Cron**.
+1. Select **Disable All** to disable non-Geo periodic background jobs.
+1. Select **Enable** for the `geo_sidekiq_cron_config_worker` cron job.
    This job re-enables several other cron jobs that are essential for planned
    failover to complete successfully.
 
@@ -158,23 +164,28 @@ be disabled on the primary site:
 
 1. If you are manually replicating any data not managed by Geo, trigger the
    final replication process now.
-1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all queues except those with `geo` in the name to drop to 0.
-   These queues contain work that has been submitted by your users; failing over
-   before it is completed, causes the work to be lost.
-1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
-   following conditions to be true of the **secondary** node you are failing over to:
-
-   - All replication meters to each 100% replicated, 0% failures.
-   - All verification meters reach 100% verified, 0% failures.
-   - Database replication lag is 0ms.
-   - The Geo log cursor is up to date (0 events behind).
-
-1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-1. On the **secondary** node, use [these instructions](../../raketasks/check.md)
-   to verify the integrity of CI artifacts, LFS objects, and uploads in file
-   storage.
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+      those with `geo` in the name to drop to 0.
+      These queues contain work that has been submitted by your users; failing over
+      before it is completed causes the work to be lost. (A command-line sketch for
+      checking queue sizes follows this list.)
+   1. On the left sidebar, select **Geo > Nodes** and wait for the
+      following conditions to be true of the **secondary** node you are failing over to:
+
+      - All replication meters reach 100% replicated, 0% failures.
+      - All verification meters reach 100% verified, 0% failures.
+      - Database replication lag is 0ms.
+      - The Geo log cursor is up to date (0 events behind).
+
+1. On the **secondary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+      queues to drop to 0 queued and 0 running jobs.
+   1. [Run an integrity check](../../raketasks/check.md) to verify the integrity
+      of CI artifacts, LFS objects, and uploads in file storage.
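+
+If you want to confirm queue sizes from the command line rather than the UI, a
+rough check such as the following can help. This is a sketch only; it uses the
+standard Sidekiq Ruby API through `gitlab-rails runner` and prints every queue
+with its current size:
+
+```shell
+# Sketch: print each Sidekiq queue and its size. Non-geo queues should reach 0 before failing over.
+sudo gitlab-rails runner 'Sidekiq::Queue.all.each { |q| puts "#{q.name}: #{q.size}" }'
+```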
 
 At this point, your **secondary** node contains an up-to-date copy of everything the
 **primary** node has, meaning nothing was lost when you fail over.
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
index e19aa671b899e1cac419cd7088a0b3c74a55b0e6..4cfe781c7a4837cb38fe6b467149b5c89671d6dc 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
@@ -63,13 +63,16 @@ Before following any of those steps, make sure you have `root` access to the
 **secondary** to promote it, since there isn't provided an automated way to
 promote a Geo replica and perform a failover.
 
-On the **secondary** node, navigate to the **Admin Area > Geo** dashboard to
-review its status. Replicated objects (shown in green) should be close to 100%,
-and there should be no failures (shown in red). If a large proportion of
-objects aren't yet replicated (shown in gray), consider giving the node more
-time to complete.
+On the **secondary** node:
 
-![Replication status](../../replication/img/geo_node_dashboard_v14_0.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes** to see its status.
+   Replicated objects (shown in green) should be close to 100%,
+   and there should be no failures (shown in red). If a large proportion of
+   objects aren't yet replicated (shown in gray), consider giving the node more
+   time to complete.
+
+   ![Replication status](../../replication/img/geo_node_dashboard_v14_0.png)
 
 If any objects are failing to replicate, this should be investigated before
 scheduling the maintenance window. After a planned failover, anything that
@@ -126,11 +129,14 @@ follow these steps to avoid unnecessary data loss:
       existing Git repository with an SSH remote URL. The server should refuse
       connection.
 
-   1. On the **primary** node, disable non-Geo periodic background jobs by navigating
-      to **Admin Area > Monitoring > Background Jobs > Cron**, clicking `Disable All`,
-      and then clicking `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
-      This job will re-enable several other cron jobs that are essential for planned
-      failover to complete successfully.
+   1. On the **primary** node:
+      1. On the top bar, select **Menu >** **{admin}** **Admin**.
+      1. On the left sidebar, select **Monitoring > Background Jobs**.
+      1. On the Sidekiq dashboard, select **Cron**.
+      1. Select **Disable All** to disable non-Geo periodic background jobs.
+      1. Select **Enable** for the `geo_sidekiq_cron_config_worker` cron job.
+         This job will re-enable several other cron jobs that are essential for planned
+         failover to complete successfully.
 
 1. Finish replicating and verifying all data:
 
@@ -141,22 +147,28 @@ follow these steps to avoid unnecessary data loss:
    1. If you are manually replicating any
       [data not managed by Geo](../../replication/datatypes.md#limitations-on-replicationverification),
       trigger the final replication process now.
-   1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-      and wait for all queues except those with `geo` in the name to drop to 0.
-      These queues contain work that has been submitted by your users; failing over
-      before it is completed will cause the work to be lost.
-   1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
-      following conditions to be true of the **secondary** node you are failing over to:
-      - All replication meters to each 100% replicated, 0% failures.
-      - All verification meters reach 100% verified, 0% failures.
-      - Database replication lag is 0ms.
-      - The Geo log cursor is up to date (0 events behind).
-
-   1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-      and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-   1. On the **secondary** node, use [these instructions](../../../raketasks/check.md)
-      to verify the integrity of CI artifacts, LFS objects, and uploads in file
-      storage.
+   1. On the **primary** node:
+      1. On the top bar, select **Menu >** **{admin}** **Admin**.
+      1. On the left sidebar, select **Monitoring > Background Jobs**.
+      1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+         those with `geo` in the name to drop to 0.
+         These queues contain work that has been submitted by your users; failing over
+         before it is completed causes the work to be lost.
+      1. On the left sidebar, select **Geo > Nodes** and wait for the
+         following conditions to be true of the **secondary** node you are failing over to:
+
+         - All replication meters reach 100% replicated, 0% failures.
+         - All verification meters reach 100% verified, 0% failures.
+         - Database replication lag is 0ms.
+         - The Geo log cursor is up to date (0 events behind).
+
+   1. On the **secondary** node:
+      1. On the top bar, select **Menu >** **{admin}** **Admin**.
+      1. On the left sidebar, select **Monitoring > Background Jobs**.
+      1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+         queues to drop to 0 queued and 0 running jobs.
+      1. [Run an integrity check](../../../raketasks/check.md) to verify the integrity
+         of CI artifacts, LFS objects, and uploads in file storage.
 
    At this point, your **secondary** node will contain an up-to-date copy of everything the
    **primary** node has, meaning nothing will be lost when you fail over.
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
index 9b5c3f00040ba14a29de711db36187250cf0a23e..6caeddad51aab11987d6cf4925c2095bd52e3318 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
@@ -114,11 +114,14 @@ follow these steps to avoid unnecessary data loss:
       existing Git repository with an SSH remote URL. The server should refuse
       connection.
 
-   1. On the **primary** node, disable non-Geo periodic background jobs by navigating
-      to **Admin Area > Monitoring > Background Jobs > Cron**, clicking `Disable All`,
-      and then clicking `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
-      This job will re-enable several other cron jobs that are essential for planned
-      failover to complete successfully.
+   1. On the **primary** node:
+      1. On the top bar, select **Menu >** **{admin}** **Admin**.
+      1. On the left sidebar, select **Monitoring > Background Jobs**.
+      1. On the Sidekiq dashboard, select **Cron**.
+      1. Select **Disable All** to disable non-Geo periodic background jobs.
+      1. Select **Enable** for the `geo_sidekiq_cron_config_worker` cron job.
+         This job will re-enable several other cron jobs that are essential for planned
+         failover to complete successfully.
 
 1. Finish replicating and verifying all data:
 
@@ -129,22 +132,28 @@ follow these steps to avoid unnecessary data loss:
    1. If you are manually replicating any
       [data not managed by Geo](../../replication/datatypes.md#limitations-on-replicationverification),
       trigger the final replication process now.
-   1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-      and wait for all queues except those with `geo` in the name to drop to 0.
-      These queues contain work that has been submitted by your users; failing over
-      before it is completed will cause the work to be lost.
-   1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
-      following conditions to be true of the **secondary** node you are failing over to:
-      - All replication meters to each 100% replicated, 0% failures.
-      - All verification meters reach 100% verified, 0% failures.
-      - Database replication lag is 0ms.
-      - The Geo log cursor is up to date (0 events behind).
-
-   1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-      and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-   1. On the **secondary** node, use [these instructions](../../../raketasks/check.md)
-      to verify the integrity of CI artifacts, LFS objects, and uploads in file
-      storage.
+   1. On the **primary** node:
+      1. On the top bar, select **Menu >** **{admin}** **Admin**.
+      1. On the left sidebar, select **Monitoring > Background Jobs**.
+      1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+         those with `geo` in the name to drop to 0.
+         These queues contain work that has been submitted by your users; failing over
+         before it is completed causes the work to be lost.
+      1. On the left sidebar, select **Geo > Nodes** and wait for the
+         following conditions to be true of the **secondary** node you are failing over to:
+
+         - All replication meters reach 100% replicated, 0% failures.
+         - All verification meters reach 100% verified, 0% failures.
+         - Database replication lag is 0ms.
+         - The Geo log cursor is up to date (0 events behind).
+
+   1. On the **secondary** node:
+      1. On the top bar, select **Menu >** **{admin}** **Admin**.
+      1. On the left sidebar, select **Monitoring > Background Jobs**.
+      1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+         queues to drop to 0 queued and 0 running jobs.
+      1. [Run an integrity check](../../../raketasks/check.md) to verify the integrity
+         of CI artifacts, LFS objects, and uploads in file storage.
 
    At this point, your **secondary** node will contain an up-to-date copy of everything the
    **primary** node has, meaning nothing will be lost when you fail over.
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 8a1ea0ad3f2903722513ef5626e5e84af4f991c7..926c4c565aa65a574fe743defe728d601b1d593d 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -196,9 +196,9 @@ keys must be manually replicated to the **secondary** node.
    gitlab-ctl reconfigure
    ```
 
-1. Visit the **primary** node's **Admin Area > Geo**
-   (`/admin/geo/nodes`) in your browser.
-1. Click the **New node** button.
+1. On the **primary** node's top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **New node**.
    ![Add secondary node](img/adding_a_secondary_node_v13_3.png)
 1. Fill in **Name** with the `gitlab_rails['geo_node_name']` in
    `/etc/gitlab/gitlab.rb`. These values must always match *exactly*, character
@@ -209,7 +209,7 @@ keys must be manually replicated to the **secondary** node.
 1. Optionally, choose which groups or storage shards should be replicated by the
    **secondary** node. Leave blank to replicate all. Read more in
    [selective synchronization](#selective-synchronization).
-1. Click the **Add node** button to add the **secondary** node.
+1. Select **Add node** to add the **secondary** node.
 1. SSH into your GitLab **secondary** server and restart the services:
 
    ```shell
@@ -252,18 +252,22 @@ on the **secondary** node.
 Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
 method to be enabled. This is enabled by default, but if converting an existing node to Geo it should be checked:
 
-1. Go to **Admin Area > Settings** (`/admin/application_settings/general`) on the **primary** node.
-1. Expand "Visibility and access controls".
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > General**.
+1. Expand **Visibility and access controls**.
 1. Ensure "Enabled Git access protocols" is set to either "Both SSH and HTTP(S)" or "Only HTTP(S)".
 
 ### Step 6. Verify proper functioning of the **secondary** node
 
-Your **secondary** node is now configured!
+You can sign in to the **secondary** node with the same credentials you used with
+the **primary** node. After you sign in:
 
-You can sign in to the _secondary_ node with the same credentials you used with
-the _primary_ node. Visit the _secondary_ node's **Admin Area > Geo**
-(`/admin/geo/nodes`) in your browser to determine if it's correctly identified
-as a _secondary_ Geo node, and if Geo is enabled.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Verify that it's correctly identified as a **secondary** Geo node, and that
+   Geo is enabled.
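+
+You can also run the Geo health check from a terminal on the **secondary** node:
+
+```shell
+sudo gitlab-rake gitlab:geo:check
+```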
 
 The initial replication, or 'backfill', is probably still in progress. You
 can monitor the synchronization process on each Geo node from the **primary**
diff --git a/doc/administration/geo/replication/disable_geo.md b/doc/administration/geo/replication/disable_geo.md
index c71cf80d0c1dbad9ba9a4c97f03f4ded4ba9002c..ba01c55a157f902314c5f3b9420bcd6e53e94741 100644
--- a/doc/administration/geo/replication/disable_geo.md
+++ b/doc/administration/geo/replication/disable_geo.md
@@ -33,9 +33,12 @@ to do that.
 
 ## Remove the primary site from the UI
 
-1. Go to **Admin Area > Geo** (`/admin/geo/nodes`).
-1. Click the **Remove** button for the **primary** node.
-1. Confirm by clicking **Remove** when the prompt appears.
+To remove the **primary** site:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Remove** for the **primary** node.
+1. Confirm by selecting **Remove** when the prompt appears.
 
 ## Remove secondary replication slots
 
diff --git a/doc/administration/geo/replication/docker_registry.md b/doc/administration/geo/replication/docker_registry.md
index ad890a0788397cacf3099ad397dc08d5ad31df70..8300776721581e941269183d2c6ba69404a95652 100644
--- a/doc/administration/geo/replication/docker_registry.md
+++ b/doc/administration/geo/replication/docker_registry.md
@@ -127,7 +127,10 @@ For each application and Sidekiq node on the **secondary** site:
 
 ### Verify replication
 
-To verify Container Registry replication is working, go to **Admin Area > Geo**
-(`/admin/geo/nodes`) on the **secondary** site.
-The initial replication, or "backfill", is probably still in progress.
+To verify that Container Registry replication is working, on the **secondary** site:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+   The initial replication, or "backfill", is probably still in progress.
+
 You can monitor the synchronization process on each Geo site from the **primary** site's **Geo Nodes** dashboard in your browser.
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
index 7dd831092a3b8f351ee0c1de2c299077d6a48094..90a41ed3e1c8125bfbfd0c9f6075aee4c0a9658a 100644
--- a/doc/administration/geo/replication/object_storage.md
+++ b/doc/administration/geo/replication/object_storage.md
@@ -21,7 +21,7 @@ To have:
 
 [Read more about using object storage with GitLab](../../object_storage.md).
 
-## Enabling GitLab managed object storage replication
+## Enabling GitLab-managed object storage replication
 
 > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/10586) in GitLab 12.4.
 
@@ -31,10 +31,11 @@ This is a [**beta** feature](https://about.gitlab.com/handbook/product/#beta) an
 **Secondary** sites can replicate files stored on the **primary** site regardless of
 whether they are stored on the local file system or in object storage.
 
-To enable GitLab replication, you must:
+To enable GitLab-managed object storage replication:
 
-1. Go to **Admin Area > Geo**.
-1. Press **Edit** on the **secondary** site.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** on the **secondary** site.
 1. In the **Synchronization Settings** section, find the **Allow this secondary node to replicate content on Object Storage**
    checkbox to enable it.
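+
+If you manage Geo nodes through the API instead of the UI, the same setting can be
+toggled there. The following is a sketch only; it assumes the **secondary** site's
+ID is `2` and that your version exposes the `sync_object_storage` attribute in the
+Geo Nodes API:
+
+```shell
+# Sketch: enable object storage replication for the secondary node with ID 2.
+curl --request PUT --header "PRIVATE-TOKEN: <your_admin_token>" \
+     "https://primary.example.com/api/v4/geo_nodes/2?sync_object_storage=true"
+```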
 
diff --git a/doc/administration/geo/replication/remove_geo_site.md b/doc/administration/geo/replication/remove_geo_site.md
index a42a4c4eb4767984ca5c96c612d8a1a98f8cc0d8..274eb28dbc9d156995d642bd8cd407c4f6c3c5c0 100644
--- a/doc/administration/geo/replication/remove_geo_site.md
+++ b/doc/administration/geo/replication/remove_geo_site.md
@@ -9,7 +9,8 @@ type: howto
 
 **Secondary** sites can be removed from the Geo cluster using the Geo administration page of the **primary** site. To remove a **secondary** site:
 
-1. Go to **Admin Area > Geo** (`/admin/geo/nodes`).
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
 1. Select the **Remove** button for the **secondary** site you want to remove.
 1. Confirm by selecting **Remove** when the prompt appears.
 
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index 7c1f7cf7a8dc3e439bc23c5bb96d8de4e50323e2..c00f523957ceae65ae4629d3c390f0a3d0937745 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -25,8 +25,12 @@ Before attempting more advanced troubleshooting:
 
 ### Check the health of the **secondary** node
 
-Visit the **primary** node's **Admin Area > Geo** (`/admin/geo/nodes`) in
-your browser. We perform the following health checks on each **secondary** node
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+
+We perform the following health checks on each **secondary** node
 to help identify if something is wrong:
 
 - Is the node running?
@@ -129,7 +133,8 @@ Geo finds the current machine's Geo node name in `/etc/gitlab/gitlab.rb` by:
 - Using the `gitlab_rails['geo_node_name']` setting.
 - If that is not defined, using the `external_url` setting.
 
-This name is used to look up the node with the same **Name** in **Admin Area > Geo**.
+This name is used to look up the node with the same **Name** in the **Geo Nodes**
+dashboard.
 
 To check if the current machine has a node name that matches a node in the
 database, run the check task:
@@ -739,8 +744,11 @@ If you are able to log in to the **primary** node, but you receive this error
 when attempting to log into a **secondary**, you should check that the Geo
 node's URL matches its external URL.
 
-1. On the primary, visit **Admin Area > Geo**.
-1. Find the affected **secondary** and click **Edit**.
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Find the affected **secondary** node and select **Edit**.
 1. Ensure the **URL** field matches the value found in `/etc/gitlab/gitlab.rb`
    in `external_url "https://gitlab.example.com"` on the frontend server(s) of
    the **secondary** node.
diff --git a/doc/administration/geo/replication/tuning.md b/doc/administration/geo/replication/tuning.md
index a4aad3dec68a2272106e0a0fa936bd0bc9ea7b8c..9807f3e64448c4422a4d9404b9f50ef41ca07ef9 100644
--- a/doc/administration/geo/replication/tuning.md
+++ b/doc/administration/geo/replication/tuning.md
@@ -7,20 +7,28 @@ type: howto
 
 # Tuning Geo **(PREMIUM SELF)**
 
-## Changing the sync/verification capacity values
+You can limit the number of concurrent operations the nodes can run
+in the background.
 
-In **Admin Area > Geo** (`/admin/geo/nodes`),
-there are several variables that can be tuned to improve performance of Geo:
+## Changing the sync/verification concurrency values
 
-- Repository sync capacity
-- File sync capacity
-- Container repositories sync capacity
-- Verification capacity
+On the **primary** site:
 
-Increasing capacity values will increase the number of jobs that are scheduled.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** for the **secondary** node you want to tune.
+1. Under **Tuning settings**, there are several variables that can be tuned to
+   improve the performance of Geo:
+
+   - Repository synchronization concurrency limit
+   - File synchronization concurrency limit
+   - Container repositories synchronization concurrency limit
+   - Verification concurrency limit
+
+Increasing the concurrency values will increase the number of jobs that are scheduled.
 However, this may not lead to more downloads in parallel unless the number of
-available Sidekiq threads is also increased. For example, if repository sync
-capacity is increased from 25 to 50, you may also want to increase the number
+available Sidekiq threads is also increased. For example, if repository synchronization
+concurrency is increased from 25 to 50, you may also want to increase the number
 of Sidekiq threads from 25 to 50. See the
 [Sidekiq concurrency documentation](../../operations/extra_sidekiq_processes.md#number-of-threads)
 for more details.
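+
+If you prefer to adjust these limits outside the UI, the Geo Nodes API exposes
+equivalent settings. The following is a sketch only; it assumes the **secondary**
+node's ID is `2` and that your version accepts the `repos_max_capacity`,
+`files_max_capacity`, and `verification_max_capacity` attributes, so check the Geo
+Nodes API reference for your version:
+
+```shell
+# Sketch: raise the sync and verification concurrency limits for the secondary node with ID 2.
+curl --request PUT --header "PRIVATE-TOKEN: <your_admin_token>" \
+     "https://primary.example.com/api/v4/geo_nodes/2?repos_max_capacity=50&files_max_capacity=50&verification_max_capacity=50"
+```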
diff --git a/doc/administration/housekeeping.md b/doc/administration/housekeeping.md
index 9668b7277c2c25e1724e575dd8ea5351ab75cd9b..a89e8a2bad5e3fb201e109610c03a03521f454b5 100644
--- a/doc/administration/housekeeping.md
+++ b/doc/administration/housekeeping.md
@@ -9,25 +9,27 @@ info: To determine the technical writer assigned to the Stage/Group associated w
 GitLab supports and automates housekeeping tasks within your current repository,
 such as compressing file revisions and removing unreachable objects.
 
-## Automatic housekeeping
+## Configure housekeeping
 
 GitLab automatically runs `git gc` and `git repack` on repositories
-after Git pushes. You can change how often this happens or turn it off in
-**Admin Area > Settings > Repository** (`/admin/application_settings/repository`).
+after Git pushes.
 
-## Manual housekeeping
+You can change how often this happens or turn it off:
 
-The housekeeping function runs `repack` or `gc` depending on the
-**Housekeeping** settings configured in **Admin Area > Settings > Repository**.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Repository**.
+1. Expand **Repository maintenance**.
+1. Configure the Housekeeping options.
+1. Select **Save changes**.
 
-For example in the following scenario a `git repack -d` will be executed:
+For example, in the following scenario a `git repack -d` will be executed:
 
 - Project: pushes since GC counter (`pushes_since_gc`) = `10`
 - Git GC period = `200`
 - Full repack period = `50`
 
 When the `pushes_since_gc` value is 50 a `repack -A -d --pack-kept-objects` runs, similarly when
-the `pushes_since_gc` value is 200 a `git gc` runs.
+the `pushes_since_gc` value is 200 a `git gc` runs:
 
 - `git gc` ([man page](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-gc.html)) runs a number of housekeeping tasks,
   such as compressing file revisions (to reduce disk space and increase performance)
@@ -38,12 +40,6 @@ the `pushes_since_gc` value is 200 a `git gc` runs.
 Housekeeping also [removes unreferenced LFS files](../raketasks/cleanup.md#remove-unreferenced-lfs-files)
 from your project on the same schedule as the `git gc` operation, freeing up storage space for your project.
 
-To manually start the housekeeping process:
-
-1. In your project, go to **Settings > General**.
-1. Expand the **Advanced** section.
-1. Select **Run housekeeping**.
-
 ## How housekeeping handles pool repositories
 
 Housekeeping for pool repositories is handled differently from standard repositories.
diff --git a/doc/administration/img/auditor_access_form.png b/doc/administration/img/auditor_access_form.png
deleted file mode 100644
index c179a7d3b0a2b9cea41b9df94ee2dd47e722884b..0000000000000000000000000000000000000000
Binary files a/doc/administration/img/auditor_access_form.png and /dev/null differ
diff --git a/doc/administration/maintenance_mode/index.md b/doc/administration/maintenance_mode/index.md
index c73a49287db348cb57f1228d09ae328ffeafef4f..2f5d366f9272dc109e0138c46d16f4576771bf3f 100644
--- a/doc/administration/maintenance_mode/index.md
+++ b/doc/administration/maintenance_mode/index.md
@@ -21,10 +21,11 @@ Maintenance Mode allows most external actions that do not change internal state.
 There are three ways to enable Maintenance Mode as an administrator:
 
 - **Web UI**:
-  1. Go to **Admin Area > Settings > General**, expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
+  1. On the top bar, select **Menu >** **{admin}** **Admin**.
+  1. On the left sidebar, select **Settings > General**.
+  1. Expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
      You can optionally add a message for the banner as well.
-
-  1. Click **Save** for the changes to take effect.
+  1. Select **Save changes**.
 
 - **API**:
 
@@ -44,9 +45,11 @@ There are three ways to enable Maintenance Mode as an administrator:
 There are three ways to disable Maintenance Mode:
 
 - **Web UI**:
-  1. Go to **Admin Area > Settings > General**, expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
-
-  1. Click **Save** for the changes to take effect.
+  1. On the top bar, select **Menu >** **{admin}** **Admin**.
+  1. On the left sidebar, select **Settings > General**.
+  1. Expand **Maintenance Mode**, and toggle **Enable Maintenance Mode** off.
+  1. Select **Save changes**.
 
 - **API**:
 
@@ -166,7 +169,10 @@ Background jobs (cron jobs, Sidekiq) continue running as is, because background
 [During a planned Geo failover](../geo/disaster_recovery/planned_failover.md#prevent-updates-to-the-primary-node),
 it is recommended that you disable all cron jobs except for those related to Geo.
 
-You can monitor queues and disable jobs in **Admin Area > Monitoring > Background Jobs**.
+To monitor queues and disable jobs:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
 
 ### Incident management
 
diff --git a/doc/administration/operations/extra_sidekiq_processes.md b/doc/administration/operations/extra_sidekiq_processes.md
index ed89d11da75e7420caca728f77bb1d23a49814cc..b910a789d29197ef5e238ea813ff0a6edeefa52d 100644
--- a/doc/administration/operations/extra_sidekiq_processes.md
+++ b/doc/administration/operations/extra_sidekiq_processes.md
@@ -87,10 +87,10 @@ To start multiple processes:
    sudo gitlab-ctl reconfigure
    ```
 
-After the extra Sidekiq processes are added, navigate to
-**Admin Area > Monitoring > Background Jobs** (`/admin/background_jobs`) in GitLab.
+To view the Sidekiq processes in GitLab:
 
-![Multiple Sidekiq processes](img/sidekiq-cluster.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
 
 ## Negate settings
 
diff --git a/doc/administration/operations/fast_ssh_key_lookup.md b/doc/administration/operations/fast_ssh_key_lookup.md
index 8acc40da4ab33d8d2793befe72ccbb40447f315e..bb0756cf94823d88a6448c94eee9116f4dd48dc4 100644
--- a/doc/administration/operations/fast_ssh_key_lookup.md
+++ b/doc/administration/operations/fast_ssh_key_lookup.md
@@ -104,11 +104,13 @@ In the case of lookup failures (which are common), the `authorized_keys`
 file is still scanned. So Git SSH performance would still be slow for many
 users as long as a large file exists.
 
-You can disable any more writes to the `authorized_keys` file by unchecking
-`Write to "authorized_keys" file` in the **Admin Area > Settings > Network > Performance optimization** of your GitLab
-installation.
+To disable further writes to the `authorized_keys` file:
 
-![Write to authorized keys setting](img/write_to_authorized_keys_setting.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Network**.
+1. Expand **Performance optimization**.
+1. Clear the **Write to "authorized_keys" file** checkbox.
+1. Select **Save changes**.
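+
+If you need to script this change, it can also be made from the command line. The
+following is a sketch only; it assumes the setting is stored as the
+`authorized_keys_enabled` application setting, so verify the attribute name for
+your version before using it:
+
+```shell
+# Sketch: stop GitLab from writing to the authorized_keys file.
+sudo gitlab-rails runner "ApplicationSetting.current.update!(authorized_keys_enabled: false)"
+```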
 
 Again, confirm that SSH is working by removing your user's SSH key in the UI,
 adding a new one, and attempting to pull a repository.
diff --git a/doc/administration/operations/img/sidekiq-cluster.png b/doc/administration/operations/img/sidekiq-cluster.png
deleted file mode 100644
index 3899385eb8f7d8cfb354d43599ec01342eae50d1..0000000000000000000000000000000000000000
Binary files a/doc/administration/operations/img/sidekiq-cluster.png and /dev/null differ
diff --git a/doc/administration/operations/img/write_to_authorized_keys_setting.png b/doc/administration/operations/img/write_to_authorized_keys_setting.png
deleted file mode 100644
index f6227a6057b0e4dd314912df68323b8d8288b771..0000000000000000000000000000000000000000
Binary files a/doc/administration/operations/img/write_to_authorized_keys_setting.png and /dev/null differ
diff --git a/doc/administration/polling.md b/doc/administration/polling.md
index f6732b8edc6cb2c074b886003b113dec587c6db1..d3f558eeaaab8375d7f615f29b46235a3d2c40f9 100644
--- a/doc/administration/polling.md
+++ b/doc/administration/polling.md
@@ -9,23 +9,24 @@ info: To determine the technical writer assigned to the Stage/Group associated w
 The GitLab UI polls for updates for different resources (issue notes, issue
 titles, pipeline statuses, etc.) on a schedule appropriate to the resource.
 
-In **[Admin Area](../user/admin_area/index.md) > Settings > Preferences > Real-time features**,
-you can configure "Polling
-interval multiplier". This multiplier is applied to all resources at once,
-and decimal values are supported. For the sake of the examples below, we will
-say that issue notes poll every 2 seconds, and issue titles poll every 5
-seconds; these are _not_ the actual values.
+To configure the polling interval multiplier:
 
-- 1 is the default, and recommended for most installations. (Issue notes poll
-  every 2 seconds, and issue titles poll every 5 seconds.)
-- 0 disables UI polling completely. (On the next poll, clients stop
-  polling for updates.)
-- A value greater than 1 slows polling down. If you see issues with
-  database load from lots of clients polling for updates, increasing the
-  multiplier from 1 can be a good compromise, rather than disabling polling
-  completely. (For example: If this is set to 2, then issue notes poll every 4
-  seconds, and issue titles poll every 10 seconds.)
-- A value between 0 and 1 makes the UI poll more frequently (so updates
-  show in other sessions faster), but is **not recommended**. 1 should be
-  fast enough. (For example, if this is set to 0.5, then issue notes poll every
-  1 second, and issue titles poll every 2.5 seconds.)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Preferences**.
+1. Expand **Real-time features**.
+1. Set a value for the polling interval multiplier. This multiplier is applied
+   to all resources at once, and decimal values are supported:
+
+   - `1.0` is the default, and recommended for most installations.
+   - `0` disables UI polling completely. On the next poll, clients stop
+     polling for updates.
+   - A value greater than `1` slows polling down. If you see issues with
+     database load from lots of clients polling for updates, increasing the
+     multiplier from 1 can be a good compromise, rather than disabling polling
+     completely. For example, if you set the value to `2`, all polling intervals
+     are multiplied by 2, which means that polling happens half as frequently.
+   - A value between `0` and `1` makes the UI poll more frequently (so updates
+     show in other sessions faster), but is **not recommended**. `1` should be
+     fast enough.
+
+1. Select **Save changes**.
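+
+The multiplier can also be set through the application settings API, which is useful
+for automation. The following is a sketch only; it assumes the setting is exposed as
+`polling_interval_multiplier`, so confirm against the API reference for your version:
+
+```shell
+# Sketch: double all polling intervals, so clients poll half as frequently.
+curl --request PUT --header "PRIVATE-TOKEN: <your_admin_token>" \
+     "https://gitlab.example.com/api/v4/application_settings?polling_interval_multiplier=2"
+```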
diff --git a/doc/administration/raketasks/check.md b/doc/administration/raketasks/check.md
index 7f344a00f729bdf8abb55a18e391e24c0533f8ca..f7c91aa6b47151292a0bd1807c76142a4e83fe9d 100644
--- a/doc/administration/raketasks/check.md
+++ b/doc/administration/raketasks/check.md
@@ -207,8 +207,7 @@ above.
 ### Dangling commits
 
 `gitlab:git:fsck` can find dangling commits. To fix them, try
-[manually triggering housekeeping](../housekeeping.md#manual-housekeeping)
-for the affected project(s).
+[enabling housekeeping](../housekeeping.md).
 
 If the issue persists, try triggering `gc` via the
 [Rails Console](../operations/rails_console.md#starting-a-rails-console-session):
diff --git a/doc/administration/raketasks/project_import_export.md b/doc/administration/raketasks/project_import_export.md
index cd6ffc957b13315761838169309f1354c964bea3..80321d75d669d3ee95a6352db61dda5c293d236a 100644
--- a/doc/administration/raketasks/project_import_export.md
+++ b/doc/administration/raketasks/project_import_export.md
@@ -50,8 +50,13 @@ Note the following:
 
 - Importing is only possible if the version of the import and export GitLab instances are
   compatible as described in the [Version history](../../user/project/settings/import_export.md#version-history).
-- The project import option must be enabled in
-  application settings (`/admin/application_settings/general`) under **Import sources**, which is available
-  under **Admin Area > Settings > Visibility and access controls**.
+- The project import option must be enabled:
+
+  1. On the top bar, select **Menu >** **{admin}** **Admin**.
+  1. On the left sidebar, select **Settings > General**.
+  1. Expand **Visibility and access controls**.
+  1. Under **Import sources**, enable the project import option ("GitLab export").
+  1. Select **Save changes**.
+
 - The exports are stored in a temporary directory and are deleted every
   24 hours by a specific worker.
diff --git a/doc/administration/raketasks/storage.md b/doc/administration/raketasks/storage.md
index 5b6d4e16d8d60bf5c2dd4a7b37df92db701330d5..cee63a6cae5cd710cee1eb8f1832b11d8caac56c 100644
--- a/doc/administration/raketasks/storage.md
+++ b/doc/administration/raketasks/storage.md
@@ -107,12 +107,15 @@ to project IDs 50 to 100 in an Omnibus GitLab installation:
 sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
 ```
 
-You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
-There is a specific queue you can watch to see how long it will take to finish:
-`hashed_storage:hashed_storage_project_migrate`.
+To monitor the progress in GitLab:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
+1. Watch the `hashed_storage:hashed_storage_project_migrate` queue to see how long
+   the migration takes to finish. After the queue reaches zero, you can confirm every
+   project has been migrated by running the commands above.
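+
+If you would rather check the queue size from a terminal, a quick sketch using the
+Sidekiq Ruby API through `gitlab-rails runner` is shown below; the queue name is the
+one listed above:
+
+```shell
+# Sketch: print the number of jobs still waiting in the hashed storage migration queue.
+sudo gitlab-rails runner "puts Sidekiq::Queue.new('hashed_storage:hashed_storage_project_migrate').size"
+```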
 
-After it reaches zero, you can confirm every project has been migrated by running the commands above.
-If you find it necessary, you can run this migration script again to schedule missing projects.
+If you find it necessary, you can run the previous migration script again to schedule missing projects.
 
 Any error or warning is logged in Sidekiq's log file.
 
@@ -120,7 +123,7 @@ If [Geo](../geo/index.md) is enabled, each project that is successfully migrated
 generates an event to replicate the changes on any **secondary** nodes.
 
 You only need the `gitlab:storage:migrate_to_hashed` Rake task to migrate your repositories, but there are
-[additional commands(#list-projects-and-attachments) to help you inspect projects and attachments in both legacy and hashed storage.
+[additional commands](#list-projects-and-attachments) to help you inspect projects and attachments in both legacy and hashed storage.
 
 ## Rollback from hashed storage to legacy storage
 
diff --git a/doc/install/azure/index.md b/doc/install/azure/index.md
index 0d62e4d1215f4a52de7091e46f3594d6152a59cc..1351489642eb830a6ca9486f2497dc899ed96735 100644
--- a/doc/install/azure/index.md
+++ b/doc/install/azure/index.md
@@ -238,9 +238,11 @@ in this section whenever you need to update GitLab.
 
 ### Check the current version
 
-To determine the version of GitLab you're currently running,
-go to the **{admin}** **Admin Area**, and find the version
-under the **Components** table.
+To determine the version of GitLab you're currently running:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Overview > Dashboard**.
+1. Find the version under the **Components** table.
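+
+If you have shell access to the VM, you can also print the version from the command
+line with the environment info Rake task:
+
+```shell
+sudo gitlab-rake gitlab:env:info
+```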
 
 If there's a newer available version of GitLab that contains one or more
 security fixes, GitLab displays an **Update asap** notification message that
diff --git a/doc/user/admin_area/geo_nodes.md b/doc/user/admin_area/geo_nodes.md
index 32b1555c33d49ff214ce4f0ed1c315f4d670cb04..19a76d0938b7fd33d042e9f32e6353285730e9fe 100644
--- a/doc/user/admin_area/geo_nodes.md
+++ b/doc/user/admin_area/geo_nodes.md
@@ -10,7 +10,10 @@ type: howto
 You can configure various settings for GitLab Geo nodes. For more information, see
 [Geo documentation](../../administration/geo/index.md).
 
-On the primary node, go to **Admin Area > Geo**. On secondary nodes, go to **Admin Area > Geo > Nodes**.
+On either the primary or secondary node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
 
 ## Common settings
 
@@ -61,8 +64,13 @@ The **primary** node's Internal URL is used by **secondary** nodes to contact it
 [External URL](https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab)
 which is used by users. Internal URL does not need to be a private address.
 
-Internal URL defaults to External URL, but you can customize it under
-**Admin Area > Geo > Nodes**.
+Internal URL defaults to external URL, but you can also customize it:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** on the node you want to customize.
+1. Edit the internal URL.
+1. Select **Save changes**.
 
 WARNING:
 We recommend using an HTTPS connection while configuring the Geo nodes. To avoid