diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f31f336251611ba4e770cbba05e5d162aa22b02
--- /dev/null
+++ b/doc/administration/reference_architectures/10k_users.md
@@ -0,0 +1,79 @@
+# Reference architecture: up to 10,000 users
+
+This page describes the GitLab reference architecture for up to 10,000 users.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
+
+> - **Supported users (approximate):** 10,000
+> - **High Availability:** True
+> - **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS
+
+| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP            | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
+|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------|
+| GitLab Rails ([1](#footnotes))                               | 3     | 32 vCPU, 28.8GB Memory          | n1-highcpu-32  | c5.9xlarge            | F32s v2        |
+| PostgreSQL                                                   | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 16 vCPU, 60GB Memory            | n1-standard-16 | m5.4xlarge            | D16s v3        |
+| Redis ([3](#footnotes)) - Cache                              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Redis ([3](#footnotes)) - Queues / Shared State              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Redis Sentinel ([3](#footnotes)) - Cache                     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
+| Redis Sentinel ([3](#footnotes)) - Queues / Shared State     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
+| Consul                                                       | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Sidekiq                                                      | 4     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Object Storage ([4](#footnotes))                             | -     | -                               | -              | -                     | -              |
+| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
+| Monitoring node                                              | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
+| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Internal load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+
+## Footnotes
+
+1. In our architectures we run each GitLab Rails node using the Puma webserver,
+   with its number of workers set to 90% of available CPUs and four threads per
+   worker. For nodes that run Rails alongside other components, reduce the worker
+   value accordingly: we've found 50% achieves a good balance, but this is
+   dependent on workload.
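+
+   For example, this sizing could look like the following in `/etc/gitlab/gitlab.rb`
+   (a sketch for a dedicated 32 vCPU Rails node; 90% of 32 CPUs rounds down to 28
+   workers — adjust the values to your own hardware and workload):
+
+   ```ruby
+   # Puma workers at ~90% of the 32 available CPUs, four threads per worker.
+   # Halve the worker value when Rails is co-located with other components.
+   puma['worker_processes'] = 28
+   puma['min_threads'] = 4
+   puma['max_threads'] = 4
+   ```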
+
+1. Gitaly node requirements depend on customer data, specifically the number of
+   projects and their sizes. We recommend at least two nodes as an absolute minimum
+   for HA environments, and at least four nodes when supporting 50,000 or more users.
+   We also recommend that each Gitaly node store no more than 5TB of data
+   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
+   set to 20% of available CPUs. Consider additional nodes in conjunction
+   with a review of expected data size and spread, based on the recommendations above.
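+
+   For example, on a 16 vCPU Gitaly node this could look like the following in
+   `/etc/gitlab/gitlab.rb` (a sketch; 20% of 16 CPUs rounds to 3 workers):
+
+   ```ruby
+   # gitaly-ruby workers set to ~20% of the node's available CPUs.
+   gitaly['ruby_num_workers'] = 3
+   ```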
+
+1. The recommended Redis setup differs depending on the size of the architecture.
+   For smaller architectures (less than 3,000 users), a single instance should suffice.
+   For medium-sized installs (3,000 to 5,000 users), we suggest one Redis cluster for
+   all classes, with Redis Sentinel hosted alongside Consul.
+   For larger architectures (10,000 users or more), we suggest running a separate
+   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+   and another for the Queues and Shared State classes. We also recommend running
+   a separate Redis Sentinel cluster for each Redis Cluster.
+
+1. For data objects such as LFS, Uploads, and Artifacts, we recommend an
+   [Object Storage service](../object_storage.md) over NFS where possible,
+   due to better performance and availability.
+
+1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
+   object storage, but this isn't typically recommended for performance reasons.
+   Note, however, that it is required for
+   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
+
+1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
+   as the load balancer. Although other load balancers with similar feature sets
+   could also be used, those load balancers have not been validated.
+
+1. We strongly recommend that any Gitaly or NFS nodes use SSD disks rather than HDD,
+   with at least 8,000 IOPS for read operations and 2,000 IOPS for write operations,
+   as these components have heavy I/O. These IOPS values are recommended only as a
+   starting point; over time they may be adjusted higher or lower depending on the
+   scale of your environment's workload. If you're running the environment on a
+   Cloud provider, you may need to refer to their documentation on how to configure
+   IOPS correctly.
+
+1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+   CPU platform on GCP. On different hardware you may find that adjustments, either
+   lower or higher, are required to your CPU or node counts. For more information, a
+   [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+1. AWS-equivalent and Azure-equivalent configurations are rough suggestions
+   and may change in the future. They have not yet been tested and validated.
diff --git a/doc/administration/reference_architectures/1k_users.md b/doc/administration/reference_architectures/1k_users.md
new file mode 100644
index 0000000000000000000000000000000000000000..615da2b14c982d67e569a51b649982f021444cc4
--- /dev/null
+++ b/doc/administration/reference_architectures/1k_users.md
@@ -0,0 +1,82 @@
+# Reference architecture: up to 1,000 users
+
+This page describes the GitLab reference architecture for up to 1,000 users.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
+
+> - **Supported users (approximate):** 1,000
+> - **High Availability:** False
+
+| Users | Configuration ([8](#footnotes)) | GCP type      | AWS type ([9](#footnotes)) |
+|-------|--------------------------------|---------------|---------------------------|
+| 100   | 2 vCPU, 7.2GB Memory           | n1-standard-2 | c5.2xlarge                |
+| 500   | 4 vCPU, 15GB Memory            | n1-standard-4 | m5.xlarge                 |
+| 1000  | 8 vCPU, 30GB Memory            | n1-standard-8 | m5.2xlarge                |
+
+For situations where you need to serve up to 1,000 users, a single-node
+solution with [frequent backups](index.md#automated-backups-core-only) is appropriate
+for many organizations. If you don't have strict availability requirements,
+automatic backups of the GitLab repositories, configuration, and database
+make this the ideal solution.
+
+## Setup instructions
+
+- For this default reference architecture, use the standard [installation instructions](../../install/README.md) to install GitLab.
+
+NOTE: **Note:**
+You can also optionally configure GitLab to use an
+[external PostgreSQL service](../external_database.md) or an
+[external object storage service](../high_availability/object_storage.md) for
+added performance and reliability at a reduced complexity cost.
+
+## Footnotes
+
+1. In our architectures we run each GitLab Rails node using the Puma webserver,
+   with its number of workers set to 90% of available CPUs and four threads per
+   worker. For nodes that run Rails alongside other components, reduce the worker
+   value accordingly: we've found 50% achieves a good balance, but this is
+   dependent on workload.
+
+1. Gitaly node requirements depend on customer data, specifically the number of
+   projects and their sizes. We recommend at least two nodes as an absolute minimum
+   for HA environments, and at least four nodes when supporting 50,000 or more users.
+   We also recommend that each Gitaly node store no more than 5TB of data
+   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
+   set to 20% of available CPUs. Consider additional nodes in conjunction
+   with a review of expected data size and spread, based on the recommendations above.
+
+1. The recommended Redis setup differs depending on the size of the architecture.
+   For smaller architectures (less than 3,000 users), a single instance should suffice.
+   For medium-sized installs (3,000 to 5,000 users), we suggest one Redis cluster for
+   all classes, with Redis Sentinel hosted alongside Consul.
+   For larger architectures (10,000 users or more), we suggest running a separate
+   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+   and another for the Queues and Shared State classes. We also recommend running
+   a separate Redis Sentinel cluster for each Redis Cluster.
+
+1. For data objects such as LFS, Uploads, and Artifacts, we recommend an
+   [Object Storage service](../object_storage.md) over NFS where possible,
+   due to better performance and availability.
+
+1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
+   object storage, but this isn't typically recommended for performance reasons.
+   Note, however, that it is required for
+   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
+
+1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
+   as the load balancer. Although other load balancers with similar feature sets
+   could also be used, those load balancers have not been validated.
+
+1. We strongly recommend that any Gitaly or NFS nodes use SSD disks rather than HDD,
+   with at least 8,000 IOPS for read operations and 2,000 IOPS for write operations,
+   as these components have heavy I/O. These IOPS values are recommended only as a
+   starting point; over time they may be adjusted higher or lower depending on the
+   scale of your environment's workload. If you're running the environment on a
+   Cloud provider, you may need to refer to their documentation on how to configure
+   IOPS correctly.
+
+1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+   CPU platform on GCP. On different hardware you may find that adjustments, either
+   lower or higher, are required to your CPU or node counts. For more information, a
+   [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+1. AWS-equivalent and Azure-equivalent configurations are rough suggestions
+   and may change in the future. They have not yet been tested and validated.
diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ee692d635c499ecd7dc73c5387b27aaeab2188a
--- /dev/null
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -0,0 +1,79 @@
+# Reference architecture: up to 25,000 users
+
+This page describes the GitLab reference architecture for up to 25,000 users.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
+
+> - **Supported users (approximate):** 25,000
+> - **High Availability:** True
+> - **Test RPS rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS
+
+| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP            | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
+|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------|
+| GitLab Rails ([1](#footnotes))                               | 5     | 32 vCPU, 28.8GB Memory          | n1-highcpu-32  | c5.9xlarge            | F32s v2        |
+| PostgreSQL                                                   | 3     | 8 vCPU, 30GB Memory             | n1-standard-8  | m5.2xlarge            | D8s v3         |
+| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 32 vCPU, 120GB Memory           | n1-standard-32 | m5.8xlarge            | D32s v3        |
+| Redis ([3](#footnotes)) - Cache                              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Redis ([3](#footnotes)) - Queues / Shared State              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Redis Sentinel ([3](#footnotes)) - Cache                     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
+| Redis Sentinel ([3](#footnotes)) - Queues / Shared State     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
+| Consul                                                       | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Sidekiq                                                      | 4     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Object Storage ([4](#footnotes))                             | -     | -                               | -              | -                     | -              |
+| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
+| Monitoring node                                              | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
+| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Internal load balancing node ([6](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
+
+## Footnotes
+
+1. In our architectures we run each GitLab Rails node using the Puma webserver,
+   with its number of workers set to 90% of available CPUs and four threads per
+   worker. For nodes that run Rails alongside other components, reduce the worker
+   value accordingly: we've found 50% achieves a good balance, but this is
+   dependent on workload.
+
+1. Gitaly node requirements depend on customer data, specifically the number of
+   projects and their sizes. We recommend at least two nodes as an absolute minimum
+   for HA environments, and at least four nodes when supporting 50,000 or more users.
+   We also recommend that each Gitaly node store no more than 5TB of data
+   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
+   set to 20% of available CPUs. Consider additional nodes in conjunction
+   with a review of expected data size and spread, based on the recommendations above.
+
+1. The recommended Redis setup differs depending on the size of the architecture.
+   For smaller architectures (less than 3,000 users), a single instance should suffice.
+   For medium-sized installs (3,000 to 5,000 users), we suggest one Redis cluster for
+   all classes, with Redis Sentinel hosted alongside Consul.
+   For larger architectures (10,000 users or more), we suggest running a separate
+   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+   and another for the Queues and Shared State classes. We also recommend running
+   a separate Redis Sentinel cluster for each Redis Cluster.
+
+1. For data objects such as LFS, Uploads, and Artifacts, we recommend an
+   [Object Storage service](../object_storage.md) over NFS where possible,
+   due to better performance and availability.
+
+1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
+   object storage, but this isn't typically recommended for performance reasons.
+   Note, however, that it is required for
+   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
+
+1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
+   as the load balancer. Although other load balancers with similar feature sets
+   could also be used, those load balancers have not been validated.
+
+1. We strongly recommend that any Gitaly or NFS nodes use SSD disks rather than HDD,
+   with at least 8,000 IOPS for read operations and 2,000 IOPS for write operations,
+   as these components have heavy I/O. These IOPS values are recommended only as a
+   starting point; over time they may be adjusted higher or lower depending on the
+   scale of your environment's workload. If you're running the environment on a
+   Cloud provider, you may need to refer to their documentation on how to configure
+   IOPS correctly.
+
+1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+   CPU platform on GCP. On different hardware you may find that adjustments, either
+   lower or higher, are required to your CPU or node counts. For more information, a
+   [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+1. AWS-equivalent and Azure-equivalent configurations are rough suggestions
+   and may change in the future. They have not yet been tested and validated.
diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
new file mode 100644
index 0000000000000000000000000000000000000000..874e00e6722d8b14ae3681e022c0bc2acff377a3
--- /dev/null
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -0,0 +1,90 @@
+# Reference architecture: up to 2,000 users
+
+This page describes the GitLab reference architecture for up to 2,000 users.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
+
+> - **Supported users (approximate):** 2,000
+> - **High Availability:** False
+> - **Test RPS rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS
+
+| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP           | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
+|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|----------------|
+| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| Object Storage ([4](#footnotes))                             | -     | -                               | -             | -                     | -              |
+| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4  | c5.xlarge             | F4s v2         |
+| PostgreSQL                                                   | 1     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
+| Redis ([3](#footnotes))                                      | 1     | 1 vCPU, 3.75GB Memory           | n1-standard-1 | m5.large              | D2s v3         |
+| Gitaly ([5](#footnotes)) ([7](#footnotes))    | X ([2](#footnotes))  | 4 vCPU, 15GB Memory             | n1-standard-4 | m5.xlarge             | D4s v3         |
+| GitLab Rails ([1](#footnotes))                               | 2     | 8 vCPU, 7.2GB Memory            | n1-highcpu-8  | c5.2xlarge            | F8s v2         |
+| Monitoring node                                              | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+
+## Setup instructions
+
+1. [Configure the external load balancing node](../high_availability/load_balancer.md)
+   to handle the load balancing of the two GitLab application service nodes.
+1. [Configure the Object Storage](../object_storage.md) ([4](#footnotes)) used for shared data objects.
+1. (Optional) [Configure NFS](../high_availability/nfs.md) to provide a
+   shared disk storage service as an alternative to Gitaly and/or
+   [Object Storage](../object_storage.md) (although this is not recommended).
+   NFS is required for GitLab Pages; you can skip this step if you're not using that feature.
+1. [Configure PostgreSQL](../high_availability/database.md), the database for GitLab.
+1. [Configure Redis](../high_availability/redis.md).
+1. [Configure Gitaly](../gitaly/index.md#running-gitaly-on-its-own-server),
+   which is used to provide access to the Git repositories.
+1. [Configure the main GitLab Rails application](../high_availability/gitlab.md)
+   to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all
+   frontend requests (UI, API, Git over HTTP/SSH).
+1. [Configure Prometheus](../high_availability/monitoring_node.md) to monitor your GitLab environment.
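+
+As a sketch, the Rails nodes above could be pointed at the separate components
+with settings like the following in `/etc/gitlab/gitlab.rb` (the hostnames are
+placeholders; passwords and TLS details are elided):
+
+```ruby
+external_url 'https://gitlab.example.com'
+
+# Point Rails at the external PostgreSQL, Redis, and Gitaly nodes.
+gitlab_rails['db_host'] = 'postgres.internal'
+gitlab_rails['redis_host'] = 'redis.internal'
+git_data_dirs({
+  'default' => { 'gitaly_address' => 'tcp://gitaly.internal:8075' }
+})
+
+# These services run on their own nodes in this architecture.
+postgresql['enable'] = false
+redis['enable'] = false
+```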
+
+## Footnotes
+
+1. In our architectures we run each GitLab Rails node using the Puma webserver,
+   with its number of workers set to 90% of available CPUs and four threads per
+   worker. For nodes that run Rails alongside other components, reduce the worker
+   value accordingly: we've found 50% achieves a good balance, but this is
+   dependent on workload.
+
+1. Gitaly node requirements depend on customer data, specifically the number of
+   projects and their sizes. We recommend at least two nodes as an absolute minimum
+   for HA environments, and at least four nodes when supporting 50,000 or more users.
+   We also recommend that each Gitaly node store no more than 5TB of data
+   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
+   set to 20% of available CPUs. Consider additional nodes in conjunction
+   with a review of expected data size and spread, based on the recommendations above.
+
+1. The recommended Redis setup differs depending on the size of the architecture.
+   For smaller architectures (less than 3,000 users), a single instance should suffice.
+   For medium-sized installs (3,000 to 5,000 users), we suggest one Redis cluster for
+   all classes, with Redis Sentinel hosted alongside Consul.
+   For larger architectures (10,000 users or more), we suggest running a separate
+   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+   and another for the Queues and Shared State classes. We also recommend running
+   a separate Redis Sentinel cluster for each Redis Cluster.
+
+1. For data objects such as LFS, Uploads, and Artifacts, we recommend an
+   [Object Storage service](../object_storage.md) over NFS where possible,
+   due to better performance and availability.
+
+1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
+   object storage, but this isn't typically recommended for performance reasons.
+   Note, however, that it is required for
+   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
+
+1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
+   as the load balancer. Although other load balancers with similar feature sets
+   could also be used, those load balancers have not been validated.
+
+1. We strongly recommend that any Gitaly or NFS nodes use SSD disks rather than HDD,
+   with at least 8,000 IOPS for read operations and 2,000 IOPS for write operations,
+   as these components have heavy I/O. These IOPS values are recommended only as a
+   starting point; over time they may be adjusted higher or lower depending on the
+   scale of your environment's workload. If you're running the environment on a
+   Cloud provider, you may need to refer to their documentation on how to configure
+   IOPS correctly.
+
+1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+   CPU platform on GCP. On different hardware you may find that adjustments, either
+   lower or higher, are required to your CPU or node counts. For more information, a
+   [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+1. AWS-equivalent and Azure-equivalent configurations are rough suggestions
+   and may change in the future. They have not yet been tested and validated.
diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd429fbc4b44dc825d91fd57cc523572c1085276
--- /dev/null
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -0,0 +1,82 @@
+# Reference architecture: up to 3,000 users
+
+This page describes the GitLab reference architecture for up to 3,000 users.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
+
+NOTE: **Note:** The 3,000-user reference architecture documented below is
+designed to help your organization achieve a highly available GitLab deployment.
+If you do not have the expertise or need to maintain a highly available
+environment, you can have a simpler and less costly environment to operate by
+following the [2,000-user reference architecture](2k_users.md).
+
+> - **Supported users (approximate):** 3,000
+> - **High Availability:** True
+> - **Test RPS rates:** API: 60 RPS, Web: 6 RPS, Git: 6 RPS
+
+| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP           | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
+|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|------------------------|
+| GitLab Rails ([1](#footnotes))                               | 3     | 8 vCPU, 7.2GB Memory            | n1-highcpu-8  | c5.2xlarge            | F8s v2         |
+| PostgreSQL                                                   | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
+| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 4 vCPU, 15GB Memory             | n1-standard-4 | m5.xlarge             | D4s v3         |
+| Redis ([3](#footnotes))                                      | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
+| Consul + Sentinel ([3](#footnotes))                          | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| Sidekiq                                                      | 4     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
+| Object Storage ([4](#footnotes))                             | -     | -                               | -             | -                     | -              |
+| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4  | c5.xlarge             | F4s v2         |
+| Monitoring node                                              | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| Internal load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+
+## Footnotes
+
+1. In our architectures we run each GitLab Rails node using the Puma webserver,
+   with its number of workers set to 90% of available CPUs and four threads per
+   worker. For nodes that run Rails alongside other components, reduce the worker
+   value accordingly: we've found 50% achieves a good balance, but this is
+   dependent on workload.
+
+1. Gitaly node requirements depend on customer data, specifically the number of
+   projects and their sizes. We recommend at least two nodes as an absolute minimum
+   for HA environments, and at least four nodes when supporting 50,000 or more users.
+   We also recommend that each Gitaly node store no more than 5TB of data
+   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
+   set to 20% of available CPUs. Consider additional nodes in conjunction
+   with a review of expected data size and spread, based on the recommendations above.
+
+1. The recommended Redis setup differs depending on the size of the architecture.
+   For smaller architectures (less than 3,000 users), a single instance should suffice.
+   For medium-sized installs (3,000 to 5,000 users), we suggest one Redis cluster for
+   all classes, with Redis Sentinel hosted alongside Consul.
+   For larger architectures (10,000 users or more), we suggest running a separate
+   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+   and another for the Queues and Shared State classes. We also recommend running
+   a separate Redis Sentinel cluster for each Redis Cluster.
+
+1. For data objects such as LFS, Uploads, and Artifacts, we recommend an [Object Storage service](../object_storage.md)
+   over NFS where possible, due to better performance and availability.
+
+1. NFS can be used as an alternative to both Gitaly (for repository data) and
+   object storage, but this isn't typically recommended for performance reasons.
+   Note, however, that it is required for
+   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
+
+1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
+   as the load balancer. Although other load balancers with similar feature sets
+   could also be used, they have not been validated.
+
+1. We strongly recommend that any Gitaly or NFS nodes use SSD disks rather than
+   HDD, with a throughput of at least 8,000 IOPS for read operations and 2,000
+   IOPS for writes, as these components have heavy I/O. These IOPS values are
+   recommended only as a starting point; over time they may be adjusted higher
+   or lower depending on the scale of your environment's workload. If you're
+   running the environment on a cloud provider, refer to their documentation
+   on how to configure IOPS correctly.
+
+1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+   CPU platform on GCP. On different hardware you may find that adjustments, either lower
+   or higher, are required for your CPU or node counts. For more information, a
+   [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+1. AWS-equivalent and Azure-equivalent configurations are rough suggestions
+   and may change in the future. They have not yet been tested and validated.
diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
new file mode 100644
index 0000000000000000000000000000000000000000..67f773a021fec47a2e79ab73672716b9c069dc4e
--- /dev/null
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -0,0 +1,79 @@
+# Reference architecture: up to 50,000 users
+
+This page describes the GitLab reference architecture for up to 50,000 users.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
+
+> - **Supported users (approximate):** 50,000
+> - **High Availability:** True
+> - **Test RPS rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS
+
+| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP            | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
+|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------|
+| GitLab Rails ([1](#footnotes))                               | 12    | 32 vCPU, 28.8GB Memory          | n1-highcpu-32  | c5.9xlarge            | F32s v2        |
+| PostgreSQL                                                   | 3     | 16 vCPU, 60GB Memory            | n1-standard-16 | m5.4xlarge            | D16s v3        |
+| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 64 vCPU, 240GB Memory           | n1-standard-64 | m5.16xlarge           | D64s v3        |
+| Redis ([3](#footnotes)) - Cache                              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Redis ([3](#footnotes)) - Queues / Shared State              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| Redis Sentinel ([3](#footnotes)) - Cache                     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
+| Redis Sentinel ([3](#footnotes)) - Queues / Shared State     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
+| Consul                                                       | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Sidekiq                                                      | 4     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
+| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
+| Object Storage ([4](#footnotes))                             | -     | -                               | -              | -                     | -              |
+| Monitoring node                                              | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
+| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
+| Internal load balancing node ([6](#footnotes))               | 1     | 8 vCPU, 7.2GB Memory            | n1-highcpu-8   | c5.2xlarge            | F8s v2         |
+
+## Footnotes
+
+1. In our architectures we run each GitLab Rails node using the Puma webserver,
+   with its number of workers set to 90% of available CPUs and four threads. For
+   nodes that run Rails alongside other components, the worker value should be
+   reduced accordingly; we've found 50% achieves a good balance, but this is
+   dependent on workload.
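+
+   For example, on one of this architecture's dedicated 32 vCPU Rails nodes,
+   this guidance could be expressed in `/etc/gitlab/gitlab.rb` (a sketch,
+   assuming an Omnibus installation):
+
+   ```ruby
+   # 90% of 32 vCPUs, with four threads per worker
+   puma['worker_processes'] = 28
+   puma['min_threads'] = 4
+   puma['max_threads'] = 4
+   ```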
+
+1. Gitaly node requirements depend on customer data, specifically the number of
+   projects and their sizes. We recommend two nodes as an absolute minimum for HA
+   environments, and at least four nodes when supporting 50,000 or more users.
+   We also recommend that each Gitaly node store no more than 5TB of data
+   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
+   set to 20% of available CPUs. Consider additional nodes in conjunction
+   with a review of expected data size and spread, based on the recommendations above.
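+
+   For example, for this architecture's 64 vCPU Gitaly nodes, the worker count
+   could be set in `/etc/gitlab/gitlab.rb` (a sketch, assuming an Omnibus
+   installation):
+
+   ```ruby
+   # Roughly 20% of 64 vCPUs
+   gitaly['ruby_num_workers'] = 13
+   ```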
+
+1. The recommended Redis setup differs depending on the size of the architecture.
+   For smaller architectures (fewer than 3,000 users), a single instance should suffice.
+   For medium-sized installs (3,000 to 5,000 users), we suggest one Redis cluster for all
+   classes, with Redis Sentinel hosted alongside Consul.
+   For larger architectures (10,000 users or more), we suggest running a separate
+   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+   and another for the Queues and Shared State classes. We also recommend
+   running a separate Redis Sentinel cluster for each Redis Cluster.
+
+1. For data objects such as LFS, Uploads, and Artifacts, we recommend an [Object Storage service](../object_storage.md)
+   over NFS where possible, due to better performance and availability.
+
+1. NFS can be used as an alternative to both Gitaly (for repository data) and
+   object storage, but this isn't typically recommended for performance reasons.
+   Note, however, that it is required for
+   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
+
+1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
+   as the load balancer. Although other load balancers with similar feature sets
+   could also be used, they have not been validated.
+
+1. We strongly recommend that any Gitaly or NFS nodes use SSD disks rather than
+   HDD, with a throughput of at least 8,000 IOPS for read operations and 2,000
+   IOPS for writes, as these components have heavy I/O. These IOPS values are
+   recommended only as a starting point; over time they may be adjusted higher
+   or lower depending on the scale of your environment's workload. If you're
+   running the environment on a cloud provider, refer to their documentation
+   on how to configure IOPS correctly.
+
+1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+   CPU platform on GCP. On different hardware you may find that adjustments, either lower
+   or higher, are required for your CPU or node counts. For more information, a
+   [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+1. AWS-equivalent and Azure-equivalent configurations are rough suggestions
+   and may change in the future. They have not yet been tested and validated.
diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
new file mode 100644
index 0000000000000000000000000000000000000000..41ef6f369c2375859985cfa3abb745f0011978b4
--- /dev/null
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -0,0 +1,76 @@
+# Reference architecture: up to 5,000 users
+
+This page describes the GitLab reference architecture for up to 5,000 users.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
+
+> - **Supported users (approximate):** 5,000
+> - **High Availability:** True
+> - **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS
+
+| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP           | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
+|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|------------------------|
+| GitLab Rails ([1](#footnotes))                               | 3     | 16 vCPU, 14.4GB Memory          | n1-highcpu-16 | c5.4xlarge            | F16s v2        |
+| PostgreSQL                                                   | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
+| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 8 vCPU, 30GB Memory             | n1-standard-8 | m5.2xlarge            | D8s v3         |
+| Redis ([3](#footnotes))                                      | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
+| Consul + Sentinel ([3](#footnotes))                          | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| Sidekiq                                                      | 4     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
+| Object Storage ([4](#footnotes))                             | -     | -                               | -             | -                     | -              |
+| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4  | c5.xlarge             | F4s v2         |
+| Monitoring node                                              | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+| Internal load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
+
+## Footnotes
+
+1. In our architectures we run each GitLab Rails node using the Puma webserver,
+   with its number of workers set to 90% of available CPUs and four threads. For
+   nodes that run Rails alongside other components, the worker value should be
+   reduced accordingly; we've found 50% achieves a good balance, but this is
+   dependent on workload.
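+
+   For example, on one of this architecture's dedicated 16 vCPU Rails nodes,
+   this guidance could be expressed in `/etc/gitlab/gitlab.rb` (a sketch,
+   assuming an Omnibus installation):
+
+   ```ruby
+   # 90% of 16 vCPUs, with four threads per worker
+   puma['worker_processes'] = 14
+   puma['min_threads'] = 4
+   puma['max_threads'] = 4
+   ```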
+
+1. Gitaly node requirements depend on customer data, specifically the number of
+   projects and their sizes. We recommend two nodes as an absolute minimum for HA
+   environments, and at least four nodes when supporting 50,000 or more users.
+   We also recommend that each Gitaly node store no more than 5TB of data
+   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
+   set to 20% of available CPUs. Consider additional nodes in conjunction
+   with a review of expected data size and spread, based on the recommendations above.
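+
+   For example, for this architecture's 8 vCPU Gitaly nodes, the worker count
+   could be set in `/etc/gitlab/gitlab.rb` (a sketch, assuming an Omnibus
+   installation):
+
+   ```ruby
+   # Roughly 20% of 8 vCPUs
+   gitaly['ruby_num_workers'] = 2
+   ```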
+
+1. The recommended Redis setup differs depending on the size of the architecture.
+   For smaller architectures (fewer than 3,000 users), a single instance should suffice.
+   For medium-sized installs (3,000 to 5,000 users), we suggest one Redis cluster for all
+   classes, with Redis Sentinel hosted alongside Consul.
+   For larger architectures (10,000 users or more), we suggest running a separate
+   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+   and another for the Queues and Shared State classes. We also recommend
+   running a separate Redis Sentinel cluster for each Redis Cluster.
+
+1. For data objects such as LFS, Uploads, and Artifacts, we recommend an [Object Storage service](../object_storage.md)
+   over NFS where possible, due to better performance and availability.
+
+1. NFS can be used as an alternative to both Gitaly (for repository data) and
+   object storage, but this isn't typically recommended for performance reasons.
+   Note, however, that it is required for
+   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
+
+1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
+   as the load balancer. Although other load balancers with similar feature sets
+   could also be used, they have not been validated.
+
+1. We strongly recommend that any Gitaly or NFS nodes use SSD disks rather than
+   HDD, with a throughput of at least 8,000 IOPS for read operations and 2,000
+   IOPS for writes, as these components have heavy I/O. These IOPS values are
+   recommended only as a starting point; over time they may be adjusted higher
+   or lower depending on the scale of your environment's workload. If you're
+   running the environment on a cloud provider, refer to their documentation
+   on how to configure IOPS correctly.
+
+1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+   CPU platform on GCP. On different hardware you may find that adjustments, either lower
+   or higher, are required for your CPU or node counts. For more information, a
+   [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+1. AWS-equivalent and Azure-equivalent configurations are rough suggestions
+   and may change in the future. They have not yet been tested and validated.
diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md
index 1a5478442176d11226614b0d414ab13fb7eb04de..fe64d39a362db67a7d6c401b9f978faf2135add0 100644
--- a/doc/administration/reference_architectures/index.md
+++ b/doc/administration/reference_architectures/index.md
@@ -48,187 +48,17 @@ how much automation you use, mirroring, and repository/change size. Additionally
 displayed memory values are provided by [GCP machine types](https://cloud.google.com/compute/docs/machine-types).
 For different cloud vendors, attempt to select options that best match the provided architecture.
 
-## Up to 1,000 users
-
-> - **Supported users (approximate):** 1,000
-> - **High Availability:** False
-
-| Users | Configuration([8](#footnotes)) | GCP type      | AWS type([9](#footnotes)) |
-|-------|--------------------------------|---------------|---------------------------|
-| 100   | 2 vCPU, 7.2GB Memory           | n1-standard-2 | c5.2xlarge                |
-| 500   | 4 vCPU, 15GB Memory            | n1-standard-4 | m5.xlarge                 |
-| 1000  | 8 vCPU, 30GB Memory            | n1-standard-8 | m5.2xlarge                |
-
-For situations where you need to serve up to 1,000 users, a single-node
-solution with [frequent backups](#automated-backups-core-only) is appropriate
-for many organizations. With automatic backup of the GitLab repositories,
-configuration, and the database, if you don't have strict availability
-requirements, this is the ideal solution.
-
-### Setup instructions
-
-- For this default reference architecture, use the standard [installation instructions](../../install/README.md) to install GitLab.
-
-NOTE: **Note:**
-You can also optionally configure GitLab to use an
-[external PostgreSQL service](../external_database.md) or an
-[external object storage service](../high_availability/object_storage.md) for
-added performance and reliability at a reduced complexity cost.
-
-## Up to 2,000 users
-
-> - **Supported users (approximate):** 2,000
-> - **High Availability:** False
-> - **Test RPS rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS
-
-| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP           | AWS ([9](#footnotes)) | Azure([9](#footnotes)) |
-|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|----------------|
-| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| Object Storage ([4](#footnotes))                             | -     | -                               | -             | -                     | -              |
-| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4  | c5.xlarge             | F4s v2         |
-| PostgreSQL                                                   | 1     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
-| Redis ([3](#footnotes))                                      | 1     | 1 vCPU, 3.75GB Memory           | n1-standard-1 | m5.large              | D2s v3         |
-| Gitaly ([5](#footnotes)) ([7](#footnotes))    | X ([2](#footnotes))  | 4 vCPU, 15GB Memory             | n1-standard-4 | m5.xlarge             | D4s v3         |
-| GitLab Rails ([1](#footnotes))                               | 2     | 8 vCPU, 7.2GB Memory            | n1-highcpu-8  | c5.2xlarge            | F8s v2         |
-| Monitoring node                                              | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-
-### Setup instructions
-
-1. [Configure the external load balancing node](../high_availability/load_balancer.md)
-   that will handle the load balancing of the two GitLab application services nodes.
-1. [Configure the Object Storage](../object_storage.md) ([4](#footnotes)) used for shared data objects.
-1. (Optional) [Configure NFS](../high_availability/nfs.md) to have
-   shared disk storage service as an alternative to Gitaly and/or
-   [Object Storage](../object_storage.md) (although not recommended).
-   NFS is required for GitLab Pages, you can skip this step if you're not using that feature.
-1. [Configure PostgreSQL](../high_availability/load_balancer.md), the database for GitLab.
-1. [Configure Redis](../high_availability/redis.md).
-1. [Configure Gitaly](../gitaly/index.md#running-gitaly-on-its-own-server),
-   which is used to provide access to the Git repositories.
-1. [Configure the main GitLab Rails application](../high_availability/gitlab.md)
-   to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all
-   frontend requests (UI, API, Git over HTTP/SSH).
-1. [Configure Prometheus](../high_availability/monitoring_node.md) to monitor your GitLab environment.
-
-## Up to 3,000 users
-
-NOTE: **Note:** The 3,000-user reference architecture documented below is
-designed to help your organization achieve a highly-available GitLab deployment.
-If you do not have the expertise or need to maintain a highly-available
-environment, you can have a simpler and less costly-to-operate environment by
-following the [2,000-user reference architecture](#up-to-2000-users).
-
-> - **Supported users (approximate):** 3,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 60 RPS, Web: 6 RPS, Git: 6 RPS
-
-| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP           | AWS ([9](#footnotes)) | Azure([9](#footnotes)) |
-|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|------------------------|
-| GitLab Rails ([1](#footnotes))                               | 3     | 8 vCPU, 7.2GB Memory            | n1-highcpu-8  | c5.2xlarge            | F8s v2         |
-| PostgreSQL                                                   | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
-| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 4 vCPU, 15GB Memory             | n1-standard-4 | m5.xlarge             | D4s v3         |
-| Redis ([3](#footnotes))                                      | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
-| Consul + Sentinel ([3](#footnotes))                          | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| Sidekiq                                                      | 4     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
-| Object Storage ([4](#footnotes))                             | -     | -                               | -             | -                     | -              |
-| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4  | c5.xlarge             | F4s v2         |
-| Monitoring node                                              | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| Internal load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-
-## Up to 5,000 users
-
-> - **Supported users (approximate):** 5,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS
-
-| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP           | AWS ([9](#footnotes)) | Azure([9](#footnotes)) |
-|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|------------------------|
-| GitLab Rails ([1](#footnotes))                               | 3     | 16 vCPU, 14.4GB Memory          | n1-highcpu-16 | c5.4xlarge            | F16s v2        |
-| PostgreSQL                                                   | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
-| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 8 vCPU, 30GB Memory             | n1-standard-8 | m5.2xlarge            | D8s v3         |
-| Redis ([3](#footnotes))                                      | 3     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
-| Consul + Sentinel ([3](#footnotes))                          | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| Sidekiq                                                      | 4     | 2 vCPU, 7.5GB Memory            | n1-standard-2 | m5.large              | D2s v3         |
-| Object Storage ([4](#footnotes))                             | -     | -                               | -             | -                     | -              |
-| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4  | c5.xlarge             | F4s v2         |
-| Monitoring node                                              | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-| Internal load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2  | c5.large              | F2s v2         |
-
-## Up to 10,000 users
-
-> - **Supported users (approximate):** 10,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS
-
-| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP            | AWS ([9](#footnotes)) | Azure([9](#footnotes)) |
-|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------|
-| GitLab Rails ([1](#footnotes))                               | 3     | 32 vCPU, 28.8GB Memory          | n1-highcpu-32  | c5.9xlarge            | F32s v2        |
-| PostgreSQL                                                   | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 16 vCPU, 60GB Memory            | n1-standard-16 | m5.4xlarge            | D16s v3        |
-| Redis ([3](#footnotes)) - Cache                              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Redis ([3](#footnotes)) - Queues / Shared State              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Redis Sentinel ([3](#footnotes)) - Cache                     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
-| Redis Sentinel ([3](#footnotes)) - Queues / Shared State     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
-| Consul                                                       | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Sidekiq                                                      | 4     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Object Storage ([4](#footnotes))                             | -     | -                               | -              | -                     | -              |
-| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
-| Monitoring node                                              | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
-| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Internal load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-
-## Up to 25,000 users
-
-> - **Supported users (approximate):** 25,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS
-
-| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP            | AWS ([9](#footnotes)) | Azure([9](#footnotes)) |
-|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------|
-| GitLab Rails ([1](#footnotes))                               | 5     | 32 vCPU, 28.8GB Memory          | n1-highcpu-32  | c5.9xlarge            | F32s v2        |
-| PostgreSQL                                                   | 3     | 8 vCPU, 30GB Memory             | n1-standard-8  | m5.2xlarge            | D8s v3         |
-| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 32 vCPU, 120GB Memory           | n1-standard-32 | m5.8xlarge            | D32s v3        |
-| Redis ([3](#footnotes)) - Cache                              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Redis ([3](#footnotes)) - Queues / Shared State              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Redis Sentinel ([3](#footnotes)) - Cache                     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
-| Redis Sentinel ([3](#footnotes)) - Queues / Shared State     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
-| Consul                                                       | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Sidekiq                                                      | 4     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Object Storage ([4](#footnotes))                             | -     | -                               | -              | -                     | -              |
-| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
-| Monitoring node                                              | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
-| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Internal load balancing node ([6](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
-
-## Up to 50,000 users
-
-> - **Supported users (approximate):** 50,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS
-
-| Service                                                      | Nodes | Configuration ([8](#footnotes)) | GCP            | AWS ([9](#footnotes)) | Azure([9](#footnotes)) |
-|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------|
-| GitLab Rails ([1](#footnotes))                               | 12    | 32 vCPU, 28.8GB Memory          | n1-highcpu-32  | c5.9xlarge            | F32s v2        |
-| PostgreSQL                                                   | 3     | 16 vCPU, 60GB Memory            | n1-standard-16 | m5.4xlarge            | D16s v3        |
-| PgBouncer                                                    | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X     | 64 vCPU, 240GB Memory           | n1-standard-64 | m5.16xlarge           | D64s v3        |
-| Redis ([3](#footnotes)) - Cache                              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Redis ([3](#footnotes)) - Queues / Shared State              | 3     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| Redis Sentinel ([3](#footnotes)) - Cache                     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
-| Redis Sentinel ([3](#footnotes)) - Queues / Shared State     | 3     | 1 vCPU, 1.7GB Memory            | g1-small       | t2.small              | B1MS           |
-| Consul                                                       | 3     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Sidekiq                                                      | 4     | 4 vCPU, 15GB Memory             | n1-standard-4  | m5.xlarge             | D4s v3         |
-| NFS Server ([5](#footnotes)) ([7](#footnotes))               | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
-| Object Storage ([4](#footnotes))                             | -     | -                               | -              | -                     | -              |
-| Monitoring node                                              | 1     | 4 vCPU, 3.6GB Memory            | n1-highcpu-4   | c5.xlarge             | F4s v2         |
-| External load balancing node ([6](#footnotes))               | 1     | 2 vCPU, 1.8GB Memory            | n1-highcpu-2   | c5.large              | F2s v2         |
-| Internal load balancing node ([6](#footnotes))               | 1     | 8 vCPU, 7.2GB Memory            | n1-highcpu-8   | c5.2xlarge            | F8s v2         |
+## Available reference architectures
+
+The following reference architectures are available:
+
+- [Up to 1,000 users](1k_users.md)
+- [Up to 2,000 users](2k_users.md)
+- [Up to 3,000 users](3k_users.md)
+- [Up to 5,000 users](5k_users.md)
+- [Up to 10,000 users](10k_users.md)
+- [Up to 25,000 users](25k_users.md)
+- [Up to 50,000 users](50k_users.md)
 
 ## Availability complexity