diff --git a/doc/administration/self_hosted_models/configure_duo_features.md b/doc/administration/self_hosted_models/configure_duo_features.md
new file mode 100644
index 0000000000000000000000000000000000000000..f47caa0038c584daef6fef03b337e74fb70fbd96
--- /dev/null
+++ b/doc/administration/self_hosted_models/configure_duo_features.md
@@ -0,0 +1,54 @@
+---
+stage: AI-Powered
+group: Custom Models
+description: Configure GitLab Duo features to use self-hosted models.
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Configure your GitLab Duo features
+
+DETAILS:
+**Tier:** Premium, Ultimate
+**Offering:** Self-managed
+
+To configure your GitLab instance to access the available large language models (LLMs) in your infrastructure:
+
+1. Configure the self-hosted model.
+1. Configure the AI-powered features to use specific self-hosted models.
+
+## Configure the self-hosted model
+
+Prerequisites:
+
+- You must be an administrator.
+
+1. On the left sidebar, at the bottom, select **Admin Area**.
+1. Select **AI-Powered Features**.
+   - If the **AI-Powered Features** menu item is not available, synchronize your subscription after purchase:
+     1. On the left sidebar, select **Subscription**.
+     1. In **Subscription details**, to the right of **Last sync**, select synchronize subscription (**{retry}**).
+1. Select **Models**.
+1. Select **New Self-Hosted Model** and complete the fields:
+   - **Name the deployment (must be unique):** Enter the model name, for example, `Mistral`.
+   - **Choose the model from the Model dropdown list:** Only GitLab-approved models are listed here.
+   - **Endpoint:** The self-hosted model endpoint, for example, the server hosting the model.
+   - **API token (if needed):** Optional. Complete this field if you need an API key to access the model.
+1. Select **Create Self-Hosted Model** to save the model details.
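+
+Before saving, you can optionally confirm that the endpoint is reachable from your network by sending it a test request. The following is a sketch that assumes an OpenAI-compatible API (such as one served by vLLM); the host, port, and model name are placeholders for your own deployment:
+
+```shell
+curl "https://your-model-host:8000/v1/completions" \
+  --header "Content-Type: application/json" \
+  --data '{"model": "mistralai/Mistral-7B-v0.1", "prompt": "Hello", "max_tokens": 5}'
+```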
+
+## Configure the features to use specific models
+
+Prerequisites:
+
+- You must be an administrator.
+
+1. On the left sidebar, at the bottom, select **Admin Area**.
+1. Select **AI-Powered Features**.
+   - If the **AI-Powered Features** menu item is not available, synchronize your subscription after purchase:
+     1. On the left sidebar, select **Subscription**.
+     1. In **Subscription details**, to the right of **Last sync**, select synchronize subscription (**{retry}**).
+1. Select **Features**.
+1. For the feature you want to set, select **Edit**. For example, **Code Generation**.
+1. Select the model provider for the feature:
+   1. From the list, select **Self-Hosted Model**.
+   1. Select the self-hosted model you want to use, for example, `Mistral`.
+1. Select **Save Changes** to set the feature to use this specific model.
diff --git a/doc/administration/self_hosted_models/index.md b/doc/administration/self_hosted_models/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..cdb49952c4fca3e434c48b8b859043a0b145a2b9
--- /dev/null
+++ b/doc/administration/self_hosted_models/index.md
@@ -0,0 +1,61 @@
+---
+stage: AI-Powered
+group: Custom Models
+description: Get started with self-hosted AI models.
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Deploy a self-hosted large language model
+
+DETAILS:
+**Tier:** Premium, Ultimate
+**Offering:** Self-managed
+
+Deploying a self-hosted large language model (LLM) allows customers to:
+
+- Manage the end-to-end transmission of requests to enterprise-hosted LLM backends for GitLab Duo features.
+- Keep all of these requests within their enterprise network, ensuring no calls to external architecture.
+
+Self-hosted models are for customers capable of managing their own LLM infrastructure. GitLab provides the option to connect supported models to LLM features. The self-hosted models feature provides the model-specific prompts and GitLab Duo feature support. For more information about this offering, see the [subscription page](../../subscriptions/self_managed/index.md).
+
+## Advantages
+
+- Choice of GitLab-approved LLM models.
+- Ability to keep all data and request/response logs within their own domain.
+- Ability to select specific GitLab Duo Features for their users.
+- Non-reliance on the GitLab shared AI Gateway.
+
+## Self-hosted models compared to the default GitLab AI vendor architecture
+
+```mermaid
+sequenceDiagram
+    actor User
+    participant GitLab
+    participant AIGateway as AI Gateway
+    participant SelfHostedModel as Self Hosted Model
+    participant GitLabAIVendor as GitLab AI Vendor
+
+    User ->> GitLab: Send request
+    GitLab ->> GitLab: Check if self-hosted model is configured
+    alt Self-hosted model configured
+        GitLab ->> AIGateway: Create prompt and send request
+        AIGateway ->> SelfHostedModel: Perform API request to AI model
+        SelfHostedModel -->> AIGateway: Respond to the prompt
+        AIGateway -->> GitLab: Forward AI response
+    else
+        GitLab ->> AIGateway: Create prompt and send request
+        AIGateway ->> GitLabAIVendor: Perform API request to AI model
+        GitLabAIVendor -->> AIGateway: Respond to the prompt
+        AIGateway -->> GitLab: Forward AI response
+    end
+    GitLab -->> User: Forward AI response
+```
+
+With AI self-hosted models, your GitLab instance, AI Gateway, and self-hosted AI model are fully isolated within your own environment. This setup ensures complete privacy and high security for using AI features, with no reliance on public services.
+
+## Get started
+
+To deploy a self-hosted large language model:
+
+1. [Install your self-hosted model deployment infrastructure](install_infrastructure.md) and connect it to your GitLab instance.
+1. [Configure your self-hosted model deployment](configure_duo_features.md) using instance and group-level settings.
diff --git a/doc/administration/self_hosted_models/install_infrastructure.md b/doc/administration/self_hosted_models/install_infrastructure.md
new file mode 100644
index 0000000000000000000000000000000000000000..c545baaf4ba574be7af5f80f94216ee5ed681d22
--- /dev/null
+++ b/doc/administration/self_hosted_models/install_infrastructure.md
@@ -0,0 +1,152 @@
+---
+stage: AI-Powered
+group: Custom Models
+description: Set up your self-hosted model deployment infrastructure.
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Set up your self-hosted model deployment infrastructure
+
+DETAILS:
+**Tier:** Premium, Ultimate
+**Offering:** Self-managed
+
+By self-hosting the model, the AI Gateway, and the GitLab instance, you ensure no calls are made to external architecture, providing maximum levels of security.
+
+To set up your self-hosted model deployment infrastructure:
+
+1. Install the large language model (LLM) serving infrastructure.
+1. Install the GitLab AI Gateway.
+
+## Step 1: Install LLM serving infrastructure
+
+Install one of the following GitLab-approved LLM models:
+
+- [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
+- [Mixtral-8x7B-instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
+- [Mixtral 8x22B](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1).
+- [CodeGemma 7B IT](https://huggingface.co/google/codegemma-7b-it).
+- [CodeGemma 2B](https://huggingface.co/google/codegemma-2b).
+
+### Recommended serving architectures
+
+For Mistral, you should use one of the following architectures:
+
+- [vLLM](https://docs.vllm.ai/en/stable/)
+- [TensorRT-LLM](https://docs.mistral.ai/deployment/self-deployment/overview/)
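+
+As a rough sketch of what serving with vLLM can look like (this assumes a GPU host with Python and vLLM available; the model name and port are examples):
+
+```shell
+pip install vllm
+
+# Start an OpenAI-compatible API server for Mistral-7B
+python -m vllm.entrypoints.openai.api_server \
+  --model mistralai/Mistral-7B-v0.1 \
+  --host 0.0.0.0 \
+  --port 8000
+```
+
+The resulting endpoint (for example, `http://your-model-host:8000`) is what you later enter in the **Endpoint** field when configuring the model in GitLab.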
+
+## Step 2: Install the GitLab AI Gateway
+
+### Install using Docker
+
+The GitLab AI Gateway Docker image contains all necessary code and dependencies in a single container.
+
+Find the GitLab official Docker image at:
+
+- [AI Gateway Docker image on Container Registry](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/container_registry/).
+- [Release process for self-hosted AI Gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/docs/release.md).
+
+WARNING:
+Docker for Windows is not officially supported. There are known issues with volume
+permissions, and potentially other unknown issues. If you are trying to run on Docker
+for Windows, see the [getting help page](https://about.gitlab.com/get-help/) for links
+to community resources (such as IRC or forums) to seek help from other users.
+
+#### Set up the volumes location
+
+Create a directory where the logs will reside on the Docker host. It can be under your user's home directory (for example
+`~/gitlab-agw`), or in a directory like `/srv/gitlab-agw`. To create that directory, run:
+
+```shell
+sudo mkdir -p /srv/gitlab-agw
+```
+
+If you're running Docker with a user other than `root`, ensure appropriate
+permissions have been granted to that directory.
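+
+For example, if the container runs as a non-root user, you might grant ownership of the directory to that user. This is a sketch; the UID and GID are assumptions that depend on your Docker setup:
+
+```shell
+# Hypothetical: grant a non-root service account (UID/GID 1000) access to the logs directory
+sudo chown -R 1000:1000 /srv/gitlab-agw
+```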
+
+#### Find the AI Gateway Release
+
+In a production environment, you should pin your deployment to a specific
+GitLab AI Gateway release. Find the release to use in [GitLab AI Gateway Releases](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/releases), for example, `7d5f58e1`, where `7d5f58e1` is the released AI Gateway version.
+
+To run the latest stable release instead, use the `latest` tag:
+
+```shell
+docker run -p 5000:5000 registry.gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/model-gateway:latest
+```
+
+NOTE:
+Multi-arch images are not yet supported; only `linux/amd64` is available. If you are running on Apple silicon, add `--platform linux/amd64` to the `docker run` command.
+
+#### Prerequisites
+
+To use the GitLab Docker images:
+
+- You must [install Docker](https://docs.docker.com/engine/install/#server).
+- You should use a valid hostname accessible within your network. Do not use `localhost`.
+
+#### Install using Docker Engine
+
+1. For the AI Gateway to know where the GitLab instance is located so it can access the API, set the environment variable `AIGW_GITLAB_API_URL`.
+
+   For example, run:
+
+   ```shell
+   AIGW_GITLAB_API_URL=https://YOUR_GITLAB_DOMAIN/api/v4/
+   ```
+
+1. For the GitLab instance to know where AI Gateway is located so it can access the gateway, set the environment variable `AI_GATEWAY_URL`
+   inside your GitLab instance environment variables.
+
+   For example, run:
+
+   ```shell
+   AI_GATEWAY_URL=https://YOUR_AI_GATEWAY_DOMAIN
+   ```
+
+1. After you've set the environment variables, run the image. For example:
+
+   ```shell
+   docker run -d --name gitlab-aigw -p 5000:5000 registry.gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/model-gateway:latest
+   ```
+
+   This command downloads and starts an AI Gateway container named `gitlab-aigw`, and
+   [publishes the port](https://docs.docker.com/network/#published-ports) needed to
+   access the AI Gateway API.
+
+1. Track the initialization process:
+
+   ```shell
+   sudo docker logs -f gitlab-aigw
+   ```
+
+After starting the container, visit `gitlab-aigw.example.com`. It might take
+a while before the Docker container starts to respond to queries.
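+
+To check that the gateway is responding, you can query it from the Docker host. The health-check path shown here is an assumption and might differ between releases:
+
+```shell
+curl "http://localhost:5000/monitoring/healthz"
+```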
+
+#### Upgrade
+
+To upgrade the AI Gateway, pull the newest Docker image tag.
+
+1. Stop the running container:
+
+   ```shell
+   sudo docker stop gitlab-aigw
+   ```
+
+1. Remove the existing container:
+
+   ```shell
+   sudo docker rm gitlab-aigw
+   ```
+
+1. Pull the new image:
+
+   ```shell
+   docker pull registry.gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/model-gateway:latest
+   ```
+
+1. Ensure that the environment variables are all set correctly, then start a new container from the updated image.
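+
+Starting the new container can reuse the container name and port mapping referenced in the installation steps. A sketch, assuming the `latest` tag:
+
+```shell
+docker run -d --name gitlab-aigw -p 5000:5000 registry.gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/model-gateway:latest
+```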
+
+### Alternative installation methods
+
+For information on alternative ways to install the AI Gateway, see [issue 463773](https://gitlab.com/gitlab-org/gitlab/-/issues/463773).