diff --git a/doc/ci/docker/authenticate_registry.md b/doc/ci/docker/authenticate_registry.md
index 52cc3071fda05312d9c868e4cd30e7e9a517d550..28edb5140dd798b6d1fe757880e92f97173a0f29 100644
--- a/doc/ci/docker/authenticate_registry.md
+++ b/doc/ci/docker/authenticate_registry.md
@@ -18,9 +18,9 @@ login`:
 
 ```yaml
 default:
-  image: docker:20.10.16
+  image: docker:24.0.5
   services:
-    - docker:20.10.16-dind
+    - docker:24.0.5-dind
 
 variables:
   DOCKER_TLS_CERTDIR: "/certs"
@@ -42,7 +42,7 @@ empty or remove it.
 If you are an administrator for GitLab Runner, you can mount a file
 with the authentication configuration to `~/.docker/config.json`.
 Then every job that the runner picks up is already authenticated. If you
-are using the official `docker:20.10.16` image, the home directory is
+are using the official `docker:24.0.5` image, the home directory is
 under `/root`.
 
 If you mount the configuration file, any `docker` command
@@ -126,9 +126,9 @@ The same commands apply for any solution you implement.
 
 ```yaml
 default:
-  image: docker:20.10.16
+  image: docker:24.0.5
   services:
-    - docker:20.10.16-dind
+    - docker:24.0.5-dind
 
 variables:
   DOCKER_TLS_CERTDIR: "/certs"
diff --git a/doc/ci/docker/docker_layer_caching.md b/doc/ci/docker/docker_layer_caching.md
index 861bea70cb7658143a13a4c27cf1bcc1016fa9fa..b34b58ce9641d44537cc3f95f2eeb346e1822c0c 100644
--- a/doc/ci/docker/docker_layer_caching.md
+++ b/doc/ci/docker/docker_layer_caching.md
@@ -29,9 +29,9 @@ This example `.gitlab-ci.yml` file shows how to use Docker caching:
 
 ```yaml
 default:
-  image: docker:20.10.16
+  image: docker:24.0.5
   services:
-    - docker:20.10.16-dind
+    - docker:24.0.5-dind
   before_script:
     - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
 
diff --git a/doc/ci/docker/using_docker_build.md b/doc/ci/docker/using_docker_build.md
index ef7491cb6098b46dc0bd5a733bf70d5ae7744ce6..6ab0b24f09bc2c939aab079b86a697d666fbea97 100644
--- a/doc/ci/docker/using_docker_build.md
+++ b/doc/ci/docker/using_docker_build.md
@@ -80,7 +80,8 @@ For more information, see [security of the `docker` group](https://blog.zopyx.co
 
 "Docker-in-Docker" (`dind`) means:
 
-- Your registered runner uses the [Docker executor](https://docs.gitlab.com/runner/executors/docker.html) or the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes.html).
+- Your registered runner uses the [Docker executor](https://docs.gitlab.com/runner/executors/docker.html) or
+  the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes.html).
 - The executor uses a [container image of Docker](https://hub.docker.com/_/docker/), provided
   by Docker, to run your CI/CD jobs.
 
@@ -90,7 +91,7 @@ the job script in context of the image in privileged mode.
 You should use Docker-in-Docker with TLS enabled,
 which is supported by [GitLab.com shared runners](../runners/index.md).
 
-You should always pin a specific version of the image, like `docker:20.10.16`.
+You should always pin a specific version of the image, like `docker:24.0.5`.
 If you use a tag like `docker:latest`, you have no control over which version is used.
 This can cause incompatibility problems when new versions are released.
 
@@ -121,12 +122,12 @@ To use Docker-in-Docker with TLS enabled:
      --registration-token REGISTRATION_TOKEN \
      --executor docker \
      --description "My Docker Runner" \
-     --docker-image "docker:20.10.16" \
+     --docker-image "docker:24.0.5" \
      --docker-privileged \
      --docker-volumes "/certs/client"
    ```
 
-   - This command registers a new runner to use the `docker:20.10.16` image.
+   - This command registers a new runner to use the `docker:24.0.5` image (if none is specified at the job level).
      To start the build and service containers, it uses the `privileged` mode.
      If you want to use Docker-in-Docker,
      you must always use `privileged = true` in your Docker containers.
@@ -143,7 +144,7 @@ To use Docker-in-Docker with TLS enabled:
      executor = "docker"
      [runners.docker]
        tls_verify = false
-       image = "docker:20.10.16"
+       image = "docker:24.0.5"
        privileged = true
        disable_cache = false
        volumes = ["/certs/client", "/cache"]
@@ -153,13 +154,13 @@ To use Docker-in-Docker with TLS enabled:
    ```
 
 1. You can now use `docker` in the job script. You should include the
-   `docker:20.10.16-dind` service:
+   `docker:24.0.5-dind` service:
 
    ```yaml
    default:
-     image: docker:20.10.16
+     image: docker:24.0.5
      services:
-       - docker:20.10.16-dind
+       - docker:24.0.5-dind
      before_script:
        - docker info
 
@@ -202,7 +203,7 @@ Assuming that the runner's `config.toml` is similar to:
   executor = "docker"
   [runners.docker]
     tls_verify = false
-    image = "docker:20.10.16"
+    image = "docker:24.0.5"
     privileged = true
     disable_cache = false
     volumes = ["/cache"]
@@ -212,13 +213,13 @@ Assuming that the runner's `config.toml` is similar to:
 ```
 
 You can now use `docker` in the job script. You should include the
-`docker:20.10.16-dind` service:
+`docker:24.0.5-dind` service:
 
 ```yaml
 default:
-  image: docker:20.10.16
+  image: docker:24.0.5
   services:
-    - docker:20.10.16-dind
+    - docker:24.0.5-dind
   before_script:
     - docker info
 
@@ -276,13 +277,13 @@ To use Docker-in-Docker with TLS enabled in Kubernetes:
    ```
 
 1. You can now use `docker` in the job script. You should include the
-   `docker:20.10.16-dind` service:
+   `docker:24.0.5-dind` service:
 
    ```yaml
    default:
-     image: docker:20.10.16
+     image: docker:24.0.5
      services:
-       - docker:20.10.16-dind
+       - docker:24.0.5-dind
      before_script:
        - docker info
 
@@ -330,7 +331,7 @@ Docker-in-Docker is the recommended configuration, but you should be aware of th
 - **Storage drivers**: By default, earlier versions of Docker use the `vfs` storage driver,
   which copies the file system for each job. Docker 17.09 and later use `--storage-driver overlay2`, which is
   the recommended storage driver. See [Using the OverlayFS driver](#use-the-overlayfs-driver) for details.
-- **Root file system**: Because the `docker:20.10.16-dind` container and the runner container do not share their
+- **Root file system**: Because the `docker:24.0.5-dind` container and the runner container do not share their
   root file system, you can use the job's working directory as a mount point for
   child containers. For example, if you have files you want to share with a
   child container, you could create a subdirectory under `/builds/$CI_PROJECT_PATH`
@@ -352,7 +353,7 @@ container. Docker is then available in the context of the image.
 
 If you bind the Docker socket and you are
 [using GitLab Runner 11.11 or later](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/1261),
-you can no longer use `docker:20.10.16-dind` as a service. Volume bindings also affect services,
+you can no longer use `docker:24.0.5-dind` as a service. Volume bindings also affect services,
 making them incompatible.
 
 To make Docker available in the context of the image, you need to mount
@@ -369,7 +370,7 @@ Your configuration should look similar to this example:
   executor = "docker"
   [runners.docker]
     tls_verify = false
-    image = "docker:20.10.16"
+    image = "docker:24.0.5"
     privileged = false
     disable_cache = false
     volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
@@ -385,7 +386,7 @@ sudo gitlab-runner register -n \
   --registration-token REGISTRATION_TOKEN \
   --executor docker \
   --description "My Docker Runner" \
-  --docker-image "docker:20.10.16" \
+  --docker-image "docker:24.0.5" \
   --docker-volumes /var/run/docker.sock:/var/run/docker.sock
 ```
 
@@ -408,7 +409,7 @@ mirror:
 
 ```yaml
 services:
-  - name: docker:20.10.16-dind
+  - name: docker:24.0.5-dind
     command: ["--registry-mirror", "https://registry-mirror.example.com"]  # Specify the registry mirror to use
 ```
 
@@ -431,7 +432,7 @@ Docker:
     ...
     privileged = true
     [[runners.docker.services]]
-      name = "docker:20.10.16-dind"
+      name = "docker:24.0.5-dind"
       command = ["--registry-mirror", "https://registry-mirror.example.com"]
 ```
 
@@ -445,7 +446,7 @@ Kubernetes:
     ...
     privileged = true
     [[runners.kubernetes.services]]
-      name = "docker:20.10.16-dind"
+      name = "docker:24.0.5-dind"
       command = ["--registry-mirror", "https://registry-mirror.example.com"]
 ```
 
@@ -552,12 +553,12 @@ the implications of this method are:
    docker run --rm -t -i -v $(pwd)/src:/home/app/src test-image:latest run_app_tests
    ```
 
-You do not need to include the `docker:20.10.16-dind` service, like you do when
+You do not need to include the `docker:24.0.5-dind` service, like you do when
 you use the Docker-in-Docker executor:
 
 ```yaml
 default:
-  image: docker:20.10.16
+  image: docker:24.0.5
   before_script:
     - docker info
 
@@ -638,7 +639,9 @@ To build Docker images without enabling privileged mode on the runner, you can
 use one of these alternatives:
 
 - [`kaniko`](using_kaniko.md).
-- [`buildah`](https://github.com/containers/buildah). There is a [known issue](https://github.com/containers/buildah/issues/4049) with running as non-root, you might need this [workaround](https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html#configure-setfcap) if you are using OpenShift Runner.
+- [`buildah`](https://github.com/containers/buildah). There is a [known issue](https://github.com/containers/buildah/issues/4049)
+  with running as non-root; you might need this [workaround](https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html#configure-setfcap)
+  if you are using OpenShift Runner.
 
 For example, with `buildah`:
 
@@ -647,7 +650,7 @@ For example, with `buildah`:
 
 build:
   stage: build
-  image: quay.io/buildah/stable
+  image: quay.io/buildah/stable:v1.31.0
   variables:
     # Use vfs with buildah. Docker offers overlayfs as a default, but buildah
     # cannot stack overlayfs on top of another overlayfs filesystem.
@@ -702,9 +705,9 @@ This issue can occur when the service's image name
 
 ```yaml
 default:
-  image: docker:20.10.16
+  image: docker:24.0.5
   services:
-    - registry.hub.docker.com/library/docker:20.10.16-dind
+    - registry.hub.docker.com/library/docker:24.0.5-dind
 ```
 
 A service's hostname is [derived from the full image name](../../ci/services/index.md#accessing-the-services).
@@ -713,9 +716,9 @@ To allow service resolution and access, add an explicit alias for the service na
 
 ```yaml
 default:
-  image: docker:20.10.16
+  image: docker:24.0.5
   services:
-    - name: registry.hub.docker.com/library/docker:20.10.16-dind
+    - name: registry.hub.docker.com/library/docker:24.0.5-dind
       alias: docker
 ```
 
@@ -783,3 +786,24 @@ This indicates the GitLab Runner does not have permission to start the
 1. Check that `privileged = true` is set in the `config.toml`.
 1. Make sure the CI job has the right Runner tags to use these
 privileged runners.
+
+### Error: `cgroups: cgroup mountpoint does not exist: unknown`
+
+There is a known incompatibility introduced by Docker Engine 20.10.
+
+When the host uses Docker Engine 20.10 or later, a `docker:dind` service in a version older than 20.10 does
+not work as expected.
+
+The service itself starts without problems, but trying to build a container image results in the error:
+
+```plaintext
+cgroups: cgroup mountpoint does not exist: unknown
+```
+
+To resolve this issue, update the `docker:dind` container to at least version 20.10,
+for example `docker:24.0.5-dind`.
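+
+A minimal sketch of a job that pins both the Docker CLI image and the `dind` service to a
+version compatible with a Docker Engine 20.10 or later host (the tag shown is an example;
+pin the version you have validated):
+
+```yaml
+default:
+  image: docker:24.0.5
+  services:
+    - docker:24.0.5-dind
+```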
+
+The opposite configuration (a `docker:24.0.5-dind` service with Docker Engine 19.03.x or older on the
+host) works without problems. As a best practice, you should frequently test and update your job
+environment to the latest versions. This brings new features and improved security, and in this specific
+case makes upgrades of the underlying Docker Engine on the runner's host transparent to the job.
diff --git a/doc/ci/docker/using_kaniko.md b/doc/ci/docker/using_kaniko.md
index 568f4977c2f51001eb700a9c88a9d2dcefaa58b9..8ab13c7154d094737327306f1c556dc94648ba4c 100644
--- a/doc/ci/docker/using_kaniko.md
+++ b/doc/ci/docker/using_kaniko.md
@@ -58,7 +58,7 @@ project's Container Registry while tagging it with the Git tag:
 build:
   stage: build
   image:
-    name: gcr.io/kaniko-project/executor:v1.9.0-debug
+    name: gcr.io/kaniko-project/executor:v1.14.0-debug
     entrypoint: [""]
   script:
     - /kaniko/executor
@@ -96,7 +96,7 @@ build:
     https_proxy: <your-proxy>
     no_proxy: <your-no-proxy>
   image:
-    name: gcr.io/kaniko-project/executor:v1.9.0-debug
+    name: gcr.io/kaniko-project/executor:v1.14.0-debug
     entrypoint: [""]
   script:
     - /kaniko/executor
@@ -158,3 +158,25 @@ on what other GitLab CI patterns are demonstrated are available at the project p
 If you receive this error, it might be due to an outside proxy. Setting the `http_proxy`
 and `https_proxy` [environment variables](../../administration/packages/container_registry.md#running-the-docker-daemon-with-a-proxy)
 can fix the problem.
+
+### Error: `kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue`
+
+There is a known incompatibility introduced by Docker Engine 20.10.
+
+When the host uses Docker Engine 20.10 or later, the `gcr.io/kaniko-project/executor:debug` image in a version
+older than v1.9.0 does not work as expected.
+
+When you try to build the image, Kaniko fails with:
+
+```plaintext
+kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue
+```
+
+To resolve this issue, update the `gcr.io/kaniko-project/executor:debug` container to at least version v1.9.0,
+for example `gcr.io/kaniko-project/executor:v1.14.0-debug`.
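+
+A minimal job sketch with the executor image pinned to a compatible version (the tag is an
+example; pin the version you have validated, and adjust the destination to your registry):
+
+```yaml
+build:
+  stage: build
+  image:
+    name: gcr.io/kaniko-project/executor:v1.14.0-debug
+    entrypoint: [""]
+  script:
+    - /kaniko/executor
+      --context "${CI_PROJECT_DIR}"
+      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
+      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
+```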
+
+The opposite configuration (the `gcr.io/kaniko-project/executor:v1.14.0-debug` image with Docker Engine
+19.03.x or older on the host) works without problems. As a best practice, you should frequently test and
+update your job environment to the latest versions. This brings new features and improved security, and in
+this specific case makes upgrades of the underlying Docker Engine on the runner's host transparent to the job.