diff --git a/doc/administration/docs_self_host.md b/doc/administration/docs_self_host.md
index 095baafab7021007da8adaa231d2d82bcada1b4e..932ffb872883f0c0b6253006c27c3b5c850b3d9a 100644
--- a/doc/administration/docs_self_host.md
+++ b/doc/administration/docs_self_host.md
@@ -74,7 +74,7 @@ To run the GitLab product documentation website in a Docker container:
    docker-compose up -d
    ```
 
-1. Visit `http://0.0.0.0:4000` to view the documentation website and verify that 
+1. Visit `http://0.0.0.0:4000` to view the documentation website and verify that
    it works.
 1. [Redirect the help links to the new Docs site](#redirect-the-help-links-to-the-new-docs-site).
 
diff --git a/doc/administration/instance_limits.md b/doc/administration/instance_limits.md
index 7d10fdf770fef98b0bc4266bfb18468609da84e1..2b2eefdb17cb00dd0a6a6aac6b10502f55a7da46 100644
--- a/doc/administration/instance_limits.md
+++ b/doc/administration/instance_limits.md
@@ -290,7 +290,7 @@ Plan.default.actual_limits.update!(group_hooks: 100)
 
 Set the limit to `0` to disable it.
 
-The default maximum number of webhooks is `100` per project and `50` per group. Webhooks in a child group do not count towards the webhook limit of their parent group. 
+The default maximum number of webhooks is `100` per project and `50` per group. Webhooks in a child group do not count towards the webhook limit of their parent group.
 
 For GitLab.com, see the [webhook limits for GitLab.com](../user/gitlab_com/index.md#webhooks).
 
diff --git a/doc/architecture/blueprints/clickhouse_usage/index.md b/doc/architecture/blueprints/clickhouse_usage/index.md
index 390097a4cca28122fa9c1e9a21bcfbb8c2c34791..2781ea15a55cd595798ffc15a9fc6dd2db978a38 100644
--- a/doc/architecture/blueprints/clickhouse_usage/index.md
+++ b/doc/architecture/blueprints/clickhouse_usage/index.md
@@ -34,11 +34,11 @@ As ClickHouse has already been selected for use at GitLab, our main goal now is
 The following are links to proposals in the form of blueprints that address technical challenges to using ClickHouse across a wide variety of features.
 
 1. Scalable data ingestion pipeline.
-   - How do we ingest large volumes of data from GitLab into ClickHouse either directly or by replicating existing data? 
+   - How do we ingest large volumes of data from GitLab into ClickHouse either directly or by replicating existing data?
 1. Supporting ClickHouse for self-managed installations.
    - For which use-cases and scales does it make sense to run ClickHouse for self-managed and what are the associated costs?
    - How can we best support self-managed installation of ClickHouse for different types/sizes of environments?
-   - Consider using the [Opstrace ClickHouse operator](https://gitlab.com/gitlab-org/opstrace/opstrace/-/tree/main/clickhouse-operator) as the basis for a canonical distribution. 
+   - Consider using the [Opstrace ClickHouse operator](https://gitlab.com/gitlab-org/opstrace/opstrace/-/tree/main/clickhouse-operator) as the basis for a canonical distribution.
    - Consider exposing Clickhouse backend as [GitLab Plus](https://gitlab.com/groups/gitlab-org/-/epics/308) to combine benefits of using self-managed instance and GitLab-managed database.
    - Should we develop abstractions for querying and data ingestion to avoid requiring ClickHouse for small-scale installations?
 1. Abstraction layer for features to leverage both ClickHouse or PostreSQL.
diff --git a/doc/development/contributing/merge_request_workflow.md b/doc/development/contributing/merge_request_workflow.md
index e384b41f6f2e67c303c48792c7d34c069e2673c4..f39d93a39bcf2feeeebcb3d1f0a0bf04a07242d7 100644
--- a/doc/development/contributing/merge_request_workflow.md
+++ b/doc/development/contributing/merge_request_workflow.md
@@ -201,7 +201,7 @@ Example commit message template that can be used on your machine that embodies t
 To make sure that your merge request can be approved, please ensure that it meets
 the contribution acceptance criteria below:
 
-1. The change is as small as possible. 
+1. The change is as small as possible.
 1. If the merge request contains more than 500 changes:
    - Explain the reason
    - Mention a maintainer
diff --git a/doc/subscriptions/gitlab_dedicated/index.md b/doc/subscriptions/gitlab_dedicated/index.md
index b196aae549a277f74f8feeaa44b94c5bcbe387ce..0c2b6375d7aba188e322a58ecca4b87da60762f4 100644
--- a/doc/subscriptions/gitlab_dedicated/index.md
+++ b/doc/subscriptions/gitlab_dedicated/index.md
@@ -62,7 +62,7 @@ During onboarding, you can specify an AWS KMS encryption key stored in your own
 
 GitLab Dedicated offers the following [compliance certifications](https://about.gitlab.com/security/):
 
-- SOC 2 Type 1 Report (Security and Confidentiality criteria) 
+- SOC 2 Type 1 Report (Security and Confidentiality criteria)
 - ISO/IEC 27001:2013
 - ISO/IEC 27017:2015
 - ISO/IEC 27018:2019
diff --git a/doc/user/application_security/dast/browser_based.md b/doc/user/application_security/dast/browser_based.md
index d9938aaa94ac3f9c5c395bf9037a5178db13ddb7..88be88ad00e52f1c683427d877f15d61c6fb9016 100644
--- a/doc/user/application_security/dast/browser_based.md
+++ b/doc/user/application_security/dast/browser_based.md
@@ -172,7 +172,7 @@ For authentication CI/CD variables, see [Authentication](authentication.md).
 | `DAST_BROWSER_ACTION_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `7s` | The maximum amount of time to wait for a browser to complete an action. |
 | `DAST_BROWSER_ALLOWED_HOSTS` | List of strings | `site.com,another.com` | Hostnames included in this variable are considered in scope when crawled. By default the `DAST_WEBSITE` hostname is included in the allowed hosts list. Headers set using `DAST_REQUEST_HEADERS` are added to every request made to these hostnames. |
 | `DAST_BROWSER_COOKIES` | dictionary | `abtesting_group:3,region:locked` | A cookie name and value to be added to every request. |
-| `DAST_BROWSER_CRAWL_GRAPH` | boolean | `true` | Set to `true` to generate an SVG graph of navigation paths visited during crawl phase of the scan. You must also define `gl-dast-crawl-graph.svg` as a CI job artifact to be able to access the generated graph. | 
+| `DAST_BROWSER_CRAWL_GRAPH` | boolean | `true` | Set to `true` to generate an SVG graph of navigation paths visited during crawl phase of the scan. You must also define `gl-dast-crawl-graph.svg` as a CI job artifact to be able to access the generated graph. |
 | `DAST_BROWSER_DEVTOOLS_LOG` | string | `Default:messageAndBody,truncate:2000` | Set to log protocol messages between DAST and the Chromium browser. | |
 | `DAST_BROWSER_ELEMENT_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `600ms` | The maximum amount of time to wait for an element before determining it is ready for analysis. |
 | `DAST_BROWSER_EXCLUDED_ELEMENTS` | selector | `a[href='2.html'],css:.no-follow` | Comma-separated list of selectors that are ignored when scanning. |
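
For reference on the `DAST_BROWSER_CRAWL_GRAPH` row touched in the last hunk, here is a minimal `.gitlab-ci.yml` sketch of how the crawl graph might be collected. It assumes the standard `DAST.gitlab-ci.yml` template with browser-based scanning enabled through `DAST_BROWSER_SCAN`; the target URL is a placeholder, and because overriding `artifacts:` on the `dast` job replaces the template's own artifact settings, the usual `gl-dast-report.json` report is re-declared alongside the graph.

```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://example.com"  # placeholder scan target
    DAST_BROWSER_SCAN: "true"            # enable the browser-based analyzer
    DAST_BROWSER_CRAWL_GRAPH: "true"     # emit gl-dast-crawl-graph.svg during the crawl phase
  artifacts:
    paths:
      - gl-dast-crawl-graph.svg          # expose the crawl graph as a job artifact
    reports:
      dast: gl-dast-report.json          # keep exporting the DAST scan report
```

With a configuration along these lines, `gl-dast-crawl-graph.svg` should appear in the job's artifacts after the scan finishes, which is what the table entry's note about defining it as a CI job artifact refers to.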