diff --git a/doc/administration/sidekiq/sidekiq_memory_killer.md b/doc/administration/sidekiq/sidekiq_memory_killer.md
index 3a7a61bf6f4444f961de78d982932977666b57f6..6c93b4409532780875b7d3306632664e58bc366e 100644
--- a/doc/administration/sidekiq/sidekiq_memory_killer.md
+++ b/doc/administration/sidekiq/sidekiq_memory_killer.md
@@ -6,22 +6,12 @@ title: Reducing memory use
 ---
 
 The Sidekiq memory killer automatically manages background job processes that
-consume too much memory.
+consume too much memory. This feature monitors worker processes and restarts them before
+the Linux memory killer steps in, which allows background jobs to run to completion
+before gracefully shutting down. By logging these events, we make it easier to
+identify jobs that lead to high memory use.
 
-This feature monitors worker processes and restarts them before they crash your instance.
-Background jobs continue processing with minimal interruption.
-
-The detailed logging makes troubleshooting easier by identifying which jobs trigger
-high memory usage.
-
-## Memory management
-
-The GitLab Rails application code suffers from memory leaks. For web requests
-this problem is made manageable using a [supervision thread](../operations/puma.md#reducing-memory-use)
-that automatically restarts workers if they exceed a given resident set size (RSS) threshold
-for a certain amount of time.
-We use the same approach to the Sidekiq processes used by GitLab
-to process background jobs.
+## How we monitor Sidekiq memory
 
 GitLab monitors the available RSS limit by default only for Linux package or Docker installations.
 The reason for this is that GitLab relies on runit to restart Sidekiq after a memory-induced shutdown, and self-compiled and Helm chart
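
The rewritten section above refers to an RSS limit that the memory killer enforces before restarting a Sidekiq worker. As a rough illustration only, the sketch below shows how such a limit is commonly tuned on a Linux package installation; the `SIDEKIQ_MEMORY_KILLER_MAX_RSS` environment variable and the `gitlab_rails['env']` setting are assumptions drawn from general Linux package conventions, not values taken from this diff.

```ruby
# /etc/gitlab/gitlab.rb — illustrative sketch only.
# SIDEKIQ_MEMORY_KILLER_MAX_RSS (assumed, in kilobytes) is the RSS threshold
# above which the memory killer restarts a Sidekiq worker process.
gitlab_rails['env'] = {
  # Restart a worker once its resident set size exceeds roughly 2 GB.
  "SIDEKIQ_MEMORY_KILLER_MAX_RSS" => "2000000"
}
```

Under these assumptions, the change would take effect after running `sudo gitlab-ctl reconfigure`, with runit restarting Sidekiq as described in the context line above.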