    • Version 0.8.3 · 1ce06acc
      Jacob Vosmaer authored
    • Merge branch 'queue-requests' into 'master' · f3f03271
      Jacob Vosmaer authored
      Allow queueing of API requests and limit the capacity given to the API
      
      This MR implements API request queueing on the Workhorse side.
      It is meant to give better control over the capacity given to different resources.
      
      This is meant to solve: https://gitlab.com/gitlab-com/infrastructure/issues/320.
      
      It should also make a large number of requests easier to handle: https://gitlab.com/gitlab-org/gitlab-ce/issues/21698
      
      It fulfils these requirements:
      - allow the capacity given to the API to be limited, specifically to process at most N requests at a time,
      - allow API requests to be queued and timed out, slowing down the intake of API calls so that Unicorn can process the current API requests in a reasonable time
      
      The implementation has constant cost and is dead simple.
      It should not inflate the memory or CPU usage of Workhorse.
      
      It works like this (a Go sketch follows the list):
      - we hook into request processing,
      - we try to acquire a slot for the request by pushing to a buffered channel; the buffered channel is what limits the number of requests processed at a time,
      - if we cannot push to the channel, all concurrency slots are in use and we have to wait,
      - we block on the buffered channel until a slot frees up, while also waiting on a timer so the wait can time out,
      - we return 502 if the timeout occurs,
      - we process the request once we manage to push to the channel,
      - we pop from the channel when we finish processing the request, allowing other requests to proceed,
      - if there are already too many requests waiting (over `apiQueueLimit`) we return 429,
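
      A minimal sketch of this queueing idea in Go is shown below. It is illustrative only: the names (`limiter`, `queueRequests`, `errTooManyRequests`, `errQueueTimeout`) and the exact channel layout are assumptions, not Workhorse's actual code; only the behaviour described above (bounded concurrency, bounded backlog, 502 on queue timeout, 429 when the backlog is full) is taken from this MR. A `main` function showing the wiring follows the parameter list below.

      ```go
      package main

      import (
          "errors"
          "net/http"
          "time"
      )

      var (
          errTooManyRequests = errors.New("too many API requests queued")
          errQueueTimeout    = errors.New("timed out waiting for a free API slot")
      )

      // limiter bounds the number of concurrently processed requests (busyCh)
      // and the total number of in-flight requests, busy plus queued (queuedCh).
      type limiter struct {
          busyCh   chan struct{} // capacity: apiLimit
          queuedCh chan struct{} // capacity: apiLimit + apiQueueLimit
          timeout  time.Duration // apiQueueTimeout
      }

      func newLimiter(apiLimit, apiQueueLimit int, timeout time.Duration) *limiter {
          return &limiter{
              busyCh:   make(chan struct{}, apiLimit),
              queuedCh: make(chan struct{}, apiLimit+apiQueueLimit),
              timeout:  timeout,
          }
      }

      // acquire reserves a place in the backlog and then waits for a
      // processing slot, giving up after the configured queue timeout.
      func (l *limiter) acquire() error {
          select {
          case l.queuedCh <- struct{}{}:
              // We fit in the backlog; keep waiting for a slot.
          default:
              return errTooManyRequests // backlog full -> 429
          }

          timer := time.NewTimer(l.timeout)
          defer timer.Stop()

          select {
          case l.busyCh <- struct{}{}:
              return nil // we own a processing slot
          case <-timer.C:
              <-l.queuedCh // give back our backlog place
              return errQueueTimeout // waited too long -> 502
          }
      }

      // release frees the processing slot and the backlog place.
      func (l *limiter) release() {
          <-l.busyCh
          <-l.queuedCh
      }

      // queueRequests wraps a handler so every request must pass the limiter.
      func queueRequests(h http.Handler, l *limiter) http.Handler {
          return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              switch err := l.acquire(); err {
              case nil:
                  defer l.release()
                  h.ServeHTTP(w, r)
              case errTooManyRequests:
                  http.Error(w, err.Error(), http.StatusTooManyRequests) // 429
              default:
                  http.Error(w, err.Error(), http.StatusBadGateway) // 502
              }
          })
      }
      ```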
      
      This introduces 3 extra parameters (off by default):
      - `apiLimit` - the maximum number of concurrent API requests,
      - `apiQueueLimit` - the maximum backlog of queued requests,
      - `apiQueueTimeout` - the duration after which requests that sit too long in the queue are timed out.
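
      Continuing the Go sketch above (same file), here is one hypothetical way the three parameters could be wired together; the route, port and numbers are arbitrary examples, not defaults from this MR.

      ```go
      // Hypothetical wiring: at most 10 concurrent API requests, a backlog of
      // up to 50 queued requests, and a 30 second queue timeout, i.e. roughly
      // apiLimit=10, apiQueueLimit=50, apiQueueTimeout=30s.
      func main() {
          api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              w.Write([]byte("API response\n"))
          })

          limited := queueRequests(api, newLimiter(10, 50, 30*time.Second))

          // Only the API route is queued; other routes bypass the limiter.
          http.Handle("/api/", limited)
          http.ListenAndServe(":8181", nil)
      }
      ```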
      
      This makes it possible to:
      - limit the capacity used by the API to a share of the available workers, e.g. allowing the API to use at most 25% of capacity,
      - keep processing requests, just more slowly, when the backend is slow,
      - manage API calls better than plain rate limiting of requests would,
      - automatically back off all services that use the API, by slowing them down.
      
      
      See merge request !65