This project is mirrored from https://gitlab.com/gitlab-org/gitlab.git.

- Sep 29, 2020

Authored by Jacob Vosmaer

- Sep 23, 2020

Authored by Jacob Vosmaer

- Sep 17, 2020

Authored by Jacob Vosmaer

- Sep 14, 2020

Authored by Jacob Vosmaer

- Sep 03, 2020

Authored by Stan Hu

logrus starts a goroutine to write messages to the output buffer, but there is no way to flush those messages to ensure they have been written (https://github.com/sirupsen/logrus/issues/435). To avoid race conditions, drop these log checks from our tests.

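A hedged sketch of the kind of check that was dropped (the test name and log message are invented, not the actual Workhorse code): the code under test logs from its own goroutine while the assertion reads the shared buffer, and with no way to flush logrus the read races with the write.

```go
package upload

import (
	"bytes"
	"strings"
	"testing"

	"github.com/sirupsen/logrus"
)

// TestLogMessage reconstructs the removed pattern: the code under test logs
// on its own goroutine while the assertion reads the shared buffer.
func TestLogMessage(t *testing.T) {
	var buf bytes.Buffer
	logger := logrus.New()
	logger.SetOutput(&buf)

	go logger.Info("upload finished") // written concurrently with the read below

	// Racy: logrus has no Flush(), so nothing guarantees the entry has
	// reached buf before this read; the race detector flags it.
	if !strings.Contains(buf.String(), "upload finished") {
		t.Log("log entry not visible yet; this assertion is inherently flaky")
	}
}
```
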
- Sep 02, 2020

Authored by Erick Bajao

Authored by Stan Hu

This should eliminate a race condition that only occurred in tests, where multiple goroutines attempted to write to the same value when opening a bucket.

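The usual fix for this class of race is to serialize the lazy initialization; a minimal sketch under that assumption (the type and field names are illustrative, and the MR may have fixed it differently):

```go
package objectstore

import (
	"context"
	"sync"

	"gocloud.dev/blob"
)

// bucketHolder opens the bucket exactly once, so concurrent goroutines no
// longer write the shared fields at the same time.
type bucketHolder struct {
	url    string
	once   sync.Once
	bucket *blob.Bucket
	err    error
}

func (h *bucketHolder) open(ctx context.Context) (*blob.Bucket, error) {
	h.once.Do(func() {
		h.bucket, h.err = blob.OpenBucket(ctx, h.url)
	})
	return h.bucket, h.err
}
```
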
- Aug 28, 2020

Authored by Jacob Vosmaer

Authored by Stan Hu

In a normal upload flow, GitLab Rails moves the file to its final destination and removes the temporary file. Workhorse issues a DELETE request to ensure the temporary file is cleaned up. Previously this generated log noise with every upload; now an error message is only logged for errors other than a 404.

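A minimal sketch of that behavior, assuming a plain net/http DELETE (the function and variable names are illustrative, not the actual Workhorse code):

```go
package upload

import (
	"context"
	"fmt"
	"net/http"
)

// deleteTempFile removes the temporary upload object. A 404 means Rails
// already moved or deleted the file, which is the expected case and not
// worth a log line on every upload.
func deleteTempFile(ctx context.Context, url string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodDelete, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode < 300 || resp.StatusCode == http.StatusNotFound {
		return nil // success, or already cleaned up
	}
	return fmt.Errorf("delete temporary file: unexpected status %d", resp.StatusCode)
}
```
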
- Aug 21, 2020

Authored by Jacob Vosmaer

- Aug 19, 2020

Authored by Stan Hu

This merge request introduces a client for Azure Blob Storage in Workhorse. Currently customers wanting to use Azure Blob Storage have to set up a Minio Gateway (https://docs.gitlab.com/charts/advanced/external-object-storage/azure-minio-gateway.html), which isn't ideal because it requires customers to maintain their own proxy server for Azure. We have a number of customers who want native support for Azure Blob Storage.

Unlike AWS and Google, Azure needs an Azure client inside Workhorse to support direct uploads. Standard HTTP transfers with pre-signed URLs via the Azure Put Blob API (https://docs.microsoft.com/en-us/rest/api/storageservices/put-blob) don't work because Azure doesn't support chunked transfer encoding. However, Azure does support uploading files in segments via the Put Block and Put Block List APIs (https://docs.microsoft.com/en-us/rest/api/storageservices/put-block), but this requires a client that can speak that API. Instead of embedding the Microsoft Azure client directly, we use the Go Cloud Development Kit (https://godoc.org/gocloud.dev/blob) to make it easier to add other object storage providers later.

For example, GitLab Rails might return this JSON payload in the `/internal/uploads/authorize` call:

```json
{
  "UseWorkhorseClient": true,
  "ObjectStorage": {
    "Provider": "AzureRM",
    "GoCloudConfig": {
      "URL": "azblob://test-bucket"
    }
  }
}
```

The `azblob` scheme is managed by the Go Cloud `URLMux` (https://godoc.org/gocloud.dev/blob#URLMux). Converting our existing S3 client to Go Cloud should be done later (https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/275).

This change requires https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38882 to work. Omnibus configuration changes are in https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4505. Part of https://gitlab.com/gitlab-org/gitlab/-/issues/25877

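A sketch of the Go Cloud pattern this enables, assuming the default azureblob URL opener (which reads credentials from the AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY environment variables); the URL would come from the `GoCloudConfig.URL` field above:

```go
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/azureblob" // registers the azblob:// scheme with the URLMux
)

func main() {
	ctx := context.Background()

	// Scheme-based dispatch: the same call could open s3:// or gs://
	// buckets once those drivers are imported.
	bucket, err := blob.OpenBucket(ctx, "azblob://test-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// The writer uploads in blocks under the hood, which is how Azure
	// uploads work without chunked transfer encoding.
	w, err := bucket.NewWriter(ctx, "tmp/upload", nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := w.Write([]byte("file contents")); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}
```
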
- Aug 13, 2020

Authored by Stan Hu

Previously it was particularly tricky to add a new object storage method because you had to be aware of how to deal with different goroutines and contexts to handle the Workhorse upload flow (https://docs.gitlab.com/ee/development/uploads.html#direct-upload). In addition, the execution engine to handle this was duplicated across multiple files. The execution engine essentially did the following:

1. Set up an upload context with a deadline
2. Record upload metrics
3. Initialize cleanup functions
4. Initiate the upload
5. Validate the upload ETag
6. Do cleanup (e.g. delete the temporary file)

To reduce code duplication and to make it easier to add new object stores, the common execution sequence is now encapsulated in the `uploader` `Execute()` method. We also introduce an `UploadStrategy` interface that handles the details of the uploads, and `Execute()` calls methods on this interface. Adding a new object storage type is now a matter of implementing the `UploadStrategy` interface, without needing to understand the details of the execution engine.

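A sketch of the shape this describes; only `Execute()` and the `UploadStrategy` name come from the description above, and the interface's method set here is guessed for illustration:

```go
package objectstore

import "context"

// UploadStrategy hides the store-specific steps; the method set is
// hypothetical, the real interface is defined in the MR.
type UploadStrategy interface {
	Upload(ctx context.Context) error // hypothetical: perform the transfer
	ETag() string                     // hypothetical: ETag reported by the store
	Abort()                           // hypothetical: cleanup on failure
	Delete()                          // hypothetical: delete the temporary object
}

type uploader struct {
	strategy UploadStrategy
}

// Execute owns the common sequence (deadline, metrics, cleanup, ETag
// validation) and delegates the store-specific work to the strategy.
func (u *uploader) Execute(ctx context.Context) error {
	// Steps 1-3 (deadline, metrics, cleanup hooks) would be set up here.
	if err := u.strategy.Upload(ctx); err != nil {
		u.strategy.Abort()
		return err
	}
	// Steps 5-6: validate u.strategy.ETag(), then run cleanup.
	return nil
}
```
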
- Aug 07, 2020

Authored by Stan Hu

Prior to this change, uploads to AWS S3 were only encrypted on the server if default encryption was specified on the bucket. With this change, admins can now configure the encryption type and the AWS Key Management Service (KMS) key ID in GitLab Rails, and that configuration will be used in uploads. Bucket policies that enforce encryption can now be used, since Workhorse sends the required headers (`x-amz-server-side-encryption` and `x-amz-server-side-encryption-aws-kms-key-id`). Such a policy cannot be enforced with default encryption alone, since default encryption is applied after the policy check. This requires the changes in https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38240 to work. Part of https://gitlab.com/gitlab-org/gitlab/-/issues/22200

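For illustration, this is how those headers are produced with the AWS SDK for Go (bucket, key, and KMS key ARN are placeholders; Workhorse's actual code path may differ):

```go
package main

import (
	"bytes"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession())
	uploader := s3manager.NewUploader(sess)

	// Requesting SSE-KMS explicitly makes the SDK send the
	// x-amz-server-side-encryption and
	// x-amz-server-side-encryption-aws-kms-key-id headers, so a bucket
	// policy that requires them passes.
	_, err := uploader.Upload(&s3manager.UploadInput{
		Bucket:               aws.String("uploads-bucket"),
		Key:                  aws.String("tmp/upload"),
		Body:                 bytes.NewReader([]byte("file contents")),
		ServerSideEncryption: aws.String("aws:kms"),
		SSEKMSKeyId:          aws.String("arn:aws:kms:us-east-1:123456789012:key/example"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```
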
- May 30, 2020

Authored by Stan Hu

This adds the AWS client directly to Workhorse along with a new configuration section for specifying credentials. This makes it possible to use S3 buckets with KMS encryption and proper MD5 checksums. It is disabled by default. For this to be used:

1. GitLab Rails needs to send `UseWorkhorseClient` and `RemoteTempObjectID` in the `/authorize` endpoint (https://gitlab.com/gitlab-org/gitlab/-/merge_requests/29389).
2. S3 configuration must be specified in `config.toml`, or Rails must be configured to use IAM instance profiles (`use_iam_profile` in Fog connection parameters).

S3 sessions are created lazily and cached for 10 minutes to avoid unnecessary local I/O access. When IAM instance profiles are used, this also cuts down the number of HTTP requests needed to request AWS credentials.

Related issues:

1. https://gitlab.com/gitlab-org/gitlab-workhorse/issues/222
2. https://gitlab.com/gitlab-org/gitlab-workhorse/issues/185
3. https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/210

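A minimal sketch of the session-caching idea (variable and function names are invented; the real implementation lives in the MR): create the AWS session on first use and reuse it for ten minutes, so repeated uploads neither re-read credentials from disk nor re-query the IAM instance-profile endpoint.

```go
package objectstore

import (
	"sync"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
)

var (
	mu        sync.Mutex
	cached    *session.Session
	fetchedAt time.Time
)

// getSession returns a cached AWS session, refreshing it after 10 minutes.
func getSession() (*session.Session, error) {
	mu.Lock()
	defer mu.Unlock()

	if cached != nil && time.Since(fetchedAt) < 10*time.Minute {
		return cached, nil
	}
	sess, err := session.NewSession()
	if err != nil {
		return nil, err
	}
	cached, fetchedAt = sess, time.Now()
	return cached, nil
}
```
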
- Nov 05, 2019

Authored by Alessio Caiazza

Our ETag comparison on CompleteMultipartUpload was based on a reverse-engineered protocol. It is not part of the S3 API specification, and not every provider computes it the same way.

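The reverse-engineered rule in question is commonly described as follows; a sketch for illustration (this is folklore about S3's observed behavior, not a documented API guarantee):

```go
package objectstore

import (
	"crypto/md5"
	"fmt"
)

// multipartETag computes the conventional S3 multipart ETag: the MD5 of the
// concatenated binary MD5s of the parts, suffixed with "-<part count>".
func multipartETag(parts [][]byte) string {
	var concatenated []byte
	for _, part := range parts {
		sum := md5.Sum(part)
		concatenated = append(concatenated, sum[:]...)
	}
	final := md5.Sum(concatenated)
	return fmt.Sprintf("%x-%d", final, len(parts))
}
```
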
- Oct 15, 2019

Authored by Alessio Caiazza

Some object storage providers return uppercase ETag headers. This commit performs the check case-insensitively.

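In Go this amounts to comparing with `strings.EqualFold`; a minimal sketch (quote trimming included because ETag header values are usually quoted):

```go
package main

import (
	"fmt"
	"strings"
)

// etagMatches compares ETags case-insensitively, so an uppercase header from
// one provider still matches the lowercase hex digest computed locally.
func etagMatches(expected, actual string) bool {
	return strings.EqualFold(strings.Trim(expected, `"`), strings.Trim(actual, `"`))
}

func main() {
	fmt.Println(etagMatches(`"9BB58F26192E4BA00F01E2E7B136BBD8"`,
		"9bb58f26192e4ba00f01e2e7b136bbd8")) // true
}
```
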
- Jul 24, 2019

Authored by Andrew Newdigate

- Jun 05, 2019

- Mar 25, 2019

Authored by Jacob Vosmaer

Authored by Patrick Bajao

Includes fixes to some files based on `staticcheck` suggestions. Bumps the default Go image from 1.10 to 1.11. Go 1.10 still runs in the `test using go 1.10` job.

- Jan 18, 2019

Authored by Andrew Newdigate

- Nov 30, 2018

Authored by Pirate Praveen

- Nov 23, 2018

Authored by Andrew Newdigate

- Nov 06, 2018

Authored by Andrew Newdigate

- Sep 04, 2018

Authored by Stan Hu

As revealed in https://gitlab.com/gitlab-org/gitlab-ce/issues/49957, Rails generates a signed URL with a fixed `Content-Type: application/octet-stream` HTTP header. However, if we change or remove that header for some reason in Workhorse, the upload breaks with a 403 because the signed URL is no longer valid. We can make this more robust by doing the following:

1. In the `/uploads/authorize` request, Rails can return a `StoreHeaders` key-value pair in the JSON response containing the required headers that the PUT request must include.
2. Use those HTTP headers if the value is present.
3. For backwards compatibility, if the key is not present, default to the old behavior of sending the fixed `Content-Type` header.

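A sketch of that fallback (the struct shape and function name are illustrative; only the `StoreHeaders` key comes from the description above):

```go
package upload

import "net/http"

type authorizeResponse struct {
	StoreHeaders map[string]string
}

// applyStoreHeaders sets the headers Rails returned for the signed PUT
// request, keeping the old fixed Content-Type only when the key is absent.
func applyStoreHeaders(req *http.Request, resp authorizeResponse) {
	if len(resp.StoreHeaders) == 0 {
		// Backwards compatibility: older Rails versions do not send
		// StoreHeaders, so fall back to the fixed header the signed
		// URL was generated with.
		req.Header.Set("Content-Type", "application/octet-stream")
		return
	}
	for k, v := range resp.StoreHeaders {
		req.Header.Set(k, v)
	}
}
```
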
- Aug 23, 2018

Authored by Andrew Newdigate

- Jun 07, 2018

Authored by Alessio Caiazza

- Jun 01, 2018

Authored by Alessio Caiazza

- May 22, 2018

Authored by Alessio Caiazza

When we are uploading big objects, the remote server may close the connection while we are still writing. This patch lets us surface the real error instead of io.ErrClosedPipe.

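The standard-library mechanism for this is `io.Pipe`'s `CloseWithError`; a minimal sketch where the goroutine stands in for the HTTP request that consumes the pipe:

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

func main() {
	pr, pw := io.Pipe()

	go func() {
		// Stand-in for the remote server closing the connection
		// mid-upload: propagate the request's error into the pipe.
		uploadErr := errors.New("connection reset by peer")
		pr.CloseWithError(uploadErr)
	}()

	// The blocked write now returns the real error, not io.ErrClosedPipe.
	if _, err := pw.Write([]byte("big object")); err != nil {
		fmt.Println(err)
	}
}
```
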
- Mar 07, 2018

Authored by Alessio Caiazza

- Feb 22, 2018

Authored by Alessio Caiazza