- December 14, 2022
- December 13, 2022
-
-
Authored by ssumit33
[KPLATFORM-1475] Minor: Do a node availability check before running tests in parallel in the Jenkins pipeline - Check whether the Jenkins cluster has at least 2 free nodes before running the tests in parallel across nodes - If free nodes are not available, run the tests on the current node itself - This prevents the build from getting stuck while holding on to one node when there is a crunch on resources
-
Authored by andymg3
-
Authored by Ashish Malgawa
* removed describe permission * updated comments
-
Authored by Ismael Juma
Also upgrade netty-tcnative to 2.0.54.Final. The highlights are the type pollution fixes, although it's not clear whether they affect us:
* https://github.com/netty/netty/pull/12709
* https://github.com/netty/netty/pull/12806
netty release notes:
* https://netty.io/news/2022/08/26/4-1-80-Final.html
* https://netty.io/news/2022/09/08/4-1-81-Final.html
* https://netty.io/news/2022/09/13/4-1-82-Final.html
* https://netty.io/news/2022/10/11/4-1-84-Final.html
* https://netty.io/news/2022/11/09/4-1-85-Final.html
* https://netty.io/news/2022/12/12/4-1-86-Final.html
netty tcnative diff:
* https://github.com/netty/netty-tcnative/compare/netty-tcnative-parent-2.0.53.Final...netty-tcnative-parent-2.0.54.Final
Reviewers: Alok Nikhil <anikhil@confluent.io>
-
Authored by Jason Gustafson
This patch adds support for generating snapshots from the metadata shell. Example: generate a new snapshot at the latest offset in the current working directory.
```
>> write-snapshot
Wrote snapshot: /Users/jgustafson/Projects/ce-kafka/./00000000000011755838-0000001054.checkpoint
```
Example: generate a new snapshot in the `/tmp` directory using a snapshotId with offset=500 and epoch=100.
```
>> write-snapshot -o /tmp --snapshot-id 500-100
Wrote snapshot: /tmp/00000000000000000500-0000000100.checkpoint
```
The tool will, however, ensure that this ID is consistent with the log contents (the offset/epoch must be greater than or equal to the current values from the loaded log). This behavior can be overridden with the `-f` flag:
```
>> write-snapshot -f -o /tmp --snapshot-id 500-100
```
Additionally, this patch improves the handling of encrypted records. The encryptor configuration is specified when the shell is loaded with the `--encryptor-config` argument. For example:
```
bin/kafka-metadata-shell.sh --directory __cluster_metadata-0 --encryptor-config encryptor.properties
```
If no encryptor properties are provided, then encrypted records will be skipped as in the current behavior. In this case, however, the tool will disallow snapshot generation since the loaded data is incomplete. This behavior can also be overridden with the `-f` flag. Reviewers: David Arthur <mumrah@gmail.com>
-
Authored by Matthew Wong
This PR adds a new exception meter to catch EOFExceptions that can be thrown in https://github.com/confluentinc/ce-kafka/blob/master/core/src/main/java/kafka/tier/fetcher/TierSegmentReader.java#L318-L334. These EOFExceptions are far more severe and could be indicative of data loss; they need to be caught separately, as we saw them in https://confluentinc.atlassian.net/browse/RCCA-8548. We will create a separate monitor for these exceptions with a much lower tolerance bar so we can alert more aggressively. The current monitor is https://github.com/confluentinc/cc-terraform-datadog/blob/master/teams/kafka-storage-foundation/module/monitor-kafka_tiered_storage_tier_fetch_exception_rate.tf
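A minimal sketch of the separation described above, using a plain counter as a stand-in for the actual Datadog-backed meter; the class and method names are illustrative, not the real TierSegmentReader code:
```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.LongAdder;

public class SegmentReadMetricsSketch {
    // Stand-ins for the real meters (assumption): one tolerant, one for severe EOFs.
    private final LongAdder fetchExceptions = new LongAdder();
    private final LongAdder fetchEofExceptions = new LongAdder();

    /** Reads a fixed-size chunk, distinguishing truncation (possible data loss) from other I/O errors. */
    public byte[] readFully(InputStream in, int size) throws IOException {
        byte[] buffer = new byte[size];
        DataInputStream data = new DataInputStream(in);
        try {
            data.readFully(buffer);
            return buffer;
        } catch (EOFException e) {
            // Truncated segment: count on a dedicated meter so it can be
            // alerted on with a much lower threshold.
            fetchEofExceptions.increment();
            throw e;
        } catch (IOException e) {
            // Other read errors go to the existing, more tolerant meter.
            fetchExceptions.increment();
            throw e;
        }
    }

    public long eofExceptionCount() {
        return fetchEofExceptions.sum();
    }
}
```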
-
Authored by srpconfluent
The following metrics have been added for Tier Topic Snapshots:
* Active Indicator - a gauge metric to indicate whether the Tier Topic Snapshot Manager is active; the value will be 0/1.
* Consumer Lag - a gauge metric to indicate the total lag for the tier topic consumer.
* Total Failure - a cumulative count metric for the total number of failures (recorded using a sensor).
* Total Uploaded - a cumulative count metric for the total number of successful snapshot uploads (recorded using a sensor).
Added unit tests. Deployed the changes to the devel cluster; the metrics are available in development Datadog.
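A rough sketch of how such gauges and sensor-backed cumulative counts can be registered with the Kafka metrics API; the metric and group names below are illustrative assumptions, not the actual ones:
```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.kafka.common.metrics.Gauge;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.CumulativeCount;

public class TierTopicSnapshotMetricsSketch {
    private static final String GROUP = "tier-topic-snapshots"; // illustrative group name

    private final AtomicBoolean active = new AtomicBoolean(false);
    private final AtomicLong consumerLag = new AtomicLong(0);
    private final Sensor failureSensor;
    private final Sensor uploadSensor;

    public TierTopicSnapshotMetricsSketch(Metrics metrics) {
        // Gauge: 1 if the snapshot manager is active, 0 otherwise.
        metrics.addMetric(metrics.metricName("active-indicator", GROUP),
            (Gauge<Integer>) (config, now) -> active.get() ? 1 : 0);

        // Gauge: total lag of the tier topic consumer.
        metrics.addMetric(metrics.metricName("consumer-lag", GROUP),
            (Gauge<Long>) (config, now) -> consumerLag.get());

        // Cumulative counts recorded via sensors.
        failureSensor = metrics.sensor("snapshot-failures");
        failureSensor.add(metrics.metricName("total-failure", GROUP), new CumulativeCount());

        uploadSensor = metrics.sensor("snapshot-uploads");
        uploadSensor.add(metrics.metricName("total-uploaded", GROUP), new CumulativeCount());
    }

    public void markActive(boolean isActive) { active.set(isActive); }
    public void updateLag(long lag) { consumerLag.set(lag); }
    public void recordFailure() { failureSensor.record(); }
    public void recordUpload() { uploadSensor.record(); }
}
```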
-
- December 11, 2022
-
-
Authored by Vikas Singh
* SBC: Don't expire replica entity if topic partition is present In CCloud we add brokers in CKU multiples, so multiple brokers are added one after another. SBC processes them one after another and coalesces them by canceling any ongoing operation. This causes some replicas to move, and thus their replica metric windows were becoming invalid. This change updates the window code to treat these replicas as valid as long as their topic partition is known. Added a new integration test to trigger this behavior. The test fails without this change and passes with it.
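A minimal sketch of the validity rule under the stated assumption that a replica window stays valid whenever its topic partition is still known; the helper below is hypothetical, not the actual SBC window code:
```java
import java.util.Set;

import org.apache.kafka.common.TopicPartition;

public class ReplicaWindowValiditySketch {
    /**
     * A replica's metric window is treated as valid as long as its topic partition
     * is still known to the cluster, even if the replica itself moved while SBC
     * coalesced back-to-back broker-addition operations.
     */
    public static boolean isWindowValid(TopicPartition replicaPartition,
                                        Set<TopicPartition> knownPartitions) {
        return knownPartitions.contains(replicaPartition);
    }
}
```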
- December 10, 2022
-
-
Authored by Feng Min
* KCFUN-696: Add a lower bound to the negative value for TokenBucket * Rename & check the zero quota case * Add missing test file
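An illustrative sketch of a token bucket whose balance may go negative but is clamped at a configurable lower bound; this is an assumption-based example, not the actual TokenBucket implementation:
```java
public class BoundedTokenBucketSketch {
    private final double quota;     // tokens added per second
    private final double maxTokens; // upper bound (burst capacity)
    private final double minTokens; // lower bound for the negative balance
    private double tokens;
    private long lastUpdateMs;

    public BoundedTokenBucketSketch(double quota, double maxTokens, double minTokens, long nowMs) {
        if (quota <= 0)
            throw new IllegalArgumentException("quota must be positive");
        this.quota = quota;
        this.maxTokens = maxTokens;
        this.minTokens = minTokens;
        this.tokens = maxTokens;
        this.lastUpdateMs = nowMs;
    }

    /** Refill based on elapsed time, then deduct the requested amount, clamping to [minTokens, maxTokens]. */
    public synchronized void record(double amount, long nowMs) {
        double refill = quota * (nowMs - lastUpdateMs) / 1000.0;
        tokens = Math.min(maxTokens, tokens + refill);
        // The balance may go negative (debt), but never below the lower bound,
        // so one huge request cannot block the bucket indefinitely.
        tokens = Math.max(minTokens, tokens - amount);
        lastUpdateMs = nowMs;
    }

    public synchronized boolean hasCredit(long nowMs) {
        double refill = quota * (nowMs - lastUpdateMs) / 1000.0;
        return Math.min(maxTokens, tokens + refill) > 0;
    }
}
```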
-
Authored by kpatelatwork
The route topic doesn't block external listeners from accepting requests. If the route topic doesn't exist, the external backchannel listener will reject connection requests during SASL authentication. A thread periodically checks for the route topic's existence and starts the traffic store.
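A rough sketch of the periodic existence check; the RouteTopicWatcher class, the admin lookup, and the traffic-store hook are all hypothetical names:
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class RouteTopicWatcher {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private final Supplier<Boolean> routeTopicExists; // e.g. an AdminClient-backed lookup (assumption)
    private final Runnable startTrafficStore;         // hypothetical hook that starts the traffic store
    private volatile boolean started = false;

    public RouteTopicWatcher(Supplier<Boolean> routeTopicExists, Runnable startTrafficStore) {
        this.routeTopicExists = routeTopicExists;
        this.startTrafficStore = startTrafficStore;
    }

    /** Poll for the route topic; once it exists, start the traffic store and stop polling. */
    public void start(long intervalSeconds) {
        scheduler.scheduleWithFixedDelay(() -> {
            if (!started && Boolean.TRUE.equals(routeTopicExists.get())) {
                startTrafficStore.run();
                started = true;
                scheduler.shutdown();
            }
        }, 0, intervalSeconds, TimeUnit.SECONDS);
    }
}
```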
-
Authored by santhoshct
CONFLUENT: Skip publishing for projects with no scala suffix when the scala version is not the default (#8235) * added logic to skip publishing for scala 2.12 * changed the implementation to only skip publishing for projects with 2.12 and without a scala suffix * added comments and lint fixes * Tweak comments and print logging Co-authored-by: Ismael Juma <ismael@juma.me.uk>
-
Authored by Vincent Rose
* switch to codeartifact repo * attempt to use a tmp file for loading the gradle vars * escape $ * wrap more stages in withEnv. Remove nodelabel to use general cluster * wrap more things in withEnv * use withEnv on partitionTwo * add nodelabel back because it didn't work. copy sh script * use a modified gradle vault secret * cleanup * Update Jenkinsfile * use withGradleEnv * revert semaphore secret * fix comment * review comments * add a comment * fix closure errors * typo * Removed comment about previous approach; we just need the explanation for how it works now. Co-authored-by: Ismael Juma <ismael@juma.me.uk>
-
Authored by Yang Yu
MINOR: use last modified time of the stray metadata file as timestamp for delayed stray log deletion (#8185) The last modified time of the log dir will change after the broker is restarted. Using the last modified time of the stray log metadata file preserves the original timestamp from when the log was marked stray.
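A small sketch of reading the marker file's last modified time as the stray timestamp; the file name used here is illustrative:
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StrayLogTimestampSketch {
    /**
     * Use the stray metadata file's mtime rather than the log dir's mtime, since the
     * directory's mtime changes on broker restart while the marker file's does not.
     */
    public static long strayMarkedTimestampMs(Path logDir) throws IOException {
        Path strayMarker = logDir.resolve(".stray-metadata"); // illustrative file name
        return Files.getLastModifiedTime(strayMarker).toMillis();
    }
}
```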
-
Authored by Sanjana Kaundinya
- December 9, 2022
-
-
Authored by David Jacot
This patch adds `listGroups` to the new `GroupCoordinator` interface and updates `KafkaApis` to use it. Reviewers: Justine Olshan <jolshan@confluent.io>, Jason Gustafson <jason@confluent.io> (cherry picked from commit 854dfb5f)
-
Authored by David Jacot
This patch adds `leaveGroup` to the new `GroupCoordinator` interface and updates `KafkaApis` to use it. Reviewers: Justine Olshan <jolshan@confluent.io>, Jeff Kim <jeff.kim@confluent.io>, Jason Gustafson <jason@confluent.io> (cherry picked from commit df29b17f)
-
Authored by David Jacot
This patch adds `syncGroup` to the new `GroupCoordinator` interface and updates `KafkaApis` to use it. Reviewers: Justine Olshan <jolshan@confluent.io>, Jeff Kim <jeff.kim@confluent.io>, Jason Gustafson <jason@confluent.io> (cherry picked from commit fd05073c)
-
Authored by David Jacot
This patch adds `heartbeat` to the new `GroupCoordinator` interface and updates `KafkaApis` to use it. Reviewers: Justine Olshan <jolshan@confluent.io>, Jeff Kim <jeff.kim@confluent.io>, Jason Gustafson <jason@confluent.io> (cherry picked from commit f5305fb3)
-
Authored by David Jacot
This patch adds `joinGroup` to the new `GroupCoordinator` interface and updates `KafkaApis` to use it. For context, I will do the same for all the other interactions with the current group coordinator. In order to limit the changes, I have chosen to introduce the `GroupCoordinatorAdapter` that translates the new interface to the old one. It is basically a wrapper. This allows keeping the current group coordinator untouched for now and focusing on the `KafkaApis` changes. Eventually, we can remove `GroupCoordinatorAdapter`. Reviewers: Justine Olshan <jolshan@confluent.io>, Jeff Kim <jeff.kim@confluent.io>, Luke Chen <showuon@gmail.com>, Jason Gustafson <jason@confluent.io> (cherry picked from commit 98e19b30)
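A heavily simplified sketch of the adapter idea; the real interfaces take request/response data classes and return futures, and the names below are illustrative:
```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// New interface that KafkaApis is being migrated to (simplified).
interface NewGroupCoordinator {
    CompletableFuture<List<String>> listGroups();
}

// Existing coordinator with its legacy API (simplified).
class LegacyGroupCoordinator {
    List<String> handleListGroups() {
        return List.of("group-a", "group-b");
    }
}

/** Wraps the legacy coordinator so it can be used through the new interface. */
class GroupCoordinatorAdapterSketch implements NewGroupCoordinator {
    private final LegacyGroupCoordinator legacy;

    GroupCoordinatorAdapterSketch(LegacyGroupCoordinator legacy) {
        this.legacy = legacy;
    }

    @Override
    public CompletableFuture<List<String>> listGroups() {
        // Translate the old synchronous call into the new future-based contract.
        return CompletableFuture.completedFuture(legacy.handleListGroups());
    }
}
```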
-
Authored by David Jacot
MINOR: Handle JoinGroupResponseData.protocolName backward compatibility in JoinGroupResponse (#12864) This is a small refactor extracted from https://github.com/apache/kafka/pull/12845. It basically moves the logic to handle the backward compatibility of `JoinGroupResponseData.protocolName` from `KafkaApis` to `JoinGroupResponse`. The patch adds a new unit test for `JoinGroupResponse` and relies on existing tests as well. Reviewers: Justine Olshan <jolshan@confluent.io>, Jason Gustafson <jason@confluent.io> (cherry picked from commit c2fc36f3)
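A sketch of the kind of version guard that moves into the response class, under the assumption that protocolName only became nullable in JoinGroup version 7; names are illustrative:
```java
public final class JoinGroupProtocolNameCompatSketch {
    /**
     * Older response versions cannot carry a null protocol name, so fall back
     * to the empty string there; newer versions pass null through unchanged.
     */
    public static String normalizeProtocolName(String protocolName, short version) {
        if (version < 7 && protocolName == null) {
            return "";
        }
        return protocolName;
    }
}
```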
-
Authored by ssumit33
Disabling the testSelfHealingWithIgnoredBrokersPresentWithReplicaPlacements and testMaybeShutdownShutsDownBrokers tests, which frequently cause the Jenkins build to hang. Example: https://confluent.slack.com/archives/C09EP1SS3/p1670511971503719?thread_ts=1667952421.752439&cid=C09EP1SS3
-
Authored by Proven Provenzano
* Update EndToEndAuthorizationTest to test both ZK and KRAFT quorum servers * KAFKA-14398: Update EndToEndAuthorizationTest to test both ZK and KRAFT quorum servers (#12896) - Update EndToEndAuthorizationTest to test both ZK and KRAFT quorum servers. SCRAM and Delegation are not implemented for KRAFT yet so they emit a message to stderr and pass the test. Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com> * Don't break LDAP tests. They won't work with KRAFT but should work with ZK.
-
Authored by Shaik Zakir Hussain
Deduplicating Connect log events. Maintaining a logEventState per connector/task in the worker class and using it to check whether the current error is a duplicate of the immediately preceding error for the given connector/task.
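A rough sketch of per-connector/task deduplication state; the class and field names are hypothetical:
```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class ConnectLogDeduplicatorSketch {
    // Last error message seen per connector/task id, e.g. "my-connector-0".
    private final Map<String, String> lastErrorByTask = new ConcurrentHashMap<>();

    /**
     * Returns true if this error should be logged, i.e. it differs from the
     * immediately preceding error recorded for the same connector/task.
     */
    public boolean shouldLog(String connectorTaskId, String errorMessage) {
        String previous = lastErrorByTask.put(connectorTaskId, errorMessage);
        return !Objects.equals(previous, errorMessage);
    }
}
```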
-
Authored by Sanjana Kaundinya
-
Authored by Aishwarya Gune
Use effective goals when manually triggering a rebalance from triggerEvenClusterLoadTask. Effective goals will correctly choose sbcv1 or sbcv2 goals.
-
Authored by Matthew Wong
-
Authored by Yang Yu
Adds a new internal config min.segment.ms. This new config clamps segment.ms to a minimum value, i.e. the time-based segment roll criterion is calculated as Math.max(segment.ms, min.segment.ms).
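A minimal sketch of the clamping described above; the helper name is illustrative:
```java
public final class SegmentRollConfigSketch {
    /**
     * Effective time-based roll interval: segment.ms is clamped from below by the
     * internal min.segment.ms, so overly frequent rolls cannot be configured.
     */
    public static long effectiveSegmentMs(long segmentMs, long minSegmentMs) {
        return Math.max(segmentMs, minSegmentMs);
    }
}
```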
-
Authored by Matthew Wong
-
Authored by Aishwarya Gune
The PR introduces a tenant-aware goal class that constrains all of a tenant's partitions to the specified cell. The PR implements the methods associated with the goal to detect tenant-aware goal violations and fix them.
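A highly simplified sketch of the violation check, with hypothetical types standing in for tenants, cells, and replica placement:
```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class TenantCellGoalSketch {
    /**
     * Returns the topics (standing in for tenant partitions here) that have at least
     * one replica on a broker outside the cell assigned to the topic's tenant.
     */
    public static List<String> findViolations(Map<String, Set<Integer>> replicaBrokersByTopic,
                                              Map<String, String> tenantByTopic,
                                              Map<String, String> cellByTenant,
                                              Map<String, Set<Integer>> brokersByCell) {
        return replicaBrokersByTopic.entrySet().stream()
            .filter(e -> {
                String tenant = tenantByTopic.get(e.getKey());
                Set<Integer> cellBrokers = brokersByCell.get(cellByTenant.get(tenant));
                // Violation: the cell's broker set does not contain all replica brokers.
                return cellBrokers != null && !cellBrokers.containsAll(e.getValue());
            })
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }
}
```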
-