- December 09, 2022
-
Authored by kpatelatwork
* KGLOBAL-2358: converted CCLOUD_HOST_SUFFIXES to a config for FedRAMP, used the broker config when validating the security protocol, and fixed the affected tests * Fixed a broken test
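Below is a minimal sketch of how a hard-coded suffix list can be turned into a broker config using Kafka's ConfigDef API; the property name, default value, and class are illustrative assumptions, not the actual Confluent definitions.

```java
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

// Hypothetical config holder; "confluent.ccloud.host.suffixes" is an assumed name.
public class HostSuffixConfig extends AbstractConfig {
    public static final String CCLOUD_HOST_SUFFIXES_CONFIG = "confluent.ccloud.host.suffixes";

    private static final ConfigDef CONFIG = new ConfigDef()
        .define(CCLOUD_HOST_SUFFIXES_CONFIG,
                ConfigDef.Type.LIST,
                List.of(".confluent.cloud"),          // assumed default
                ConfigDef.Importance.LOW,
                "Host name suffixes treated as Confluent Cloud endpoints.");

    public HostSuffixConfig(Map<?, ?> props) {
        super(CONFIG, props);
    }

    public List<String> ccloudHostSuffixes() {
        return getList(CCLOUD_HOST_SUFFIXES_CONFIG);
    }
}
```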
- December 08, 2022
-
Authored by Truc Nguyen
Make numMessages a long type to increase the number of messages the REST produce workload can send (#8238)
-
Authored by Lucas Bradstreet
We're seeing frequent Gradle test executor "non-zero exit value 134" errors in PRs. I've tracked it down to this test, which appears to be leaking memory. Please see the thread discussing it in https://confluent.slack.com/archives/C09EP1SS3/p1667952421752439. I've opened https://confluentinc.atlassian.net/browse/KSTREAMS-5348 to track re-enablement.
-
Authored by Aman Singh
Fix the topic load timeout config name and default value
-
Authored by Calvin Liu
The default replica selector chooses a replica based on whether the broker.rack matches the client.rack in the fetch request and whether the offset exists on the follower. If the follower is not in the ISR, we know it is lagging behind, which will also cause the consumer to lag. There are two cases: 1. The follower recovers and rejoins the ISR; the consumer no longer falls behind. 2. The follower continues to lag. After 5 minutes the consumer refreshes its preferred read replica, and the leader returns the same lagging follower, since the offset the consumer has fetched up to is capped by the follower's HWM. This can go on indefinitely. If the replica selector only chooses brokers in the ISR, we can ensure that at least every 5 minutes the consumer consumes from an up-to-date replica. Reviewers: David Jacot <djacot@confluent.io> Conflicts: core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala Mainly with the updated syntax diff. Co-authored-by: Jeff Kim <kimkb2011@gmail.com>
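A compact, self-contained sketch of that idea, using simplified stand-in types rather than Kafka's actual ReplicaSelector plugin API: restrict rack-matching candidates to in-sync replicas so a consumer is never pinned to a lagging follower.

```java
import java.util.List;
import java.util.Optional;

// Simplified stand-in for Kafka's internal replica view (an assumption for illustration).
record Replica(int brokerId, String rack, boolean inSync) {}

class IsrAwareRackSelector {
    // Pick a replica in the client's rack, but only from the ISR.
    Optional<Replica> select(String clientRack, List<Replica> replicas) {
        return replicas.stream()
            .filter(Replica::inSync)                  // key change: ISR members only
            .filter(r -> clientRack.equals(r.rack()))
            .findFirst();
        // If empty, the caller falls back to fetching from the leader.
    }
}
```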
-
Authored by Aadithya Chandra
-
Authored by Xuhui Lu
* Add CCloudIdentityProviderAdmin to delete identity providers/pools for deactivated orgs * Fix the resource type * Add test cases
-
Authored by Stanislav Kozlovski
- December 07, 2022
-
Authored by Proven Provenzano
* KAFKA-14375: Remove use of "authorizer-properties" from EndToEndAuthorizerTest (#12843) - This removes the use of a deprecated feature and instead routes all ACL calls through the brokers. This is preliminary work needed before the tests can run in KRaft mode. Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Igor Soarez <soarez@apple.com> * Fixes to allow Confluent LDAP tests to pass. * Remove the unnecessary ACL commands at setup.
-
Authored by Honshu Priyadarshi
* KDATA-637: Move compaction validation to a separate alert type. This also fixes some error log messages and keeps the offset map file in case of data validation failures. Added handling for the case where the offset map file is full of tombstones, along with a unit test to reproduce it.
-
Authored by andymg3
This changes the ReplicaPlacer interface to return a class instead of a list of lists of integers. There are two reasons for the suggestion. First, as mentioned in the JIRA, it makes the interface, arguably, a bit more readable and understandable by explicitly modeling the idea of topic and partition. Second, and more importantly, it makes the interface more extensible in the future. Right now it would be challenging to add more metadata to the response. Reviewers: José Armando García Sancio <jsancio@apache.org>
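A hedged sketch of what the richer return type might look like; the class and method names below are illustrative, not necessarily the ones from the patch.

```java
import java.util.List;

// Hypothetical result types that model topics and partitions explicitly,
// instead of a bare List<List<Integer>>.
record PartitionAssignment(List<Integer> replicaIds) {}

record TopicAssignment(List<PartitionAssignment> assignments) {}

interface ReplicaPlacer {
    // Returning a class keeps the signature stable if more metadata
    // (e.g. rack information) needs to be added to the response later.
    TopicAssignment place(int numPartitions, short replicationFactor);
}
```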
-
Authored by David Jacot
This patch moves the timeline data structures from the metadata module to the server-common module, as they will be used in the new group coordinator. Reviewers: José Armando García Sancio <jsancio@users.noreply.github.com>, Colin Patrick McCabe <cmccabe@apache.org>
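For context, a brief usage sketch of these timeline collections, assuming the org.apache.kafka.timeline API (SnapshotRegistry plus TimelineHashMap); the details are approximate:

```java
import org.apache.kafka.common.utils.LogContext;
import org.apache.kafka.timeline.SnapshotRegistry;
import org.apache.kafka.timeline.TimelineHashMap;

public class TimelineExample {
    public static void main(String[] args) {
        SnapshotRegistry registry = new SnapshotRegistry(new LogContext());
        TimelineHashMap<String, Integer> offsets = new TimelineHashMap<>(registry, 16);

        offsets.put("group-a", 100);
        registry.getOrCreateSnapshot(1L);    // capture the state at epoch 1

        offsets.put("group-a", 200);         // later mutation
        registry.revertToSnapshot(1L);       // roll back to the epoch-1 state

        System.out.println(offsets.get("group-a")); // prints 100
    }
}
```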
-
Now that Kafka generates a metadata snapshot every hour and the default metadata retention deletes snapshots after 7 days, every cluster metadata partition will have 168 snapshots (1 snapshot per hour * 24 hours per day * 7 days). If we assume that in most cases the size of a snapshot is determined by the number of partitions in the cluster, a cluster with 100K partitions will have a snapshot size of roughly 10MB (100 bytes per partition * 100K partitions). For such clusters the cluster metadata partition will always consume around 1.7GB. KIP-876 changed the default value of metadata.max.retention.bytes to 100MB. This should limit the size of the cluster metadata partition for large clusters while keeping 7 days' worth of snapshots for small clusters. Reviewers: Jason Gustafson <jason@confluent.io>
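The arithmetic behind those figures, as a small worked example (the per-partition size is the rough estimate from the message, not a measurement):

```java
public class SnapshotRetentionMath {
    public static void main(String[] args) {
        long snapshotsPerWeek = 1 * 24 * 7;                  // 1/hour * 24h * 7d = 168
        long bytesPerPartition = 100;                        // rough estimate
        long partitions = 100_000;
        long snapshotBytes = bytesPerPartition * partitions; // ~10 MB per snapshot
        long totalBytes = snapshotsPerWeek * snapshotBytes;  // ~1.68 GB retained

        System.out.printf("snapshot ~%.1f MB, total ~%.2f GB%n",
                snapshotBytes / 1e6, totalBytes / 1e9);
    }
}
```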
-
Authored by Colin Patrick McCabe
Implement functions to measure the number of events in the event queue. Reviewers: David Arthur <mumrah@gmail.com>
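One plausible shape for such a measurement, sketched with plain Java rather than the actual controller metrics code (the queue and gauge wiring here are assumptions):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

public class EventQueueMetrics {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    // Gauge-style accessor: metrics registries typically poll a supplier like this.
    public Supplier<Integer> queueSizeGauge() {
        return queue::size;
    }

    public static void main(String[] args) {
        EventQueueMetrics metrics = new EventQueueMetrics();
        metrics.queue.add(() -> { });
        System.out.println("events in queue: " + metrics.queueSizeGauge().get()); // 1
    }
}
```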
-
Authored by Ismael Juma
Fix the underlying warnings instead. Reviewers: Luke Chen <showuon@gmail.com>
-
Authored by Matthew de Detrich
In addition to the version bump, we also had to: * Update the zinc version * Workaround compiler warnings via suppression (proper fix in a follow up) * Adjust `testDeleteTopicDoesNotRetryThrottlingQuotaExceededException` to fix a test failure Release notes: * https://github.com/scala/scala/releases/tag/v2.13.9 * https://github.com/scala/scala/releases/tag/v2.13.10 Reviewers: Ismael Juma <ismael@juma.me.uk>
-
Authored by Ismael Juma
It doesn't add much value since lambdas were introduced in Java 8. Also remove KafkaTimerTest. Reviewers: David Jacot <djacot@confluent.io>, Christo Lolov <lolovc@amazon.com>
-
Authored by Ismael Juma
* Remove whitespace before package declaration * Avoid unnecessary postfix language usage Reviewers: Luke Chen <showuon@gmail.com>
-
Authored by Ismael Juma
Also remove `ApiUtilsTest`. Reviewers: David Jacot <djacot@confluent.io>, dengziming <dengziming1993@gmail.com>
-
Authored by Justine Olshan
KAFKA-14417: Producer doesn't handle REQUEST_TIMED_OUT for InitProducerIdRequest, treats as fatal error (#12915) (#8227) The broker may return the `REQUEST_TIMED_OUT` error in `InitProducerId` responses when allocating the ID using the `AllocateProducerIds` request. The client currently does not handle this. Instead of retrying as we would expect, the client raises a fatal exception to the application. In this patch, we address this problem by modifying the producer to handle `REQUEST_TIMED_OUT` and any other retriable errors by re-enqueuing the request. Reviewers: Jason Gustafson <jason@confluent.io>
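A hedged sketch of the retry pattern described, with simplified stand-ins for the producer's internals: a retriable error such as REQUEST_TIMED_OUT re-enqueues the request instead of surfacing a fatal error.

```java
// Stand-in error codes; only REQUEST_TIMED_OUT is marked retriable here.
enum ProducerIdError {
    NONE, REQUEST_TIMED_OUT, CLUSTER_AUTHORIZATION_FAILED;

    boolean retriable() { return this == REQUEST_TIMED_OUT; }
}

class InitProducerIdHandler {
    void handleResponse(ProducerIdError error, Runnable reenqueueRequest) {
        if (error == ProducerIdError.NONE) {
            return;                        // producer id allocated successfully
        } else if (error.retriable()) {
            reenqueueRequest.run();        // retry instead of failing the app
        } else {
            throw new IllegalStateException("Fatal error: " + error);
        }
    }
}
```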
-
Authored by Jingjing Tian
* DGS-3472: Emit incremental events for topic metadata in KRaft mode * Add integration test for KRaft * Emit snapshots and add more tests * Minor changes * Extract log configs from broker config * Fix checkstyle * Addressing comments * Handle broker log configs override * Addressing comments * Merge master and handle broker config more gracefully * Remove checkstyle suppression * minor fix * Address more comments * Add comment about TreeSet * Add topicLogConfigDiff * Minor fix * Address comments * Wait for controller to report new leader in tests * Add/remove catalog metrics during leader updates * Remove catalog metrics from constructor * Catch exception when adding duplicate metrics * Address concurrency comments * Address comments * Add back testMetricsRemoved * Try catch all exceptions
-
Authored by Ismael Juma
This constructor is deprecated in Java 17. Reviewers: Justine Olshan <jolshan@confluent.io>
-
Authored by Jeff Kim
When a consumer makes a fetch request to a follower (KIP-392), the fetch request will sit in the purgatory until `fetch.max.wait.ms` is reached because the purgatory is not completed after replication. This patch aims to complete the delayed fetch purgatory after successfully replicating from the leader. Reviewers: Artem Livshits <alivshits@confluent.io>, Luke Chen <showuon@gmail.com>, David Jacot <djacot@confluent.io>
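A simplified sketch of the fix's shape, using assumed names rather than the actual ReplicaManager code: after the follower appends replicated records, it immediately tries to complete delayed fetches for that partition.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal stand-ins for the delayed-fetch purgatory (assumptions for illustration).
class DelayedFetch {
    private final long requiredOffset;
    DelayedFetch(long requiredOffset) { this.requiredOffset = requiredOffset; }
    boolean canComplete(long highWatermark) { return highWatermark >= requiredOffset; }
}

class FollowerFetchPurgatory {
    private final Map<String, DelayedFetch> purgatory = new ConcurrentHashMap<>();

    void onRecordsReplicated(String partition, long newHighWatermark) {
        // Key change: attempt completion right after replication from the leader,
        // instead of letting the request wait out fetch.max.wait.ms.
        DelayedFetch fetch = purgatory.get(partition);
        if (fetch != null && fetch.canComplete(newHighWatermark)) {
            purgatory.remove(partition); // complete and respond to the consumer
        }
    }
}
```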