- September 10, 2020
Authored by Soumyarka Mondal
* AuditJob reports durability lapse for relevant S3 error codes
* Introduced time.sleep within AuditJob for throttled tierObjStore access
* TierMetadataValidator uses read method to drain the buffer

Relevant unit tests have been added, in addition to DEVEL cluster testing.

Reviewer: @rohitshekhar29
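The throttling point above can be sketched as a backoff loop: when the object store signals throttling, the audit job sleeps before retrying instead of hammering the store again. This is an illustrative sketch only; `ThrottledError` and `fetch_with_backoff` are hypothetical names, not Confluent's actual AuditJob API.

```python
import time

class ThrottledError(Exception):
    """Raised when the object store asks the client to back off (e.g. S3 SlowDown)."""

def fetch_with_backoff(fetch, key, retries=5, base_delay=0.5):
    """Call fetch(key), sleeping with exponential backoff on throttling errors."""
    for attempt in range(retries):
        try:
            return fetch(key)
        except ThrottledError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            # Sleep before retrying, doubling the delay each attempt.
            time.sleep(base_delay * (2 ** attempt))
```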
- September 09, 2020
Authored by Rajini Sivaram
Reviewers: Gwen Shapira
Authored by Lucas Bradstreet
Originally the idea was that OFFSET_TIERED would be the first isInternal error code: it would start at Short.MAX_VALUE, and each subsequent internal code would decrement by one. Unfortunately it was the first and only code to use this scheme, which was a bit confusing for me when I rediscovered it. Given that we can't use this isInternal scheme, I have removed the support and hard-coded the code as with the others.
Authored by Rajini Sivaram
Fixes:
- Topic create with mirroring enabled authorizes Cluster:Alter for the request principal on the destination
- Internal clear-mirror operation no longer requires Topic:Alter for the destination broker principal
- Propagate config exception if admin client creation fails
- Propagate authorization exception for cluster link config update

Also adds end-to-end authorization tests to verify ACLs for the destination principal requesting cluster link operations, the destination broker principal, and the link principal on the source cluster.

Reviewers: Brian Byrne
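The authorization changes above boil down to a mapping from operation to the ACLs a principal must hold. The sketch below illustrates that idea only; the mapping, `REQUIRED_ACLS`, and `authorize` are hypothetical, not Confluent's actual authorizer.

```python
# Illustrative ACL requirements per operation, per the fixes above:
# mirrored-topic creation needs Cluster:Alter on the destination, while the
# broker-internal clear-mirror operation no longer needs Topic:Alter.
REQUIRED_ACLS = {
    "create_mirror_topic": {("Cluster", "Alter")},
    "clear_mirror_internal": set(),  # internal op: no Topic:Alter required
}

def authorize(granted_acls, operation):
    """Return True if granted_acls covers every ACL required for operation."""
    return REQUIRED_ACLS[operation] <= set(granted_acls)
```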
Authored by Kowshik Prakasam
Authored by Lucas Bradstreet
Authored by Gardner Vickers
CONFLUENT: Sync from confluentinc/kafka master (18 Aug 2020)

Conflicts:
- core/src/main/scala/kafka/network/SocketServer.scala
- core/src/main/scala/kafka/server/DynamicBrokerConfig.scala
- core/src/test/scala/unit/kafka/server/DynamicBrokerConfigTest.scala
- tests/docker/Dockerfile

Changes to address build failure:
- core/src/test/scala/integration/kafka/network/DynamicConnectionQuotaTest.scala - temporarily disabled testDynamicListenerConnctionCreationRateQuota; CNKAF-1129 tracks re-enabling it.
- core/src/main/scala/kafka/server/QuotaFactory.scala - deregister cluster link quota manager with DiskUsageBasedThrottler to address high memory usage.
Authored by Lucas Bradstreet
handleFetchRequest may throw an unhandled OffsetOutOfRange error after the following sequence:
1. A partition is assigned to a new replica, causing it to restore the tier state from object storage.
2. The active segment on the leader is empty. This can happen if the segment is rolled due to confluent.tier.hotset.roll.min.bytes.
3. The newly assigned replica fetches at fetchOffset = tier end offset + 1. Since the active segment is empty, this is enough for the replica to join the ISR.
4. The newly assigned replica becomes the leader with an incorrect HWM, resulting in a HWM that is < local log start offset.

This results in fetch request handling that throws unhandled OffsetOutOfRange exceptions, likely in the log start offset or high watermark update code, falling through to handleError in KafkaApis. This is fixed by setting the HWM to the restore point.
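The invariant the fix restores can be shown in a few lines: after a tier-state restore, the high watermark must not fall below the restore point (the local log start offset). This is a minimal sketch of that idea, not the actual Kafka internals; `restored_high_watermark` is a hypothetical helper name.

```python
def restored_high_watermark(reported_hwm, restore_point):
    """Clamp the HWM to at least the tier-restore point, so fetch handling
    never sees HWM < local log start offset (the OffsetOutOfRange case)."""
    return max(reported_hwm, restore_point)
```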
Authored by Bob Barrett
CNKAF-1169: Correctly handle single racks and no racks when creating topics in the MT interceptor (#2512)
* TenantPartitionAssignor does not work when a cluster has all its brokers within a single zone
* Handle the no-rack and one-rack cases
* Add integration tests for one and no racks

Co-authored-by: David Jacot <djacot@confluent.io>
- September 08, 2020
Authored by Kowshik Prakasam
Authored by Kowshik Prakasam
- September 07, 2020
Authored by Kowshik Prakasam
- September 05, 2020
Authored by Vikas Singh
* CNKAF-1087: Fix rack-aware topic placement for the cloud use case

In cloud deployments, partition assignment is done by the TenantPartitionAssignor class. The class relies on cluster metadata from the callback invoked by the 'UpdateMetadataRequest' API. This callback had a bug where rack information wasn't propagated, so we ended up doing rack-unaware assignment. This change fixes that by passing rack information in the callback.

The fix is just one line in MetadataCache. Most of the change is adding integration tests to make sure TenantPartitionAssignor works as expected for the cloud use case, for both the rack-aware and rack-unaware cases. The newly written rack-aware test case fails without this change and passes with it.
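The rack-aware placement idea above, including the single-rack and no-rack edge cases from CNKAF-1169, can be sketched as follows. When every broker reports the same rack (or none), rack-aware interleaving must degenerate to plain round-robin over brokers rather than failing. This is an illustrative sketch, not the real TenantPartitionAssignor; `assign` and its signature are hypothetical.

```python
from itertools import cycle

def assign(brokers, racks, num_partitions, replication_factor):
    """brokers: list of broker ids; racks: dict broker_id -> rack name or None.
    Returns a list of replica lists, one per partition."""
    distinct_racks = {r for r in racks.values() if r is not None}
    if len(distinct_racks) <= 1:
        # No rack info, or a single rack: fall back to plain round-robin.
        order = list(brokers)
    else:
        # Interleave brokers across racks so consecutive replicas of a
        # partition land on different racks.
        by_rack = {}
        for b in brokers:
            by_rack.setdefault(racks[b], []).append(b)
        rack_lists = list(by_rack.values())
        order = []
        i = 0
        while len(order) < len(brokers):
            lst = rack_lists[i % len(rack_lists)]
            if lst:
                order.append(lst.pop(0))
            i += 1
    it = cycle(order)
    return [[next(it) for _ in range(replication_factor)]
            for _ in range(num_partitions)]
```

With two racks, consecutive replicas of each partition land on different racks; with one rack (the bug case), assignment still succeeds instead of failing.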
- September 04, 2020
Authored by Rajini Sivaram
Reviewers: Brian Byrne
Authored by Kowshik Prakasam
Authored by Kowshik Prakasam
Users of older brokers have reported that partition reassignment didn't work when performed against a topic partition whose replica placement constraint was set to the empty string. This has been fixed as of KCORE-175.