- November 10, 2022
-
-
Created by k-raina
-
Created by Stanislav Kozlovski
CPKAFKA-7953: Fix flaky alter broker replica exclusion admin client unit test by applying a request matcher to the response

Because of a race condition in the testing framework for the admin client tests, the test sometimes fails with an exception when casting the response to the exclusion request: it incorrectly tries to cast it to a MetadataResponse (as instructed by the prepareResponse() test method), which results in the following error:

```
org.opentest4j.AssertionFailedError: Expected a TimeoutException exception, but got ClassCastException ==> expected: <org.apache.kafka.common.errors.TimeoutException> but was: <java.lang.ClassCastException>
	at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
```

This patch fixes the issue by using a request matcher on the prepareResponse() method, ensuring the response is created only once the given request has come in.
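The request-matcher idea above can be illustrated with a small self-contained sketch. This is not Kafka's actual MockClient API — the types and names here are hypothetical stand-ins — but it shows why pairing each prepared response with a matcher prevents a response from being handed to the wrong request:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Predicate;

// Illustrative sketch of the request-matcher pattern (hypothetical types,
// not Kafka's actual MockClient): a prepared response is only handed out
// when the incoming request satisfies its matcher, so it can never be
// paired with a request of the wrong type.
class MatchedResponseQueue {
    static final class Prepared {
        final Predicate<Object> matcher;
        final Object response;
        Prepared(Predicate<Object> matcher, Object response) {
            this.matcher = matcher;
            this.response = response;
        }
    }

    private final Queue<Prepared> prepared = new ArrayDeque<>();

    // Analogue of prepareResponse(matcher, response) in a mock client.
    void prepareResponse(Predicate<Object> matcher, Object response) {
        prepared.add(new Prepared(matcher, response));
    }

    // Returns the prepared response only if the request matches; otherwise
    // returns null, leaving the response queued for the request it was
    // actually prepared for.
    Object respondTo(Object request) {
        Prepared head = prepared.peek();
        if (head != null && head.matcher.test(request)) {
            prepared.poll();
            return head.response;
        }
        return null;
    }
}
```

Without the matcher, a racing unrelated request could consume the queued response and trigger exactly the kind of ClassCastException described above.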
-
Created by danilo queiroz
This test was failing intermittently on Jenkins because of a very short timeout of 5 ms. Changed the timeout to 500 ms to minimize failures. Note that this is a timeout, and the test should normally complete well before it is reached.
-
Created by Mathew Hogan
Reviewer: @alok123t
-
Created by Vikas Singh
The ids need to be distinct, but as part of a copy-and-paste to create a new metric, the id was mistakenly left the same as one of the previous metrics. This change fixes the metric; however, the id field itself is no longer needed and should be removed. Filed https://confluentinc.atlassian.net/browse/KAFKALESS-1470 to take care of it.
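The duplicated-id bug above is the kind of thing a simple uniqueness guard would catch at test time. A minimal sketch, using a hypothetical Metric class standing in for the real metric definitions:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of a guard that would have caught the copy-and-paste duplicate:
// walk all metric definitions and verify that no two share an id.
class MetricIdCheck {
    static final class Metric {
        final int id;
        final String name;
        Metric(int id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    // Returns true only if every metric has a distinct id.
    static boolean idsAreDistinct(List<Metric> metrics) {
        Set<Integer> seen = new HashSet<>();
        for (Metric m : metrics) {
            if (!seen.add(m.id)) {
                return false; // duplicate id, e.g. from copy-and-paste
            }
        }
        return true;
    }
}
```

A single assertion like `assert idsAreDistinct(allMetrics)` in a unit test turns this class of mistake into an immediate build failure instead of a silent metric collision.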
-
Created by Lingnan Liu
* KAFKALESS-1376: Add TopicPartitionMovement
* Use MockTime instead of SystemTime in unit test
* Added unit test for default config value
* KAFKALESS-1376: Add SuspendedTopicPartition
* Added notes in javadoc
* Separate out factory methods
* Don't use Delay interface since it is not suitable
* Use Iterator to generate distinct partitions and suspension durations
* Removed unused fields
* Use MockTime in unit test
* Add epoch to definition
* Put duplicate code in abstract topic partition history
* Fix javadoc
* Address comments
* Fix checkstyle
* Fix config type
* Minor adjustments
* Addressed comments
* Decrease the threshold of validity test
-
Created by Fred Zheng
-
Created by Stanislav Kozlovski
KAFKALESS-1439: Log full exception stack trace at DEBUG when a plan fails to compute during retry (#7885)
-
Created by kpatelatwork
-
Created by Milo Simpson
- add CCloudKsqlRoleBindingAdmin and CCloudKsqlHealthChecker
- KsqlAdmin can grant KsqlAdmin to another user for the same ksql

Co-authored-by: Panagiotis Garefalakis <pgarefalakis@confluent.io>
- November 09, 2022
-
-
Created by Xavier Léauté
This change migrates the telemetry reporter to send data in OpenTelemetry format instead of OpenCensus format.

- refactors SinglePointMetric methods to accept generic Number classes to reduce code duplication
- renames SinglePointMetric methods sum / counter to sum / deltaSum, to reflect the OpenTelemetry metric naming
- converts the OpenCensus-specific "resource type" field into a generic OpenTelemetry "type" resource attribute
- filters out the resource `type` attribute from events to prevent it from clashing with the existing "event type" field
- simplifies `Instant.clock(timestamp)` to `clock.timestamp()`
- removes unnecessary references to `this.` to keep the code style consistent

This is dependent on downstream consumers (Druid, Connect, SBC, Telemetry Receiver) supporting both formats for the duration of the upgrade from the old version to the new version.

Dependencies:
- [x] update SBC telemetry sampler to support OpenTelemetry https://github.com/confluentinc/ce-kafka/pull/7678
- [x] define upgrade path for SBC – SBC will ignore newer-format metrics, and pause balancing during roll.
- [x] Druid support https://github.com/confluentinc/druid/pull/63
- [x] Connect support https://github.com/confluentinc/opencensus-protobuf-converter/pull/10
- [x] Telemetry Receiver support for OpenTelemetry https://github.com/confluentinc/schroedinger/pull/1230
-
Created by Panagiotis Garefalakis
-
Created by Colin Patrick McCabe
This PR adds a new ImageWriter interface which replaces the generic Consumer interface that accepted lists of records. It is better to do batching in the ImageWriter than to try to deal with that complexity in the MetadataImage#write functions, especially since batching is not semantically meaningful in KRaft snapshots. The new ImageWriter interface also supports freeze and close, which more closely matches the semantics of the underlying Raft classes.

The PR also adds an ImageWriterOptions class which we can use to pass parameters that control how the new image is written. Right now, the parameters we are interested in are the target metadata version (which may be higher or lower than the original image's version) and a handler function which is invoked whenever metadata is lost due to the target version.

Convert the MetadataImage#write function (and associated functions) to use the new ImageWriter and ImageWriterOptions. In particular, we now have a way to handle metadata losses by invoking ImageWriterOptions#handleLoss. This allows us to handle writing an image at a lower version for the first time. This support is still not enabled externally by this PR, though; that will come in a future PR.

Get rid of the use of SOME_RECORD_TYPE.highestSupportedVersion() in several places. In general, we do not want to "silently" change the version of a record that we output just because a new version was added. We should be explicit about what record version numbers we are outputting.

Implement ProducerIdsDelta#toString, to make debug logs look better.

Move MockRandom to the server-common package so that other internal broker packages can use it.

Reviewers: José Armando García Sancio <jsancio@apache.org>

Conflicts: Added ImageWriter / ImageWriterOptions to Confluent-specific metadata image classes. Handled conflicts related to metadata encryption existing in ce-kafka but not AK.

Fix a bug in BrokerMetadataSnapshotter and BrokerMetadataListener where we were not calling close(true) on the image writers.
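The ImageWriter / ImageWriterOptions shape described above can be sketched as follows. The names approximate the PR description; this is a hedged, simplified illustration, not the actual ce-kafka code, and the in-memory implementation exists only to show the freeze/close contract:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hedged sketch of the interfaces described in the commit message above
// (approximate names, not the real ce-kafka classes).
class ImageWriterSketch {
    interface ImageWriter {
        void write(Object record);
        void freeze();   // no more records may be written after freeze
        void close();    // matches the semantics of the underlying Raft classes
    }

    // Options controlling how an image is written: the target metadata
    // version plus a handler invoked whenever metadata is lost because the
    // target version cannot represent it.
    static final class ImageWriterOptions {
        final int targetMetadataVersion;
        final Consumer<String> lossHandler;
        ImageWriterOptions(int targetMetadataVersion, Consumer<String> lossHandler) {
            this.targetMetadataVersion = targetMetadataVersion;
            this.lossHandler = lossHandler;
        }
        void handleLoss(String what) {
            lossHandler.accept(what);
        }
    }

    // Simple in-memory writer used to illustrate the freeze contract.
    static final class InMemoryImageWriter implements ImageWriter {
        final List<Object> records = new ArrayList<>();
        boolean frozen = false;
        public void write(Object record) {
            if (frozen) throw new IllegalStateException("writer is frozen");
            records.add(record);
        }
        public void freeze() { frozen = true; }
        public void close() { frozen = true; }
    }
}
```

The value of the explicit freeze/close lifecycle is that forgetting to close a writer (the bug fixed in BrokerMetadataSnapshotter / BrokerMetadataListener) becomes an observable state rather than a silent leak.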
-
Created by Yiran
-
Created by Michael Li
In order to meet the code freeze for CP 7.3, we duplicated a few classes in both ce-kafka and telemetry-api. Now that we have some time, we need to consolidate these classes so that we reduce tech debt and only make future changes in one place. The following classes have been moved from ce-kafka to telemetry-api:

- NamedFilter
- RemoteConfiguration
- RemoteConfigurationResponse
- RemoteConfigurationRequest

We also introduce a new dependency on com.hubspot.jackson.datatype.protobuf in order to register the ProtobufModule() with the HttpRemoteConfigurationSource's ObjectMapper, since the shared RemoteConfigurationRequest is an OTel resource proto and the object mapper needs to know how to serialize that object. The corresponding changes in telemetry-api can be found in this PR.
- November 08, 2022
-
-
Created by ssumit33
-
Created by Aman Singh
Change the field name in the user metadata record and update the parsing of the user metadata record.
-
Created by Sarat Kakarla
* Add describe cells load RPC message
-
Created by gbadoni
* Test cases written and passing
* All 6 cases passing, added kafka config prop
* Minor format
* Minor format
* Minor cleanup
* Incorporated review comments
* Comment
* Incorporated comment
* Added a log statement
* Extra import removed
- November 07, 2022
-
-
Created by ssumit33
- Reducing the expiry time for Gradle cached items to 7 days so that the cache doesn't get too big
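If this expiry is managed through Gradle's local build cache, a 7-day retention can be expressed with the standard Gradle option below. This is a sketch of what such a setting might look like in settings.gradle; the actual CI setup for this repo may configure the cache elsewhere:

```groovy
// settings.gradle — assumed use of Gradle's built-in local build cache
buildCache {
    local {
        // Drop cached entries that have not been used for 7 days
        removeUnusedEntriesAfterDays = 7
    }
}
```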
-
Created by Vikas Singh
* KAFKALESS-1386: Clear metrics not belonging to leader/follower

Currently the code clears only follower metrics when creating follower load, and even there it misses a few of the metrics. This change clears all metrics that do not belong to a follower. On top of that, it changes the code to also clear metrics not belonging to a leader. Added new tests to make sure that only metrics relevant to the leader/follower are present and added to ClusterModel. Some calculations in existing tests needed to be redone.
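The "clear everything that does not belong" approach above can be sketched with a small allow-list filter. The names here are hypothetical, not the real SBC code; the point is that retaining only the metrics valid for a role is robust against newly added metrics, whereas removing a hard-coded deny-list is what allowed metrics to be missed:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of role-based metric filtering (hypothetical names): keep only
// the metrics on the role's allow-list and drop everything else, so a
// newly introduced metric cannot silently leak into the wrong load model.
class RoleMetricFilter {
    // Returns a copy of the metrics map containing only the entries whose
    // names are valid for the given role.
    static Map<String, Double> retainFor(Map<String, Double> metrics,
                                         Set<String> allowedForRole) {
        Map<String, Double> result = new HashMap<>();
        for (Map.Entry<String, Double> e : metrics.entrySet()) {
            if (allowedForRole.contains(e.getKey())) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }
}
```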
-
Created by ConfluentSemaphore
-
Created by Jason Gustafson
Support dynamic configuration of create-topic/alter-config/create-cluster-link policies in KRaft (#7987)

This patch adds support for dynamic configuration of policy implementations on the controller in KRaft. This is needed in order to support cluster shrink/expand, which requires updating partition limits in the `CreateTopicPolicy` implementation.

Reviewers: José Armando García Sancio <jsancio@users.noreply.github.com>