- October 07, 2020
- September 16, 2020
By Confluent Jenkins Bot
- August 21, 2020
By Colin Patrick McCabe
This reverts commit bf6dffe9. Reviewers: Ismael Juma <ismael@confluent.io> (cherry picked from commit 232a0f48)
- August 19, 2020
By Andrew Egelhofer
ducktape diff: https://github.com/confluentinc/ducktape/compare/v0.7.8...v0.7.9 - bcrypt (a dependency of ducktape) dropped Python 2.7 support. ducktape 0.7.9 now pins bcrypt to a version that still supports Python 2.7. Author: Andrew Egelhofer <aegelhofer@confluent.io> Reviewers: Dhruvil Shah <dhruvil@confluent.io>, Manikumar Reddy <manikumar.reddy@gmail.com> Closes #9192 from andrewegel/trunk (cherry picked from commit f6c26eaa) Signed-off-by: Manikumar Reddy <manikumar.reddy@gmail.com>
- August 18, 2020
By Konstantine Karantasis
KAFKA-10387: Fix inclusion of transformation configs when topic creation is enabled in Connect (#9172) Addition of configs for custom topic creation with KIP-158 created a regression when transformation configs are also included in the configuration of a source connector. To experience the issue, just enabling topic creation at the worker is not sufficient. A user needs to supply a source connector configuration that contains both transformations and custom topic creation properties. The issue is that the enrichment of configs in `SourceConnectorConfig` happens on top of an `AbstractConfig` rather than a `ConnectorConfig`. Inheriting from the latter allows enrichment to be composable for both topic creation and transformations. Unit tests and integration tests are written to test these combinations. Reviewers: Randall Hauch <rhauch@gmail.com>
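The composability point above can be illustrated with a minimal Python sketch (hypothetical classes mirroring the names in the commit, not Connect's real implementation): each level of the config hierarchy contributes its own definitions, so enriching on top of the subclass keeps the transformation configs that enriching on top of the abstract base would lose.

```python
# Hypothetical sketch of composable config enrichment: each subclass
# extends its parent's config definitions, so building from
# ConnectorConfig (not AbstractConfig) preserves transform configs.

class AbstractConfig:
    def defs(self):
        return {"name"}

class ConnectorConfig(AbstractConfig):
    def defs(self):
        # Adds transformation configs on top of the base definitions.
        return super().defs() | {"transforms"}

class SourceConnectorConfig(ConnectorConfig):
    def defs(self):
        # Inherits from ConnectorConfig, so topic-creation enrichment
        # composes with the transformation configs instead of replacing them.
        return super().defs() | {"topic.creation"}
```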
By Ismael Juma
Update Jackson to 2.10.5 and Jersey to 2.30. Note that the versions in master are already aligned. Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>
- August 14, 2020
By Rajini Sivaram
Reviewers: Ismael Juma <ismael@juma.me.uk>
- August 13, 2020
By Nitesh Mor
- August 11, 2020
By Stanislav Kozlovski
This patch ensures we use a force resolution strategy for the scala-library dependency. I've tested this locally and saw a difference in the output.

With the change (using 2.4 and the jackson library 2.10.5):
```
./core/build/dependant-libs-2.12.10/scala-java8-compat_2.12-0.9.0.jar
./core/build/dependant-libs-2.12.10/scala-collection-compat_2.12-2.1.2.jar
./core/build/dependant-libs-2.12.10/scala-reflect-2.12.10.jar
./core/build/dependant-libs-2.12.10/scala-logging_2.12-3.9.2.jar
./core/build/dependant-libs-2.12.10/scala-library-2.12.10.jar
```

Without (using 2.4 and the jackson library 2.10.5):
```
find . -name 'scala*.jar'
./core/build/dependant-libs-2.12.10/scala-java8-compat_2.12-0.9.0.jar
./core/build/dependant-libs-2.12.10/scala-collection-compat_2.12-2.1.2.jar
./core/build/dependant-libs-2.12.10/scala-reflect-2.12.10.jar
./core/build/dependant-libs-2.12.10/scala-logging_2.12-3.9.2.jar
./core/build/dependant-libs-2.12.10/scala-library-2.12.12.jar
```

Reviewers: Ismael Juma <ismael@juma.me.uk>
- August 10, 2020
By Brian Bushree
This patch adds `share/java/confluent-telemetry` to Kafka's classpath as a place to install the confluent-metrics jar needed for the telemetry reporter. While the telemetry reporter is not installed by default with ccs Kafka, we'd like to support it.
- August 04, 2020
By Chia-Ping Tsai
Creating a topic may fail (due to timeout) when running system tests. However, `RoundTripWorker` does not ignore `TopicExistsException`, which makes `round_trip_fault_test.py` flaky. More specifically, a network exception can cause the `CreateTopics` request to reach Kafka, but Trogdor retries it and hits a `TopicAlreadyExists` exception on the retry, failing the test. Reviewers: Ismael Juma <ismael@juma.me.uk>
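The retry pitfall described above can be sketched in a few lines of Python (hypothetical names, not the actual Trogdor API): a create request may succeed on the broker even though the client saw a network error, so the retry path must treat "already exists" as success.

```python
# Hypothetical sketch: idempotent topic creation that tolerates
# "already exists" on retry, since the first attempt may have reached
# the broker despite the client seeing a network error.

class TopicExistsError(Exception):
    pass

def create_topic_idempotent(create_fn, topic, retries=3):
    """Retry topic creation, treating TopicExistsError as success."""
    last_err = None
    for _ in range(retries):
        try:
            create_fn(topic)
            return True
        except TopicExistsError:
            # An earlier attempt already created the topic.
            return True
        except ConnectionError as e:
            last_err = e  # transient network failure; retry
    raise last_err
```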
- August 01, 2020
By Jason Gustafson
Add some notable changes to the reassignment tool for the 2.6 release. Reviewers: Randall Hauch <rhauch@gmail.com>
- July 29, 2020
By Rens Groothuijsen
Update Jersey license from CDDL to EPLv2. Author: Rens Groothuijsen <l.groothuijsen@alumni.maastrichtuniversity.nl> Reviewer: Randall Hauch <rhauch@gmail.com>
- July 28, 2020
By Bruno Cadonna
In PR #8962 we introduced a sentinel UNKNOWN_OFFSET to mark unknown offsets in checkpoint files. The sentinel was set to -2, which is the same value used for the sentinel LATEST_OFFSET that is used in subscriptions to signal that state stores have been used by an active task. Unfortunately, we failed to skip UNKNOWN_OFFSET when computing the sum of the changelog offsets. If a task had only one state store and it did not restore anything before the next rebalance, the stream thread wrote -2 (i.e., UNKNOWN_OFFSET) into the subscription as the sum of the changelog offsets. During assignment, the leader interpreted the -2 as if the stream thread had run the task as active, although it might have run it as standby. This misinterpretation of the sentinel value resulted in unexpected task assignments. Reviewers: A. Sophie Blee-Goldman <sophie@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>, John Roesler <vvcephei@apache.org>, Matthias J. Sax <mjsax@apache.org>
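A minimal sketch of the fix, using hypothetical constants mirroring the sentinels named above: skip UNKNOWN_OFFSET when summing changelog offsets, so the sum can never collide with the LATEST_OFFSET sentinel the leader looks for.

```python
# Hypothetical sketch: both sentinels share the value -2, so a naive
# sum over a single-store task that restored nothing would emit -2 and
# be misread as LATEST_OFFSET. Skipping the sentinel avoids the collision.

UNKNOWN_OFFSET = -2  # checkpoint sentinel: offset not known yet
LATEST_OFFSET = -2   # subscription sentinel: store used by an active task

def sum_of_changelog_offsets(offsets):
    """Sum per-store offsets, ignoring the UNKNOWN_OFFSET sentinel."""
    known = [o for o in offsets.values() if o != UNKNOWN_OFFSET]
    # With no known offsets, report 0 restored progress rather than
    # the sentinel value.
    return sum(known)
```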
By Bruno Cadonna
A system test failed with the following error: global name 'self' is not defined The reason was that `self` was accessed to log a message in a static method. This commit makes the method an instance method. Reviewer: Matthias J. Sax <matthias@confluent.io>
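The bug class above is easy to reproduce in a few lines (hypothetical service class, not the actual test code): referencing `self` inside a static method raises a NameError at call time, because `self` is never bound; making it an instance method fixes it.

```python
# Hypothetical sketch of the bug: `self` inside a @staticmethod is an
# unbound name, so calling the method raises NameError. The instance
# method variant binds `self` and works.

class Service:
    def __init__(self, name):
        self.name = name

    @staticmethod
    def describe_broken():
        return "service %s" % self.name  # NameError: 'self' is not defined

    def describe(self):
        # Instance method: `self` is bound, so logging/formatting works.
        return "service %s" % self.name
```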
- July 27, 2020
By John Roesler
Fixes slow release due to establishing a separate SSH connection per file to copy. Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
By Randall Hauch
MINOR: Adjust 'release.py' script to use shell when using gradlewAll and PGP signing, which were required to build the 2.6.0 RCs (#9045)
By Matthias J. Sax
* KAFKA-10306: GlobalThread should fail on InvalidOffsetException
* Update streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateUpdateTask.java
* Update streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStreamThread.java

Co-authored-by: John Roesler <vvcephei@users.noreply.github.com>
By Ismael Juma
Java 11 has been recommended for a while, but the ops section had not been updated. Also added `-XX:+ExplicitGCInvokesConcurrent` which has been in `kafka-run-class` for a while. Finally, tweaked the text slightly to read better. Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
- July 26, 2020
By Brian Byrne
Set `replica.fetch.max.bytes` to `1` and produce multiple record batches to allow throttling to take place. This helps avoid a race condition where the reassignment would sometimes complete more quickly than expected, causing an assertion to fail. Reviewers: Lucas Bradstreet <lucas@confluent.io>, Jason Gustafson <jason@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>, Ismael Juma <ismael@juma.me.uk>
- July 25, 2020
By Stanislav Kozlovski
We would previously update the map by adding the new replicas to the map and then removing the old ones. During a recent refactoring, we changed the logic to first clear the map and then add all the replicas to it. While this is done in a write lock, not all callers that access the map structure use a lock. It is safer to revert to the previous behavior of showing the intermediate state of the map with extra replicas, rather than an intermediate state of the map with no replicas. Reviewers: Ismael Juma <ismael@juma.me.uk>
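The ordering concern above can be sketched with a plain Python dict (a hypothetical replica map, not Kafka's actual data structure): when readers may observe intermediate states without a lock, add the new entries before removing the old ones, so the map is never seen empty.

```python
# Hypothetical sketch: add-then-remove update order. An unlocked reader
# may briefly see old and new replicas together, but never an empty map,
# unlike the clear-then-add order.

def update_replicas(replica_map, new_replicas):
    # Step 1: add/overwrite with the new replicas.
    replica_map.update(new_replicas)
    # Step 2: drop entries that are no longer part of the assignment.
    for key in [k for k in replica_map if k not in new_replicas]:
        del replica_map[key]
```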
By huxi
KAFKA-10268: dynamic configs like "--delete-config log.retention.ms" don't work (https://issues.apache.org/jira/browse/KAFKA-10268). Currently, the ConfigCommand --delete-config API does not restore the config to its default value, whether at the broker level or the broker-default level. The Admin.incrementalAlterConfigs API also runs into this problem. This patch fixes it by removing the corresponding config from the newConfig properties when reconfiguring dynamic broker configs.
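The fix pattern can be sketched with Python's stdlib config layering (a hypothetical two-layer config, not Kafka's implementation): deleting a dynamic override must remove the key entirely, so lookups fall back to the static default rather than keeping a stale value.

```python
# Hypothetical sketch: layered config where deleting a dynamic override
# restores visibility of the static default.

from collections import ChainMap

static_defaults = {"log.retention.ms": 604800000}  # broker default: 7 days
dynamic_overrides = {"log.retention.ms": 1000}     # dynamic override

# Lookups check the dynamic layer first, then the static defaults.
effective = ChainMap(dynamic_overrides, static_defaults)

def delete_config(key):
    # Remove the override entirely; the default becomes visible again.
    dynamic_overrides.pop(key, None)
```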
- July 23, 2020
By Rajini Sivaram
Reviewers: Ismael Juma <ismael@juma.me.uk>
By Jason Gustafson
KAFKA-10235 fixed a consistency issue with the transaction timeout and the progress timeout. Since the test case relies on transaction timeouts, we need to wait at least as long as the timeout in order to ensure progress. However, having a low transaction timeout makes the test prone to the issue identified in KAFKA-9802, in which the coordinator timed out the transaction while the producer was awaiting a Produce response. Reviewers: Chia-Ping Tsai <chia7712@gmail.com>, Boyang Chen <boyang@confluent.io>, Jun Rao <junrao@gmail.com>
- July 22, 2020
By David Mao
- July 21, 2020
By Nitesh Mor
Context: log4j v1 reached end of life many years ago and is affected by CVE-2019-17571. Confluent's repackaged version of log4j fixes the security vulnerabilities. Reviewers: Ismael Juma <ismael@juma.me.uk>, Jeff Kim <jeff.kim@confluent.io>
- July 20, 2020
By Greg Harris
Signed-off-by: Greg Harris <gregh@confluent.io>
By Greg Harris
Currently, the system tests `connect_distributed_test` and `connect_rest_test` only wait for the REST API to come up. The startup of the worker includes an asynchronous process for joining the worker group and syncing with other workers. In some situations this sync takes an unusually long time, and the test continues without all workers up. This leads to flaky test failures, as worker joins are not given sufficient time to time out and retry without waiting explicitly. This changes the `ConnectDistributedTest` to wait for the "Joined group" message to be printed to the logs before continuing with tests. I've activated this behavior by default, as it's a superset of the checks that were performed by default before. This log message is present in every version of DistributedHerder that I could find, in slightly different forms, but always with "Joined group" at the beginning of the log message. This change should be safe to backport to any branch. Signed-off-by: Greg Harris <gregh@confluent.io> Author: Greg Harris <gregh@confluent.io> Reviewer: Randall Hauch <rhauch@gmail.com>
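The readiness check described above amounts to polling a worker's log for the "Joined group" marker; a minimal sketch (hypothetical log source, not ducktape's actual API):

```python
# Hypothetical sketch: poll a worker's log until "Joined group" appears,
# instead of assuming the worker is ready once REST is up.

import time

def wait_for_join(read_log, timeout_sec=60, poll_sec=0.5):
    """Return True once 'Joined group' shows up in the log text."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        if "Joined group" in read_log():
            return True
        time.sleep(poll_sec)
    return False
```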
By Manikumar Reddy
Add missing broker/client compatibility tests for the 2.5.0 release. Author: Manikumar Reddy <manikumar.reddy@gmail.com> Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com> Closes #9041 from omkreddy/compat (cherry picked from commit b02fa534) Signed-off-by: Manikumar Reddy <manikumar.reddy@gmail.com>
- July 18, 2020
By Brian Byrne
By elismaga
By Rajini Sivaram
KAFKA-10223; Use NOT_LEADER_OR_FOLLOWER instead of non-retriable REPLICA_NOT_AVAILABLE for consumers (#8979)

Brokers currently return NOT_LEADER_FOR_PARTITION to producers and REPLICA_NOT_AVAILABLE to consumers if a replica is not available on the broker during reassignments. Non-Java clients treat REPLICA_NOT_AVAILABLE as a non-retriable exception; Java consumers handle this error by explicitly matching the error code, even though it is not an InvalidMetadataException. This PR renames NOT_LEADER_FOR_PARTITION to NOT_LEADER_OR_FOLLOWER and uses the same error for producers and consumers. This is compatible with both Java and non-Java clients, since all clients handle this error code (6) as a retriable exception. The PR also makes ReplicaNotAvailableException a subclass of InvalidMetadataException.

- ALTER_REPLICA_LOG_DIRS continues to return REPLICA_NOT_AVAILABLE. Retained this for compatibility, since this request never returned NOT_LEADER_FOR_PARTITION earlier.
- MetadataRequest version 0 also returns REPLICA_NOT_AVAILABLE as the topic-level error code for compatibility. Newer versions filter these out and return Errors.NONE, so this was not changed.
- Partition responses in MetadataRequest return REPLICA_NOT_AVAILABLE to indicate that one of the replicas is not available. This was not changed, since NOT_LEADER_FOR_PARTITION is not suitable in this case.

Reviewers: Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>, Bob Barrett <bob.barrett@confluent.io>
By Jason Gustafson
The test case `OffsetValidationTest.test_fencing_static_consumer` fails periodically due to this error:
```
Traceback (most recent call last):
  File "/home/jenkins/workspace/system-test-kafka_2.6/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.8-py2.7.egg/ducktape/tests/runner_client.py", line 134, in run
    data = self.run_test()
  File "/home/jenkins/workspace/system-test-kafka_2.6/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.8-py2.7.egg/ducktape/tests/runner_client.py", line 192, in run_test
    return self.test_context.function(self.test)
  File "/home/jenkins/workspace/system-test-kafka_2.6/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.8-py2.7.egg/ducktape/mark/_mark.py", line 429, in wrapper
    return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/home/jenkins/workspace/system-test-kafka_2.6/kafka/tests/kafkatest/tests/client/consumer_test.py", line 257, in test_fencing_static_consumer
    assert len(consumer.dead_nodes()) == num_conflict_consumers
AssertionError
```
When a consumer stops, there is some latency between when the shutdown is observed by the service and when the node is added to the dead nodes. This patch fixes the problem by giving the assertion some time to be satisfied. Reviewers: Boyang Chen <boyang@confluent.io>
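The fix pattern above, polling a condition instead of asserting it once, can be sketched as follows (hypothetical helper, not ducktape's actual `wait_until`):

```python
# Hypothetical sketch: poll a condition until a deadline to absorb the
# latency between a shutdown and the service observing it.

import time

def eventually(condition, timeout_sec=30, poll_sec=0.1):
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_sec)
    return condition()  # one final check at the deadline

# Flaky:  assert len(consumer.dead_nodes()) == num_conflict_consumers
# Robust: assert eventually(lambda: len(consumer.dead_nodes()) == num_conflict_consumers)
```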