Compare revisions

Changes are shown as if the source revision were being merged into the target revision.

Commits on source: 3,625

3,525 additional commits have been omitted to prevent performance issues.

1,000 files changed: +157,208 −107

Files

.dockerignore

0 → 100644
+1 −0
results
+6 −0
@@ -56,3 +56,9 @@ clients/src/generated-test
jmh-benchmarks/generated
jmh-benchmarks/src/main/generated
streams/src/generated

.netrc
.semaphore-cache/
.ssh
vendor/
/tmp/
+43 −0
version: v1.0
name: ce-kafka
agent:
  machine:
    type: e1-standard-8
    os_image: ubuntu1804
blocks:
  - name: "Build, Test, Release"
    task:
      secrets:
        - name: vault_sem2_approle_prod
      prologue:
        commands:
          - checkout
          - make install-vault
          - . mk-include/bin/vault-setup
          - . vault-sem-get-secret semaphore-secrets-global
          - . vault-sem-get-secret artifactory-docker-helm
          - . vault-sem-get-secret testbreak-reporting
          - . vault-sem-get-secret aws_credentials
          - . vault-sem-get-secret ssh_id_rsa
          - . vault-sem-get-secret ssh_config
          - . vault-sem-get-secret gitconfig
          - . vault-sem-get-secret netrc
          - . vault-sem-get-secret maven-settings
          - . vault-sem-get-secret gradle_properties
          - chmod 400 ~/.ssh/id_rsa
          - make init-ci
          - sem-version java 8
          - sem-version go 1.12
          - git config --global url."git@github.com:".insteadOf "https://github.com/"
          - export SEMAPHORE_CACHE_DIR=/home/semaphore
          - source /home/semaphore/.testbreak/setup.sh
      jobs:
        - name: Setup, build, release
          commands:
            - make init-ci
            - make build
            - make test
            - make release-ci
      epilogue:
        commands:
          - source /home/semaphore/.testbreak/after.sh
 No newline at end of file

COPYRIGHT

0 → 100644
+1 −0
Copyright 2016 Confluent, Inc.

Confluent-README.md

0 → 100644
+6 −0
Confluent Open Source
==========================
This is a [Confluent](https://www.confluent.io/) Open Source fork of Apache Kafka.

This version includes several modifications to enhance maintainability and ease of use.
Just like Apache Kafka, COS is distributed under the Apache 2.0 license.

Dockerfile

0 → 100644
+90 −0
##########

##########
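# Build stage: compile the ce-kafka distribution tarball (releaseTarGz) with Gradle.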

FROM openjdk:11-jdk as kafka-builder
USER root

COPY . /home/gradle

WORKDIR /home/gradle

# /root/.gradle is a docker volume so we can't copy files into it
ENV GRADLE_USER_HOME=/root/gradle-home

RUN mkdir -p /root/.m2/repository $GRADLE_USER_HOME \
  && cp ./tmp/gradle/gradle.properties $GRADLE_USER_HOME

RUN ./gradlew clean releaseTarGz -x signArchives --stacktrace -PpackageMetricsReporter=true && ./gradlew install --stacktrace

WORKDIR /build
# The build generates two tgz files, one with compiled code and one
# with site-docs. The pattern needs to include part of the version
# before .tgz so we only match the code jar and not the site-docs
# jar.
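# For example, with a hypothetical version 5.3.0: kafka_2.12-5.3.0-ce.tgz matches
# kafka_*-ce.tgz, while a kafka_2.12-5.3.0-ce-site-docs.tgz would not.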
RUN tar -xzvf /home/gradle/core/build/distributions/kafka_*-ce.tgz --strip-components 1

##########

# Build a Docker image for the K8s liveness storage probe.

FROM golang:1.12.7 as go-build

ARG version

WORKDIR /root
COPY .ssh .ssh
COPY .netrc ./
RUN ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts

WORKDIR /go/src/github.com/confluentinc/ce-kafka/cc-services/storage_probe
COPY cc-services/storage_probe .
COPY ./mk-include ./mk-include

RUN make deps DEP_ARGS=-vendor-only VERSION=${version}

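# CGO_ENABLED=0 with pinned GOOS/GOARCH yields a statically linked linux/amd64 binary,
# safe to copy into the unrelated final base image below.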
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 make build-go GO_OUTDIR= VERSION=${version}


##########

FROM confluent-docker.jfrog.io/confluentinc/cc-base:v5.1.0-jdk-14

ARG version
ARG confluent_version
ARG git_sha
ARG git_branch

ENV COMPONENT=kafka
ENV KAFKA_SECRETS_DIR=/mnt/secrets
ENV KAFKA_LOG4J_DIR=/mnt/log
ENV KAFKA_CONFIG_DIR=/mnt/config

EXPOSE 9092

VOLUME ["${KAFKA_SECRETS_DIR}", "${KAFKA_LOG4J_DIR}"]

LABEL git_sha="${git_sha}"
LABEL git_branch="${git_branch}"

CMD ["/opt/caas/bin/run"]

# Copy Kafka
COPY --from=kafka-builder /build /opt/confluent

COPY include/opt/caas /opt/caas

# Set up storage probe
COPY --from=go-build /storage-probe /opt/caas/bin

WORKDIR /
RUN mkdir -p /opt/caas/lib \
  && curl -o /opt/caas/lib/jmx_prometheus_javaagent-0.12.0.jar -O https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.12.0/jmx_prometheus_javaagent-0.12.0.jar \
  && mkdir -p /opt/asyncprofiler \
  && curl -L https://github.com/jvm-profiling-tools/async-profiler/releases/download/v1.6/async-profiler-1.6-linux-x64.tar.gz | tar xz -C /opt/asyncprofiler \
  && apt update \
  && apt install -y cc-rollingupgrade-ctl=0.9.0 vim-tiny \
  && apt-get autoremove -y \
  && mkdir -p  "${KAFKA_SECRETS_DIR}" "${KAFKA_LOG4J_DIR}" /opt/caas/config/kafka \
  && ln -s "${KAFKA_CONFIG_DIR}/kafka.properties" /opt/caas/config/kafka/kafka.properties \
  && chmod -R ag+w "${KAFKA_SECRETS_DIR}" "${KAFKA_LOG4J_DIR}"

Jenkinsfile

0 → 100755
+127 −0
#!/usr/bin/env groovy

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

def config = jobConfig {
    cron = '@weekly'
    nodeLabel = 'docker-oraclejdk8-ce-kafka'
    testResultSpecs = ['junit': '**/build/test-results/**/TEST-*.xml']
    slackChannel = '#kafka-warn'
    timeoutHours = 4
    runMergeCheck = false
    downStreamValidate = false
    downStreamRepos = ["common",]
    disableConcurrentBuilds = true
}

def retryFlagsString(jobConfig) {
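    // Groovy methods return the last evaluated expression, so this if/else yields
    // the flags string (or "") as the implicit return value.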
    if (jobConfig.isPrJob) " -PmaxTestRetries=1 -PmaxTestRetryFailures=5"
    else ""
}

def downstreamBuildFailureOutput = ""
def publishStep(String mavenUrl) {
    sh "./gradlewAll -PmavenUrl=${mavenUrl} --no-daemon uploadArchives"
}
def job = {
    withVaultEnv([["artifactory/tools_jenkins", "user", "ORG_GRADLE_PROJECT_mavenUsername"],
        ["artifactory/tools_jenkins", "password", "ORG_GRADLE_PROJECT_mavenPassword"]]) {
        if (!config.isReleaseJob) {
            ciTool("ci-update-version ${env.WORKSPACE} ce-kafka")
        }

        stage("Check compilation compatibility with Scala 2.12") {
            sh "./gradlew clean assemble spotlessScalaCheck checkstyleMain checkstyleTest spotbugsMain " +
                    "--no-daemon --stacktrace -PxmlSpotBugsReport=true -PscalaVersion=2.12"
        }

        stage("Compile and validate") {
            sh "./gradlew clean assemble install spotlessScalaCheck checkstyleMain checkstyleTest spotbugsMain " +
                    "--no-daemon --stacktrace -PxmlSpotBugsReport=true"
        }

        if (config.publish && (config.isDevJob || config.isPreviewJob)) {
            stage("Publish to Artifactory") {
                if (!config.isReleaseJob && !config.isPrJob && !config.isPreviewJob) {
                    ciTool("ci-push-tag ${env.WORKSPACE} ce-kafka")
                }

                if (config.isDevJob) {
                    publishStep('https://confluent.jfrog.io/confluent/maven-public/')
                } else if (config.isPreviewJob) {
                    publishStep('https://confluent.jfrog.io/confluent/maven-releases-preview/')
                }
            }
        }

        if (config.publish && config.isDevJob && !config.isReleaseJob && !config.isPrJob) {
            stage("Start Downstream Builds") {
                config.downStreamRepos.each { repo ->
                    build(job: "confluentinc/${repo}/${env.BRANCH_NAME}",
                        wait: false,
                        propagate: false
                    )
                }
            }
        }
    }

    def runTestsStepName = "Step run-tests"
    def downstreamBuildsStepName = "Step cp-downstream-builds"
    def testTargets = [
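        // Note: unquoted Groovy map keys are literal strings, so these entries are keyed by
        // "runTestsStepName" and "downstreamBuildsStepName" (matching the lookups on the
        // result below), not by the values of the variables defined above.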
        runTestsStepName: {
            stage('Run tests') {
                echo "Running unit and integration tests"
                sh "./gradlew unitTest integrationTest " +
                    "--no-daemon --stacktrace --continue -PtestLoggingEvents=started,passed,skipped,failed " +
                    "-PmaxParallelForks=4 -PignoreFailures=true" + retryFlagsString(config)
            }
            stage('Upload results') {
                // Kafka failed test stdout files
                archiveArtifacts artifacts: '**/testOutput/*.stdout', allowEmptyArchive: true

                def summary = junit '**/build/test-results/**/TEST-*.xml'
                def total = summary.getTotalCount()
                def failed = summary.getFailCount()
                def skipped = summary.getSkipCount()
                summary = "Test results:\n\t"
                summary = summary + ("Passed: " + (total - failed - skipped))
                summary = summary + (", Failed: " + failed)
                summary = summary + (", Skipped: " + skipped)
                return summary;
            }
        },
        downstreamBuildsStepName: {
            echo "Building cp-downstream-builds"
            stage('Downstream validation') {
                if (config.isPrJob && config.downStreamValidate) {
                    downStreamValidation(true, true)
                } else {
                    return "skip downStreamValidation"
                }
            }
        }
    ]

    result = parallel testTargets
    // combine results of the two targets into one result string
    return result.runTestsStepName + "\n" + result.downstreamBuildsStepName
}

runJob config, job
echo downstreamBuildFailureOutput

Makefile

0 → 100644
+116 −0
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

IMAGE_NAME := ce-kafka
BASE_IMAGE := confluent-docker.jfrog.io/confluentinc/cc-base
BASE_VERSION := v2.4.0
MASTER_BRANCH := master
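# Note: make expands the unescaped $1 below to empty, so awk effectively runs '{print}'
# and emits the text left after stripping everything through "version=", i.e. the version.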
KAFKA_VERSION := $(shell awk 'sub(/.*version=/,""){print $1}' ./gradle.properties)
VERSION_POST := -$(KAFKA_VERSION)
DOCKER_BUILD_PRE  += copy-gradle-properties
DOCKER_BUILD_POST += clean-gradle-properties

BUILD_TARGETS += build-docker-cc-kafka-init
BUILD_TARGETS += build-docker-cc-zookeeper
TEST_TARGETS += test-cc-services
RELEASE_POSTCOMMIT += push-docker-cc-kafka-init
RELEASE_POSTCOMMIT += push-docker-cc-zookeeper

ifeq ($(CONFLUENT_PLATFORM_PACKAGING),)
include ./mk-include/cc-begin.mk
include ./mk-include/cc-vault.mk
include ./mk-include/cc-semver.mk
include ./mk-include/cc-docker.mk
include ./mk-include/cc-end.mk
else
.PHONY: clean
clean:

.PHONY: distclean
distclean:

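# Catch-all: with CONFLUENT_PLATFORM_PACKAGING set, any other goal is forwarded to debian/Makefile.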
%:
	$(MAKE) -f debian/Makefile $@
endif

# Custom docker targets
.PHONY: show-docker-all
show-docker-all:
	@echo
	@echo ========================
	@echo "Docker info for ce-kafka:"
	@make VERSION=$(VERSION) show-docker
	@echo
	@echo ========================
	@echo "Docker info for cc-zookeeper"
	@make VERSION=$(VERSION) -C cc-zookeeper show-docker
	@echo
	@echo ========================
	@echo "Docker info for cc-kafka-init"
	@make VERSION=$(VERSION) -C cc-kafka-init show-docker
	@echo
	@echo ========================
	@echo "Docker info for soak_cluster"
	@make VERSION=$(VERSION) -C cc-services/soak_cluster show-docker
	@echo
	@echo ========================
	@echo "Docker info for trogdor"
	@make VERSION=$(VERSION) -C cc-services/trogdor show-docker
	@echo
	@echo ========================
	@echo "Docker info for tier-validator-services"
	@make VERSION=$(VERSION) -C cc-services/tier_validator show-docker

.PHONY: build-docker-cc-kafka-init
build-docker-cc-kafka-init:
	make VERSION=$(VERSION) -C cc-kafka-init build-docker

.PHONY: push-docker-cc-kafka-init
push-docker-cc-kafka-init:
	make VERSION=$(VERSION) -C cc-kafka-init push-docker

.PHONY: build-docker-cc-services
build-docker-cc-services:
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-services/soak_cluster build-docker
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-services/trogdor build-docker
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-services/tier_validator build-docker

.PHONY: push-docker-cc-services
push-docker-cc-services:
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-services/soak_cluster push-docker
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-services/trogdor push-docker
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-services/tier_validator push-docker

.PHONY: test-cc-services
test-cc-services:
	make VERSION=$(VERSION) -C cc-services/storage_probe test

.PHONY: build-docker-cc-zookeeper
build-docker-cc-zookeeper:
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-zookeeper build-docker

.PHONY: push-docker-cc-zookeeper
push-docker-cc-zookeeper:
	make VERSION=$(VERSION) BASE_IMAGE=$(IMAGE_REPO)/$(IMAGE_NAME) BASE_VERSION=$(IMAGE_VERSION) -C cc-zookeeper push-docker

GRADLE_TEMP = ./tmp/gradle/
.PHONY: copy-gradle-properties
copy-gradle-properties:
	mkdir -p $(GRADLE_TEMP)
	cp ~/.gradle/gradle.properties $(GRADLE_TEMP)

.PHONY: clean-gradle-properties
clean-gradle-properties:
	rm -rf $(GRADLE_TEMP)
+35 −0
@@ -6,3 +6,38 @@ The Apache Software Foundation (https://www.apache.org/).

This distribution has a binary dependency on jersey, which is available under the CDDL
License. The source code of jersey can be found at https://github.com/jersey/jersey/.

This distribution uses SSLExplorer (https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/samples/sni/SSLExplorer.java)
and SSLCapabilities (https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/samples/sni/SSLCapabilities.java),
with modifications, refactored into clients/src/main/java/org/apache/kafka/common/network/SslUtil.java.
Both are available under the BSD 3-Clause License as described below:
/*
 * Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *   - Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *
 *   - Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *
 *   - Neither the name of Oracle or the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
 * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
 * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
+4 −2
*More detailed description of your change,
if necessary. The PR title and PR message become
the squashed commit message, so use a separate
comment to ping reviewers.*
comment to ping reviewers. Please delete this
explanatory text.*

*Summary of testing strategy (including rationale)
for the feature or bug fix. Unit and/or integration
tests are expected for any behaviour change and
system tests should be considered for larger changes.*
system tests should be considered for larger changes.
Please delete this explanatory text.*

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation 
+5 −0
@@ -51,6 +51,7 @@ ec2_subnet_id = nil
# Only override this by setting it to false if you're running in a VPC and you
# are running Vagrant from within that VPC as well.
ec2_associate_public_ip = nil
ec2_iam_instance_profile_name = nil

jdk_major = '8'
jdk_full = '8u202-linux-x64'
@@ -121,6 +122,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
    aws.ami = ec2_ami
    aws.security_groups = ec2_security_groups
    aws.subnet_id = ec2_subnet_id
    aws.block_device_mapping = [{ 'DeviceName' => '/dev/sda1', 'Ebs.VolumeSize' => 20 }]
    # If a subnet is specified, default to turning on a public IP unless the
    # user explicitly specifies the option. Without a public IP, Vagrant won't
    # be able to SSH into the hosts unless Vagrant is also running in the VPC.
@@ -133,6 +135,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      region.spot_instance = ec2_spot_instance
      region.spot_max_price = ec2_spot_max_price
    end
    aws.iam_instance_profile_name = ec2_iam_instance_profile_name

    # Exclude some directories that can grow very large from syncing
    override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
@@ -143,6 +146,8 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
    node.vm.provider :aws do |aws|
      aws.tags = {
        'Name' => ec2_instance_name_prefix + "-" + Socket.gethostname + "-" + name,
        'role' => 'ce-kafka',
        'Owner' => 'ce-kafka',
        'JenkinsBuildUrl' => ENV['BUILD_URL']
      }
    end

bin/aegis-server-start

0 → 100755
+10 −0
#!/bin/bash

# Copyright 2018, Confluent

export INCLUDE_TEST_JARS=1
if [[ -z "${KAFKA_LOG4J_OPTS}" ]]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$(dirname $0)/../config/aegis-log4j.properties"
fi
export CLASS="io.confluent.aegis.proxy.Aegis"
exec $(dirname $0)/kafka-run-class.sh "${CLASS}" "$@"

bin/aegis-server-stop

0 → 100755
+10 −0
#!/bin/bash

# Copyright 2018, Confluent

SIGNAL=${SIGNAL:-TERM}
PIDS=$(ps ax | grep 'io.confluent.aegis.proxy.[A]egis' | awk '{print $1}')
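# The [A] character class keeps this grep from matching its own command line,
# avoiding the usual "grep -v grep".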

if [[ -n "${PIDS}" ]]; then
  kill -s ${SIGNAL} ${PIDS}
fi
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+30 −1
@@ -22,9 +22,37 @@ fi

base_dir=$(dirname $0)

###
### Classpath additions for Confluent Platform releases (LSB-style layout)
###
# cd -P deals with the symlink from /bin to /usr/bin
java_base_dir=$( cd -P "$base_dir/../share/java" && pwd )

# confluent-common: required by kafka-serde-tools
# kafka-serde-tools (e.g. Avro serializer): bundled with confluent-schema-registry package
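# Append each directory that exists, avoiding a leading ":" when CLASSPATH starts out empty.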
for library in "confluent-security/connect" "kafka" "confluent-common" "kafka-serde-tools" "monitoring-interceptors"; do
  dir="$java_base_dir/$library"
  if [ -d "$dir" ]; then
    classpath_prefix="$CLASSPATH:"
    if [ "x$CLASSPATH" = "x" ]; then
      classpath_prefix=""
    fi
    CLASSPATH="$classpath_prefix$dir/*"
  fi
done

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
  LOG4J_CONFIG_NORMAL_INSTALL="/etc/kafka/connect-log4j.properties"
  LOG4J_CONFIG_ZIP_INSTALL="$base_dir/../etc/kafka/connect-log4j.properties"
  if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then # Normal install layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL}"
  elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then # Simple zip file layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL}"
  else # Fallback to normal default
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
  fi
fi
export KAFKA_LOG4J_OPTS

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
  export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
@@ -42,4 +70,5 @@ case $COMMAND in
    ;;
esac

export CLASSPATH
exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectDistributed "$@"

bin/connect-standalone

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+30 −1
@@ -22,9 +22,37 @@ fi

base_dir=$(dirname $0)

###
### Classpath additions for Confluent Platform releases (LSB-style layout)
###
# cd -P deals with the symlink from /bin to /usr/bin
java_base_dir=$( cd -P "$base_dir/../share/java" && pwd )

# confluent-common: required by kafka-serde-tools
# kafka-serde-tools (e.g. Avro serializer): bundled with confluent-schema-registry package
for library in "confluent-security/connect" "kafka" "confluent-common" "kafka-serde-tools" "monitoring-interceptors"; do
  dir="$java_base_dir/$library"
  if [ -d "$dir" ]; then
    classpath_prefix="$CLASSPATH:"
    if [ "x$CLASSPATH" = "x" ]; then
      classpath_prefix=""
    fi
    CLASSPATH="$classpath_prefix$dir/*"
  fi
done

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
  LOG4J_CONFIG_NORMAL_INSTALL="/etc/kafka/connect-log4j.properties"
  LOG4J_CONFIG_ZIP_INSTALL="$base_dir/../etc/kafka/connect-log4j.properties"
  if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then # Normal install layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL}"
  elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then # Simple zip file layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL}"
  else # Fallback to normal default
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
  fi
fi
export KAFKA_LOG4J_OPTS

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
  export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
@@ -42,4 +70,5 @@ case $COMMAND in
    ;;
esac

export CLASSPATH
exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectStandalone "$@"

bin/git-hooks/pre-push

0 → 100644
+34 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

GIT_REMOTE="${2}"
DIR="$( cd "$(dirname "${0}")" ; pwd )"

die() {
    echo $@
    exit 1
}

# This script must be installed in the .git/hooks directory.
# It checks for the Confluent-README.md file in the project root directory.
# If that file is present, we only allow pushing to ce-kafka repos.
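# Example (hypothetical remotes): git@github.com:confluentinc/ce-kafka.git passes the
# check, while git@github.com:apache/kafka.git would be rejected.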

if [[ -e "${DIR}/../../Confluent-README.md" ]]; then
    if [[ ! $GIT_REMOTE =~ .*/ce-kafka(\.git|$) ]]; then
        die "FATAL: Attempt to push to $GIT_REMOTE. Pushing to repos other \
          than ce-kafka is not permissible."
    fi
fi

bin/kafka-acls

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

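# "$0" is this wrapper's path (bin/kafka-acls), so this execs the adjacent
# bin/kafka-acls.sh with the original arguments.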
exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+4 −0
#!/usr/bin/env bash
# Copyright 2020 Confluent Inc.

exec "$0.sh" "$@"
+4 −0
#!/bin/bash
# Copyright 2020 Confluent Inc.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.ClusterLinkCommand "$@"

bin/kafka-configs

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"

bin/kafka-dump-log

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"

bin/kafka-log-dirs

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"

bin/kafka-mirror-maker

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+4 −0
#!/usr/bin/env bash
# Copyright 2020 Confluent Inc.

exec "$0.sh" "$@"
+4 −0
#!/bin/bash
# Copyright 2020 Confluent Inc.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.BrokerRemovalCommand "$@"
+4 −0
#!/bin/bash
# Copyright 2019 Confluent Inc.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.ReplicaStatusCommand "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"

bin/kafka-run-class

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+71 −1
@@ -99,6 +99,27 @@ do
  fi
done

ce_aegis_build_dir=$(dirname $0)/../ce-aegis/build/
for file in "$ce_aegis_build_dir"/libs/*.jar "$ce_aegis_build_dir"/dependant-libs/*.jar; do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

ce_metrics_build_dir=$(dirname $0)/../ce-metrics/build/
for file in "$ce_metrics_build_dir"/libs/*.jar "$ce_metrics_build_dir"/dependant-libs/*.jar; do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

ce_sbk_build_dir=$(dirname $0)/../ce-sbk/build/
for file in "$ce_sbk_build_dir"/libs/*.jar "$ce_sbk_build_dir"/dependant-libs-${SCALA_VERSION}/*.jar; do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
  for file in "$base_dir"/streams/examples/build/libs/kafka-streams-examples*.jar;
  do
@@ -147,6 +168,18 @@ do
  CLASSPATH="$CLASSPATH:$dir/*"
done

CLASSPATH="${CLASSPATH}:${base_dir}/ce-broker-plugins/build/libs/*"
CLASSPATH="${CLASSPATH}:${base_dir}/ce-broker-plugins/build/dependant-libs/*"

CLASSPATH="${CLASSPATH}:${base_dir}/ce-auth-providers/build/libs/*"
CLASSPATH="${CLASSPATH}:${base_dir}/ce-auth-providers/build/dependant-libs/*"

CLASSPATH="${CLASSPATH}:${base_dir}/ce-rest-server/build/libs/*"
CLASSPATH="${CLASSPATH}:${base_dir}/ce-rest-server/build/dependant-libs/*"

CLASSPATH="${CLASSPATH}:${base_dir}/ce-audit/build/libs/*"
CLASSPATH="${CLASSPATH}:${base_dir}/ce-audit/build/dependant-libs/*"

for cc_pkg in "api" "transforms" "runtime" "file" "mirror" "mirror-client" "json" "tools" "basic-auth-extension"
do
  for file in "$base_dir"/connect/${cc_pkg}/build/libs/connect-${cc_pkg}*.jar;
@@ -168,6 +201,34 @@ do
  fi
done

# CONFLUENT: classpath addition for releases with LSB-style layout
CLASSPATH="$CLASSPATH":"$base_dir/share/java/kafka/*"

# classpath for support confluent metadata-service with LSB-style layout
CLASSPATH="$CLASSPATH":"$base_dir/share/java/confluent-metadata-service/*"
CLASSPATH="$CLASSPATH":"$base_dir/share/java/rest-utils/*"
CLASSPATH="$CLASSPATH":"$base_dir/share/java/confluent-common/*"

# classpath for Kafka HTTP server and its servlets
CLASSPATH="$CLASSPATH":"$base_dir/share/java/ce-kafka-http-server/*"
CLASSPATH="$CLASSPATH":"$base_dir/share/java/ce-kafka-rest-servlet/*"
CLASSPATH="$CLASSPATH":"$base_dir/share/java/ce-kafka-rest-extensions/*"
CLASSPATH="$CLASSPATH":"$base_dir/share/java/kafka-rest-lib/*"
CLASSPATH="$CLASSPATH":"$base_dir/share/java/confluent-security/kafka-rest/*"

# classpath for schema validator
CLASSPATH="$CLASSPATH":"$base_dir/share/java/confluent-security/schema-validator/*"

# classpath for support-metrics-client jars
CLASSPATH="$CLASSPATH:$base_dir/support-metrics-client/build/dependant-libs-${SCALA_VERSION}/*"
CLASSPATH="$CLASSPATH:$base_dir/support-metrics-client/build/libs/*"

# classpath for telemetry
CLASSPATH="$CLASSPATH":"$base_dir/share/java/confluent-telemetry/*"

# classpath for support jars with LSB-style layout
CLASSPATH="$CLASSPATH":"/usr/share/java/support-metrics-client/*"

for file in "$base_dir"/core/build/libs/kafka_${SCALA_BINARY_VERSION}*.jar;
do
  if should_include_file "$file"; then
@@ -199,7 +260,15 @@ fi
# Log4j settings
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  LOG4J_CONFIG_NORMAL_INSTALL="/etc/kafka/tools-log4j.properties"
  LOG4J_CONFIG_ZIP_INSTALL="$base_dir/etc/kafka/tools-log4j.properties"
  if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then # Normal install layout
    LOG4J_DIR="${LOG4J_CONFIG_NORMAL_INSTALL}"
  elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then # Simple zip file layout
    LOG4J_DIR="${LOG4J_CONFIG_ZIP_INSTALL}"
  else # Fallback to normal default
    LOG4J_DIR="$base_dir/config/tools-log4j.properties"
  fi
  # If Cygwin is detected, LOG4J_DIR is converted to Windows format.
  (( CYGWIN )) && LOG4J_DIR=$(cygpath --path --mixed "${LOG4J_DIR}")
  KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_DIR}"
@@ -311,6 +380,7 @@ CLASSPATH=${CLASSPATH#:}
# If Cygwin is detected, classpath is converted to Windows format.
(( CYGWIN )) && CLASSPATH=$(cygpath --path --mixed "${CLASSPATH}")


# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &

bin/kafka-server-start

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+10 −1
@@ -22,8 +22,17 @@ fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
  LOG4J_CONFIG_NORMAL_INSTALL="/etc/kafka/log4j.properties"
  LOG4J_CONFIG_ZIP_INSTALL="$base_dir/../etc/kafka/log4j.properties"
  if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then # Normal install layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL}"
  elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then # Simple zip file layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL}"
  else # Fallback to normal default
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
  fi
fi
export KAFKA_LOG4J_OPTS

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

bin/kafka-server-stop

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+8 −2
@@ -22,11 +22,17 @@ if [[ $(uname -s) == "OS/390" ]]; then
    PIDS=$(ps -A -o pid,jobname,comm | grep -i $JOBNAME | grep java | grep -v grep | awk '{print $1}')
else
    PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
    PIDS_SUPPORT=$(ps ax | grep -i 'io\.confluent\.support\.metrics\.SupportedKafka' | grep java | grep -v grep | awk '{print $1}')
fi

if [ -z "$PIDS" ]; then
  # Normal Kafka is not running, but maybe we are running the support wrapper?
  if [ -z "${PIDS_SUPPORT}" ]; then
    echo "No kafka server to stop"
    exit 1
  else
    kill -s $SIGNAL $PIDS_SUPPORT
  fi
else
  kill -s $SIGNAL $PIDS
fi
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"

bin/kafka-topics

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+5 −1
@@ -116,7 +116,11 @@ IF ["%LOG_DIR%"] EQU [""] (

rem Log4j settings
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
	if exist %~dp0../../etc/kafka/tools-log4j.properties (
		set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../etc/kafka/tools-log4j.properties
	) else (
		set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/config/tools-log4j.properties
	)
) ELSE (
  rem create logs directory
  IF not exist "%LOG_DIR%" (
+5 −1
@@ -21,8 +21,12 @@ IF [%1] EQU [] (

SetLocal
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
    if exist %~dp0../../etc/kafka/log4j.properties (
        set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../etc/kafka/log4j.properties
    ) else (
        set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/log4j.properties
    )
)
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    rem detect OS architecture
    wmic os get osarchitecture | find /i "32-bit" >nul 2>&1
+5 −1
@@ -21,8 +21,12 @@ IF [%1] EQU [] (

SetLocal
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
    if exist %~dp0../../etc/kafka/log4j.properties (
        set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../etc/kafka/log4j.properties
    ) else (
        set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/log4j.properties
    )
)
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
)
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+10 −1
@@ -22,8 +22,17 @@ fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
  LOG4J_CONFIG_NORMAL_INSTALL="/etc/kafka/log4j.properties"
  LOG4J_CONFIG_ZIP_INSTALL="$base_dir/../etc/kafka/log4j.properties"
  if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then # Normal install layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL}"
  elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then # Simple zip file layout
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL}"
  else # Fallback to normal default
    KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
  fi
fi
export KAFKA_LOG4J_OPTS

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"

bin/zookeeper-shell

0 → 100755
+17 −0
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec "$0.sh" "$@"
+1321 −95

Files changed.

The preview exceeded the size limit, so this diff has been collapsed.

+2 −0
.gomodcache
bin
+22 −0
FROM golang:1.12.7 AS build_base

WORKDIR /go/src/github.com/confluentinc/ce-kafka/cc-kafka-init

ENV GO111MODULE=on

COPY go.mod .
COPY go.sum .

RUN go mod download

FROM build_base AS server_builder
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix nocgo -o /cc-kafka-init .

FROM alpine AS cc_kafka_init

ENV KAFKA_CONFIG_DIR=/mnt/config

RUN apk add ca-certificates
COPY --from=server_builder /cc-kafka-init /bin/cc-kafka-init
COPY scripts/encode-host.sh /bin/cc-encode-host.sh

cc-kafka-init/Makefile

0 → 100644
+15 −0
SERVICE_NAME := kafka-init
IMAGE_NAME := cc-$(SERVICE_NAME)
CHART_NAME := cc-$(SERVICE_NAME)
CODECOV := true
MAIN_GO := ./main.go
GO_OUTDIR ?= bin
GO_TEST_TARGET = test-go
GO_TEST_ARGS = -race -v -cover -p=1
GO_CODECOV_TEST_ARGS = -race -v -cover -p=1

include ../mk-include/cc-begin.mk
include ../mk-include/cc-semver.mk
include ../mk-include/cc-go.mk
include ../mk-include/cc-docker.mk
include ../mk-include/cc-end.mk
+38 −0
package cmd

import (
	"fmt"
	"github.com/confluentinc/ce-kafka/cc-kafka-init/encodehost"
	"github.com/spf13/cobra"
	"log"
	"os"
)

const HostIpEnvVarName string = "HOST_IP"

// This logic is intended to be moved to our common init container, once it's ready
// https://confluentinc.atlassian.net/wiki/spaces/~roger/pages/937957745/Init+Container+Plan
// https://github.com/confluentinc/confluent-platform-images/tree/master/components/init-container
var encodeHostCmd = &cobra.Command{
	Use:   "encode-host",
	Short: "Encode host IP",
	Long:  `Encodes host IP for use in direct endpoint addresses. Requires HOST_IP env var`,
	Run: func(cmd *cobra.Command, args []string) {
		logger := log.New(os.Stderr, "encoder: ", 0)
		hostIp, ok := os.LookupEnv(HostIpEnvVarName)
		if !ok {
			logger.Printf("Env var %q is not set", HostIpEnvVarName)
			os.Exit(1)
		}
		encoded, err := encodehost.Encode(hostIp)
		if err != nil {
			logger.Println(err)
			os.Exit(1)
		}
		fmt.Println(encoded)
	},
}

func init() {
	rootCmd.AddCommand(encodeHostCmd)
}
+62 −0
package cmd

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/confluentinc/ce-kafka/cc-kafka-init/listener"
	"github.com/spf13/cobra"
)

// listenerCmd represents the listener command
var listenerCmd = &cobra.Command{
	Use:   "listener",
	Short: "Wait for Kafka's external listener to become available",
	Long: `Parses a Kafka 'server.properties' file to find the port corresponding to the provided LISTENER. This 
port will be used by a TCP client to validate external network availability.`,
	Run: func(cmd *cobra.Command, args []string) {
		ctx := context.Background()
		ctx, cancel := withInterruptHandling(ctx)
		defer cancel()
		logger := log.New(os.Stdout, "listener: ", 0)
		listenerName, err := cmd.Flags().GetString("listener")
		if err != nil {
			logger.Println("listener not specified")
			os.Exit(1)
		}
		serverPropertiesPath, err := cmd.Flags().GetString("server-properties")
		if err != nil {
			logger.Println("server-properties not specified")
			os.Exit(1)
		}

		internalPort, err := cmd.Flags().GetInt("internal-port")
		if err != nil {
			logger.Println("server-properties not specified")
			os.Exit(1)
		}

		config := listener.Config{
			ServerPropertiesPath: serverPropertiesPath,
			Listener:             listenerName,
			InternalPort:         internalPort,
			ReadTimeout:          30 * time.Second,
		}
		srv := listener.NewTcpNonceRoundTripper(logger, config)
		err = srv.Run(ctx)
		if err != nil {
			logger.Fatalf("error: %v\n", err)
		} else {
			os.Exit(0)
		}
	},
}

func init() {
	rootCmd.AddCommand(listenerCmd)
	listenerCmd.Flags().String("listener", "EXTERNAL", "specify a listener to search for and bind to in server.properties")
	listenerCmd.Flags().String("server-properties", "/mnt/config/kafka.properties", "path to kafka server.properties")
	listenerCmd.Flags().Int("internal-port", 9092, "internal port which Kafka binds to")
}
+59 −0
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	homedir "github.com/mitchellh/go-homedir"
	"github.com/spf13/viper"
)

var cfgFile string

// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
	Use:   "cc-kafka-init",
	Short: "init container for Kafka",
	Long:  `This executable provides various entrypoints for Kafka initialization.`,
}

func Execute() {
	if err := rootCmd.Execute(); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}

func init() {
	cobra.OnInitialize(initConfig)
	rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file (default is $HOME/.cc-initcontainer.yaml)")
	rootCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}

// initConfig reads in config file and ENV variables if set.
func initConfig() {
	if cfgFile != "" {
		// Use config file from the flag.
		viper.SetConfigFile(cfgFile)
	} else {
		// Find home directory.
		home, err := homedir.Dir()
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}

		// Search config in home directory with name ".cc-initcontainer" (without extension).
		viper.AddConfigPath(home)
		viper.SetConfigName(".cc-initcontainer")
	}

	viper.AutomaticEnv() // read in environment variables that match

	// If a config file is found, read it in.
	if err := viper.ReadInConfig(); err == nil {
		fmt.Println("Using config file:", viper.ConfigFileUsed())
	}
}
+26 −0
package cmd

import (
	"context"
	"os"
	"os/signal"
)

// Starts up a signal handler which will cancel the provided context when an interrupt
// is caught.
func withInterruptHandling(ctx context.Context) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(ctx)
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	go func() {
		select {
		case <-c:
			cancel()
		case <-ctx.Done():
		}
	}()
	return ctx, func() {
		signal.Stop(c)
		cancel()
	}
}
+36 −0
package encodehost

import (
	"encoding/binary"
	"fmt"
	"net"
)

// This logic is intended to be moved to our common init container, once it's ready
// https://confluentinc.atlassian.net/wiki/spaces/~roger/pages/937957745/Init+Container+Plan
// https://github.com/confluentinc/confluent-platform-images/tree/master/components/init-container

func Encode(hostIp string) (string, error) {
	ipv4, err := parseIpv4(hostIp)
	if err != nil {
		return "", err
	}
	return encodeLastTwoOctetsHex(ipv4), nil
}

func parseIpv4(ipAddr string) (uint32, error) {
	ip := net.ParseIP(ipAddr)
	if ip == nil {
		return 0, fmt.Errorf("bad ipAddr format: %s", ipAddr)
	}
	ipv4 := ip.To4()
	if ipv4 == nil {
		return 0, fmt.Errorf("not an IPv4 address: %s", ipAddr)
	}
	return binary.BigEndian.Uint32(ipv4), nil
}

// encodeLastTwoOctetsHex returns the last two octets of the address as four
// lowercase hex characters, e.g. 10.15.52.189 -> "34bd". Masking the low 16
// bits keeps the result well-defined for any address, not just 10.0.0.0/8.
func encodeLastTwoOctetsHex(ipv4 uint32) string {
	return fmt.Sprintf("%04x", ipv4&0xFFFF)
}
+97 −0
package encodehost

import "testing"

func Test_parseIpv4(t *testing.T) {
	type args struct {
		ipAddr string
	}
	tests := []struct {
		name    string
		args    args
		want    uint32
		wantErr bool
	}{
		{
			name:    "Fail on junk",
			args:    args{ipAddr: "barf"},
			wantErr: true,
		},
		{
			name:    "Fail on IPv6",
			args:    args{ipAddr: "2001:db8::68"},
			wantErr: true,
		},
		{
			name: "IPv4 succeeds",
			args: args{ipAddr: "10.15.52.189"},
			want: 168768701,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := parseIpv4(tt.args.ipAddr)
			if (err != nil) != tt.wantErr {
				t.Errorf("parseIpv4() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			if got != tt.want {
				t.Errorf("parseIpv4() got = %v, want %v", got, tt.want)
			}
		})
	}
}

func Test_encodeLastTwoOctetsHex(t *testing.T) {
	type args struct {
		ipv4 uint32
	}
	tests := []struct {
		name string
		args args
		want string
	}{
		{
			name: "Last two octets as hex",
			args: args{ipv4: 168768701},
			want: "34bd",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := encodeLastTwoOctetsHex(tt.args.ipv4); got != tt.want {
				t.Errorf("encodeLastTwoOctetsHex() = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestEncode(t *testing.T) {
	type args struct {
		hostIp string
	}
	tests := []struct {
		name    string
		args    args
		want    string
		wantErr bool
	}{
		{
			name: "Encodes host IP",
			args: args{hostIp: "10.15.52.189"},
			want: "34bd",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := Encode(tt.args.hostIp)
			if (err != nil) != tt.wantErr {
				t.Errorf("Encode() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			if got != tt.want {
				t.Errorf("Encode() got = %v, want %v", got, tt.want)
			}
		})
	}
}

cc-kafka-init/go.mod

0 → 100644
+9 −0
module github.com/confluentinc/ce-kafka/cc-kafka-init

go 1.12

require (
	github.com/mitchellh/go-homedir v1.1.0
	github.com/spf13/cobra v0.0.5
	github.com/spf13/viper v1.4.0
)

cc-kafka-init/go.sum

0 → 100644
+155 −0
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80nA=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0 h1:LLgXmsheXeRoUOBOjtwPQCWIYqM/LU1ayDtDePerRcY=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.4.0 h1:yXHLWeravcrgGyFSyCgdYpXQ9dR9c/WED3pg1RhxqEU=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.uber.org/atomic v1.4.0 h1:cxzIVoETapQEqDhQu3QfnvXAV4AlzcvUCxkVUFw3+EU=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0 h1:HoEmRHQPVSqub6w2z2d2EOVs2fjyFRGyofhKuyDq0QI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0 h1:ORx85nbTijNz8ljznvCMR1ZBIPKFn3jQrag10X2AsuM=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+89 −0
package listener

import (
	"bufio"
	"fmt"
	"io"
	"net"
	"os"
	"regexp"
	"strings"
	"time"
)

// Config for the listener command
type Config struct {
	// ServerPropertiesPath specifies the path of a Kafka server.properties file.
	// The server.properties file will be parsed and used to derive the external listener port.
	ServerPropertiesPath string
	// Listener is the name of the Kafka external LISTENER
	Listener string

	// InternalPort is the local port that the nonce server binds to.
	InternalPort int
	// ReadTimeout is the maximum duration to wait for creating a connection to the nonce server
	ReadTimeout time.Duration

	// local bind addr, used for testing
	listenerAddrOverride string
}

// readAdvertisedListener will attempt to read a Kafka server.properties file and parse out
// the endpoint for the provided LISTENER name. Returns a hostport string.
func readAdvertisedListener(listenerName string, reader io.Reader) (string, error) {
	scanner := bufio.NewScanner(reader)
	scanner.Split(bufio.ScanLines)
	advertisedListeners := ""
	for scanner.Scan() {
		line := scanner.Text()
		split := strings.SplitN(line, "=", 2)
		// Guard against lines without '=' before indexing split[1].
		if len(split) == 2 && split[0] == "advertised.listeners" {
			advertisedListeners = split[1]
			break
		}
	}

	if advertisedListeners == "" {
		return "", fmt.Errorf("advertised.listeners property not found")
	}

	allListeners := strings.SplitAfter(advertisedListeners, ",")
	re := regexp.MustCompile(`^(.*)://\[?([0-9a-zA-Z\-%._:]*)]?:(-?[0-9]+)`)
	for _, listener := range allListeners {
		subMatches := re.FindStringSubmatch(listener)
		if len(subMatches) == 4 && subMatches[1] == listenerName {
			port := subMatches[3]
			host := subMatches[2]
			if host == "" {
				return "", fmt.Errorf("invalid host string")
			}
			if port == "" {
				return "", fmt.Errorf("invalid port string")
			}
			hostport := net.JoinHostPort(host, port)

			return hostport, nil
		}
	}
	return "", fmt.Errorf("no valid advertised listeners were found")
}

func (c *Config) listenerAddr() (string, error) {
	if c.Listener != "" && c.ServerPropertiesPath != "" {
		fd, err := os.Open(c.ServerPropertiesPath)
		if err != nil {
			return "", fmt.Errorf("failed to open server properties file '%s': %v", c.ServerPropertiesPath, err)
		}
		defer fd.Close()

		hostport, err := readAdvertisedListener(c.Listener, fd)
		if err != nil {
			return "", fmt.Errorf("failed to read advertised listener '%s' from file '%s': %v", c.Listener, c.ServerPropertiesPath, err)
		}
		return hostport, nil
	}

	if c.listenerAddrOverride != "" {
		return c.listenerAddrOverride, nil
	}
	return "", fmt.Errorf("no listener endpoint provided")
}
+20 −0
package listener

import (
	"os"
	"testing"
)

func Test_readAdvertisedListenerProperty(t *testing.T) {
	fd, err := os.Open("golden/server-pod.properties.golden")
	if err != nil {
		t.Error(err)
	}
	listenersProp, err := readAdvertisedListener("EXTERNAL", fd)
	if err != nil {
		t.Error(err)
	}
	if listenersProp != "b0-pkc-1mqdxgn.us-central1.gcp.priv.cpdev.cloud:9092" {
		t.Error("unexpected listeners prop")
	}
}
+3 −0
broker.id=0
broker.rack=0
advertised.listeners=INTERNAL://kafka-0.kafka.pkc-1mqdxgn.svc.cluster.local:9071,REPLICATION://kafka-0.kafka.pkc-1mqdxgn.svc.cluster.local:9072,EXTERNAL://b0-pkc-1mqdxgn.us-central1.gcp.priv.cpdev.cloud:9092
 No newline at end of file
+176 −0
package listener

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"log"
	"math/rand"
	"net"
	"strconv"
	"sync"
	"time"
)

type TcpNonceRoundTripper struct {
	config Config
	logger *log.Logger
	nonce  string
}

func NewTcpNonceRoundTripper(logger *log.Logger, config Config) *TcpNonceRoundTripper {
	nonce := strconv.Itoa(rand.Int())
	rt := TcpNonceRoundTripper{config: config, logger: logger, nonce: nonce}
	return &rt
}

func handle(logger *log.Logger, nonce string, conn io.WriteCloser) {
	defer conn.Close()
	_, err := conn.Write([]byte(fmt.Sprintf("%s\n", nonce)))
	if err != nil {
		logger.Printf("error writing response: %v\n", err)
		return
	}
}

func (rt *TcpNonceRoundTripper) bind(ctx context.Context, internalPort int) error {
	listenAddr := fmt.Sprintf("%s:%d", "0.0.0.0", internalPort)
	addr, err := net.ResolveTCPAddr("tcp", listenAddr)
	if err != nil {
		return err
	}
	listener, err := net.ListenTCP("tcp", addr)
	if err != nil {
		return err
	}
	// Log only after the listener is actually bound.
	rt.logger.Printf("bound to %d\n", internalPort)

	closeOnce := sync.Once{}
	go func() {
		<-ctx.Done()
		rt.logger.Println("closing listener")

		closeOnce.Do(func() {
			_ = listener.SetDeadline(time.Now())
			_ = listener.Close()
		})
	}()

	for {
		select {
		case <-ctx.Done():
			// Stop accepting once the context is canceled; the goroutine
			// above has already closed the listener.
			return ctx.Err()
		default:
			conn, err := listener.AcceptTCP()
			if err != nil {
				rt.logger.Printf("error accepting: %v\n", err)
				continue
			}
			rt.logger.Printf("accepting new connection: %v\n", conn.RemoteAddr())
			_ = conn.SetLinger(0)
			go func() {
				handle(rt.logger, rt.nonce, conn)
			}()
		}
	}
}

func (rt *TcpNonceRoundTripper) readNonce() (string, error) {
	addr, err := rt.config.listenerAddr()
	if err != nil {
		return "", err
	}
	rt.logger.Printf("dialing %s", addr)
	conn, err := net.DialTimeout("tcp", addr, rt.config.ReadTimeout)
	if err != nil {
		return "", err
	}
	responseRead := bufio.NewReader(conn)
	response, prefix, err := responseRead.ReadLine()
	if err != nil {
		return "", err
	}
	if prefix {
		return "", nil
	}
	return string(response), nil
}

func (rt *TcpNonceRoundTripper) Run(ctx context.Context) error {
	wg := sync.WaitGroup{}
	serverErrCh := make(chan error, 1)
	clientCh := make(chan struct {
		success bool
		err     error
	}, 1)

	wg.Add(1)
	go func() {
		defer close(serverErrCh)
		defer wg.Done()
		rt.logger.Println("starting server")
		defer rt.logger.Println("stopped server")
		if err := rt.bind(ctx, rt.config.InternalPort); err != nil {
			serverErrCh <- err
		}
	}()

	wg.Add(1)
	go func() {
		defer close(clientCh)
		defer wg.Done()
		rt.logger.Println("starting reading nonce")
		defer rt.logger.Println("finished reading nonce")

		nonceMatched := false
		var lastErr error = nil

		timeout := time.Second * 0
	outer:
		for {
			// Sleep for a bit at the start of the loop; it's likely the server isn't up yet anyway.
			select {
			case <-time.After(timeout):
			case <-ctx.Done():
				// A bare break would only exit the select, so break the loop explicitly.
				break outer
			}
			responseNonce, err := rt.readNonce()
			if err != nil {
				lastErr = fmt.Errorf("error reading response nonce: %v", err)
				break outer
			} else if responseNonce != rt.nonce {
				lastErr = fmt.Errorf("nonce response '%s' did not match expected '%s'", responseNonce, rt.nonce)
				rt.logger.Printf("nonce mismatch: %v\n", lastErr)
			} else {
				nonceMatched = true
				break outer
			}
			timeout = time.Second * 1
		}
		clientCh <- struct {
			success bool
			err     error
		}{
			success: nonceMatched,
			err:     lastErr,
		}
	}()

	select {
	case resp, ok := <-clientCh:
		if !ok {
			return fmt.Errorf("client chan closed before response could be read")
		} else if resp.err != nil {
			return fmt.Errorf("client encountered error: %v", resp.err)
		} else if !resp.success {
			return fmt.Errorf("response nonce was incorrect")
		} else {
			return nil
		}
	case err := <-serverErrCh:
		return err
	case <-ctx.Done():
		return fmt.Errorf("canceled")
	}
}
+108 −0
package listener

import (
	"bufio"
	"context"
	"io/ioutil"
	"log"
	"net"
	"testing"
	"time"
)

func Test_handle(t *testing.T) {
	logger := log.New(ioutil.Discard, "listen", 0)
	nonce := "world"
	client, server := net.Pipe()

	go handle(logger, nonce, server)

	reader := bufio.NewReader(client)

	bytes, isPrefix, err := reader.ReadLine()
	if err != nil {
		t.Error(err)
	}
	if isPrefix {
		t.Error("expected to read line")
	}

	response := string(bytes)
	if response != nonce {
		t.Errorf("expected response %s to equal %s", response, nonce)
	}
}

func Test_bind(t *testing.T) {
	ctx := context.Background()
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()
	logger := log.New(ioutil.Discard, "", 0)
	roundTripper := NewTcpNonceRoundTripper(logger, Config{
		ServerPropertiesPath: "",
		Listener:             "EXTERNAL",
		InternalPort:         9092,
		listenerAddrOverride: "localhost:9092",
	})

	go func() {
		err := roundTripper.bind(ctx, 9092)
		if err != nil {
			t.Error(err)
		}
	}()

	conn, err := net.Dial("tcp", roundTripper.config.listenerAddrOverride)
	if err != nil {
		t.Error(err)
	}
	line, pre, err := bufio.NewReader(conn).ReadLine()
	if err != nil {
		t.Error(err)
	}
	if pre {
		t.Error("did not expect prefix")
	}
	parsed := string(line)
	if parsed != roundTripper.nonce {
		t.Errorf("expected return value %s to equal nonce %s", parsed, roundTripper.nonce)
	}
}

func TestTcpNonceRoundTripper_Run(t *testing.T) {
	ctx := context.Background()
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()
	logger := log.New(ioutil.Discard, "", 0)
	roundTripper := NewTcpNonceRoundTripper(logger, Config{
		ServerPropertiesPath: "",
		Listener:             "",
		InternalPort:         9092,
		listenerAddrOverride: "localhost:9092",
	})

	err := roundTripper.Run(ctx)
	if err != nil {
		t.Error(err)
	}
}

func TestTcpNonceRoundTripper_Run_fail(t *testing.T) {
	ctx := context.Background()
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()
	logger := log.New(ioutil.Discard, "", 0)
	roundTripper := NewTcpNonceRoundTripper(logger, Config{
		ServerPropertiesPath: "",
		Listener:             "",
		InternalPort:         9093,
		ReadTimeout:          1 * time.Millisecond,
		listenerAddrOverride: "localhost:9092",
	})

	err := roundTripper.Run(ctx)
	if err == nil {
		t.Error("expected roundTripper to fail")
	}
}

cc-kafka-init/main.go

0 → 100644
+7 −0
package main

import "github.com/confluentinc/ce-kafka/cc-kafka-init/cmd"

func main() {
	cmd.Execute()
}
+12 −0
#!/bin/sh
set -e 

# This is temporary init container logic that is intended to be replaced by a common init container
# https://confluentinc.atlassian.net/wiki/spaces/~roger/pages/937957745/Init+Container+Plan
encoded_host=$(/bin/cc-kafka-init encode-host)
echo "Encoded host: ${encoded_host}"
# the emptyDir config mount shared by init container and main container is currently /mnt/config
# The sed template logic is intended to be replaced by jsonnet templating done in a common init container
cat ${KAFKA_CONFIG_DIR}/shared/server-common.properties > ${KAFKA_CONFIG_DIR}/kafka.properties
cat ${KAFKA_CONFIG_DIR}/pod/${POD_NAME}/server-pod.properties | sed "s/{encoded_host}/${encoded_host}/g" >> ${KAFKA_CONFIG_DIR}/kafka.properties
cat ${KAFKA_CONFIG_DIR}/kafka.properties

cc-services/README.md

0 → 100644
+104 −0
# Cloud

Here we store CCloud utilities (written in Go) that are closely tied to Kafka.
There are two services here currently:
  - Code orchestrating the Kafka Core Soak cluster's clients via Trogdor, under the `soak_cluster` package. It is responsible for creating Trogdor tasks in accordance with a given configuration consisting of the desired soak test length, throughput, and so on.
  - The storage liveness probe used for CCloud clusters. This is a small Go utility which writes and fsyncs data to some files; the speed and success of these writes is intended to reflect the health of the underlying storage system.

# Soak Testing

## How to run on CPD

You need:
 - a running CPD cluster: https://github.com/confluentinc/cpd
 - a Kafka cluster provisioned in that CPD
 - a pair of API keys from that cluster

Target your CPD Kubernetes cluster in kubectl. It is recommended to inject the required credentials for your CPD instance into the new namespace, so that your CPD can pull images from the Docker repositories directly:

```
CPD_ID="$(cpd priv ls | fgrep ' gke-' | head -1 | awk '{print $1}')"

cpd priv export --id ${CPD_ID}

cpd priv inject-credentials --id ${CPD_ID} --namespace soak-tests
```

Then:

1. Update the configurations
    * `cc-services/trogdor/charts/values/local.yaml`
        - number of Trogdor agents
    * `cc-services/soak_cluster/charts/values/local.yaml`
        - with the same number of Trogdor agents,
        - bootstrap URL,
        - API keys/secrets
    * `cc-services/soak_cluster/charts/cc-soak-clients/templates/configMaps.yaml`
        - with the desired throughput/client count for every topic

2. Build and push your custom docker images. See the section [on building images](#building-new-docker-images) below.

3. Deploy the Trogdor agents
    * Optionally `make -C cc-services/trogdor helm-clean` to clean the state.
    * `make -C cc-services/trogdor helm-deploy-soak`
    * Inspect the pods: `kubectl get pods -n soak-tests`

4. Deploy the Soak clients
    * Optionally `make -C cc-services/soak_cluster helm-clean` to clean the state.
    * `make -C cc-services/soak_cluster helm-deploy-soak`

5. Trigger the `clients-spawner`:
    * Option 1: Edit the clients-spawner cronjob `schedule:` field in `kubectl edit cronjobs -n soak-tests cc-soak-clients-clients-spawner`.
    * Option 2: start a one-off job: `kubectl create job --from=cronjob/cc-soak-clients-clients-spawner -n soak-tests cc-soak-clients-clients-spawner-oneoff`
    * Track the new clients-spawner pod from the cronjob: `kubectl get pods -n soak-tests`

If all goes well, you should have clients producing and consuming from the configured Kafka cluster.

It is easiest to inspect the tasks by manually querying the Trogdor coordinator.

```bash
kubectl port-forward cc-trogdor-service-coordinator-0 9002:9002 &
curl -X GET 'http://localhost:9002/coordinator/tasks?state=RUNNING'
```

## Building new Docker images

To test a newer build of `cc-trogdor` or `cc-soak-clients`, you need to build a Docker image and push it to [JFrog's Artifactory](https://confluent.jfrog.io/confluent/webapp/).

You can build and push for all the `cc-services` based on your current branch using:

```
make -C cc-services/soak_cluster build-docker push-docker
make -C cc-services/trogdor build-docker push-docker
```

With the previous command, Trogdor will use the latest image of `ce-kafka` built from the `master` branch.

If you need a custom Kafka base image for Trogdor built from your local branch, run the following from the root of the project. It will build all the containers, including `ce-kafka` and the `cc-services`:

```
make build-docker build-docker-cc-services push-docker-cc-services
```

# Storage Probe

This is a small Go utility which attempts to write a small amount of data to all the files specified on its command line, fsyncing them as well to ensure that they actually go to disk. You can specify a maximum time that they are allowed to run before timing out, which defaults to 30 seconds.
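
To make that write-and-fsync loop concrete, here is a minimal, self-contained sketch of the idea in Go. It is an illustration of the described behavior (write a little data to each file named on the command line, fsync it, respect a `-timeout` deadline), not the actual ce-kafka implementation; `probeFile` and the probe payload are invented for the example.

```go
package main

import (
	"flag"
	"log"
	"os"
	"time"
)

// probeFile appends a small payload to path and fsyncs it, so a slow or
// dead volume surfaces as an error or a stall here.
func probeFile(path string) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write([]byte("probe\n")); err != nil {
		return err
	}
	return f.Sync() // force the data to disk
}

func main() {
	timeout := flag.Duration("timeout", 30*time.Second, "overall probe deadline")
	flag.Parse()

	done := make(chan error, 1)
	go func() {
		for _, path := range flag.Args() {
			if err := probeFile(path); err != nil {
				done <- err
				return
			}
		}
		done <- nil
	}()

	select {
	case err := <-done:
		if err != nil {
			log.Fatalf("probe failed: %v", err)
		}
	case <-time.After(*timeout):
		log.Fatal("probe timed out")
	}
}
```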

## Building the Storage Probe

The probe is built and included in the standard ce-kafka build, but it also has its own standard Confluent Makefile (with `build-go` and `test-go` targets) and a set of unit tests.

## Docker Images

There are no standalone Docker images for the storage probe; it's
built and placed in the standard ce-kafka Docker image.

## Invoking the Storage Probe

The storage probe takes two arguments:
  - `-timeout <duration>` -- determines how long before the probe times out and returns.
  - `-statsAddr <host:port>` -- address of the Datadog stats daemon to connect to. If not provided, no stats are exported.

## Caveats of the Storage Probe

While the probe has a `timeout` argument controlling how long before the writes are considered to have failed, we've observed that truly unavailable storage can leave the probe unable to exit (stuck in the fsync system call), so it's better to go belt-and-suspenders and wrap it with something like the `timeout` command to ensure that the program exits after a specified time.
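
If the probe is driven from another Go process rather than a shell, the same belt-and-suspenders effect can be sketched with `exec.CommandContext`, which kills the child once the deadline passes instead of waiting for it to notice its own timeout. The `/opt/probe` path and its arguments below are placeholders for illustration, not real paths from this repo.

```go
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Enforce a hard deadline from outside the probe process: if the child
	// hangs in fsync, context expiry kills it rather than relying on the
	// probe's own in-process timer.
	ctx, cancel := context.WithTimeout(context.Background(), 45*time.Second)
	defer cancel()

	// Placeholder binary and arguments, for illustration only.
	cmd := exec.CommandContext(ctx, "/opt/probe", "-timeout", "30s", "/var/lib/kafka/probe")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("storage probe failed: %v\n%s", err, out)
	}
}
```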
+4 −0
.mk-include.local-context-copy
charts/package
.gomodcache
bin/
+27 −0
FROM confluent-docker.jfrog.io/confluentinc/cc-service-base:1.10

ARG version

WORKDIR /root
COPY .ssh .ssh
COPY .netrc ./
RUN ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts

WORKDIR /go/src/github.com/confluentinc/ce-kafka/cc-services/soak_cluster
COPY . .
# Replace mk-include link with the local context copy
RUN rm ./mk-include
COPY .mk-include.local-context-copy ./mk-include
RUN make deps DEP_ARGS=-vendor-only VERSION=${version}

RUN make lint-go test-go
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 make build-go GO_OUTDIR= VERSION=${version}

FROM confluent-docker.jfrog.io/confluentinc/cc-built-base:v1.0.0

COPY --from=0 /soak-clients /

ENTRYPOINT ["/soak-clients"]

# keep container running
CMD tail -f /dev/null
+252 −0
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.


[[projects]]
  digest = "1:0071395e15d380ab62281dda93801b294045206b05a91c54fe6fceb7da9989dd"
  name = "github.com/confluentinc/cc-utils"
  packages = ["log"]
  pruneopts = "UT"
  revision = "fb4e29f8de74417bda70c979e52c7e4e08e3a255"
  version = "v0.4.0"

[[projects]]
  branch = "master"
  digest = "1:f0e3de9b451a7b3bf83f965cf2e26ffba20f5e41093ef4ba573106ae89944594"
  name = "github.com/dariubs/percent"
  packages = ["."]
  pruneopts = "UT"
  revision = "76df7a01afe5044af9b52a55a8b0b54fbf04a9d1"

[[projects]]
  digest = "1:ffe9824d294da03b391f44e1ae8281281b4afc1bdaa9588c9097785e3af10cec"
  name = "github.com/davecgh/go-spew"
  packages = ["spew"]
  pruneopts = "UT"
  revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
  version = "v1.1.1"

[[projects]]
  digest = "1:abeb38ade3f32a92943e5be54f55ed6d6e3b6602761d74b4aab4c9dd45c18abd"
  name = "github.com/fsnotify/fsnotify"
  packages = ["."]
  pruneopts = "UT"
  revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
  version = "v1.4.7"

[[projects]]
  digest = "1:704b145137b5555c6bac6a7e52875e4df8a0a1b7ea74aaae864abcab61e672b0"
  name = "github.com/go-kit/kit"
  packages = [
    "log",
    "log/level",
    "log/term",
  ]
  pruneopts = "UT"
  revision = "4dc7be5d2d12881735283bcab7352178e190fc71"
  version = "v0.6.0"

[[projects]]
  digest = "1:4062bc6de62d73e2be342243cf138cf499b34d558876db8d9430e2149388a4d8"
  name = "github.com/go-logfmt/logfmt"
  packages = ["."]
  pruneopts = "UT"
  revision = "07c9b44f60d7ffdfb7d8efe1ad539965737836dc"
  version = "v0.4.0"

[[projects]]
  digest = "1:586ea76dbd0374d6fb649a91d70d652b7fe0ccffb8910a77468e7702e7901f3d"
  name = "github.com/go-stack/stack"
  packages = ["."]
  pruneopts = "UT"
  revision = "2fee6af1a9795aafbe0253a0cfbdf668e1fb8a9a"
  version = "v1.8.0"

[[projects]]
  digest = "1:c0d19ab64b32ce9fe5cf4ddceba78d5bc9807f0016db6b1183599da3dcc24d10"
  name = "github.com/hashicorp/hcl"
  packages = [
    ".",
    "hcl/ast",
    "hcl/parser",
    "hcl/printer",
    "hcl/scanner",
    "hcl/strconv",
    "hcl/token",
    "json/parser",
    "json/scanner",
    "json/token",
  ]
  pruneopts = "UT"
  revision = "8cb6e5b959231cc1119e43259c4a608f9c51a241"
  version = "v1.0.0"

[[projects]]
  digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
  name = "github.com/inconshreveable/mousetrap"
  packages = ["."]
  pruneopts = "UT"
  revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
  version = "v1.0"

[[projects]]
  branch = "master"
  digest = "1:5c3444689562053b027ef3b96372e306adbe0d7d109b6cdd48d01eb80f8bab14"
  name = "github.com/jinzhu/copier"
  packages = ["."]
  pruneopts = "UT"
  revision = "7e38e58719c33e0d44d585c4ab477a30f8cb82dd"

[[projects]]
  branch = "master"
  digest = "1:a64e323dc06b73892e5bb5d040ced475c4645d456038333883f58934abbf6f72"
  name = "github.com/kr/logfmt"
  packages = ["."]
  pruneopts = "UT"
  revision = "b84e30acd515aadc4b783ad4ff83aff3299bdfe0"

[[projects]]
  digest = "1:c568d7727aa262c32bdf8a3f7db83614f7af0ed661474b24588de635c20024c7"
  name = "github.com/magiconair/properties"
  packages = ["."]
  pruneopts = "UT"
  revision = "c2353362d570a7bfa228149c62842019201cfb71"
  version = "v1.8.0"

[[projects]]
  digest = "1:53bc4cd4914cd7cd52139990d5170d6dc99067ae31c56530621b18b35fc30318"
  name = "github.com/mitchellh/mapstructure"
  packages = ["."]
  pruneopts = "UT"
  revision = "3536a929edddb9a5b34bd6861dc4a9647cb459fe"
  version = "v1.1.2"

[[projects]]
  digest = "1:e0f50a07c0def90588d69f77178712c6fdc67eb6576365f551cce98b44b501bf"
  name = "github.com/pelletier/go-toml"
  packages = ["."]
  pruneopts = "UT"
  revision = "63909f0a90ab0f36909e8e044e46ace10cf13ba2"
  version = "v1.3.0"

[[projects]]
  digest = "1:cf31692c14422fa27c83a05292eb5cbe0fb2775972e8f1f8446a71549bd8980b"
  name = "github.com/pkg/errors"
  packages = ["."]
  pruneopts = "UT"
  revision = "ba968bfe8b2f7e042a574c888954fccecfa385b4"
  version = "v0.8.1"

[[projects]]
  digest = "1:0028cb19b2e4c3112225cd871870f2d9cf49b9b4276531f03438a88e94be86fe"
  name = "github.com/pmezard/go-difflib"
  packages = ["difflib"]
  pruneopts = "UT"
  revision = "792786c7400a136282c1664665ae0a8db921c6c2"
  version = "v1.0.0"

[[projects]]
  digest = "1:bb495ec276ab82d3dd08504bbc0594a65de8c3b22c6f2aaa92d05b73fbf3a82e"
  name = "github.com/spf13/afero"
  packages = [
    ".",
    "mem",
  ]
  pruneopts = "UT"
  revision = "588a75ec4f32903aa5e39a2619ba6a4631e28424"
  version = "v1.2.2"

[[projects]]
  digest = "1:08d65904057412fc0270fc4812a1c90c594186819243160dc779a402d4b6d0bc"
  name = "github.com/spf13/cast"
  packages = ["."]
  pruneopts = "UT"
  revision = "8c9545af88b134710ab1cd196795e7f2388358d7"
  version = "v1.3.0"

[[projects]]
  digest = "1:645cabccbb4fa8aab25a956cbcbdf6a6845ca736b2c64e197ca7cbb9d210b939"
  name = "github.com/spf13/cobra"
  packages = ["."]
  pruneopts = "UT"
  revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
  version = "v0.0.3"

[[projects]]
  digest = "1:1b753ec16506f5864d26a28b43703c58831255059644351bbcb019b843950900"
  name = "github.com/spf13/jwalterweatherman"
  packages = ["."]
  pruneopts = "UT"
  revision = "94f6ae3ed3bceceafa716478c5fbf8d29ca601a1"
  version = "v1.1.0"

[[projects]]
  digest = "1:c1b1102241e7f645bc8e0c22ae352e8f0dc6484b6cb4d132fa9f24174e0119e2"
  name = "github.com/spf13/pflag"
  packages = ["."]
  pruneopts = "UT"
  revision = "298182f68c66c05229eb03ac171abe6e309ee79a"
  version = "v1.0.3"

[[projects]]
  digest = "1:1b773526998f3dbde3a51a4a5881680c4d237d3600f570d900f97ac93c7ba0a8"
  name = "github.com/spf13/viper"
  packages = ["."]
  pruneopts = "UT"
  revision = "9e56dacc08fbbf8c9ee2dbc717553c758ce42bc9"
  version = "v1.3.2"

[[projects]]
  digest = "1:972c2427413d41a1e06ca4897e8528e5a1622894050e2f527b38ddf0f343f759"
  name = "github.com/stretchr/testify"
  packages = ["assert"]
  pruneopts = "UT"
  revision = "ffdc059bfe9ce6a4e144ba849dbedead332c6053"
  version = "v1.3.0"

[[projects]]
  branch = "master"
  digest = "1:dbe12489f5cb00d492e275a29528c9e336dbe5af75ce366f81c1911faf10733e"
  name = "golang.org/x/sys"
  packages = ["unix"]
  pruneopts = "UT"
  revision = "e8e3143a4f4a00f1fafef0dd82ba78223281b01b"

[[projects]]
  digest = "1:8029e9743749d4be5bc9f7d42ea1659471767860f0cdc34d37c3111bd308a295"
  name = "golang.org/x/text"
  packages = [
    "internal/gen",
    "internal/triegen",
    "internal/ucd",
    "transform",
    "unicode/cldr",
    "unicode/norm",
  ]
  pruneopts = "UT"
  revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
  version = "v0.3.0"

[[projects]]
  digest = "1:4d2e5a73dc1500038e504a8d78b986630e3626dc027bc030ba5c75da257cdb96"
  name = "gopkg.in/yaml.v2"
  packages = ["."]
  pruneopts = "UT"
  revision = "51d6538a90f86fe93ac480b35f37b2be17fef232"
  version = "v2.2.2"

[solve-meta]
  analyzer-name = "dep"
  analyzer-version = 1
  input-imports = [
    "github.com/confluentinc/cc-utils/log",
    "github.com/dariubs/percent",
    "github.com/go-kit/kit/log",
    "github.com/go-kit/kit/log/level",
    "github.com/jinzhu/copier",
    "github.com/pkg/errors",
    "github.com/spf13/cobra",
    "github.com/spf13/viper",
    "github.com/stretchr/testify/assert",
  ]
  solver-name = "gps-cdcl"
  solver-version = 1
+58 −0
# Gopkg.toml example
#
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
#   name = "github.com/user/project"
#   version = "1.0.0"
#
# [[constraint]]
#   name = "github.com/user/project2"
#   branch = "dev"
#   source = "github.com/myfork/project2"
#
# [[override]]
#   name = "github.com/x/y"
#   version = "2.4.0"
#
# [prune]
#   non-go = false
#   go-tests = true
#   unused-packages = true


[[constraint]]
  name = "github.com/confluentinc/cc-utils"
  version = "0.4.0"

[[constraint]]
  branch = "master"
  name = "github.com/dariubs/percent"

[[constraint]]
  name = "github.com/go-kit/kit"
  version = "0.6.0"

[[constraint]]
  branch = "master"
  name = "github.com/jinzhu/copier"

[[constraint]]
  name = "github.com/pkg/errors"
  version = "0.8.0"

[[constraint]]
  name = "github.com/spf13/viper"
  version = "1.2.1"

[[constraint]]
  name = "github.com/stretchr/testify"
  version = "1.2.2"

[prune]
  go-tests = true
  unused-packages = true
+43 −0
SERVICE_NAME := soak-clients
IMAGE_NAME := cc-$(SERVICE_NAME)
CHART_NAME := cc-$(SERVICE_NAME)
CODECOV := true
MAIN_GO := ./main.go
GO_OUTDIR ?= bin
GO_TEST_TARGET = test-go
GO_TEST_ARGS = -race -v -cover -p=1
GO_CODECOV_TEST_ARGS = -race -v -cover -p=1

MASTER_BRANCH := master

SOAK_TEST_NAMESPACE ?= soak-tests

DOCKER_BUILD_PRE = sync-mkinclude
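# sync-mkinclude runs before the docker build (hooked in via DOCKER_BUILD_PRE,
# presumably consumed by cc-docker.mk) and copies the shared mk-include tree
# into a directory inside the local docker build context.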
.PHONY: sync-mkinclude
sync-mkinclude:
	rsync -avz --delete-after ../../mk-include/ ./.mk-include.local-context-copy

include ./mk-include/cc-begin.mk
ifeq ($(VERSION),)
include ./mk-include/cc-semver.mk
endif
include ./mk-include/cc-go.mk
include ./mk-include/cc-docker.mk
include ./mk-include/cc-cpd.mk
include ./mk-include/cc-helm.mk
include ./mk-include/cc-end.mk

$(GO_OUTDIR)/soak-clients:
	go build -o $@ -ldflags $(GO_LDFLAGS) soak_cluster/main.go

.PHONY: helm-deploy-soak
## Deploy helm to current kube context with values set to local.yaml on the soak-tests namespace
helm-deploy-soak:
	helm upgrade \
	--install $(CHART_NAME)-dev \
	charts/$(CHART_NAME) \
	--namespace $(SOAK_TEST_NAMESPACE) \
	--set namespace=$(SOAK_TEST_NAMESPACE) \
	--debug \
	-f charts/values/local.yaml \
	--set image.tag=$(CHART_VERSION) $(HELM_ARGS)
+21 −0
# soak_clients

The soak_clients package consists of the following components:
* [clients spawner](./soak_clients/main.go) - Creates Trogdor tasks (via Trogdor's REST API) that are used for soak testing
* [status_reporter](./soak_clients/status_reporter.go) - Periodically queries the currently running Trogdor tasks and reports their status
* [performance_tests](./performance/main.go) - Creates Trogdor tasks (via Trogdor's REST API) that are used for performance testing

All of them are callable through the [soak_clients CLI](./main.go).

### clients spawner
The client-spawning code takes in a JSON definition of the tasks we want to run. An example of the specification can be found [here](./soak_clients/config/baseline.json).
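
For orientation, the specification has roughly the following shape. These Go types are a hypothetical mirror of the JSON keys used by the chart's topic configs, not the package's actual source types:

```go
// TopicSpec mirrors one entry of the "topics" array in the JSON spec
// (hypothetical names; field set taken from the chart's topic configs).
type TopicSpec struct {
	Name                    string `json:"name"`
	PartitionsCount         int    `json:"partitions_count"`
	ProduceMbsThroughput    int    `json:"produce_mbs_throughput"`
	ConsumeMbsThroughput    int    `json:"consume_mbs_throughput"`
	LongLivedProducerCount  int    `json:"long_lived_producer_count"`
	ShortLivedProducerCount int    `json:"short_lived_producer_count"`
	LongLivedConsumerCount  int    `json:"long_lived_consumer_count"`
	ShortLivedConsumerCount int    `json:"short_lived_consumer_count"`
}

// TaskSpec is the top-level document handed to the spawner.
type TaskSpec struct {
	LongLivedTaskDurationMs         int64       `json:"long_lived_task_duration_ms"`
	ShortLivedTaskDurationMs        int64       `json:"short_lived_task_duration_ms"`
	ShortLivedTaskRescheduleDelayMs int64       `json:"short_lived_task_reschedule_delay_ms"`
	Topics                          []TopicSpec `json:"topics"`
}
```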
We split tasks into two types:
* _long-lived_ - These tasks simply run for `long_lived_task_duration_ms`
* _short-lived_ - These tasks run for `short_lived_task_duration_ms`. Their main goal is to mimic new clients. The spawner schedules them back to back, so that a short-lived task is always running until the long-lived tasks end; after each one finishes, the next is scheduled `short_lived_task_reschedule_delay_ms` later.
  * example: if we have a long-lived task scheduled to run 13:00-14:00 (1hr) and a short-lived task duration of 15m, we would have 4 short-lived tasks spawned (13:00-13:15, 13:15-13:30, 13:30-13:45, 13:45-14:00).
  * if we were to configure `short_lived_task_reschedule_delay_ms` to 30 minutes, we would have only two short-lived tasks in the same example period (13:00-13:15, 13:45-14:00).

We define the number of tasks on a per-topic basis inside the `topics` field, along with the total produce/consume throughput we want the topic to have. That throughput is then split evenly across the topic's tasks. Note that the actual throughput may fall short of the target, because the short-lived tasks are not running during the `short_lived_task_reschedule_delay_ms` gaps. Both rules are sketched below.
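
A minimal sketch of the two rules above (back-to-back short-lived scheduling and the even throughput split), using a hypothetical helper rather than the spawner's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// shortLivedWindows returns the [start, end) windows of the short-lived tasks
// scheduled inside one long-lived task window: each task runs for duration,
// and the next one starts delay after the previous one finishes.
func shortLivedWindows(start, end time.Time, duration, delay time.Duration) [][2]time.Time {
	var windows [][2]time.Time
	for t := start; !t.Add(duration).After(end); t = t.Add(duration + delay) {
		windows = append(windows, [2]time.Time{t, t.Add(duration)})
	}
	return windows
}

func main() {
	start := time.Date(2019, 1, 1, 13, 0, 0, 0, time.UTC)
	end := start.Add(time.Hour) // long-lived task runs 13:00-14:00

	// delay = 0 -> four contiguous windows: 13:00, 13:15, 13:30, 13:45
	for _, w := range shortLivedWindows(start, end, 15*time.Minute, 0) {
		fmt.Println(w[0].Format("15:04"), "-", w[1].Format("15:04"))
	}

	// delay = 30m -> only two windows: 13:00-13:15 and 13:45-14:00
	for _, w := range shortLivedWindows(start, end, 15*time.Minute, 30*time.Minute) {
		fmt.Println(w[0].Format("15:04"), "-", w[1].Format("15:04"))
	}

	// The topic's total throughput is split evenly across its tasks, e.g.
	// 15 MB/s of produce throughput across 2 long-lived + 2 short-lived producers.
	totalMbs, producers := 15.0, 4
	fmt.Printf("%.2f MB/s per producer\n", totalMbs/float64(producers))
}
```

With the 13:00-14:00 long-lived window this prints the four contiguous 15-minute windows, and with a 30-minute delay only two, matching the examples above.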

### performance tests
[README.md](./performance/README.md)
+5 −0
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for the Soak Cluster clients
name: cc-soak-clients
version: 0.0.22
+69 −0
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: "{{ .Chart.Name }}-clients-cli"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
        release: {{ .Release.Name }}
    spec:
      {{- if .Values.image.pullSecret }}
      # imagePullSecrets is a pod-level (not container-level) field.
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
      {{- end }}
      volumes:
        - name: "client-config-volume"
          configMap:
            name: "{{ .Chart.Name }}-client-config"
        - name: "topic-config-volume"
          configMap:
            name: "{{ .Chart.Name }}-topic-config"
        - name: "performance-test-volume"
          configMap:
            name: "{{ .Chart.Name }}-performance-test-config"
      containers:
        - name: {{ .Chart.Name }}
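          # Release image tags are prefixed with "v"; a literal "latest" tag is used as-is.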
          {{- if eq .Values.image.tag "latest" }}
          image: "{{ .Values.image.repository }}/{{ .Values.image.name }}:{{ default .Chart.Version .Values.image.tag }}"
          {{- else }}
          image: "{{ .Values.image.repository }}/{{ .Values.image.name }}:v{{ default .Chart.Version .Values.image.tag }}"
          {{- end }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: client-config-volume
              mountPath: /mnt/config/client
            - name: topic-config-volume
              mountPath: /mnt/config/topic
            - name: performance-test-volume
              mountPath: /mnt/config/performance
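          # "tail -f /dev/null" keeps the container idle so the soak-clients CLI
          # can be run inside it manually (e.g. via kubectl exec).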
          command: ["tail"]
          args: ["-f", "/dev/null"]
          env:
            - name: TROGDOR_HOST
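              # Fall back to the release namespace when trogdorNamespace is not set.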
              {{- if eq .Values.trogdorNamespace "" }}
              value: "http://cc-trogdor-service-coordinator.{{ .Release.Namespace }}.svc:{{ .Values.trogdorCoordinatorPort }}"
              {{- else }}
              value: "http://cc-trogdor-service-coordinator.{{ .Values.trogdorNamespace }}.svc:{{ .Values.trogdorCoordinatorPort }}"
              {{- end }}
            - name: PERFORMANCE_TEST_CONFIG_PATH
              value: "{{ .Values.performanceTestConfigPath }}"
            - name: TROGDOR_BOOTSTRAPSERVERS
              value: "{{ .Values.bootstrapServer }}"
            - name: TROGDOR_TOPIC_CONFIG_PATH
              value: "{{ .Values.topicConfigPath }}"
            - name: TROGDOR_AGENTS_COUNT
              value: "{{ .Values.agentCount }}"
            - name: TROGDOR_ADMIN_CONF
              value: "/mnt/config/client/client_properties.json"
            - name: DEBUG_LOGS
              value: "{{ .Values.debugLogs }}"
 No newline at end of file
+55 −0
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "{{ .Chart.Name }}-clients-spawner"
spec:
  schedule: "{{ .Values.cronSchedule }}"
  jobTemplate:
    spec:
      template:
        spec:
          volumes:
          - name: "client-config-volume"
            configMap:
              name: "{{ .Chart.Name }}-client-config"
          - name: "topic-config-volume"
            configMap:
              name: "{{ .Chart.Name }}-topic-config"
          restartPolicy: Never
          {{- if .Values.image.pullSecret }}
          # imagePullSecrets is a pod-level (not container-level) field.
          imagePullSecrets:
            - name: {{ .Values.image.pullSecret }}
          {{- end }}
          containers:
          - name: {{ .Chart.Name }}
          {{- if eq .Values.image.tag "latest" }}
            image: "{{ .Values.image.repository }}/{{ .Values.image.name }}:{{ default .Chart.Version .Values.image.tag }}"
          {{- else }}
            image: "{{ .Values.image.repository }}/{{ .Values.image.name }}:v{{ default .Chart.Version .Values.image.tag }}"
          {{- end }}
            imagePullPolicy: {{ .Values.image.pullPolicy }}
            args:
              - soak-clients
              - spawn
            env:
              - name: TROGDOR_HOST
              {{- if eq .Values.trogdorNamespace "" }}
                value: "http://cc-trogdor-service-coordinator.{{ .Release.Namespace }}.svc:{{ .Values.trogdorCoordinatorPort }}"
              {{- else }}
                value: "http://cc-trogdor-service-coordinator.{{ .Values.trogdorNamespace }}.svc:{{ .Values.trogdorCoordinatorPort }}"
              {{- end }}
              - name: TROGDOR_BOOTSTRAPSERVERS
                value: "{{ .Values.bootstrapServer }}"
              - name: TROGDOR_TOPIC_CONFIG_PATH
                value: "{{ .Values.topicConfigPath }}"
              - name: TROGDOR_AGENTS_COUNT
                value: "{{ .Values.agentCount }}"
              - name: TROGDOR_ADMIN_CONF
                value: "/mnt/config/client/client_properties.json"
              - name: DEBUG_LOGS
                value: "{{ .Values.debugLogs }}"
            volumeMounts:
              - name: client-config-volume
                mountPath: /mnt/config/client
              - name: topic-config-volume
                mountPath: /mnt/config/topic
+312 −0
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-client-config
  labels:
    app: {{ .Chart.Name }}
    chart: {{ .Chart.Name }}
    release: {{ .Release.Name }}
data:
  client_properties.json: '
{
  "sasl.mechanism": "PLAIN",
  "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"{{ .Values.apiKey }}\" password=\"{{ .Values.apiSecret }}\";",
  "security.protocol": "SASL_SSL",
  "linger.ms": {{ .Values.LingerMs }}
}
'
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-topic-config
  labels:
    app: {{ .Chart.Name }}
    chart: {{ .Chart.Name }}
    release: {{ .Release.Name }}
data:
  enterprise-topics.json: '
{
  "long_lived_task_duration_ms": {{ .Values.longLivedTaskMs }},
  "short_lived_task_duration_ms": {{ .Values.shortLivedTaskMs }},
  "short_lived_task_reschedule_delay_ms": {{ .Values.shortLivedTaskRescheduleDelayMs }},
  "topics": [
    {
      "name": "baseline_soak_medium_topic_25p",
      "partitions_count": 25,
      "produce_mbs_throughput": 15,
      "consume_mbs_throughput": 30,
      "long_lived_producer_count": 2,
      "short_lived_producer_count": 2,
      "long_lived_consumer_count": 2,
      "short_lived_consumer_count": 2,
      {{/*
        sequentialOffsets batch verifier will cause consumers to seek
        to beginning of log on startup, spawn short-lived consumers
        in a random consumer group so they don't affect other consumers
       */ -}}
      "short_lived_random_consumer_group": true,
      "idempotence_enabled": true,
      "short_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      },
      "long_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      }
    },
    {
      "name": "baseline_soak_eos_topic_100p",
      "partitions_count": 100,
      "produce_mbs_throughput": 15,
      "consume_mbs_throughput": 30,
      "long_lived_producer_count": 2,
      "short_lived_producer_count": 2,
      "long_lived_consumer_count": 2,
      "short_lived_consumer_count": 2,
      "transactions_enabled": true
    }
  ]
}
'
  cluster-linking-soak-source.json: '
{
  "long_lived_task_duration_ms": {{ .Values.longLivedTaskMs }},
  "short_lived_task_duration_ms": {{ .Values.shortLivedTaskMs }},
  "short_lived_task_reschedule_delay_ms": {{ .Values.shortLivedTaskRescheduleDelayMs }},
  "topics": [
    {
      "name": "SimpleLoadTopic",
      "partitions_count": 6,
      "produce_mbs_throughput": 10,
      "consume_mbs_throughput": 10,
      "long_lived_producer_count": 6,
      "short_lived_producer_count": 6,
      "long_lived_consumer_count": 6,
      "short_lived_consumer_count": 6
    }
  ]
}
'
  one-large-topic.json: '
{
  "long_lived_task_duration_ms": {{ .Values.longLivedTaskMs }},
  "short_lived_task_duration_ms": {{ .Values.shortLivedTaskMs }},
  "short_lived_task_reschedule_delay_ms": {{ .Values.shortLivedTaskRescheduleDelayMs }},
  "topics": [
    {
      "name": "tenant_soak_topic_900p",
      "partitions_count": 1600,
      "produce_mbs_throughput": 70,
      "consume_mbs_throughput": 60,
      "long_lived_producer_count": 6,
      "short_lived_producer_count": 6,
      "long_lived_consumer_count": 6,
      "short_lived_consumer_count": 6
    }
  ]
}
'
  skewed-load-topics.json: '
{
  "long_lived_task_duration_ms": {{ .Values.longLivedTaskMs }},
  "short_lived_task_duration_ms": {{ .Values.shortLivedTaskMs }},
  "short_lived_task_reschedule_delay_ms": {{ .Values.shortLivedTaskRescheduleDelayMs }},
  "topics": [
    {
      "name": "tenant_soak_topic_4p",
      "partitions_count": 4,
      "produce_mbs_throughput": 15,
      "consume_mbs_throughput": 30,
      "long_lived_producer_count": 2,
      "short_lived_producer_count": 2,
      "long_lived_consumer_count": 2,
      "short_lived_consumer_count": 2
    },
    {
      "name": "tenant_soak_low_bandwidth_topic_16p",
      "partitions_count": 16,
      "produce_mbs_throughput": 1,
      "consume_mbs_throughput": 2,
      "long_lived_producer_count": 1,
      "short_lived_producer_count": 1,
      "long_lived_consumer_count": 1,
      "short_lived_consumer_count": 1,
      "idempotence_enabled": true
    }
  ]
}
'
  unbalanced-topics.json: '
{
  "long_lived_task_duration_ms": {{ .Values.longLivedTaskMs }},
  "short_lived_task_duration_ms": {{ .Values.shortLivedTaskMs }},
  "short_lived_task_reschedule_delay_ms": {{ .Values.shortLivedTaskRescheduleDelayMs }},
  "topics": [
    {
      "name": "unbalanced_soak_topic_1_9p",
      "partitions_count": 9,
      "produce_mbs_throughput": 10,
      "consume_mbs_throughput": 10,
      "long_lived_producer_count": 2,
      "short_lived_producer_count": 2,
      "long_lived_consumer_count": 2,
      "short_lived_consumer_count": 2,
      "workload_type": "gaussian",
      "short_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      },
      "long_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      }
    },
    {
      "name": "unbalanced_soak_topic_2_9p",
      "partitions_count": 9,
      "produce_mbs_throughput": 10,
      "consume_mbs_throughput": 10,
      "long_lived_producer_count": 2,
      "short_lived_producer_count": 2,
      "long_lived_consumer_count": 2,
      "short_lived_consumer_count": 2,
      "workload_type": "gaussian",
      "short_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      },
      "long_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      }
    },
    {
      "name": "unbalanced_soak_topic_3_9p",
      "partitions_count": 9,
      "produce_mbs_throughput": 5,
      "consume_mbs_throughput": 5,
      "long_lived_producer_count": 2,
      "short_lived_producer_count": 2,
      "long_lived_consumer_count": 2,
      "short_lived_consumer_count": 2,
      "workload_type": "gaussian",
      "short_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      },
      "long_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      }
    },
    {
      "name": "unbalanced_soak_topic_4_9p",
      "partitions_count": 9,
      "produce_mbs_throughput": 5,
      "consume_mbs_throughput": 5,
      "long_lived_producer_count": 2,
      "short_lived_producer_count": 2,
      "long_lived_consumer_count": 2,
      "short_lived_consumer_count": 2,
      "workload_type": "gaussian",
      "short_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
      },
      "long_lived_consumer_record_batch_verifier": {
        "type": "sequentialOffsets"
     }
    }
  ]
}
'
  sbk-soak-unbalanced-topics.json: '
{
  "long_lived_task_duration_ms": {{ .Values.longLivedTaskMs }},
  "short_lived_task_duration_ms": {{ .Values.shortLivedTaskMs }},
  "short_lived_task_reschedule_delay_ms": {{ .Values.shortLivedTaskRescheduleDelayMs }},
  "topics": [
    {
      "name": "unbalanced_soak_topic_1_6p",
      "partitions_count": 6,
      "produce_mbs_throughput": 15,
      "consume_mbs_throughput": 15,
      "long_lived_producer_count": 4,
      "long_lived_consumer_count": 4,
      "workload_type": "gaussian"
    },
    {
      "name": "unbalanced_soak_topic_2_6p",
      "partitions_count": 6,
      "produce_mbs_throughput": 15,
      "consume_mbs_throughput": 15,
      "long_lived_producer_count": 4,
      "long_lived_consumer_count": 4,
      "workload_type": "gaussian"
    },
    {
      "name": "unbalanced_soak_topic_3_6p",
      "partitions_count": 6,
      "produce_mbs_throughput": 5,
      "consume_mbs_throughput": 5,
      "long_lived_producer_count": 4,
      "long_lived_consumer_count": 4,
      "workload_type": "gaussian"
    },
    {
      "name": "unbalanced_soak_topic_4_6p",
      "partitions_count": 6,
      "produce_mbs_throughput": 5,
      "consume_mbs_throughput": 5,
      "long_lived_producer_count": 4,
      "long_lived_consumer_count": 4,
      "workload_type": "gaussian"
    }
  ]
}
'
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-performance-test-config
  labels:
    app: {{ .Chart.Name }}
    chart: {{ .Chart.Name }}
    release: {{ .Release.Name }}
data:
  performance_test.json: '
{
  "scenario_name": "ExampleTest",
  "test_definitions": [
    {
      "test_type": "ProgressiveWorkload",
      "test_name": "progressive-produce-test-10-200mbs",
      "test_parameters": {
        "workload_type": "Produce",
        "step_duration_ms": 150000,
        "partition_count": 100,
        "step_cooldown_ms": 30000,
        "start_throughput_mbs": 10,
        "end_throughput_mbs": 200,
        "throughput_increase_per_step_mbs": 10,
        "message_size_bytes": 1000
      }
    }
  ]
}
'
  cluster_linking_performance_test_source.json: '
{
  "scenario_name": "CloudTopicsTest",
  "test_definitions": [
    {
      "test_type": "ProgressiveWorkload",
      "test_name": "progressive-produce-test-10-100mbs",
      "test_parameters": {
        "workload_type": "Produce",
        "step_duration_ms": 15000000,
        "partition_count": 6,
        "step_cooldown_ms": 30000,
        "start_throughput_mbs": 10,
        "end_throughput_mbs": 100,
        "throughput_increase_per_step_mbs": 1,
        "message_size_bytes": 1000
      }
    }
  ]
} 
'
+25 −0
image:
  # The repository for the docker image. This is different across cloud providers and environments.
  # Azure Devel: cclouddevel.azurecr.io/confluentinc
  # AWS Devel: 037803949979.dkr.ecr.us-west-2.amazonaws.com/confluentinc
  # GCP Devel: us.gcr.io/cc-devel
  # CPD: confluent-docker.jfrog.io/confluentinc
  repository: "confluent-docker.jfrog.io/confluentinc"
  name: cc-soak-clients
  tag: 0.803.0-6.1.0-0-ce
  pullPolicy: IfNotPresent
trogdorCoordinatorPort: 9002
trogdorNamespace: ""  # optional; set this to explicitly specify the namespace Trogdor runs in
apiKey: "null"
apiSecret: "null"
bootstrapServer: "null"
topicConfigPath: "/mnt/config/topic/enterprise-topics.json"
performanceTestConfigPath: "/mnt/config/performance/performance_test.json"
shortLivedTaskMs: "900000" # 15 minutes
shortLivedTaskRescheduleDelayMs: "0"
longLivedTaskMs: "604800000" # 7 days
logRetentionMs: "10800000" # 3 hours
LingerMs: "0"
debugLogs: "true"
agentCount: "6" # must match agentReplicaCount in cc-trogdor-service
cronSchedule: "0 12 * * 1" # at 12:00 every Monday
+10 −0
---
apiKey: "null"
apiSecret: "null"
bootstrapServer: "null"
topicConfigPath: "/mnt/config/topic/enterprise-topics.json"
shortLivedTaskMs: "900000"
shortLivedTaskRescheduleDelayMs: "0"
longLivedTaskMs: "604800000"
debugLogs: "true"
agentCount: "6" # must match agentReplicaCount in cc-trogdor-service
+17 −0
---
image:
  # The repository for the docker image. This is different across cloud providers and environments.
  # Azure Devel: cclouddevel.azurecr.io/confluentinc
  # AWS Devel: 037803949979.dkr.ecr.us-west-2.amazonaws.com/confluentinc
  # GCP Devel: us.gcr.io/cc-devel
  # CPD: confluent-docker.jfrog.io/confluentinc
  repository: "us.gcr.io/cc-devel"
trogdorNamespace: ""
apiKey: ""
apiSecret: ""
bootstrapServer: ""
topicConfigPath: "/mnt/config/topic/sbk-soak-unbalanced-topics.json"
longLivedTaskMs: "10800000"
debugLogs: "true"
agentCount: "6" # must match agentReplicaCount in cc-trogdor-service
cronSchedule: "0 */3 * * *" # every 3 hours
+60 −0

File added.

Preview size limit exceeded; the change has been collapsed.

+5 −0
{
    "linger.ms": 100,
    "compression.type": "lz4",
    "auto.offset.reset": "earliest"
}