kafka-commits mailing list archives

From ij...@apache.org
Subject kafka git commit: KAFKA-4151; Update public docs for Cluster Id (KIP-78)
Date Sat, 24 Sep 2016 09:23:44 GMT
Repository: kafka
Updated Branches:
  refs/heads/0.10.1 cd252352c -> 97f41a5c1

KAFKA-4151; Update public docs for Cluster Id (KIP-78)

- Updated implementation docs with details on Cluster Id generation.
- Mention the cluster id in the "notable changes" section of the upgrade docs.

Author: Sumit Arrawatia <sumit.arrawatia@gmail.com>
Author: arrawatia <sumit.arrawatia@gmail.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>

Closes #1895 from arrawatia/kip-78-docs

(cherry picked from commit 36242b846a42b33d7d4c1931f2dae93ebe1547c7)
Signed-off-by: Ismael Juma <ismael@juma.me.uk>

Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/97f41a5c
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/97f41a5c
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/97f41a5c

Branch: refs/heads/0.10.1
Commit: 97f41a5c1193146f9c064a7c2c6265ee340036a5
Parents: cd25235
Author: Sumit Arrawatia <sumit.arrawatia@gmail.com>
Authored: Sat Sep 24 10:16:49 2016 +0100
Committer: Ismael Juma <ismael@juma.me.uk>
Committed: Sat Sep 24 10:23:39 2016 +0100

 docs/implementation.html | 11 ++++++++++-
 docs/upgrade.html        |  9 ++++++++-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/docs/implementation.html b/docs/implementation.html
index 16ba07a..91e17a6 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -337,7 +337,7 @@ Each of the consumers in the group registers under its group and creates a znode
 Consumers track the maximum offset they have consumed in each partition. This value is stored in a ZooKeeper directory if <code>offsets.storage=zookeeper</code>.
-/consumers/[group_id]/offsets/[topic]/[partition_id] --> offset_counter_value ((persistent node)
+/consumers/[group_id]/offsets/[topic]/[partition_id] --> offset_counter_value (persistent node)
 <h4><a id="impl_zkowner" href="#impl_zkowner">Partition Owner registry</a></h4>
@@ -350,6 +350,15 @@ Each broker partition is consumed by a single consumer within a given consumer g
 /consumers/[group_id]/owners/[topic]/[partition_id] --> consumer_node_id (ephemeral node)
+<h4><a id="impl_clusterid" href="#impl_clusterid">Cluster Id</a></h4>
+    The cluster id is a unique and immutable identifier assigned to a Kafka cluster. The cluster id can have a maximum of 22 characters and the allowed characters are defined by the regular expression [a-zA-Z0-9_\-]+, which corresponds to the characters used by the URL-safe Base64 variant with no padding. Conceptually, it is auto-generated when a cluster is started for the first time.
+    Implementation-wise, it is generated when a broker with version 0.10.1 or later is successfully started for the first time. The broker tries to get the cluster id from the <code>/cluster/id</code> znode during startup. If the znode does not exist, the broker generates a new cluster id and creates the znode with this cluster id.
 <h4><a id="impl_brokerregistration" href="#impl_brokerregistration">Broker node registry</a></h4>

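[Editor's note] The docs above describe the cluster id as a 22-character string over the URL-safe Base64 alphabet with no padding. This commit changes only documentation and does not show the generation code, but as an illustrative sketch (not Kafka's actual implementation), an id of that shape can be derived by Base64url-encoding a random 128-bit UUID:

```java
import java.nio.ByteBuffer;
import java.util.Base64;
import java.util.UUID;

public class ClusterIdDemo {
    // Produce a 22-character id: URL-safe Base64, no padding, of 128 random bits.
    public static String generateClusterId() {
        UUID uuid = UUID.randomUUID();
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return Base64.getUrlEncoder().withoutPadding().encodeToString(buf.array());
    }

    public static void main(String[] args) {
        String id = generateClusterId();
        System.out.println(id);
    }
}
```

128 bits encode to ceil(128/6) = 22 Base64 characters, matching the documented maximum length, and the Base64url alphabet is exactly [a-zA-Z0-9_\-].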
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 629babb..2174018 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -52,17 +52,24 @@ Note: Because new protocols are introduced, it is important to upgrade your Kafk
     <li> The number of open file handles in 0.10.0 will increase by ~33% because of the addition of time index files for each segment.</li>
     <li> The time index and offset index share the same index size configuration. Since each time index entry is 1.5x the size of an offset index entry, users may need to increase log.index.size.max.bytes to avoid potentially frequent log rolling. </li>
     <li> Due to the increased number of index files, on some brokers with a large number of log segments (e.g. >15K), the log loading process during broker startup could take longer. Based on our experiment, setting num.recovery.threads.per.data.dir to one may reduce the log loading time. </li>
-    <li> ListOffsetRequest v1 is introduced and used by default to support accurate offset search based on timestamp.
 <h5><a id="upgrade_1010_notable" href="#upgrade_1010_notable">Notable changes in 0.10.1.0</a></h5>
     <li> The new Java consumer is no longer in beta and we recommend it for all new development. The old Scala consumers are still supported, but they will be deprecated in the next release and will be removed in a future major release. </li>
+    <li> Kafka clusters can now be uniquely identified by a cluster id. It will be automatically generated when a broker is upgraded to 0.10.1.0. The cluster id is available via the kafka.server:type=KafkaServer,name=ClusterId metric and it is part of the Metadata response. Serializers, client interceptors and metric reporters can receive the cluster id by implementing the ClusterResourceListener interface. </li>
     <li> The BrokerState "RunningAsController" (value 4) has been removed. Due to a bug, a broker would only be in this state briefly before transitioning out of it and hence the impact of the removal should be minimal. The recommended way to detect if a given broker is the controller is via the kafka.controller:type=KafkaController,name=ActiveControllerCount metric. </li>
     <li> The new Java Consumer now allows users to search offsets by timestamp on partitions.
+<h5><a id="upgrade_1010_new_protocols" href="#upgrade_1010_new_protocols">New Protocol Versions</a></h5>
+    <li> ListOffsetRequest v1 is introduced and used by default to support accurate offset search based on timestamp.
+    <li> MetadataRequest/Response v2 has been introduced. v2 adds a new field "cluster_id" to MetadataResponse.
 <h4><a id="upgrade_10" href="#upgrade_10">Upgrading from 0.8.x or 0.9.x to 0.10.0.0</a></h4>
 0.10.0.0 has <a href="#upgrade_10_breaking">potential breaking changes</a> (please review before upgrading) and possible <a href="#upgrade_10_performance_impact">performance impact following the upgrade</a>. By following the recommended rolling upgrade plan below, you guarantee no downtime and no performance impact during and following the upgrade.

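[Editor's note] The upgrade notes above say serializers, interceptors and metric reporters can receive the cluster id by implementing the ClusterResourceListener interface. As a self-contained sketch, the ClusterResource and ClusterResourceListener types below are simplified stand-ins mirroring the shape of the interface described in KIP-78 (the real types ship in the Kafka clients library under org.apache.kafka.common), and ClusterIdAwareReporter is a hypothetical metric reporter that records the id once the client learns it from metadata:

```java
// Stand-in for org.apache.kafka.common.ClusterResource (simplified for the sketch).
class ClusterResource {
    private final String clusterId;
    ClusterResource(String clusterId) { this.clusterId = clusterId; }
    public String clusterId() { return clusterId; }
}

// Stand-in for org.apache.kafka.common.ClusterResourceListener (simplified).
interface ClusterResourceListener {
    // Invoked once cluster metadata, including the cluster id, is available.
    void onUpdate(ClusterResource clusterResource);
}

// Hypothetical reporter that remembers the cluster id so it can tag the
// metrics it emits; "(unknown)" until the first metadata update arrives.
public class ClusterIdAwareReporter implements ClusterResourceListener {
    private volatile String clusterId = "(unknown)";

    @Override
    public void onUpdate(ClusterResource clusterResource) {
        this.clusterId = clusterResource.clusterId();
    }

    public String currentClusterId() { return clusterId; }
}
```

With the real Kafka API the client calls onUpdate for you; here one would exercise it directly by passing a ClusterResource carrying the id.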