kafka-commits mailing list archives

From ij...@apache.org
Subject kafka-site git commit: Update docs and javadoc with latest minor changes
Date Tue, 24 May 2016 10:37:17 GMT
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site d4123533d -> 1a18a4561


Update docs and javadoc with latest minor changes

Clarify producer blocking behaviour and make a couple of updates
to the security documentation.
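As a quick illustration of the clarified behaviour, here is a minimal sketch (the broker address, topic name and timeout value are placeholders, not part of this commit): when the producer's buffer is exhausted or metadata is unavailable, send() blocks for up to max.block.ms and then throws a TimeoutException.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;

    public class BlockingProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("max.block.ms", "5000");                 // send() blocks at most 5 seconds
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test", "key", "value"));
            } catch (TimeoutException e) {
                // buffer space or metadata was not available within max.block.ms
            }
        }
    }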


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/1a18a456
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/1a18a456
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/1a18a456

Branch: refs/heads/asf-site
Commit: 1a18a456193d4ddb73f2d60c159e9cf61ebcd940
Parents: d412353
Author: Ismael Juma <ismael@juma.me.uk>
Authored: Tue May 24 11:36:59 2016 +0100
Committer: Ismael Juma <ismael@juma.me.uk>
Committed: Tue May 24 11:36:59 2016 +0100

----------------------------------------------------------------------
 0100/generated/producer_config.html             |   4 +-
 .../kafka/clients/producer/KafkaProducer.html   |   6 +-
 .../kafka/clients/producer/ProducerConfig.html  |   4 +-
 0100/security.html                              | 147 ++++++++++---------
 0100/upgrade.html                               |   1 +
 5 files changed, 82 insertions(+), 80 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1a18a456/0100/generated/producer_config.html
----------------------------------------------------------------------
diff --git a/0100/generated/producer_config.html b/0100/generated/producer_config.html
index 2a19fa7..665cf15 100644
--- a/0100/generated/producer_config.html
+++ b/0100/generated/producer_config.html
@@ -16,7 +16,7 @@
 <tr>
 <td>acks</td><td>The number of acknowledgments the producer requires the
leader to have received before considering a request complete. This controls the  durability
of records that are sent. The following settings are common:  <ul> <li><code>acks=0</code>
If set to zero then the producer will not wait for any acknowledgment from the server at all.
The record will be immediately added to the socket buffer and considered sent. No guarantee
can be made that the server has received the record in this case, and the <code>retries</code>
configuration will not take effect (as the client won't generally know of any failures). The
offset given back for each record will always be set to -1. <li><code>acks=1</code>
This will mean the leader will write the record to its local log but will respond without
awaiting full acknowledgement from all followers. In this case should the leader fail immediately
after acknowledging the record but before the followers have replicated it then the record
will be lost. <li><code>acks=all</code> This means the leader will wait for
the full set of in-sync replicas to acknowledge the record. This guarantees that the record
will not be lost as long as at least one in-sync replica remains alive. This is the strongest
available guarantee.</td><td>string</td><td>1</td><td>[all,
-1, 0, 1]</td><td>high</td></tr>
 <tr>
-<td>buffer.memory</td><td>The total bytes of memory the producer can use
to buffer records waiting to be sent to the server. If records are sent faster than they can
be delivered to the server the producer will either block or throw an exception based on the
preference specified by <code>block.on.buffer.full</code>. <p>This setting
should correspond roughly to the total memory the producer will use, but is not a hard bound
since not all memory the producer uses is used for buffering. Some additional memory will
be used for compression (if compression is enabled) as well as for maintaining in-flight requests.</td><td>long</td><td>33554432</td><td>[0,...]</td><td>high</td></tr>
+<td>buffer.memory</td><td>The total bytes of memory the producer can use
to buffer records waiting to be sent to the server. If records are sent faster than they can
be delivered to the server the producer will block for <code>max.block.ms</code>
after which it will throw an exception.<p>This setting should correspond roughly to
the total memory the producer will use, but is not a hard bound since not all memory the producer
uses is used for buffering. Some additional memory will be used for compression (if compression
is enabled) as well as for maintaining in-flight requests.</td><td>long</td><td>33554432</td><td>[0,...]</td><td>high</td></tr>
 <tr>
 <td>compression.type</td><td>The compression type for all data generated
by the producer. The default is none (i.e. no compression). Valid  values are <code>none</code>,
<code>gzip</code>, <code>snappy</code>, or <code>lz4</code>.
Compression is of full batches of data, so the efficacy of batching will also impact the compression
ratio (more batching means better compression).</td><td>string</td><td>none</td><td></td><td>high</td></tr>
 <tr>
@@ -70,7 +70,7 @@
 <tr>
 <td>timeout.ms</td><td>The configuration controls the maximum amount of
time the server will wait for acknowledgments from followers to meet the acknowledgment requirements
the producer has specified with the <code>acks</code> configuration. If the requested
number of acknowledgments are not met when the timeout elapses an error will be returned.
This timeout is measured on the server side and does not include the network latency of the
request.</td><td>int</td><td>30000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>block.on.buffer.full</td><td>When our memory buffer is exhausted we
must either stop accepting new records (block) or throw errors. By default this setting is
false and the producer will throw a BufferExhaustedException if a record is sent and the buffer
space is full. However in some scenarios getting an error is not desirable and it is better
to block. Setting this to <code>true</code> will accomplish that.<em>If
this property is set to true, parameter <code>metadata.fetch.timeout.ms</code>
is not longer honored.</em><p>This parameter is deprecated and will be removed
in a future release. Parameter <code>max.block.ms</code> should be used instead.</td><td>boolean</td><td>false</td><td></td><td>low</td></tr>
+<td>block.on.buffer.full</td><td>When our memory buffer is exhausted we
must either stop accepting new records (block) or throw errors. By default this setting is
false and the producer will no longer throw a BufferExhaustedException but instead will use
the <code>max.block.ms</code> value to block, after which it will throw a TimeoutException.
Setting this property to true will set <code>max.block.ms</code> to Long.MAX_VALUE. <em>Also,
if this property is set to true, parameter <code>metadata.fetch.timeout.ms</code>
is no longer honored.</em><p>This parameter is deprecated and will be removed
in a future release. Parameter <code>max.block.ms</code> should be used instead.</td><td>boolean</td><td>false</td><td></td><td>low</td></tr>
 <tr>
 <td>interceptor.classes</td><td>A list of classes to use as interceptors.
Implementing the <code>ProducerInterceptor</code> interface allows you to intercept
(and possibly mutate) the records received by the producer before they are published to the
Kafka cluster. By default, there are no interceptors.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
 <tr>

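To make the table above concrete, a hedged sketch of a producer configuration using these options; the values shown are the documented defaults or common choices, not recommendations from this commit.

    import java.util.Properties;

    class ProducerConfigSketch {
        static Properties baseline() {
            Properties props = new Properties();
            props.put("acks", "all");                // wait for the full set of in-sync replicas
            props.put("buffer.memory", "33554432");  // 32 MB buffer pool (the documented default)
            props.put("max.block.ms", "60000");      // bound on blocking; replaces block.on.buffer.full
            props.put("compression.type", "none");
            props.put("timeout.ms", "30000");        // server-side wait for follower acknowledgments
            return props;
        }
    }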
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1a18a456/0100/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html b/0100/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
index 1487063..a740ad0 100644
--- a/0100/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
+++ b/0100/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
@@ -157,8 +157,8 @@ implements <a href="../../../../../org/apache/kafka/clients/producer/Producer.ht
  <p>
  The <code>buffer.memory</code> controls the total amount of memory available
to the producer for buffering. If records
  are sent faster than they can be transmitted to the server then this buffer space will be
exhausted. When the buffer space is
- exhausted additional send calls will block. For uses where you want to avoid any blocking
you can set <code>block.on.buffer.full=false</code> which
- will cause the send call to result in an exception.
+ exhausted additional send calls will block. The time spent blocking is bounded
by <code>max.block.ms</code>, after which
+ a TimeoutException is thrown.
  <p>
  The <code>key.serializer</code> and <code>value.serializer</code>
instruct how to turn the key and value objects the user provides with
  their <code>ProducerRecord</code> into bytes. You can use the included <a
href="../../../../../org/apache/kafka/common/serialization/ByteArraySerializer.html" title="class
in org.apache.kafka.common.serialization"><code>ByteArraySerializer</code></a>
or
@@ -441,7 +441,7 @@ implements <a href="../../../../../org/apache/kafka/clients/producer/Producer.ht
 <dt><span class="strong">Throws:</span></dt>
 <dd><code><a href="../../../../../org/apache/kafka/common/errors/InterruptException.html"
title="class in org.apache.kafka.common.errors">InterruptException</a></code>
- If the thread is interrupted while blocked</dd>
 <dd><code><a href="../../../../../org/apache/kafka/common/errors/SerializationException.html"
title="class in org.apache.kafka.common.errors">SerializationException</a></code>
- If the key or value are not valid objects given the configured serializers</dd>
-<dd><code><a href="../../../../../org/apache/kafka/clients/producer/BufferExhaustedException.html"
title="class in org.apache.kafka.clients.producer">BufferExhaustedException</a></code>
- If <code>block.on.buffer.full=false</code> and the buffer is full.</dd></dl>
+<dd><code><a href="../../../../../org/apache/kafka/common/errors/TimeoutException.html"
title="class in org.apache.kafka.common.errors">TimeoutException</a></code>
- if the time taken for fetching metadata or allocating memory for the record has surpassed
<code>max.block.ms</code>.</dd></dl>
 </li>
 </ul>
 <a name="flush()">

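A sketch of how a caller might act on the exception contract documented above; the producer construction is omitted and the topic name is a placeholder.

    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.InterruptException;
    import org.apache.kafka.common.errors.SerializationException;
    import org.apache.kafka.common.errors.TimeoutException;

    class SendErrorHandlingSketch {
        static void sendOne(Producer<String, String> producer) throws ExecutionException, InterruptedException {
            try {
                producer.send(new ProducerRecord<>("test", "key", "value")).get();
            } catch (SerializationException e) {
                // key or value could not be converted with the configured serializers
            } catch (TimeoutException e) {
                // metadata fetch or buffer allocation exceeded max.block.ms
            } catch (InterruptException e) {
                // the calling thread was interrupted while send() was blocked
            }
        }
    }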
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1a18a456/0100/javadoc/org/apache/kafka/clients/producer/ProducerConfig.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/org/apache/kafka/clients/producer/ProducerConfig.html b/0100/javadoc/org/apache/kafka/clients/producer/ProducerConfig.html
index 30060be..961f6dc 100644
--- a/0100/javadoc/org/apache/kafka/clients/producer/ProducerConfig.html
+++ b/0100/javadoc/org/apache/kafka/clients/producer/ProducerConfig.html
@@ -138,7 +138,7 @@ extends <a href="../../../../../org/apache/kafka/common/config/AbstractConfig.ht
 <td class="colFirst"><code>static <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/String.html?is-external=true"
title="class or interface in java.lang">String</a></code></td>
 <td class="colLast"><code><strong><a href="../../../../../org/apache/kafka/clients/producer/ProducerConfig.html#BLOCK_ON_BUFFER_FULL_CONFIG">BLOCK_ON_BUFFER_FULL_CONFIG</a></strong></code>
 <div class="block"><strong>Deprecated.</strong>&nbsp;
-<div class="block"><i>This config will be removed in a future release. Also,
the <a href="../../../../../org/apache/kafka/clients/producer/ProducerConfig.html#METADATA_FETCH_TIMEOUT_CONFIG"><code>METADATA_FETCH_TIMEOUT_CONFIG</code></a>
is no longer honored when this property is set to true.</i></div>
+<div class="block"><i>This config will be removed in a future release. Please
use <a href="../../../../../org/apache/kafka/clients/producer/ProducerConfig.html#MAX_BLOCK_MS_CONFIG"><code>MAX_BLOCK_MS_CONFIG</code></a>.</i></div>
 </div>
 </td>
 </tr>
@@ -521,7 +521,7 @@ public static final&nbsp;<a href="http://docs.oracle.com/javase/7/docs/api/java/
 <h4>BLOCK_ON_BUFFER_FULL_CONFIG</h4>
 <pre><a href="http://docs.oracle.com/javase/7/docs/api/java/lang/Deprecated.html?is-external=true"
title="class or interface in java.lang">@Deprecated</a>
 public static final&nbsp;<a href="http://docs.oracle.com/javase/7/docs/api/java/lang/String.html?is-external=true"
title="class or interface in java.lang">String</a> BLOCK_ON_BUFFER_FULL_CONFIG</pre>
-<div class="block"><span class="strong">Deprecated.</span>&nbsp;<i>This
config will be removed in a future release. Also, the <a href="../../../../../org/apache/kafka/clients/producer/ProducerConfig.html#METADATA_FETCH_TIMEOUT_CONFIG"><code>METADATA_FETCH_TIMEOUT_CONFIG</code></a>
is no longer honored when this property is set to true.</i></div>
+<div class="block"><span class="strong">Deprecated.</span>&nbsp;<i>This
config will be removed in a future release. Please use <a href="../../../../../org/apache/kafka/clients/producer/ProducerConfig.html#MAX_BLOCK_MS_CONFIG"><code>MAX_BLOCK_MS_CONFIG</code></a>.</i></div>
 <dl><dt><span class="strong">See Also:</span></dt><dd><a
href="../../../../../constant-values.html#org.apache.kafka.clients.producer.ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG">Constant
Field Values</a></dd></dl>
 </li>
 </ul>

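A minimal sketch of what the deprecation points at: configure MAX_BLOCK_MS_CONFIG instead of the deprecated constant (the broker address and timeout value are illustrative).

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    class MaxBlockMsConfigSketch {
        static Properties producerProps() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // placeholder
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "60000");               // preferred setting
            // Deprecated alternative; setting it to true is equivalent to max.block.ms = Long.MAX_VALUE:
            // props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true");
            return props;
        }
    }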
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1a18a456/0100/security.html
----------------------------------------------------------------------
diff --git a/0100/security.html b/0100/security.html
index 3e5085b..2459f54 100644
--- a/0100/security.html
+++ b/0100/security.html
@@ -93,7 +93,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
             <pre>
         #!/bin/bash
         #Step 1
-        keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
+        keytool -keystore server.keystore.jks -alias localhost -validity 365 -keyalg RSA -genkey
         #Step 2
         openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
         keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
@@ -448,74 +448,6 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
           and <a href="#security_sasl_plain_brokerconfig">PLAIN</a> to configure
SASL for the enabled mechanisms.</li>
     </ol>
   </li>
-  <li><h4><a id="security_rolling_upgrade" href="#security_rolling_upgrade">Incorporating
Security Features in a Running Cluster</a></h4>
-          You can secure a running cluster via one or more of the supported protocols discussed
previously. This is done in phases:
-          <p></p>
-          <ul>
-              <li>Incrementally bounce the cluster nodes to open additional secured
port(s).</li>
-              <li>Restart clients using the secured rather than PLAINTEXT port (assuming
you are securing the client-broker connection).</li>
-              <li>Incrementally bounce the cluster again to enable broker-to-broker
security (if this is required)</li>
-            <li>A final incremental bounce to close the PLAINTEXT port.</li>
-          </ul>
-          <p></p>
-          The specific steps for configuring SSL and SASL are described in sections <a
href="#security_ssl">7.2</a> and <a href="#security_sasl">7.3</a>.
-          Follow these steps to enable security for your desired protocol(s).
-          <p></p>
-          The security implementation lets you configure different protocols for both broker-client
and broker-broker communication.
-          These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout
so brokers and/or clients can continue to communicate.
-          <p></p>
-
-          When performing an incremental bounce stop the brokers cleanly via a SIGTERM. It's
also good practice to wait for restarted replicas to return to the ISR list before moving
onto the next node.
-          <p></p>
-          As an example, say we wish to encrypt both broker-client and broker-broker communication
with SSL. In the first incremental bounce, a SSL port is opened on each node:
-          <pre>
-         listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092</pre>
-
-          We then restart the clients, changing their config to point at the newly opened,
secured port:
-
-          <pre>
-        bootstrap.servers = [broker1:9092,...]
-        security.protocol = SSL
-        ...etc</pre>
-
-          In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker
protocol (which will use the same SSL port):
-
-          <pre>
-        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
-        security.inter.broker.protocol=SSL</pre>
-
-          In the final bounce we secure the cluster by closing the PLAINTEXT port:
-
-          <pre>
-        listeners=SSL://broker1:9092
-        security.inter.broker.protocol=SSL</pre>
-
-          Alternatively we might choose to open multiple ports so that different protocols
can be used for broker-broker and broker-client communication. Say we wished to use SSL encryption
throughout (i.e. for broker-broker and broker-client communication) but we'd like to add SASL
authentication to the broker-client connection also. We would achieve this by opening two
additional ports during the first bounce:
-
-          <pre>
-        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093</pre>
-
-          We would then restart the clients, changing their config to point at the newly
opened, SASL & SSL secured port:
-
-          <pre>
-        bootstrap.servers = [broker1:9093,...]
-        security.protocol = SASL_SSL
-        ...etc</pre>
-
-          The second server bounce would switch the cluster to use encrypted broker-broker
communication via the SSL port we previously opened on port 9092:
-
-          <pre>
-        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
-        security.inter.broker.protocol=SSL</pre>
-
-          The final bounce secures the cluster by closing the PLAINTEXT port.
-
-          <pre>
-       listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
-       security.inter.broker.protocol=SSL</pre>
-
-          ZooKeeper can be secured independently of the Kafka cluster. The steps for doing
this are covered in section <a href="#zk_authz_migration">7.5.2</a>.
-  </li>
   <li><h4><a id="saslmechanism_rolling_upgrade" href="#saslmechanism_rolling_upgrade">Modifying
SASL mechanism in a Running Cluster</a></h4>
     <p>SASL mechanism can be modified in a running cluster using the following sequence:</p>
     <ol>
@@ -673,8 +605,77 @@ Suppose you want to add an acl "Principals User:Bob and User:Alice are
allowed t
             In order to remove a principal from producer or consumer role we just need to
pass --remove option. </li>
     </ul>
 
-<h3><a id="zk_authz" href="#zk_authz">7.5 ZooKeeper Authentication</a></h3>
-<h4><a id="zk_authz_new" href="#zk_authz_new">7.5.1 New clusters</a></h4>
+<h3><a id="security_rolling_upgrade" href="#security_rolling_upgrade">7.5 Incorporating
Security Features in a Running Cluster</a></h3>
+    You can secure a running cluster via one or more of the supported protocols discussed
previously. This is done in phases:
+    <p></p>
+    <ul>
+        <li>Incrementally bounce the cluster nodes to open additional secured port(s).</li>
+        <li>Restart clients using the secured rather than PLAINTEXT port (assuming
you are securing the client-broker connection).</li>
+        <li>Incrementally bounce the cluster again to enable broker-to-broker security
(if this is required)</li>
+        <li>A final incremental bounce to close the PLAINTEXT port.</li>
+    </ul>
+    <p></p>
+    The specific steps for configuring SSL and SASL are described in sections <a href="#security_ssl">7.2</a>
and <a href="#security_sasl">7.3</a>.
+    Follow these steps to enable security for your desired protocol(s).
+    <p></p>
+    The security implementation lets you configure different protocols for both broker-client
and broker-broker communication.
+    These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout
so brokers and/or clients can continue to communicate.
+    <p></p>
+
+    When performing an incremental bounce, stop the brokers cleanly via a SIGTERM. It's also
good practice to wait for restarted replicas to return to the ISR list before moving on to
the next node.
+    <p></p>
+    As an example, say we wish to encrypt both broker-client and broker-broker communication
with SSL. In the first incremental bounce, an SSL port is opened on each node:
+          <pre>
+         listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092</pre>
+
+    We then restart the clients, changing their config to point at the newly opened, secured
port:
+
+          <pre>
+        bootstrap.servers = [broker1:9092,...]
+        security.protocol = SSL
+        ...etc</pre>
+
+    In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker
protocol (which will use the same SSL port):
+
+          <pre>
+        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
+        security.inter.broker.protocol=SSL</pre>
+
+    In the final bounce we secure the cluster by closing the PLAINTEXT port:
+
+          <pre>
+        listeners=SSL://broker1:9092
+        security.inter.broker.protocol=SSL</pre>
+
+    Alternatively we might choose to open multiple ports so that different protocols can
be used for broker-broker and broker-client communication. Say we wished to use SSL encryption
throughout (i.e. for broker-broker and broker-client communication) but we'd like to add SASL
authentication to the broker-client connection also. We would achieve this by opening two
additional ports during the first bounce:
+
+          <pre>
+        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093</pre>
+
+    We would then restart the clients, changing their config to point at the newly opened,
SASL & SSL secured port:
+
+          <pre>
+        bootstrap.servers = [broker1:9093,...]
+        security.protocol = SASL_SSL
+        ...etc</pre>
+
+    The second server bounce would switch the cluster to use encrypted broker-broker communication
via the SSL port we previously opened on port 9092:
+
+          <pre>
+        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
+        security.inter.broker.protocol=SSL</pre>
+
+    The final bounce secures the cluster by closing the PLAINTEXT port.
+
+          <pre>
+       listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
+       security.inter.broker.protocol=SSL</pre>
+
+    ZooKeeper can be secured independently of the Kafka cluster. The steps for doing this
are covered in section <a href="#zk_authz_migration">7.6.2</a>.
+
+
+<h3><a id="zk_authz" href="#zk_authz">7.6 ZooKeeper Authentication</a></h3>
+<h4><a id="zk_authz_new" href="#zk_authz_new">7.6.1 New clusters</a></h4>
 To enable ZooKeeper authentication on brokers, there are two necessary steps:
 <ol>
 	<li> Create a JAAS login file and set the appropriate system property to point to
it as described above</li>
@@ -683,7 +684,7 @@ To enable ZooKeeper authentication on brokers, there are two necessary
steps:
 
 The metadata stored in ZooKeeper is such that only brokers will be able to modify the corresponding
znodes, but znodes are world readable. The rationale behind this decision is that the data
stored in ZooKeeper is not sensitive, but inappropriate manipulation of znodes can cause cluster
disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only
brokers and some admin tools need access to ZooKeeper if the new consumer and new producer
are used).
 
-<h4><a id="zk_authz_migration" href="#zk_authz_migration">7.5.2 Migrating clusters</a></h4>
+<h4><a id="zk_authz_migration" href="#zk_authz_migration">7.6.2 Migrating clusters</a></h4>
 If you are running a version of Kafka that does not support security or simply with security
disabled, and you want to make the cluster secure, then you need to execute the following
steps to enable ZooKeeper authentication with minimal disruption to your operations:
 <ol>
 	<li>Perform a rolling restart setting the JAAS login file, which enables brokers to
authenticate. At the end of the rolling restart, brokers are able to manipulate znodes with
strict ACLs, but they will not create znodes with those ACLs</li>
@@ -704,7 +705,7 @@ Here is an example of how to run the migration tool:
 <pre>
 ./bin/zookeeper-security-migration --help
 </pre>
-<h4><a id="zk_authz_ensemble" href="#zk_authz_ensemble">7.5.3 Migrating the ZooKeeper
ensemble</a></h4>
+<h4><a id="zk_authz_ensemble" href="#zk_authz_ensemble">7.6.3 Migrating the ZooKeeper
ensemble</a></h4>
 It is also necessary to enable authentication on the ZooKeeper ensemble. To do it, we need
to perform a rolling restart of the server and set a few properties. Please refer to the ZooKeeper
documentation for more detail:
 <ol>
 	<li><a href="http://zookeeper.apache.org/doc/r3.4.6/zookeeperProgrammers.html#sc_ZooKeeperAccessControl">Apache
ZooKeeper documentation</a></li>

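On the client side, the rolling-upgrade example above boils down to a few configuration changes. A hedged sketch of the final SASL_SSL client settings; host name, port, truststore path and password are placeholders, and SASL/PLAIN is assumed as in section 7.3.

    import java.util.Properties;

    class SecuredClientConfigSketch {
        static Properties saslSslClient() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9093");      // SASL_SSL port opened in the first bounce
            props.put("security.protocol", "SASL_SSL");          // was PLAINTEXT before the migration
            props.put("sasl.mechanism", "PLAIN");                // assumption; any enabled mechanism works
            props.put("ssl.truststore.location", "/var/private/ssl/client.truststore.jks");
            props.put("ssl.truststore.password", "client-truststore-secret");
            return props;
        }
    }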
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1a18a456/0100/upgrade.html
----------------------------------------------------------------------
diff --git a/0100/upgrade.html b/0100/upgrade.html
index d09b9d7..dec0808 100644
--- a/0100/upgrade.html
+++ b/0100/upgrade.html
@@ -164,6 +164,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded
to 0.9
     <li> Altering topic configuration from the kafka-topics.sh script (kafka.admin.TopicCommand)
has been deprecated. Going forward, please use the kafka-configs.sh script (kafka.admin.ConfigCommand)
for this functionality. </li>
     <li> The kafka-consumer-offset-checker.sh (kafka.tools.ConsumerOffsetChecker) has
been deprecated. Going forward, please use kafka-consumer-groups.sh (kafka.admin.ConsumerGroupCommand)
for this functionality. </li>
     <li> The kafka.tools.ProducerPerformance class has been deprecated. Going forward,
please use org.apache.kafka.tools.ProducerPerformance for this functionality (kafka-producer-perf-test.sh
will also be changed to use the new class). </li>
+    <li> The producer config block.on.buffer.full has been deprecated and will be removed
in a future release. Its default value has been changed to false. The KafkaProducer
will no longer throw BufferExhaustedException but instead will use the max.block.ms value to
block, after which it will throw a TimeoutException. If the block.on.buffer.full property is
set to true explicitly, it will set max.block.ms to Long.MAX_VALUE and metadata.fetch.timeout.ms
will not be honored. </li>
 </ul>
 
 <h4><a id="upgrade_82" href="#upgrade_82">Upgrading from 0.8.1 to 0.8.2</a></h4>


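For producers that previously set block.on.buffer.full=true, a hedged migration sketch per the upgrade note above; the finite timeout shown is an arbitrary example.

    import java.util.Properties;

    class BlockOnBufferFullMigrationSketch {
        static Properties migrated() {
            Properties props = new Properties();
            // Before: props.put("block.on.buffer.full", "true");  // deprecated
            // Equivalent behaviour (block indefinitely when the buffer is full):
            props.put("max.block.ms", String.valueOf(Long.MAX_VALUE));
            // Or choose a finite bound and handle the resulting TimeoutException:
            // props.put("max.block.ms", "60000");
            return props;
        }
    }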