kafka-commits mailing list archives

From jun...@apache.org
Subject [1/3] kafka-site git commit: updating docs for 0.9.0.1 release
Date Fri, 19 Feb 2016 17:27:06 GMT
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 2fb26e0a7 -> 7f47d1901


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/7f47d190/090/security.html
----------------------------------------------------------------------
diff --git a/090/security.html b/090/security.html
index 3acbbac..f348798 100644
--- a/090/security.html
+++ b/090/security.html
@@ -34,7 +34,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
 
 <ol>
     <li><h4><a id="security_ssl_key" href="#security_ssl_key">Generate
SSL key and certificate for each Kafka broker</a></h4>
-        The first step of deploying HTTPS is to generate the key and the certificate for
each machine in the cluster. You can use Java’s keytool utility to accomplish this task.
+        The first step of deploying HTTPS is to generate the key and the certificate for
each machine in the cluster. You can use Java's keytool utility to accomplish this task.
         We will generate the key into a temporary keystore initially so that we can export
and sign it later with CA.
         <pre>
         keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey</pre>
@@ -54,7 +54,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
 
         The generated CA is simply a public-private key pair and certificate, and it is intended
to sign other certificates.<br>
 
-        The next step is to add the generated CA to the **clients’ truststore** so that
the clients can trust this CA:
+        The next step is to add the generated CA to the <b>clients' truststore</b> so that the clients can trust this CA:
         <pre>
         keytool -keystore server.truststore.jks -alias CARoot <b>-import</b>
-file ca-cert</pre>
 
@@ -62,7 +62,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
         <pre>
         keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert</pre>
 
-        In contrast to the keystore in step 1 that stores each machine’s own identity,
the truststore of a client stores all the certificates that the client should trust. Importing
a certificate into one’s truststore also means trusting all certificates that are signed
by that certificate. As the analogy above, trusting the government (CA) also means trusting
all passports (certificates) that it has issued. This attribute is called the chain of trust,
and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all
certificates in the cluster with a single CA, and have all machines share the same truststore
that trusts the CA. That way all machines can authenticate all other machines.</li>
+        In contrast to the keystore in step 1 that stores each machine's own identity, the
truststore of a client stores all the certificates that the client should trust. Importing
a certificate into one's truststore also means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all
passports (certificates) that it has issued. This attribute is called the chain of trust,
and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all
certificates in the cluster with a single CA, and have all machines share the same truststore
that trusts the CA. That way all machines can authenticate all other machines.</li>
 
     <li><h4><a id="security_ssl_signing" href="#security_ssl_signing">Signing
the certificate</a></h4>
         The next step is to sign all certificates generated by step 1 with the CA generated
in step 2. First, you need to export the certificate from the keystore:
@@ -206,7 +206,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
         principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
     };
 
-    # Zookeeper client authentication
+    // Zookeeper client authentication
     Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
@@ -216,8 +216,9 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
     };</pre>
 
         </li>
-        <li>Pass the name of the JAAS file as a JVM parameter to each Kafka broker:
+        <li>Pass the JAAS and optionally the krb5 file locations as JVM parameters
to each Kafka broker (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a>
for more details):
             <pre>
+    -Djava.security.krb5.conf=/etc/kafka/krb5.conf
     -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre>
         </li>
         <li>Make sure the keytabs configured in the JAAS file are readable by the operating
system user who is starting kafka broker.</li>
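In practice these flags are often passed through the KAFKA_OPTS environment variable, which the stock start scripts pick up. The paths below are the ones from the example above; whether you use KAFKA_OPTS or edit the start script is an installation choice:

```shell
# Pass the JAAS and krb5 locations to the broker JVM via KAFKA_OPTS
# (paths match the example above; adjust for your installation).
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf \
 -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
# then start the broker as usual, e.g.:
# bin/kafka-server-start.sh config/server.properties
```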
@@ -263,8 +264,9 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
         useTicketCache=true;
     };</pre>
             </li>
-            <li>Pass the name of the JAAS file as a JVM parameter to the client JVM:
-        <pre>
+            <li>Pass the JAAS and optionally the krb5 file locations as JVM parameters
to each client JVM (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a>
for more details):
+            <pre>
+    -Djava.security.krb5.conf=/etc/kafka/krb5.conf
     -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</pre></li>
             <li>Make sure the keytabs configured in the kafka_client_jaas.conf are
readable by the operating system user who is starting kafka client.</li>
             <li>Configure the following properties in producer.properties or consumer.properties:
@@ -273,6 +275,75 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled
but
     sasl.kerberos.service.name=kafka</pre>
             </li>
         </ol></li>
+
+    <li><h4><a id="security_rolling_upgrade" href="#security_rolling_upgrade">Incorporating
Security Features in a Running Cluster</a></h4>
+        You can secure a running cluster via one or more of the supported protocols discussed
previously. This is done in phases:
+        <p></p>
+        <ul>
+            <li>Incrementally bounce the cluster nodes to open additional secured port(s).</li>
+            <li>Restart clients using the secured rather than PLAINTEXT port (assuming
you are securing the client-broker connection).</li>
+            <li>Incrementally bounce the cluster again to enable broker-to-broker security
(if this is required)</li>
+            <li>A final incremental bounce to close the PLAINTEXT port.</li>
+        </ul>
+        <p></p>
+        The specific steps for configuring SSL and SASL are described in sections <a href="#security_ssl">7.2</a>
and <a href="#security_sasl">7.3</a>.
+        Follow these steps to enable security for your desired protocol(s).
+        <p></p>
+        The security implementation lets you configure different protocols for both broker-client
and broker-broker communication.
+        These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout
so brokers and/or clients can continue to communicate.
+        <p></p>
+
+        When performing an incremental bounce, stop the brokers cleanly via SIGTERM. It's also good practice to wait for restarted replicas to return to the ISR list before moving on to the next node.
+        <p></p>
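One way to script a single node's bounce step is sketched below. The pid file path, ZooKeeper address and script locations are assumptions for illustration, and an empty --under-replicated-partitions listing is used as a proxy for "all replicas back in the ISR":

```shell
# Sketch of one node's bounce during a rolling restart (pid file,
# zookeeper address and script paths are illustrative assumptions).
bounce_broker() {
  kill -TERM "$(cat /var/run/kafka/kafka.pid)"        # clean shutdown
  bin/kafka-server-start.sh -daemon config/server.properties
  # wait until no partitions are under-replicated before the next node
  while bin/kafka-topics.sh --zookeeper zk1:2181 --describe \
        --under-replicated-partitions | grep -q .; do
    sleep 5
  done
}
```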
+        As an example, say we wish to encrypt both broker-client and broker-broker communication with SSL. In the first incremental bounce, an SSL port is opened on each node:
+        <pre>
+         listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092</pre>
+
+        We then restart the clients, changing their config to point at the newly opened,
secured port:
+
+        <pre>
+        bootstrap.servers = [broker1:9092,...]
+        security.protocol = SSL
+        ...etc</pre>
+
+        In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker
protocol (which will use the same SSL port):
+
+        <pre>
+        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
+        security.inter.broker.protocol=SSL</pre>
+
+        In the final bounce we secure the cluster by closing the PLAINTEXT port:
+
+        <pre>
+        listeners=SSL://broker1:9092
+        security.inter.broker.protocol=SSL</pre>
+
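After each bounce, a newly opened SSL port can be sanity-checked with openssl's s_client (hostname and port taken from the example above); a successful handshake prints the broker's certificate chain:

```shell
# Quick handshake check against a broker's SSL listener (sketch;
# broker1:9092 is the host/port from the example above).
check_ssl_port() {
  openssl s_client -connect "$1" </dev/null 2>/dev/null \
    | grep "Certificate chain"
}
# usage: check_ssl_port broker1:9092
```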
+        Alternatively we might choose to open multiple ports so that different protocols can be used for broker-broker and broker-client communication. Say we wish to use SSL encryption throughout (i.e. for broker-broker and broker-client communication) but also want to add SASL authentication to the broker-client connection. We would achieve this by opening two additional ports during the first bounce:
+
+        <pre>
+        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093</pre>
+
+        We would then restart the clients, changing their config to point at the newly opened,
SASL & SSL secured port:
+
+        <pre>
+        bootstrap.servers = [broker1:9093,...]
+        security.protocol = SASL_SSL
+        ...etc</pre>
+
+        The second server bounce would switch the cluster to use encrypted broker-broker
communication via the SSL port we previously opened on port 9092:
+
+        <pre>
+        listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
+        security.inter.broker.protocol=SSL</pre>
+
+        The final bounce secures the cluster by closing the PLAINTEXT port.
+
+        <pre>
+       listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
+       security.inter.broker.protocol=SSL</pre>
+
+        ZooKeeper can be secured independently of the Kafka cluster. The steps for doing
this are covered in section <a href="#zk_authz_migration">7.5.2</a>.
+    </li>
 </ol>
 
 <h3><a id="security_authz" href="#security_authz">7.4 Authorization and ACLs</a></h3>
@@ -283,6 +354,9 @@ One can also add super users in broker.properties like the following (note
that
 By default, the SSL user name will be of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown".
One can change that by setting a customized PrincipalBuilder in broker.properties like the
following.
 <pre>principal.builder.class=CustomizedPrincipalBuilderClass</pre>
 By default, the SASL user name will be the primary part of the Kerberos principal. One can
change that by setting <code>sasl.kerberos.principal.to.local.rules</code> to
a customized rule in broker.properties.
+The format of <code>sasl.kerberos.principal.to.local.rules</code> is a list where
each rule works in the same way as the auth_to_local in <a href="http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos
configuration file (krb5.conf)</a>. Each rule starts with RULE: and contains an expression in the format [n:string](regexp)s/pattern/replacement/g. See the Kerberos documentation for
more details. An example of adding a rule to properly translate user@MYDOMAIN.COM to user
while also keeping the default rule in place is:
+<pre>sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT</pre>
+
 <h4><a id="security_authz_cli" href="#security_authz_cli">Command Line Interface</a></h4>
 Kafka Authorization management CLI can be found under bin directory with all the other CLIs.
The CLI script is called <b>kafka-acls.sh</b>. Following lists all the options
that the script supports:
 <p></p>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/7f47d190/090/upgrade.html
----------------------------------------------------------------------
diff --git a/090/upgrade.html b/090/upgrade.html
index 98ac570..19d9942 100644
--- a/090/upgrade.html
+++ b/090/upgrade.html
@@ -19,7 +19,10 @@
 
 <h4><a id="upgrade_9" href="#upgrade_9">Upgrading from 0.8.0, 0.8.1.X or 0.8.2.X
to 0.9.0.0</a></h4>
 
-0.9.0.0 has <a href="#upgrade_9_breaking">potential breaking changes</a> (please
review before upgrading) and an inter-broker protocol change from previous versions. For a
rolling upgrade:
+0.9.0.0 has <a href="#upgrade_9_breaking">potential breaking changes</a> (please
review before upgrading) and an inter-broker protocol change from previous versions. This
means that upgraded brokers and clients may not be compatible with older versions. It is important
that you upgrade your Kafka cluster before upgrading your clients. If you are using MirrorMaker, downstream clusters should be upgraded first as well.
+
+<p><b>For a rolling upgrade:</b></p>
+
 <ol>
 	<li> Update server.properties file on all brokers and add the following property:
inter.broker.protocol.version=0.8.2.X </li>
 	<li> Upgrade the brokers. This can be done a broker at a time by simply bringing it
down, updating the code, and restarting it. </li>
@@ -39,7 +42,7 @@
     <li> Broker IDs above 1000 are now reserved by default to automatically assigned
broker IDs. If your cluster has existing broker IDs above that threshold make sure to increase
the reserved.broker.max.id broker configuration property accordingly. </li>
     <li> Configuration parameter replica.lag.max.messages was removed. Partition leaders
will no longer consider the number of lagging messages when deciding which replicas are in
sync. </li>
     <li> Configuration parameter replica.lag.time.max.ms now refers not just to the
time passed since last fetch request from replica, but also to time since the replica last
caught up. Replicas that are still fetching messages from leaders but did not catch up to
the latest messages in replica.lag.time.max.ms will be considered out of sync. </li>
-    <li> Configuration parameter log.cleaner.enable is now true by default. This means
topics with a cleanup.policy=compact will now be compacted by default, and 128 MB of heap
will be allocated to the cleaner process via log.cleaner.dedupe.buffer.size. You may want
to review log.cleaner.dedupe.buffer.size and the other log.cleaner configuration values based
on your usage of compacted topics. </li>
+    <li> Compacted topics no longer accept messages without a key, and the producer throws an exception if this is attempted. In 0.8.x, a message without a key would cause the log compaction thread to subsequently complain and quit (and stop compacting all compacted topics). </li>
     <li> MirrorMaker no longer supports multiple target clusters. As a result it will
only accept a single --consumer.config parameter. To mirror multiple source clusters, you
will need at least one MirrorMaker instance per source cluster, each with its own consumer
configuration. </li>
     <li> Tools packaged under <em>org.apache.kafka.clients.tools.*</em>
have been moved to <em>org.apache.kafka.tools.*</em>. All included scripts will
still function as usual, only custom code directly importing these classes will be affected.
</li>
     <li> The default Kafka JVM performance options (KAFKA_JVM_PERFORMANCE_OPTS) have
been changed in kafka-run-class.sh. </li>
@@ -49,6 +52,14 @@
    <li> By default all command line tools will print all logging messages to stderr instead of stdout. </li>
 </ul>
 
+<h5><a id="upgrade_901_notable" href="#upgrade_901_notable">Notable changes in
0.9.0.1</a></h5>
+
+<ul>
+    <li> The new broker id generation feature can be disabled by setting broker.id.generation.enable to false. </li>
+    <li> Configuration parameter log.cleaner.enable is now true by default. This means
topics with a cleanup.policy=compact will now be compacted by default, and 128 MB of heap
will be allocated to the cleaner process via log.cleaner.dedupe.buffer.size. You may want
to review log.cleaner.dedupe.buffer.size and the other log.cleaner configuration values based
on your usage of compacted topics. </li>
+    <li> The default value of the configuration parameter fetch.min.bytes for the new consumer is now 1. </li>
+</ul>
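Taken together, the broker-side settings above can be reviewed in server.properties. The values shown are the stated defaults, spelled out only for illustration (fetch.min.bytes is a consumer setting and belongs in consumer configuration instead):

```
# 0.9.0.1 broker defaults relevant to the notes above (illustrative)
broker.id.generation.enable=true
log.cleaner.enable=true
# 128 MB dedupe buffer for the log cleaner
log.cleaner.dedupe.buffer.size=134217728
```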
+
 <h5>Deprecations in 0.9.0.0</h5>
 
 <ul>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/7f47d190/downloads.html
----------------------------------------------------------------------
diff --git a/downloads.html b/downloads.html
index 656e738..3c303cd 100644
--- a/downloads.html
+++ b/downloads.html
@@ -1,10 +1,29 @@
 <!--#include virtual="includes/header.html" -->
 
 <h1>Releases</h1>
-0.9.0.0 is the latest release. The current stable version is 0.8.2.2.
+0.9.0.1 is the latest release. The current stable version is 0.9.0.1.
 
 <p>
 You can verify your download by following these <a href="http://www.apache.org/info/verification.html">procedures</a>
and using these <a href="http://kafka.apache.org/KEYS">KEYS</a>.
+<h3>0.9.0.1</h3>
+<ul>
+  <li>
+    <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.1/RELEASE_NOTES.html">Release
Notes</a>
+  </li>
+   <li>
+    Source download: <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.1/kafka-0.9.0.1-src.tgz">kafka-0.9.0.1-src.tgz</a>
(<a href="https://dist.apache.org/repos/dist/release/kafka/0.9.0.1/kafka-0.9.0.1-src.tgz.asc">asc</a>,
<a href="https://dist.apache.org/repos/dist/release/kafka/0.9.0.1/kafka-0.9.0.1-src.tgz.md5">md5</a>)
+  </li>
+   <li>
+    Binary downloads:
+    <ul>
+      <li>Scala 2.10 &nbsp;- <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz">kafka_2.10-0.9.0.1.tgz</a>
(<a href="https://dist.apache.org/repos/dist/release/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz.asc">asc</a>,
<a href="https://dist.apache.org/repos/dist/release/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz.md5">md5</a>)
+      </li>
+      <li>Scala 2.11 &nbsp;- <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz">kafka_2.11-0.9.0.1.tgz</a>
(<a href="https://dist.apache.org/repos/dist/release/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz.asc">asc</a>,
<a href="https://dist.apache.org/repos/dist/release/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz.md5">md5</a>)
+      </li>
+    </ul>
+We build for multiple versions of Scala. This only matters if you are using Scala and you
want a version built for the same Scala version you use. Otherwise any version should work
(2.11 is recommended).
+  </li>
+</ul>
 <h3>0.9.0.0</h3>
 <ul>
   <li>

