kafka-commits mailing list archives

From jun...@apache.org
Subject kafka git commit: trivial doc changes
Date Fri, 13 Nov 2015 18:33:32 GMT
Repository: kafka
Updated Branches:
  refs/heads/0.9.0 3e133c4ee -> 26aae8400


trivial doc changes


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/26aae840
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/26aae840
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/26aae840

Branch: refs/heads/0.9.0
Commit: 26aae840081b398772261a3c34da7e3bdd8c5341
Parents: 3e133c4
Author: Jun Rao <junrao@gmail.com>
Authored: Fri Nov 13 10:31:33 2015 -0800
Committer: Jun Rao <junrao@gmail.com>
Committed: Fri Nov 13 10:32:42 2015 -0800

----------------------------------------------------------------------
 docs/api.html           |  2 +-
 docs/documentation.html |  2 +-
 docs/security.html      | 12 +++++++-----
 3 files changed, 9 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/26aae840/docs/api.html
----------------------------------------------------------------------
diff --git a/docs/api.html b/docs/api.html
index 3aad872..9b739da 100644
--- a/docs/api.html
+++ b/docs/api.html
@@ -155,4 +155,4 @@ As of the 0.9.0 release we have added a replacement for our existing simple and
 </pre>
 
 Examples showing how to use the producer are given in the
-<a href="http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/producer/KafkaConsumer.html" title="Kafka 0.9.0 Javadoc">javadocs</a>.
+<a href="http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html" title="Kafka 0.9.0 Javadoc">javadocs</a>.

http://git-wip-us.apache.org/repos/asf/kafka/blob/26aae840/docs/documentation.html
----------------------------------------------------------------------
diff --git a/docs/documentation.html b/docs/documentation.html
index 69b9ba5..29376e0 100644
--- a/docs/documentation.html
+++ b/docs/documentation.html
@@ -116,7 +116,7 @@ Prior releases: <a href="/07/documentation.html">0.7.x</a>, <a href="/08/documen
             <li><a href="#security_authz">7.4 Authorization and ACLs</a></li>
             <li><a href="#zk_authz">7.5 ZooKeeper Authentication</a></li>
             <ul>
-                <li><a href="zk_authz_new"</li>
+                <li><a href="zk_authz_new"</a></li>
                 <li><a href="zk_authz_migration">Migrating Clusters</a></li>
                 <li><a href="zk_authz_ensemble">Migrating the ZooKeeper Ensemble</a></li>
             </ul>

http://git-wip-us.apache.org/repos/asf/kafka/blob/26aae840/docs/security.html
----------------------------------------------------------------------
diff --git a/docs/security.html b/docs/security.html
index f4c8668..210eefe 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -20,7 +20,7 @@ In release 0.9.0.0, the Kafka community added a number of features that, used ei
 <ol>
     <li>Authenticating clients (Producers and consumers) connections to brokers, using either SSL or SASL (Kerberos)</li>
     <li>Authorizing read / write operations by clients</li>
-    <li>Encryption of data sent between brokers and clients, or between brokers, using SSL</li>
+    <li>Encryption of data sent between brokers and clients, or between brokers, using SSL (Note there is performance degradation in the clients when SSL is enabled. The magnitude of the degradation depends on the CPU type.)</li>
     <li>Authenticate brokers connecting to ZooKeeper</li>
     <li>Security is optional - non-secured clusters are supported, as well as a mix of authenticated, unauthenticated, encrypted and non-encrypted clients.</li>
     <li>Authorization is pluggable and supports integration with external authorization services</li>
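The client-to-broker SSL features in this list are wired up through broker configuration. As a hedged sketch (property names as used in the 0.9.0 security docs; the host, paths and passwords are placeholder assumptions):

```
# Hypothetical broker settings sketching an SSL listener alongside the
# default PLAINTEXT one; host, paths and passwords are placeholders.
listeners=PLAINTEXT://broker1:9092,SSL://broker1:9093
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
```

Clients would point at the SSL port with matching `ssl.truststore.*` settings of their own.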
@@ -54,7 +54,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
         The next step is to add the generated CA to the **clients’ truststore** so that the clients can trust this CA:
         <pre>keytool -keystore server.truststore.jks -alias CARoot <b>-import</b> -file ca-cert</pre>
 
-        <b>Note:</b> If you configure Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" on <a href="#config_broker">Kafka broker config</a> then you must provide a truststore for kafka broker as well and it should have all the CA certificates that clients keys signed by.
+        <b>Note:</b> If you configure Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" in the <a href="#config_broker">Kafka broker config</a>, then you must provide a truststore for the Kafka brokers as well, and it should have all the CA certificates that client keys were signed by.
         <pre>keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert</pre>
 
         In contrast to the keystore in step 1 that stores each machine’s own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.</li>
@@ -261,7 +261,10 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
 </ol>
 
 <h3><a id="security_authz">7.4 Authorization and ACLs</a></h3>
-Kafka ships with a pluggable Authorizer and an out-of-box authorizer implementation that uses zookeeper to store all the acls. Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H On Resource R". You can read more about the acl structure on KIP-11. In order to add, remove or list acls you can use the Kafka authorizer CLI.
+Kafka ships with a pluggable Authorizer and an out-of-box authorizer implementation that uses zookeeper to store all the acls. Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H On Resource R". You can read more about the acl structure on KIP-11. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if a Resource R has no associated acl, no one is allowed to access R. If you want to change that behavior, you can include the following in broker.properties.
+<pre>allow.everyone.if.no.acl.found=true</pre>
+One can also add super users in broker.properties like the following.
+<pre>super.users=User:Bob,User:Alice</pre>
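The out-of-box authorizer mentioned above is itself switched on through broker configuration. A minimal sketch combining it with the two options shown (the class name is the one the 0.9.0 codebase ships; treat it as an assumption for other versions):

```
# Hypothetical broker.properties fragment enabling the bundled
# ZooKeeper-backed authorizer along with the options described above.
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:Bob,User:Alice
```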
 <h4>Command Line Interface</h4>
 Kafka Authorization management CLI can be found under bin directory with all the other CLIs. The CLI script is called <b>kafka-acls.sh</b>. Following lists all the options that the script supports:
 <p></p>
@@ -428,5 +431,4 @@ It is also necessary to enable authentication on the ZooKeeper ensemble. To do i
 <ol>
 	<li><a href="http://zookeeper.apache.org/doc/r3.4.6/zookeeperProgrammers.html#sc_ZooKeeperAccessControl">Apache ZooKeeper documentation</a></li>
 	<li><a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL">Apache ZooKeeper wiki</a></li>
-	<li><a href="http://www.cloudera.com/content/www/en-us/documentation/cdh/5-1-x/CDH5-Security-Guide/cdh5sg_zookeeper_security.html">Cloudera ZooKeeper security configuration</a></li>
-</ol>
\ No newline at end of file
+</ol>

