kafka-commits mailing list archives

From ewe...@apache.org
Subject kafka git commit: MINOR: Security doc fixes
Date Sat, 09 Jan 2016 00:09:29 GMT
Repository: kafka
Updated Branches:
  refs/heads/0.9.0 2861665d1 -> c355f0c35


MINOR: Security doc fixes

Simple fixes for issues that have tripped up users.

Author: Ismael Juma <ismael@juma.me.uk>

Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>

Closes #745 from ijuma/security-doc-improvements

(cherry picked from commit 36f5c46a5cf510001a4990db6199beb37f215007)
Signed-off-by: Ewen Cheslack-Postava <me@ewencp.org>


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/c355f0c3
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/c355f0c3
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/c355f0c3

Branch: refs/heads/0.9.0
Commit: c355f0c3528c3f59fc97fab489cb6c5df43d96e4
Parents: 2861665
Author: Ismael Juma <ismael@juma.me.uk>
Authored: Fri Jan 8 16:08:38 2016 -0800
Committer: Ewen Cheslack-Postava <me@ewencp.org>
Committed: Fri Jan 8 16:09:02 2016 -0800

----------------------------------------------------------------------
 docs/security.html | 63 +++++++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 31 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/c355f0c3/docs/security.html
----------------------------------------------------------------------
diff --git a/docs/security.html b/docs/security.html
index 848031b..6307207 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -113,23 +113,23 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
 
         The following SSL configs are needed on the broker side:
         <pre>
-        ssl.keystore.location = /var/private/ssl/kafka.server.keystore.jks
-        ssl.keystore.password = test1234
-        ssl.key.password = test1234
-        ssl.truststore.location = /var/private/ssl/kafka.server.truststore.jks
-        ssl.truststore.password = test1234</pre>
+        ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
+        ssl.keystore.password=test1234
+        ssl.key.password=test1234
+        ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
+        ssl.truststore.password=test1234</pre>
 
         Optional settings that are worth considering:
         <ol>
-            <li>ssl.client.auth = none ("required" => client authentication is required, "requested" => client authentication is requested and client without certs can still connect. The usage of "requested" is discouraged as it provides a false sense of security and misconfigured clients will still connect successfully.)</li>
-            <li>ssl.cipher.suites = A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. (Default is an empty list)</li>
-            <li>ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1 (list out the SSL protocols that you are going to accept from clients. Do note that SSL is deprecated in favor of TLS and using SSL in production is not recommended)</li>
-            <li>ssl.keystore.type = JKS</li>
-            <li>ssl.truststore.type = JKS</li>
+            <li>ssl.client.auth=none ("required" => client authentication is required, "requested" => client authentication is requested and clients without certs can still connect. The usage of "requested" is discouraged as it provides a false sense of security, since misconfigured clients will still connect successfully.)</li>
+            <li>ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol. (Default is an empty list)</li>
+            <li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list out the SSL protocols that you are going to accept from clients. Do note that SSL is deprecated in favor of TLS and using SSL in production is not recommended)</li>
+            <li>ssl.keystore.type=JKS</li>
+            <li>ssl.truststore.type=JKS</li>
         </ol>
         If you want to enable SSL for inter-broker communication, add the following to the broker properties file (it defaults to PLAINTEXT)
         <pre>
-        security.inter.broker.protocol = SSL</pre>
+        security.inter.broker.protocol=SSL</pre>
 
         <p>
         Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the <a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">JCE Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed in the JDK/JRE. See the
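As an aside on the broker-side hunk above: the change from <code>key = value</code> to <code>key=value</code> is about consistency, since java.util.Properties trims whitespace around the separator either way. A minimal Python stand-in for that parsing rule (a sketch only; <code>parse_line</code> is an illustrative helper, not Kafka or JDK code):

```python
# Sketch: java.util.Properties-style parsing of "key = value" lines.
# Whitespace around '=' is trimmed, so the old spaced doc examples
# still parsed to the intended values; the doc change normalizes on
# the conventional unspaced "key=value" form.

def parse_line(line):
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

old_style = "ssl.keystore.location = /var/private/ssl/kafka.server.keystore.jks"
new_style = "ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks"

# Both forms yield the same key/value pair.
assert parse_line(old_style) == parse_line(new_style)
```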
@@ -155,22 +155,22 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
         SSL is supported only for the new Kafka Producer and Consumer; the older API is not supported. The configs for SSL will be the same for both producer and consumer.<br>
         If client authentication is not required in the broker, then the following is a minimal configuration example:
         <pre>
-        security.protocol = SSL
-        ssl.truststore.location = "/var/private/ssl/kafka.client.truststore.jks"
-        ssl.truststore.password = "test1234"</pre>
+        security.protocol=SSL
+        ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
+        ssl.truststore.password=test1234</pre>
 
         If client authentication is required, then a keystore must be created like in step 1 and the following must also be configured:
         <pre>
-        ssl.keystore.location = "/var/private/ssl/kafka.client.keystore.jks"
-        ssl.keystore.password = "test1234"
-        ssl.key.password = "test1234"</pre>
+        ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
+        ssl.keystore.password=test1234
+        ssl.key.password=test1234</pre>
         Other configuration settings that may also be needed depending on your requirements and the broker configuration:
             <ol>
                 <li>ssl.provider (Optional). The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</li>
                 <li>ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol.</li>
                 <li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side</li>
-                <li>ssl.truststore.type = "JKS"</li>
-                <li>ssl.keystore.type = "JKS"</li>
+                <li>ssl.truststore.type=JKS</li>
+                <li>ssl.keystore.type=JKS</li>
             </ol>
 <br>
         Examples using console-producer and console-consumer:
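The quoting change in this hunk is the one that actually trips users: java.util.Properties does not strip quote characters, so the old quoted form set the password to the literal string <code>"test1234"</code>, quotes included. A Python stand-in mimicking that behavior (a sketch only; <code>parse_properties</code> is an illustrative helper, not the JDK implementation):

```python
# Sketch: why the quotes were dropped from the client SSL configs.
# java.util.Properties keeps quote characters as part of the value,
# so the old quoted form stored "test1234" WITH the surrounding
# quotes, which will never match the real truststore password.

def parse_properties(text):
    """Minimal stand-in for java.util.Properties line parsing:
    whitespace around '=' is trimmed, quotes are kept verbatim."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith(("#", "!")):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

old_doc = 'ssl.truststore.password = "test1234"'
new_doc = 'ssl.truststore.password=test1234'

# The quotes survive parsing, so the old form breaks the handshake.
assert parse_properties(old_doc)["ssl.truststore.password"] == '"test1234"'
assert parse_properties(new_doc)["ssl.truststore.password"] == "test1234"
```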
@@ -336,7 +336,7 @@ Kafka Authorization management CLI can be found under bin directory with all the
         <td>Resource</td>
     </tr>
     <tr>
-        <td>--consumer-group [group-name]</td>
+        <td>--group [group-name]</td>
         <td>Specifies the consumer-group as resource.</td>
         <td></td>
         <td>Resource</td>
@@ -355,13 +355,13 @@ Kafka Authorization management CLI can be found under bin directory with all the
     </tr>
     <tr>
         <td>--allow-host</td>
-        <td>Host from which principals listed in --allow-principal will have access.</td>
+        <td>IP address from which principals listed in --allow-principal will have access.</td>
         <td> if --allow-principal is specified defaults to * which translates to "all hosts"</td>
         <td>Host</td>
     </tr>
     <tr>
         <td>--deny-host</td>
-        <td>Host from which principals listed in --deny-principal will be denied access.</td>
+        <td>IP address from which principals listed in --deny-principal will be denied access.</td>
         <td>if --deny-principal is specified defaults to * which translates to "all hosts"</td>
         <td>Host</td>
     </tr>
@@ -390,25 +390,26 @@ Kafka Authorization management CLI can be found under bin directory with all the
 <h4><a id="security_authz_examples" href="#security_authz_examples">Examples</a></h4>
 <ul>
     <li><b>Adding Acls</b><br>
-Suppose you want to add an acl "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from Host1 and Host2". You can do that by executing the CLI with following options:
-        <pre>bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host Host1 --allow-host Host2 --operation Read --operation Write --topic Test-topic</pre>
-        By default all principals that don't have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal we will have to use the --deny-principal and --deny-host option. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from host bad-host we can do so using following commands:
-        <pre>bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host * --deny-principal User:BadBob --deny-host bad-host --operation Read --topic Test-topic</pre>
-        Above examples add acls to a topic by specifying --topic [topic-name] as the resource option. Similarly user can add acls to cluster by specifying --cluster and to a consumer group by specifying --consumer-group [group-name].</li>
+Suppose you want to add an acl "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with the following options:
+        <pre>bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic</pre>
+        By default, all principals that don't have an explicit acl allowing access to a resource for an operation are denied. In the rare cases where an allow acl grants access to all but some principal, we have to use the --deny-principal and --deny-host options. For example, if we want to allow all users to Read from Test-topic but deny only User:BadBob from IP 198.51.100.3, we can do so using the following commands:
+        <pre>bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host * --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic</pre>
+        Note that <code>--allow-host</code> and <code>--deny-host</code> only support IP addresses (hostnames are not supported).
+        The above examples add acls to a topic by specifying --topic [topic-name] as the resource option. Similarly, users can add acls to the cluster by specifying --cluster and to a consumer group by specifying --group [group-name].</li>
 
     <li><b>Removing Acls</b><br>
            Removing acls is pretty much the same. The only difference is that instead of the --add option users have to specify the --remove option. To remove the acls added by the first example above, we can execute the CLI with the following options:
-           <pre> bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host Host1 --allow-host Host2 --operation Read --operation Write --topic Test-topic </pre></li>
+           <pre> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic </pre></li>
 
     <li><b>List Acls</b><br>
            We can list acls for any resource by specifying the --list option with the resource. To list all acls for Test-topic, we can execute the CLI with the following options:
-            <pre>bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic</pre></li>
+            <pre>bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic</pre></li>
 
     <li><b>Adding or removing a principal as producer or consumer</b><br>
            The most common use case for acl management is adding/removing a principal as a producer or consumer, so we added convenience options to handle these cases. In order to add User:Bob as a producer of Test-topic, we can execute the following command:
-           <pre> bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic Test-topic</pre>
+           <pre> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic Test-topic</pre>
            Similarly, to add Alice as a consumer of Test-topic with consumer group Group-1, we just have to pass the --consumer option:
-           <pre> bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --consumer --topic test-topic --consumer-group Group-1 </pre>
+           <pre> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --consumer --topic test-topic --group Group-1 </pre>
            Note that for the consumer option we must also specify the consumer group.
            In order to remove a principal from a producer or consumer role, we just need to pass the --remove option. </li>
     </ul>

