kafka-commits mailing list archives

From ewe...@apache.org
Subject kafka-site git commit: Update consumer_config, kafka_config and producer_config
Date Tue, 12 Jan 2016 18:47:18 GMT
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site d0ddbb47b -> 2fb26e0a7


Update consumer_config, kafka_config and producer_config

Files generated from the 0.9.0 branch of the kafka repo. Excluded
log.cleaner changes as they only apply once 0.9.0.1 is released.


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/2fb26e0a
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/2fb26e0a
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/2fb26e0a

Branch: refs/heads/asf-site
Commit: 2fb26e0a720d17786fd090192992fadd3bd951bb
Parents: d0ddbb4
Author: Ismael Juma <ismael@juma.me.uk>
Authored: Sat Jan 9 14:15:13 2016 +0000
Committer: Ismael Juma <ismael@juma.me.uk>
Committed: Sat Jan 9 14:15:13 2016 +0000

----------------------------------------------------------------------
 090/consumer_config.html | 10 +++++-----
 090/kafka_config.html    | 18 +++++++++---------
 090/producer_config.html | 12 ++++++------
 3 files changed, 20 insertions(+), 20 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2fb26e0a/090/consumer_config.html
----------------------------------------------------------------------
diff --git a/090/consumer_config.html b/090/consumer_config.html
index c894e71..5a2bf9c 100644
--- a/090/consumer_config.html
+++ b/090/consumer_config.html
@@ -48,19 +48,19 @@
 <tr>
 <td>sasl.kerberos.service.name</td><td>The Kerberos principal name that
Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
-<td>security.protocol</td><td>Protocol used to communicate with brokers.
Currently only PLAINTEXT and SSL are supported.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers.
Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
 <tr>
 <td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF)
to use when sending data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL
connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default.</td><td>list</td><td>[TLSv1.2,
TLSv1.1, TLSv1]</td><td></td><td>medium</td></tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, TLSv1]</td><td></td><td>medium</td></tr>
 <tr>
-<td>ssl.keystore.type</td><td>The file format of the key store file. This
is optional for client. Default value is JKS</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This
is optional for client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext.
Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS,
TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage
is discouraged due to known security vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.provider</td><td>The name of the security provider used for SSL
connections. Default value is the default security provider of the JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
-<td>ssl.truststore.type</td><td>The file format of the trust store file.
Default value is JKS.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
 <tr>
 <td>auto.commit.interval.ms</td><td>The frequency in milliseconds that
the consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code>
is set to <code>true</code>.</td><td>long</td><td>5000</td><td>[0,...]</td><td>low</td></tr>
 <tr>
@@ -82,7 +82,7 @@
 <tr>
 <td>retry.backoff.ms</td><td>The amount of time to wait before attempting
to retry a failed fetch request to a given topic partition. This avoids repeated fetching-and-failing
in a tight loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
 <tr>
-<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path. Default
is /usr/bin/kinit</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
 <tr>
 <td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time
between refresh attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
 <tr>
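
The consumer rows above tighten the security.protocol wording (all four values: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL) and drop the redundant default-value sentences for ssl.enabled.protocols, ssl.keystore.type, ssl.truststore.type and sasl.kerberos.kinit.cmd. As a hedged sketch only -- the broker address, truststore path, password, group and topic below are placeholders, not part of this commit -- this is how those options are typically wired into the new 0.9.0 consumer:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SecureConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9093");   // placeholder address
            props.put("group.id", "example-group");           // placeholder group
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            // security.protocol accepts PLAINTEXT, SSL, SASL_PLAINTEXT or SASL_SSL
            props.put("security.protocol", "SSL");
            // ssl.truststore.type defaults to JKS, so it is omitted here
            props.put("ssl.truststore.location", "/path/to/client.truststore.jks"); // placeholder
            props.put("ssl.truststore.password", "changeit");                       // placeholder
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("example-topic"));  // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }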

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2fb26e0a/090/kafka_config.html
----------------------------------------------------------------------
diff --git a/090/kafka_config.html b/090/kafka_config.html
index 4cac226..f851160 100644
--- a/090/kafka_config.html
+++ b/090/kafka_config.html
@@ -200,19 +200,19 @@
 <tr>
 <td>num.partitions</td><td>The default number of log partitions per topic</td><td>int</td><td>1</td><td>[1,...]</td><td>medium</td></tr>
 <tr>
-<td>principal.builder.class</td><td>The fully qualified name of a class
that implements the PrincipalBuilder interface, which is currently used to build the Principal
for connections with the SSL SecurityProtocol. Default is DefaultPrincipalBuilder.</td><td>class</td><td>class
org.apache.kafka.common.security.auth.DefaultPrincipalBuilder</td><td></td><td>medium</td></tr>
+<td>principal.builder.class</td><td>The fully qualified name of a class
that implements the PrincipalBuilder interface, which is currently used to build the Principal
for connections with the SSL SecurityProtocol.</td><td>class</td><td>class
org.apache.kafka.common.security.auth.DefaultPrincipalBuilder</td><td></td><td>medium</td></tr>
 <tr>
 <td>producer.purgatory.purge.interval.requests</td><td>The purge interval
(in number of requests) of the producer request purgatory</td><td>int</td><td>1000</td><td></td><td>medium</td></tr>
 <tr>
 <td>replica.fetch.backoff.ms</td><td>The amount of time to sleep when fetch
partition error occurs.</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>reserved.broker.max.id</td><td>reserved.broker.max.id</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
+<td>reserved.broker.max.id</td><td>Max number that can be used for a broker.id</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path. Default
is /usr/bin/kinit</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>medium</td></tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>medium</td></tr>
 <tr>
 <td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time
between refresh attempts.</td><td>long</td><td>60000</td><td></td><td>medium</td></tr>
 <tr>
-<td>sasl.kerberos.principal.to.local.rules</td><td>A list of rules for
mapping from principal names to short names (typically operating system usernames). The rules
are evaluated in order and the first rule that matches a principal name is used to map it
to a short name. Any later rules in the list are ignored. By default, principal names of the
form <username>/<hostname>@<REALM> are mapped to <username>.</td><td>list</td><td>[DEFAULT]</td><td></td><td>medium</td></tr>
+<td>sasl.kerberos.principal.to.local.rules</td><td>A list of rules for
mapping from principal names to short names (typically operating system usernames). The rules
are evaluated in order and the first rule that matches a principal name is used to map it
to a short name. Any later rules in the list are ignored. By default, principal names of the
form {username}/{hostname}@{REALM} are mapped to {username}.</td><td>list</td><td>[DEFAULT]</td><td></td><td>medium</td></tr>
 <tr>
 <td>sasl.kerberos.service.name</td><td>The Kerberos principal name that
Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
@@ -220,13 +220,13 @@
 <tr>
 <td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will
sleep until the specified window factor of time from last refresh to ticket's expiry has been
reached, at which time it will try to renew the ticket.</td><td>double</td><td>0.8</td><td></td><td>medium</td></tr>
 <tr>
-<td>security.inter.broker.protocol</td><td>Security protocol used to communicate
between brokers. Defaults to plain text.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<td>security.inter.broker.protocol</td><td>Security protocol used to communicate
between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named
combination of authentication, encryption, MAC and key exchange algorithm used to negotiate
the security settings for a network connection using TLS or SSL network protocol.By default
all the available cipher suites are supported.</td><td>list</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
-<td>ssl.client.auth</td><td>Configures kafka broker to request client authentication.
The following settings are common:  <ul> <li><code>ssl.want.client.auth=required</code>
If set to required client authentication is required. <li><code>ssl.client.auth=requested</code>
This means client authentication is optional. unlike requested , if this option is set client
can choose not to provide authentication information about itself <li><code>ssl.client.auth=none</code>
This means client authentication is not needed.</td><td>string</td><td>none</td><td>[required,
requested, none]</td><td>medium</td></tr>
+<td>ssl.client.auth</td><td>Configures kafka broker to request client authentication.
The following settings are common:  <ul> <li><code>ssl.client.auth=required</code>
If set to required client authentication is required. <li><code>ssl.client.auth=requested</code>
This means client authentication is optional. unlike requested , if this option is set client
can choose not to provide authentication information about itself <li><code>ssl.client.auth=none</code>
This means client authentication is not needed.</td><td>string</td><td>none</td><td>[required,
requested, none]</td><td>medium</td></tr>
 <tr>
-<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL
connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default.</td><td>list</td><td>[TLSv1.2,
TLSv1.1, TLSv1]</td><td></td><td>medium</td></tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, TLSv1]</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.key.password</td><td>The password of the private key in the key
store file. This is optional for client.</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
@@ -236,7 +236,7 @@
 <tr>
 <td>ssl.keystore.password</td><td>The store password for the key store
file.This is optional for client and only needed if ssl.keystore.location is configured. </td><td>password</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
-<td>ssl.keystore.type</td><td>The file format of the key store file. This
is optional for client. Default value is JKS</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This
is optional for client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext.
Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS,
TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage
is discouraged due to known security vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
 <tr>
@@ -248,7 +248,7 @@
 <tr>
 <td>ssl.truststore.password</td><td>The password for the trust store file.
</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
-<td>ssl.truststore.type</td><td>The file format of the trust store file.
Default value is JKS.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
 <tr>
 <td>authorizer.class.name</td><td>The authorizer class that should be used
for authorization</td><td>string</td><td>""</td><td></td><td>low</td></tr>
 <tr>
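
The broker rows above fill in the previously empty reserved.broker.max.id description, list the full set of security.inter.broker.protocol values, correct the ssl.want.client.auth typo to ssl.client.auth, and switch the principal-mapping placeholders from angle brackets to braces so they render in HTML. A minimal sketch of the corresponding server.properties entries, built here with java.util.Properties purely for illustration (values and the output file name are placeholders, not taken from this commit):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.Properties;

    public class BrokerSecurityPropsSketch {
        public static void main(String[] args) throws IOException {
            Properties broker = new Properties();
            // Protocol used between brokers: PLAINTEXT, SSL, SASL_PLAINTEXT or SASL_SSL
            broker.setProperty("security.inter.broker.protocol", "SSL");
            // The corrected option name is ssl.client.auth (not ssl.want.client.auth)
            broker.setProperty("ssl.client.auth", "required");
            // Upper bound for manually assigned broker.id values
            broker.setProperty("reserved.broker.max.id", "1000");
            // DEFAULT maps {username}/{hostname}@{REALM} to {username}
            broker.setProperty("sasl.kerberos.principal.to.local.rules", "DEFAULT");
            try (OutputStream out = new FileOutputStream("server-security.properties")) { // placeholder file
                broker.store(out, "broker security settings (sketch)");
            }
        }
    }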

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2fb26e0a/090/producer_config.html
----------------------------------------------------------------------
diff --git a/090/producer_config.html b/090/producer_config.html
index 7ddd2cc..9d97e86 100644
--- a/090/producer_config.html
+++ b/090/producer_config.html
@@ -40,7 +40,7 @@
 <tr>
 <td>linger.ms</td><td>The producer groups together any records that arrive
in between request transmissions into a single batched request. Normally this occurs only
under load when records arrive faster than they can be sent out. However in some circumstances
the client may want to reduce the number of requests even under moderate load. This setting
accomplishes this by adding a small amount of artificial delay&mdash;that is, rather than
immediately sending out a record the producer will wait for up to the given delay to allow
other records to be sent so that the sends can be batched together. This can be thought of
as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay
for batching: once we get <code>batch.size</code> worth of records for a partition
it will be sent immediately regardless of this setting, however if we have fewer than this
many bytes accumulated for this partition we will 'linger' for the specified time waiting
for more records to
  show up. This setting defaults to 0 (i.e. no delay). Setting <code>linger.ms=5</code>,
for example, would have the effect of reducing the number of requests sent but would add up
to 5ms of latency to records sent in the absense of load.</td><td>long</td><td>0</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>max.block.ms</td><td>The configuration controls how long {@link KafkaProducer#send()}
and {@link KafkaProducer#partitionsFor} will block.These methods can be blocked for multiple
reasons. For e.g: buffer full, metadata unavailable.This configuration imposes maximum limit
on the total time spent in fetching metadata, serialization of key and value, partitioning
and allocation of buffer memory when doing a send(). In case of partitionsFor(), this configuration
imposes a maximum time threshold on waiting for metadata</td><td>long</td><td>60000</td><td>[0,...]</td><td>medium</td></tr>
+<td>max.block.ms</td><td>The configuration controls how long {@link KafkaProducer#send()}
and {@link KafkaProducer#partitionsFor} will block.These methods can be blocked either because
the buffer is full or metadata unavailable.Blocking in the user-supplied serializers or partitioner
will not be counted against this timeout.</td><td>long</td><td>60000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
 <td>max.request.size</td><td>The maximum size of a request. This is also
effectively a cap on the maximum record size. Note that the server has its own cap on record
size which may be different from this. This setting will limit the number of record batches
the producer will send in a single request to avoid sending huge requests.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
@@ -52,19 +52,19 @@
 <tr>
 <td>sasl.kerberos.service.name</td><td>The Kerberos principal name that
Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
-<td>security.protocol</td><td>Protocol used to communicate with brokers.
Currently only PLAINTEXT and SSL are supported.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers.
Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
 <tr>
 <td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF)
to use when sending data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL
connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default.</td><td>list</td><td>[TLSv1.2,
TLSv1.1, TLSv1]</td><td></td><td>medium</td></tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, TLSv1]</td><td></td><td>medium</td></tr>
 <tr>
-<td>ssl.keystore.type</td><td>The file format of the key store file. This
is optional for client. Default value is JKS</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This
is optional for client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext.
Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS,
TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage
is discouraged due to known security vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.provider</td><td>The name of the security provider used for SSL
connections. Default value is the default security provider of the JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
-<td>ssl.truststore.type</td><td>The file format of the trust store file.
Default value is JKS.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
 <tr>
 <td>timeout.ms</td><td>The configuration controls the maximum amount of
time the server will wait for acknowledgments from followers to meet the acknowledgment requirements
the producer has specified with the <code>acks</code> configuration. If the requested
number of acknowledgments are not met when the timeout elapses an error will be returned.
This timeout is measured on the server side and does not include the network latency of the
request.</td><td>int</td><td>30000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
@@ -86,7 +86,7 @@
 <tr>
 <td>retry.backoff.ms</td><td>The amount of time to wait before attempting
to retry a failed fetch request to a given topic partition. This avoids repeated fetching-and-failing
in a tight loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
 <tr>
-<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path. Default
is /usr/bin/kinit</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
 <tr>
 <td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time
between refresh attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
 <tr>
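
The producer rows above rewrite the max.block.ms description (blocking inside user-supplied serializers or the partitioner no longer counts against the timeout) and make the same security.protocol, SSL and kinit wording fixes as the consumer page. A hedged sketch of a producer using those options; the broker address, topic, timeout value and Kerberos service name are illustrative placeholders, and for SASL a JAAS login configuration is also required (not shown):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SecureProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9093");   // placeholder address
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            // max.block.ms bounds how long send() and partitionsFor() may block
            props.put("max.block.ms", "10000");
            // Same four values as the consumer: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.kerberos.service.name", "kafka"); // assumed principal name
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("example-topic", "key", "value")); // placeholder topic
            }
        }
    }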

