kafka-commits mailing list archives

From mimai...@apache.org
Subject [kafka] branch 2.6 updated: MINOR: Move upgraded docs from site to Kafka docs (#9565)
Date Fri, 06 Nov 2020 14:39:26 GMT
This is an automated email from the ASF dual-hosted git repository.

mimaison pushed a commit to branch 2.6
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/2.6 by this push:
     new dbdd1b1  MINOR: Move upgraded docs from site to Kafka docs (#9565)
dbdd1b1 is described below

commit dbdd1b1bb72ab0e012f7a746cefb6e593da44683
Author: Mickael Maison <mimaison@users.noreply.github.com>
AuthorDate: Fri Nov 6 14:38:26 2020 +0000

    MINOR: Move upgraded docs from site to Kafka docs (#9565)
    
    A number of doc updates have been applied to the website. For the 2.6.1 release, resync Kafka's docs folder with the website.
    
    Reviewers: Bill Bejeck <bbejeck@gmail.com>
---
 docs/api.html                                      |  40 +-
 docs/configuration.html                            |  74 ++--
 docs/connect.html                                  | 182 +++-----
 docs/design.html                                   |  76 ++--
 docs/documentation.html                            |  38 +-
 .../developer-guide/dsl-topology-naming.html       |   2 +-
 docs/implementation.html                           |  98 ++---
 docs/introduction.html                             | 339 ++++++++-------
 docs/migration.html                                |   4 +-
 docs/ops.html                                      | 392 +++++++----------
 docs/protocol.html                                 |  42 +-
 docs/quickstart-docker.html                        | 204 +++++++++
 docs/quickstart-zookeeper.html                     | 277 ++++++++++++
 docs/security.html                                 | 481 ++++++++-------------
 docs/streams/architecture.html                     |  12 +-
 docs/streams/core-concepts.html                    |  20 +-
 docs/streams/developer-guide/app-reset-tool.html   |  10 +-
 docs/streams/developer-guide/config-streams.html   | 424 +++++++++---------
 docs/streams/developer-guide/datatypes.html        |  16 +-
 docs/streams/developer-guide/dsl-api.html          | 215 ++++-----
 .../developer-guide/dsl-topology-naming.html       |  64 +--
 docs/streams/developer-guide/index.html            |   4 +-
 .../developer-guide/interactive-queries.html       |  37 +-
 docs/streams/developer-guide/manage-topics.html    |   4 +-
 docs/streams/developer-guide/memory-mgmt.html      |  22 +-
 docs/streams/developer-guide/processor-api.html    |  34 +-
 docs/streams/developer-guide/running-app.html      |   9 +-
 docs/streams/developer-guide/security.html         |  13 +-
 docs/streams/developer-guide/testing.html          |  92 ++--
 docs/streams/developer-guide/write-streams.html    |  28 +-
 docs/streams/index.html                            |  32 +-
 docs/streams/quickstart.html                       | 106 ++---
 docs/streams/tutorial.html                         | 174 +++-----
 docs/streams/upgrade-guide.html                    |  37 +-
 docs/upgrade.html                                  | 192 +++++---
 docs/uses.html                                     |  14 +-
 36 files changed, 1906 insertions(+), 1902 deletions(-)

diff --git a/docs/api.html b/docs/api.html
index b6ab1fa..94d5f3e 100644
--- a/docs/api.html
+++ b/docs/api.html
@@ -26,7 +26,7 @@
 
 	Kafka exposes all its functionality over a language-independent protocol which has clients available in many programming languages. However, only the Java clients are maintained as part of the main Kafka project; the others are available as independent open source projects. A list of non-Java clients is available <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">here</a>.
 
-	<h3><a id="producerapi" href="#producerapi">2.1 Producer API</a></h3>
+	<h3 class="anchor-heading"><a id="producerapi" class="anchor-link"></a><a href="#producerapi">2.1 Producer API</a></h3>
 
 	The Producer API allows applications to send streams of data to topics in the Kafka cluster.
 	<p>
@@ -35,15 +35,13 @@
 	<p>
 	To use the producer, you can use the following maven dependency:
 
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-clients&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
-		&lt;/dependency&gt;
-	</pre>
+		&lt;/dependency&gt;</code></pre>
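
	<p>For illustration, a minimal producer built on this dependency might look like the sketch below. It assumes a broker at <code>localhost:9092</code> and an existing topic named <code>my-topic</code>; see the <code>KafkaProducer</code> javadocs for authoritative usage.</p>

	<pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // try-with-resources closes the producer, flushing any buffered records
        try (Producer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props)) {
            producer.send(new ProducerRecord&lt;&gt;("my-topic", "key", "value"));
        }
    }
}</code></pre>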
 
-	<h3><a id="consumerapi" href="#consumerapi">2.2 Consumer API</a></h3>
+	<h3 class="anchor-heading"><a id="consumerapi" class="anchor-link"></a><a href="#consumerapi">2.2 Consumer API</a></h3>
 
 	The Consumer API allows applications to read streams of data from topics in the Kafka cluster.
 	<p>
@@ -51,15 +49,13 @@
 	<a href="/{{version}}/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html" title="Kafka {{dotVersion}} Javadoc">javadocs</a>.
 	<p>
 	To use the consumer, you can use the following maven dependency:
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-clients&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
-		&lt;/dependency&gt;
-	</pre>
+		&lt;/dependency&gt;</code></pre>
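
	<p>As a rough sketch (assuming a broker at <code>localhost:9092</code>, a topic named <code>my-topic</code> and a consumer group <code>my-group</code>), a minimal consumer might look like this; the javadocs cover the full API:</p>

	<pre class="line-numbers"><code class="language-java">import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            // poll in a loop; each poll returns the records fetched since the previous call
            while (true) {
                ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord&lt;String, String&gt; record : records)
                    System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
            }
        }
    }
}</code></pre>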
 
-	<h3><a id="streamsapi" href="#streamsapi">2.3 Streams API</a></h3>
+	<h3 class="anchor-heading"><a id="streamsapi" class="anchor-link"></a><a href="#streamsapi">2.3 Streams API</a></h3>
 
 	The <a href="#streamsapi">Streams</a> API allows transforming streams of data from input topics to output topics.
 	<p>
@@ -70,28 +66,24 @@
 	<p>
 	To use Kafka Streams you can use the following maven dependency:
 
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-streams&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
-		&lt;/dependency&gt;
-	</pre>
+		&lt;/dependency&gt;</code></pre>
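
	<p>By way of example only, a small Streams application wired with this dependency might copy one topic to another while upper-casing values; <code>input-topic</code>, <code>output-topic</code> and the broker address are assumptions of the sketch:</p>

	<pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // read from input-topic, transform each value, and write to output-topic
        StreamsBuilder builder = new StreamsBuilder();
        KStream&lt;String, String&gt; source = builder.stream("input-topic");
        source.mapValues(value -&gt; value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}</code></pre>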
 
 	<p>
 	When using Scala you may optionally include the <code>kafka-streams-scala</code> library.  Additional documentation on using the Kafka Streams DSL for Scala is available <a href="/{{version}}/documentation/streams/developer-guide/dsl-api.html#scala-dsl">in the developer guide</a>.
 	<p>
 	To use the Kafka Streams DSL for Scala with Scala {{scalaVersion}}, you can use the following maven dependency:
 
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-streams-scala_{{scalaVersion}}&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
-		&lt;/dependency&gt;
-	</pre>
+		&lt;/dependency&gt;</code></pre>
 
-	<h3><a id="connectapi" href="#connectapi">2.4 Connect API</a></h3>
+	<h3 class="anchor-heading"><a id="connectapi" class="anchor-link"></a><a href="#connectapi">2.4 Connect API</a></h3>
 
 	The Connect API allows implementing connectors that continually pull from some source data system into Kafka or push from Kafka into some sink data system.
 	<p>
@@ -100,18 +92,16 @@
 	Those who want to implement custom connectors can see the <a href="/{{version}}/javadoc/index.html?org/apache/kafka/connect" title="Kafka {{dotVersion}} Javadoc">javadoc</a>.
 	<p>
 
-	<h3><a id="adminapi" href="#adminapi">2.5 Admin API</a></h3>
+	<h3 class="anchor-heading"><a id="adminapi" class="anchor-link"></a><a href="#adminapi">2.5 Admin API</a></h3>
 
 	The Admin API supports managing and inspecting topics, brokers, ACLs, and other Kafka objects.
 	<p>
 	To use the Admin API, add the following Maven dependency:
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-clients&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
-		&lt;/dependency&gt;
-	</pre>
+		&lt;/dependency&gt;</code></pre>
 	For more information about the Admin APIs, see the <a href="/{{version}}/javadoc/index.html?org/apache/kafka/clients/admin/Admin.html" title="Kafka {{dotVersion}} Javadoc">javadoc</a>.
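
	<p>A minimal sketch of the Admin client (assuming a broker at <code>localhost:9092</code>) that lists the topics in a cluster could look like this:</p>

	<pre class="line-numbers"><code class="language-java">import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Admin is AutoCloseable, so try-with-resources shuts it down cleanly
        try (Admin admin = Admin.create(props)) {
            Set&lt;String&gt; topics = admin.listTopics().names().get();
            topics.forEach(System.out::println);
        }
    }
}</code></pre>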
 	<p>
 
diff --git a/docs/configuration.html b/docs/configuration.html
index 9e913a2..34340a9 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -18,7 +18,7 @@
 <script id="configuration-template" type="text/x-handlebars-template">
   Kafka uses key-value pairs in the <a href="http://en.wikipedia.org/wiki/.properties">property file format</a> for configuration. These values can be supplied either from a file or programmatically.
 
-  <h3><a id="brokerconfigs" href="#brokerconfigs">3.1 Broker Configs</a></h3>
+  <h3 class="anchor-heading"><a id="brokerconfigs" class="anchor-link"></a><a href="#brokerconfigs">3.1 Broker Configs</a></h3>
 
   The essential configurations are the following:
   <ul>
@@ -33,7 +33,7 @@
 
   <p>More details about broker configuration can be found in the scala class <code>kafka.server.KafkaConfig</code>.</p>
 
-  <h4><a id="dynamicbrokerconfigs" href="#dynamicbrokerconfigs">3.1.1 Updating Broker Configs</a></h4>
+  <h4 class="anchor-heading"><a id="dynamicbrokerconfigs" class="anchor-link"></a><a href="#dynamicbrokerconfigs">3.1.1 Updating Broker Configs</a></h4>
   From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker. See the
   <code>Dynamic Update Mode</code> column in <a href="#brokerconfigs">Broker Configs</a> for the update mode of each broker config.
   <ul>
@@ -43,31 +43,21 @@
   </ul>
 
   To alter the current broker configs for broker id 0 (for example, the number of log cleaner threads):
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2</code></pre>
 
   To describe the current dynamic broker configs for broker id 0:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe</code></pre>
 
   To delete a config override and revert to the statically configured or default value for broker id 0 (for example,
   the number of log cleaner threads):
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads</code></pre>
 
   Some configs may be configured as a cluster-wide default to maintain consistent values across the whole cluster.  All brokers
   in the cluster will process the cluster default update. For example, to update log cleaner threads on all brokers:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2</code></pre>
 
   To describe the currently configured dynamic cluster-wide default configs:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe</code></pre>
 
   All configs that are configurable at cluster level may also be configured at per-broker level (e.g. for testing).
   If a config value is defined at different levels, the following order of precedence is used:
@@ -99,10 +89,8 @@
   encoder configs will not be persisted in ZooKeeper. For example, to store SSL key password for listener <code>INTERNAL</code>
   on broker 0:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --entity-type brokers --entity-name 0 --alter --add-config
-    'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --entity-type brokers --entity-name 0 --alter --add-config
+    'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'</code></pre>
 
   The configuration <code>listener.name.internal.ssl.key.password</code> will be persisted in ZooKeeper in encrypted
   form using the provided encoder configs. The encoder secret and iterations are not persisted in ZooKeeper.
@@ -174,10 +162,8 @@
   In Kafka version 1.1.x, changes to <code>unclean.leader.election.enable</code> take effect only when a new controller is elected.
   Controller re-election may be forced by running:
 
-  <pre class="brush: bash;">
-  &gt; bin/zookeeper-shell.sh localhost
-  rmr /controller
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/zookeeper-shell.sh localhost
+  rmr /controller</code></pre>
 
   <h5>Updating Log Cleaner Configs</h5>
   Log cleaner configs may be updated dynamically at cluster-default level used by all brokers. The changes take effect
@@ -231,61 +217,53 @@
   Inter-broker listener must be configured using the static broker configuration <code>inter.broker.listener.name</code>
   or <code>inter.broker.security.protocol</code>.
 
-  <h3><a id="topicconfigs" href="#topicconfigs">3.2 Topic-Level Configs</a></h3>
+  <h3 class="anchor-heading"><a id="topicconfigs" class="anchor-link"></a><a href="#topicconfigs">3.2 Topic-Level Configs</a></h3>
 
   Configurations pertinent to topics have both a server default as well as an optional per-topic override. If no per-topic configuration is given the server default is used. The override can be set at topic creation time by giving one or more <code>--config</code> options. This example creates a topic named <i>my-topic</i> with a custom max message size and flush rate:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 \
-      --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 \
+      --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1</code></pre>
   Overrides can also be changed or set later using the alter configs command. This example updates the max message size for <i>my-topic</i>:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic
-      --alter --add-config max.message.bytes=128000
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic
+      --alter --add-config max.message.bytes=128000</code></pre>
 
   To check overrides set on the topic you can do
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --describe
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --describe</code></pre>
 
   To remove an override you can do
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092  --entity-type topics --entity-name my-topic
-      --alter --delete-config max.message.bytes
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server localhost:9092  --entity-type topics --entity-name my-topic
+      --alter --delete-config max.message.bytes</code></pre>
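
  Topic overrides can likewise be set programmatically when creating a topic through the Admin client. This is only an illustrative sketch (the broker address and topic name are assumptions), mirroring the <code>kafka-topics.sh --create</code> example above:
  <pre class="line-numbers"><code class="language-java">import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicWithOverrides {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            Map&lt;String, String&gt; overrides = new HashMap&lt;&gt;();
            overrides.put("max.message.bytes", "64000");
            overrides.put("flush.messages", "1");
            // one partition, replication factor 1, with the per-topic overrides attached
            NewTopic topic = new NewTopic("my-topic", 1, (short) 1).configs(overrides);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}</code></pre>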
 
   The following are the topic-level configurations. The server's default configuration for this property is given under the Server Default Property heading. A given server default config value only applies to a topic if it does not have an explicit topic config override.
 
   <!--#include virtual="generated/topic_config.html" -->
 
-  <h3><a id="producerconfigs" href="#producerconfigs">3.3 Producer Configs</a></h3>
+  <h3 class="anchor-heading"><a id="producerconfigs" class="anchor-link"></a><a href="#producerconfigs">3.3 Producer Configs</a></h3>
 
   Below is the configuration of the producer:
   <!--#include virtual="generated/producer_config.html" -->
 
-  <h3><a id="consumerconfigs" href="#consumerconfigs">3.4 Consumer Configs</a></h3>
+  <h3 class="anchor-heading"><a id="consumerconfigs" class="anchor-link"></a><a href="#consumerconfigs">3.4 Consumer Configs</a></h3>
 
   Below is the configuration for the consumer:
   <!--#include virtual="generated/consumer_config.html" -->
 
-  <h3><a id="connectconfigs" href="#connectconfigs">3.5 Kafka Connect Configs</a></h3>
+  <h3 class="anchor-heading"><a id="connectconfigs" class="anchor-link"></a><a href="#connectconfigs">3.5 Kafka Connect Configs</a></h3>
   Below is the configuration of the Kafka Connect framework.
   <!--#include virtual="generated/connect_config.html" -->
 
-  <h4><a id="sourceconnectconfigs" href="#sourceconnectconfigs">3.5.1 Source Connector Configs</a></h4>
+  <h4 class="anchor-heading"><a id="sourceconnectconfigs" class="anchor-link"></a><a href="#sourceconnectconfigs">3.5.1 Source Connector Configs</a></h4>
   Below is the configuration of a source connector.
   <!--#include virtual="generated/source_connector_config.html" -->
 
-  <h4><a id="sinkconnectconfigs" href="#sinkconnectconfigs">3.5.2 Sink Connector Configs</a></h4>
+  <h4 class="anchor-heading"><a id="sinkconnectconfigs" class="anchor-link"></a><a href="#sinkconnectconfigs">3.5.2 Sink Connector Configs</a></h4>
   Below is the configuration of a sink connector.
   <!--#include virtual="generated/sink_connector_config.html" -->
 
-  <h3><a id="streamsconfigs" href="#streamsconfigs">3.6 Kafka Streams Configs</a></h3>
+  <h3 class="anchor-heading"><a id="streamsconfigs" class="anchor-link"></a><a href="#streamsconfigs">3.6 Kafka Streams Configs</a></h3>
   Below is the configuration of the Kafka Streams client library.
   <!--#include virtual="generated/streams_config.html" -->
 
-  <h3><a id="adminclientconfigs" href="#adminclientconfigs">3.7 Admin Configs</a></h3>
+  <h3 class="anchor-heading"><a id="adminclientconfigs" class="anchor-link"></a><a href="#adminclientconfigs">3.7 Admin Configs</a></h3>
   Below is the configuration of the Kafka Admin client library.
   <!--#include virtual="generated/admin_client_config.html" -->
 </script>
diff --git a/docs/connect.html b/docs/connect.html
index 797c1fe..a0b129e 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -16,7 +16,7 @@
   ~-->
 
 <script id="connect-template" type="text/x-handlebars-template">
-    <h3><a id="connect_overview" href="#connect_overview">8.1 Overview</a></h3>
+    <h3 class="anchor-heading"><a id="connect_overview" class="anchor-link"></a><a href="#connect_overview">8.1 Overview</a></h3>
 
     <p>Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems. It makes it simple to quickly define <i>connectors</i> that move large collections of data into and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency. An export job can deliver data from Kafka topics into secondary storage and query syst [...]
 
@@ -30,19 +30,17 @@
         <li><b>Streaming/batch integration</b> - leveraging Kafka's existing capabilities, Kafka Connect is an ideal solution for bridging streaming and batch data systems</li>
     </ul>
 
-    <h3><a id="connect_user" href="#connect_user">8.2 User Guide</a></h3>
+    <h3 class="anchor-heading"><a id="connect_user" class="anchor-link"></a><a href="#connect_user">8.2 User Guide</a></h3>
 
     <p>The <a href="../quickstart">quickstart</a> provides a brief example of how to run a standalone version of Kafka Connect. This section describes how to configure, run, and manage Kafka Connect in more detail.</p>
 
-    <h4><a id="connect_running" href="#connect_running">Running Kafka Connect</a></h4>
+    <h4 class="anchor-heading"><a id="connect_running" class="anchor-link"></a><a href="#connect_running">Running Kafka Connect</a></h4>
 
     <p>Kafka Connect currently supports two modes of execution: standalone (single process) and distributed.</p>
 
     <p>In standalone mode all work is performed in a single process. This configuration is simpler to set up and get started with and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:</p>
 
-    <pre class="brush: bash;">
-    &gt; bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]
-    </pre>
+    <pre class="line-numbers"><code class="language-bash">    &gt; bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]</code></pre>
 
     <p>The first parameter is the configuration for the worker. This includes settings such as the Kafka connection parameters, serialization format, and how frequently to commit offsets. The provided example should work well with a local cluster running with the default configuration provided by <code>config/server.properties</code>. It will require tweaking to use with a different configuration or production deployment. All workers (both standalone and distributed) require a few configs:</p>
     <ul>
@@ -64,9 +62,7 @@
 
     <p>Distributed mode handles automatic balancing of work, allows you to scale up (or down) dynamically, and offers fault tolerance both in the active tasks and for configuration and offset commit data. Execution is very similar to standalone mode:</p>
 
-    <pre class="brush: bash;">
-    &gt; bin/connect-distributed.sh config/connect-distributed.properties
-    </pre>
+    <pre class="line-numbers"><code class="language-bash">    &gt; bin/connect-distributed.sh config/connect-distributed.properties</code></pre>
 
     <p>The difference is in the class which is started and the configuration parameters which change how the Kafka Connect process decides where to store configurations, how to assign work, and where to store offsets and task statuses. In the distributed mode, Kafka Connect stores the offsets, configs and task statuses in Kafka topics. It is recommended to manually create the topics for offset, configs and statuses in order to achieve the desired number of partitions and replication f [...]
 
@@ -81,7 +77,7 @@
     <p>Note that in distributed mode the connector configurations are not passed on the command line. Instead, use the REST API described below to create, modify, and destroy connectors.</p>
 
 
-    <h4><a id="connect_configuring" href="#connect_configuring">Configuring Connectors</a></h4>
+    <h4 class="anchor-heading"><a id="connect_configuring" class="anchor-link"></a><a href="#connect_configuring">Configuring Connectors</a></h4>
 
     <p>Connector configurations are simple key-value mappings. For standalone mode these are defined in a properties file and passed to the Connect process on the command line. In distributed mode, they will be included in the JSON payload for the request that creates (or modifies) the connector.</p>
 
@@ -105,7 +101,7 @@
 
     <p>For any other options, you should consult the documentation for the connector.</p>
 
-    <h4><a id="connect_transforms" href="#connect_transforms">Transformations</a></h4>
+    <h4 class="anchor-heading"><a id="connect_transforms" class="anchor-link"></a><a href="#connect_transforms">Transformations</a></h4>
 
     <p>Connectors can be configured with transformations to make lightweight message-at-a-time modifications. They can be convenient for data massaging and event routing.</p>
 
@@ -121,10 +117,8 @@
 
     <p>Throughout the example we'll use schemaless JSON data format. To use schemaless format, we changed the following two lines in <code>connect-standalone.properties</code> from true to false:</p>
 
-    <pre class="brush: text;">
-        key.converter.schemas.enable
-        value.converter.schemas.enable
-    </pre>
+    <pre class="line-numbers"><code class="language-text">        key.converter.schemas.enable
+        value.converter.schemas.enable</code></pre>
 
     <p>The file source connector reads each line as a String. We will wrap each line in a Map and then add a second field to identify the origin of the event. To do this, we use two transformations:</p>
     <ul>
@@ -134,8 +128,7 @@
 
     <p>After adding the transformations, the <code>connect-file-source.properties</code> file looks as follows:</p>
 
-    <pre class="brush: text;">
-        name=local-file-source
+    <pre class="line-numbers"><code class="language-text">        name=local-file-source
         connector.class=FileStreamSource
         tasks.max=1
         file=test.txt
@@ -145,30 +138,25 @@
         transforms.MakeMap.field=line
         transforms.InsertSource.type=org.apache.kafka.connect.transforms.InsertField$Value
         transforms.InsertSource.static.field=data_source
-        transforms.InsertSource.static.value=test-file-source
-    </pre>
+        transforms.InsertSource.static.value=test-file-source</code></pre>
 
     <p>All the lines starting with <code>transforms</code> were added for the transformations. You can see the two transformations we created: "InsertSource" and "MakeMap" are aliases that we chose to give the transformations. The transformation types are based on the list of built-in transformations you can see below. Each transformation type has additional configuration: HoistField requires a configuration called "field", which is the name of the field in the map that will include the  [...]
 
     <p>When we ran the file source connector on our sample file without the transformations and then read the lines using <code>kafka-console-consumer.sh</code>, the results were:</p>
 
-    <pre class="brush: text;">
-        "foo"
+    <pre class="line-numbers"><code class="language-text">        "foo"
         "bar"
-        "hello world"
-   </pre>
+        "hello world"</code></pre>
 
     <p>We then created a new file connector, this time with the transformations added to the configuration file. The results are now:</p>
 
-    <pre class="brush: json;">
-        {"line":"foo","data_source":"test-file-source"}
+    <pre class="line-numbers"><code class="language-json">        {"line":"foo","data_source":"test-file-source"}
         {"line":"bar","data_source":"test-file-source"}
-        {"line":"hello world","data_source":"test-file-source"}
-    </pre>
+        {"line":"hello world","data_source":"test-file-source"}</code></pre>
 
     <p>You can see that the lines we've read are now part of a JSON map, and there is an extra field with the static value we specified. This is just one example of what you can do with transformations.</p>
     
-    <h5><a id="connect_included_transformation" href="#connect_included_transformation">Included transformations</a></h5>
+    <h5 class="anchor-heading"><a id="connect_included_transformation" class="anchor-link"></a><a href="#connect_included_transformation">Included transformations</a></h5>
 
     <p>Several widely-applicable data and routing transformations are included with Kafka Connect:</p>
 
@@ -191,7 +179,7 @@
     <!--#include virtual="generated/connect_transforms.html" -->
 
 
-    <h5><a id="connect_predicates" href="#connect_predicates">Predicates</a></h5>
+    <h5 class="anchor-heading"><a id="connect_predicates" class="anchor-link"></a><a href="#connect_predicates">Predicates</a></h5>
 
     <p>Transformations can be configured with predicates so that the transformation is applied only to messages which satisfy some condition. In particular, when combined with the <b>Filter</b> transformation predicates can be used to selectively filter out certain messages.</p>
 
@@ -213,20 +201,17 @@
 
     <p>To do this we need first to filter out the records destined for the topic 'foo'. The Filter transformation removes records from further processing, and can use the TopicNameMatches predicate to apply the transformation only to records in topics which match a certain regular expression. TopicNameMatches's only configuration property is <code>pattern</code> which is a Java regular expression for matching against the topic name. The configuration would look like this:</p>
 
-    <pre class="brush: text;">
-        transforms=Filter
+    <pre class="line-numbers"><code class="language-text">        transforms=Filter
         transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
         transforms.Filter.predicate=IsFoo
 
         predicates=IsFoo
         predicates.IsFoo.type=org.apache.kafka.connect.predicates.TopicNameMatches
-        predicates.IsFoo.pattern=foo
-    </pre>
+        predicates.IsFoo.pattern=foo</code></pre>
         
     <p>Next we need to apply ExtractField only when the topic name of the record is not 'bar'. We can't just use TopicNameMatches directly, because that would apply the transformation to matching topic names, not topic names which do <i>not</i> match. The transformation's implicit <code>negate</code> config property allows us to invert the set of records which a predicate matches. Adding the configuration for this to the previous example, we arrive at:</p>
 
-    <pre class="brush: text;">
-        transforms=Filter,Extract
+    <pre class="line-numbers"><code class="language-text">        transforms=Filter,Extract
         transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
         transforms.Filter.predicate=IsFoo
 
@@ -240,8 +225,7 @@
         predicates.IsFoo.pattern=foo
 
         predicates.IsBar.type=org.apache.kafka.connect.predicates.TopicNameMatches
-        predicates.IsBar.pattern=bar
-    </pre>
+        predicates.IsBar.pattern=bar</code></pre>
 
     <p>Kafka Connect includes the following predicates:</p>
 
@@ -256,15 +240,13 @@
     <!--#include virtual="generated/connect_predicates.html" -->
 
 
-    <h4><a id="connect_rest" href="#connect_rest">REST API</a></h4>
+    <h4 class="anchor-heading"><a id="connect_rest" class="anchor-link"></a><a href="#connect_rest">REST API</a></h4>
 
     <p>Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. The REST API server can be configured using the <code>listeners</code> configuration option.
         This field should contain a list of listeners in the following format: <code>protocol://host:port,protocol2://host2:port2</code>. Currently supported protocols are <code>http</code> and <code>https</code>.
         For example:</p>
 
-    <pre class="brush: text;">
-        listeners=http://localhost:8080,https://localhost:8443
-    </pre>
+    <pre class="line-numbers"><code class="language-text">        listeners=http://localhost:8080,https://localhost:8443</code></pre>
 
     <p>By default, if no <code>listeners</code> are specified, the REST server runs on port 8083 using the HTTP protocol. When using HTTPS, the configuration has to include the SSL configuration.
     By default, it will use the <code>ssl.*</code> settings. If the REST API needs a different configuration from the one used for connecting to Kafka brokers, the fields can be prefixed with <code>listeners.https</code>.
@@ -327,7 +309,7 @@
         <li><code>GET /</code>- return basic information about the Kafka Connect cluster such as the version of the Connect worker that serves the REST request (including git commit ID of the source code) and the Kafka cluster ID that is connected to.
     </ul>
 
-    <h4><a id="connect_errorreporting" href="#connect_errorreporting">Error Reporting in Connect</a></h4>
+    <h4 class="anchor-heading"><a id="connect_errorreporting" class="anchor-link"></a><a href="#connect_errorreporting">Error Reporting in Connect</a></h4>
 
     <p>Kafka Connect provides error reporting to handle errors encountered along various stages of processing. By default, any error encountered during conversion or within transformations will cause the connector to fail. Each connector configuration can also enable tolerating such errors by skipping them, optionally writing each error and the details of the failed operation and problematic record (with various levels of detail) to the Connect application log. These mechanisms also capt [...]
 
@@ -337,8 +319,7 @@
 
     <p>By default connectors exhibit "fail fast" behavior immediately upon an error or exception. This is equivalent to adding the following configuration properties with their defaults to a connector configuration:</p>
 
-    <pre class="brush: text;">
-        # disable retries on failure
+    <pre class="line-numbers"><code class="language-text">        # disable retries on failure
         errors.retry.timeout=0
 
         # do not log the error and their contexts
@@ -348,13 +329,11 @@
         errors.deadletterqueue.topic.name=
 
         # Fail on first error
-        errors.tolerance=none
-    </pre>
+        errors.tolerance=none</code></pre>
 
     <p>These and other related connector configuration properties can be changed to provide different behavior. For example, the following configuration properties can be added to a connector configuration to set up error handling with multiple retries, logging to the application logs and the <code>my-connector-errors</code> Kafka topic, and tolerating all errors by reporting them rather than failing the connector task:</p>
 
-    <pre class="brush: text;">
-        # retry for at most 10 minutes times waiting up to 30 seconds between consecutive failures
+    <pre class="line-numbers"><code class="language-text">        # retry for at most 10 minutes, waiting up to 30 seconds between consecutive failures
         errors.retry.timeout=600000
         errors.retry.delay.max.ms=30000
 
@@ -366,16 +345,15 @@
         errors.deadletterqueue.topic.name=my-connector-errors
 
         # Tolerate all errors.
-        errors.tolerance=all
-    </pre>
+        errors.tolerance=all</code></pre>
 
-    <h3><a id="connect_development" href="#connect_development">8.3 Connector Development Guide</a></h3>
+    <h3 class="anchor-heading"><a id="connect_development" class="anchor-link"></a><a href="#connect_development">8.3 Connector Development Guide</a></h3>
 
     <p>This guide describes how developers can write new connectors for Kafka Connect to move data between Kafka and other systems. It briefly reviews a few key concepts and then describes how to create a simple connector.</p>
 
-    <h4><a id="connect_concepts" href="#connect_concepts">Core Concepts and APIs</a></h4>
+    <h4 class="anchor-heading"><a id="connect_concepts" class="anchor-link"></a><a href="#connect_concepts">Core Concepts and APIs</a></h4>
 
-    <h5><a id="connect_connectorsandtasks" href="#connect_connectorsandtasks">Connectors and Tasks</a></h5>
+    <h5 class="anchor-heading"><a id="connect_connectorsandtasks" class="anchor-link"></a><a href="#connect_connectorsandtasks">Connectors and Tasks</a></h5>
 
     <p>To copy data between Kafka and another system, users create a <code>Connector</code> for the system they want to pull data from or push data to. Connectors come in two flavors: <code>SourceConnectors</code> import data from another system (e.g. <code>JDBCSourceConnector</code> would import a relational database into Kafka) and <code>SinkConnectors</code> export data (e.g. <code>HDFSSinkConnector</code> would export the contents of a Kafka topic to an HDFS file).</p>
 
@@ -384,46 +362,41 @@
     <p>With an assignment in hand, each <code>Task</code> must copy its subset of the data to or from Kafka. In Kafka Connect, it should always be possible to frame these assignments as a set of input and output streams consisting of records with consistent schemas. Sometimes this mapping is obvious: each file in a set of log files can be considered a stream with each parsed line forming a record using the same schema and offsets stored as byte offsets in the file. In other cases it may  [...]
 
 
-    <h5><a id="connect_streamsandrecords" href="#connect_streamsandrecords">Streams and Records</a></h5>
+    <h5 class="anchor-heading"><a id="connect_streamsandrecords" class="anchor-link"></a><a href="#connect_streamsandrecords">Streams and Records</a></h5>
 
     <p>Each stream should be a sequence of key-value records. Both the keys and values can have complex structure -- many primitive types are provided, but arrays, objects, and nested data structures can be represented as well. The runtime data format does not assume any particular serialization format; this conversion is handled internally by the framework.</p>
 
     <p>In addition to the key and value, records (both those generated by sources and those delivered to sinks) have associated stream IDs and offsets. These are used by the framework to periodically commit the offsets of data that have been processed so that in the event of failures, processing can resume from the last committed offsets, avoiding unnecessary reprocessing and duplication of events.</p>
 
-    <h5><a id="connect_dynamicconnectors" href="#connect_dynamicconnectors">Dynamic Connectors</a></h5>
+    <h5 class="anchor-heading"><a id="connect_dynamicconnectors" class="anchor-link"></a><a href="#connect_dynamicconnectors">Dynamic Connectors</a></h5>
 
     <p>Not all jobs are static, so <code>Connector</code> implementations are also responsible for monitoring the external system for any changes that might require reconfiguration. For example, in the <code>JDBCSourceConnector</code> example, the <code>Connector</code> might assign a set of tables to each <code>Task</code>. When a new table is created, it must discover this so it can assign the new table to one of the <code>Tasks</code> by updating its configuration. When it notices a c [...]
 
 
-    <h4><a id="connect_developing" href="#connect_developing">Developing a Simple Connector</a></h4>
+    <h4 class="anchor-heading"><a id="connect_developing" class="anchor-link"></a><a href="#connect_developing">Developing a Simple Connector</a></h4>
 
     <p>Developing a connector only requires implementing two interfaces, the <code>Connector</code> and <code>Task</code>. A simple example is included with the source code for Kafka in the <code>file</code> package. This connector is meant for use in standalone mode and has implementations of a <code>SourceConnector</code>/<code>SourceTask</code> to read each line of a file and emit it as a record and a <code>SinkConnector</code>/<code>SinkTask</code> that writes each record to a file.</p>
 
     <p>The rest of this section will walk through some code to demonstrate the key steps in creating a connector, but developers should also refer to the full example source code as many details are omitted for brevity.</p>
 
-    <h5><a id="connect_connectorexample" href="#connect_connectorexample">Connector Example</a></h5>
+    <h5 class="anchor-heading"><a id="connect_connectorexample" class="anchor-link"></a><a href="#connect_connectorexample">Connector Example</a></h5>
 
     <p>We'll cover the <code>SourceConnector</code> as a simple example. <code>SinkConnector</code> implementations are very similar. Start by creating the class that inherits from <code>SourceConnector</code> and add a couple of fields that will store parsed configuration information (the filename to read from and the topic to send data to):</p>
 
-    <pre class="brush: java;">
-    public class FileStreamSourceConnector extends SourceConnector {
+    <pre class="line-numbers"><code class="language-java">    public class FileStreamSourceConnector extends SourceConnector {
         private String filename;
-        private String topic;
-    </pre>
+        private String topic;</code></pre>
 
     <p>The easiest method to fill in is <code>taskClass()</code>, which defines the class that should be instantiated in worker processes to actually read the data:</p>
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public Class&lt;? extends Task&gt; taskClass() {
         return FileStreamSourceTask.class;
-    }
-    </pre>
+    }</code></pre>
 
     <p>We will define the <code>FileStreamSourceTask</code> class below. Next, we add some standard lifecycle methods, <code>start()</code> and <code>stop()</code>:</p>
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public void start(Map&lt;String, String&gt; props) {
         // The complete version includes error handling as well.
         filename = props.get(FILE_CONFIG);
@@ -433,15 +406,13 @@
     @Override
     public void stop() {
         // Nothing to do since no background monitoring is required.
-    }
-    </pre>
+    }</code></pre>
 
     <p>Finally, the real core of the implementation is in <code>taskConfigs()</code>. In this case we are only
     handling a single file, so even though we may be permitted to generate more tasks as per the
     <code>maxTasks</code> argument, we return a list with only one entry:</p>
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public List&lt;Map&lt;String, String&gt;&gt; taskConfigs(int maxTasks) {
         ArrayList&lt;Map&lt;String, String&gt;&gt; configs = new ArrayList&lt;&gt;();
         // Only one input stream makes sense.
@@ -451,8 +422,7 @@
         config.put(TOPIC_CONFIG, topic);
         configs.add(config);
         return configs;
-    }
-    </pre>
+    }</code></pre>
 
     <p>Although not used in the example, <code>SourceTask</code> also provides two APIs to commit offsets in the source system: <code>commit</code> and <code>commitRecord</code>. The APIs are provided for source systems which have an acknowledgement mechanism for messages. Overriding these methods allows the source connector to acknowledge messages in the source system, either in bulk or individually, once they have been written to Kafka.
     The <code>commit</code> API stores the offsets in the source system, up to the offsets that have been returned by <code>poll</code>. The implementation of this API should block until the commit is complete. The <code>commitRecord</code> API saves the offset in the source system for each <code>SourceRecord</code> after it is written to Kafka. As Kafka Connect will record offsets automatically, <code>SourceTask</code>s are not required to implement them. In cases where a connector does [...]
@@ -461,15 +431,14 @@
 
     <p>Note that this simple example does not include dynamic input. See the discussion in the next section for how to trigger updates to task configs.</p>
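
    <p>To illustrate the <code>commit</code> hook mentioned above, a hypothetical source task for a system with acknowledgements might be skeletonized as follows; the class and its behavior are assumptions for illustration, not part of the bundled FileStream example:</p>

    <pre class="line-numbers"><code class="language-java">import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class AckingSourceTask extends SourceTask {
    @Override
    public String version() { return "0.0.1"; }

    @Override
    public void start(Map&lt;String, String&gt; props) {
        // connect to the (hypothetical) source system here
    }

    @Override
    public List&lt;SourceRecord&gt; poll() throws InterruptedException {
        // fetch records from the source system; returning null is allowed when no data is available
        return null;
    }

    @Override
    public void commit() throws InterruptedException {
        // acknowledge messages in the source system, up to the offsets
        // that have been returned by poll(); block until that completes
    }

    @Override
    public void stop() {
        // disconnect from the source system
    }
}</code></pre>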
 
-    <h5><a id="connect_taskexample" href="#connect_taskexample">Task Example - Source Task</a></h5>
+    <h5 class="anchor-heading"><a id="connect_taskexample" class="anchor-link"></a><a href="#connect_taskexample">Task Example - Source Task</a></h5>
 
     <p>Next we'll describe the implementation of the corresponding <code>SourceTask</code>. The implementation is short, but too long to cover completely in this guide. We'll use pseudo-code to describe most of the implementation, but you can refer to the source code for the full example.</p>
 
     <p>Just as with the connector, we need to create a class inheriting from the appropriate base <code>Task</code> class. It also has some standard lifecycle methods:</p>
 
 
-    <pre class="brush: java;">
-    public class FileStreamSourceTask extends SourceTask {
+    <pre class="line-numbers"><code class="language-java">    public class FileStreamSourceTask extends SourceTask {
         String filename;
         InputStream stream;
         String topic;
@@ -484,15 +453,13 @@
         @Override
         public synchronized void stop() {
             stream.close();
-        }
-    </pre>
+        }</code></pre>
 
     <p>These are slightly simplified versions, but show that these methods should be relatively simple and the only work they should perform is allocating or freeing resources. There are two points to note about this implementation. First, the <code>start()</code> method does not yet handle resuming from a previous offset, which will be addressed in a later section. Second, the <code>stop()</code> method is synchronized. This will be necessary because <code>SourceTasks</code> are given a [...]
 
     <p>Next, we implement the main functionality of the task, the <code>poll()</code> method which gets events from the input system and returns a <code>List&lt;SourceRecord&gt;</code>:</p>
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public List&lt;SourceRecord&gt; poll() throws InterruptedException {
         try {
             ArrayList&lt;SourceRecord&gt; records = new ArrayList&lt;&gt;();
@@ -512,19 +479,17 @@
             // null, and driving thread will handle any shutdown if necessary.
         }
         return null;
-    }
-    </pre>
+    }</code></pre>
 
     <p>Again, we've omitted some details, but we can see the important steps: the <code>poll()</code> method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output <code>SourceRecord</code> with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output to [...]
 
     <p>Note that this implementation uses the normal Java <code>InputStream</code> interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic <code>poll()</code> interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and i [...]
 
-    <h5><a id="connect_sinktasks" href="#connect_sinktasks">Sink Tasks</a></h5>
+    <h5 class="anchor-heading"><a id="connect_sinktasks" class="anchor-link"></a><a href="#connect_sinktasks">Sink Tasks</a></h5>
 
     <p>The previous section described how to implement a simple <code>SourceTask</code>. Unlike <code>SourceConnector</code> and <code>SinkConnector</code>, <code>SourceTask</code> and <code>SinkTask</code> have very different interfaces because <code>SourceTask</code> uses a pull interface and <code>SinkTask</code> uses a push interface. Both share the common lifecycle methods, but the <code>SinkTask</code> interface is quite different:</p>
 
-    <pre class="brush: java;">
-    public abstract class SinkTask implements Task {
+    <pre class="line-numbers"><code class="language-java">    public abstract class SinkTask implements Task {
         public void initialize(SinkTaskContext context) {
             this.context = context;
         }
@@ -532,8 +497,7 @@
         public abstract void put(Collection&lt;SinkRecord&gt; records);
 
         public void flush(Map&lt;TopicPartition, OffsetAndMetadata&gt; currentOffsets) {
-        }
-    </pre>
+        }</code></pre>
 
     <p>The <code>SinkTask</code> documentation contains full details, but this interface is nearly as simple as the <code>SourceTask</code>. The <code>put()</code> method should contain most of the implementation, accepting sets of <code>SinkRecords</code>, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering [...]
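
    <p>Purely as an illustration of the shape of a sink task, a trivial hypothetical implementation whose "destination system" is standard output might look like this (a realistic task would buffer records in <code>put()</code> and write them out in <code>flush()</code>):</p>

    <pre class="line-numbers"><code class="language-java">import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class LoggingSinkTask extends SinkTask {
    @Override
    public String version() { return "0.0.1"; }

    @Override
    public void start(Map&lt;String, String&gt; props) {
        // allocate any resources needed to reach the destination system
    }

    @Override
    public void put(Collection&lt;SinkRecord&gt; records) {
        // "write" each record to the destination, here simply standard output
        for (SinkRecord record : records)
            System.out.println(record.topic() + "-" + record.kafkaPartition() + ": " + record.value());
    }

    @Override
    public void stop() {
        // release resources allocated in start()
    }
}</code></pre>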
 
@@ -544,8 +508,7 @@
 
     <p>When <a href="#connect_errorreporting">error reporting</a> is enabled for a connector, the connector can use an <code>ErrantRecordReporter</code> to report problems with individual records sent to a sink connector. The following example shows how a connector's <code>SinkTask</code> subclass might obtain and use the <code>ErrantRecordReporter</code>, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn [...]
 
-    <pre class="brush: java;">
-        private ErrantRecordReporter reporter;
+    <pre class="line-numbers"><code class="language-java">        private ErrantRecordReporter reporter;
 
         @Override
         public void start(Map&lt;String, String&gt; props) {
@@ -574,24 +537,21 @@
                     }
                 }
             }
-        }
-    </pre>
+        }</code></pre>
 
-    <h5><a id="connect_resuming" href="#connect_resuming">Resuming from Previous Offsets</a></h5>
+    <h5 class="anchor-heading"><a id="connect_resuming" class="anchor-link"></a><a href="#connect_resuming">Resuming from Previous Offsets</a></h5>
 
     <p>The <code>SourceTask</code> implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit proces [...]
 
     <p>To correctly resume upon startup, the task can use the <code>SourceContext</code> passed into its <code>initialize()</code> method to access the offset data. In <code>initialize()</code>, we would add a bit more code to read the offset (if it exists) and seek to that position:</p>
 
-    <pre class="brush: java;">
-        stream = new FileInputStream(filename);
+    <pre class="line-numbers"><code class="language-java">        stream = new FileInputStream(filename);
         Map&lt;String, Object&gt; offset = context.offsetStorageReader().offset(Collections.singletonMap(FILENAME_FIELD, filename));
         if (offset != null) {
             Long lastRecordedOffset = (Long) offset.get("position");
             if (lastRecordedOffset != null)
                 seekToOffset(stream, lastRecordedOffset);
-        }
-    </pre>
+        }</code></pre>
 
     <p>Of course, you might need to read many keys for each of the input streams. The <code>OffsetStorageReader</code> interface also allows you to issue bulk reads to efficiently load all offsets, then apply them by seeking each input stream to the appropriate position.</p>
 
@@ -601,10 +561,8 @@
 
     <p>Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the <code>ConnectorContext</code> object that reconfiguration is necessary. For example, in a <code>SourceConnector</code>:</p>
 
-    <pre class="brush: java;">
-        if (inputsChanged())
-            this.context.requestTaskReconfiguration();
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        if (inputsChanged())
+            this.context.requestTaskReconfiguration();</code></pre>
 
     <p>The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the <code>SourceConnector</code> this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.</p>
 
@@ -612,27 +570,25 @@
 
     <p><code>SinkConnectors</code> usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. <code>SinkTasks</code> should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handl [...]
 
-    <h4><a id="connect_configs" href="#connect_configs">Connect Configuration Validation</a></h4>
+    <h4 class="anchor-heading"><a id="connect_configs" class="anchor-link"></a><a href="#connect_configs">Connect Configuration Validation</a></h4>
 
     <p>Kafka Connect allows you to validate connector configurations before submitting a connector to be executed and can provide feedback about errors and recommended values. To take advantage of this, connector developers need to provide an implementation of <code>config()</code> to expose the configuration definition to the framework.</p>
 
     <p>The following code in <code>FileStreamSourceConnector</code> defines the configuration and exposes it to the framework.</p>
 
-    <pre class="brush: java;">
-        private static final ConfigDef CONFIG_DEF = new ConfigDef()
+    <pre class="line-numbers"><code class="language-java">        private static final ConfigDef CONFIG_DEF = new ConfigDef()
             .define(FILE_CONFIG, Type.STRING, Importance.HIGH, "Source filename.")
             .define(TOPIC_CONFIG, Type.STRING, Importance.HIGH, "The topic to publish data to");
 
         public ConfigDef config() {
             return CONFIG_DEF;
-        }
-    </pre>
+        }</code></pre>
 
     <p>The <code>ConfigDef</code> class is used for specifying the set of expected configurations. For each configuration, you can specify the name, the type, the default value, the documentation, the group information, the order in the group, the width of the configuration value and the name suitable for display in the UI. Plus, you can provide special validation logic used for single configuration validation by overriding the <code>Validator</code> class. Moreover, as there may be dependen [...]
 
     <p>Also, the <code>validate()</code> method in <code>Connector</code> provides a default validation implementation which returns a list of allowed configurations together with configuration errors and recommended values for each configuration. However, it does not use the recommended values for configuration validation. You may provide an override of the default implementation for customized configuration validation, which may use the recommended values.</p>
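
    <p>As a sketch of those options (the setting names and defaults here are hypothetical, not part of the FileStream example), a <code>ConfigDef</code> can mix the simple form shown above with overloads that take a default value and a <code>Validator</code>:</p>

    <pre class="line-numbers"><code class="language-java">import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Range;
import org.apache.kafka.common.config.ConfigDef.Type;

public class ExampleConnectorConfig {
    public static final ConfigDef CONFIG_DEF = new ConfigDef()
        // required settings with no default
        .define("file", Type.STRING, Importance.HIGH, "Source filename.")
        .define("topic", Type.STRING, Importance.HIGH, "The topic to publish data to.")
        // optional setting with a default value and a range validator
        .define("batch.size", Type.INT, 2000, Range.atLeast(1), Importance.LOW,
                "Maximum number of records to hand to the framework in one batch.");
}</code></pre>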
 
-    <h4><a id="connect_schemas" href="#connect_schemas">Working with Schemas</a></h4>
+    <h4 class="anchor-heading"><a id="connect_schemas" class="anchor-link"></a><a href="#connect_schemas">Working with Schemas</a></h4>
 
     <p>The FileStream connectors are good examples because they are simple, but they also have trivially structured data -- each line is just a string. Almost all practical connectors will need schemas with more complex data formats.</p>
 
@@ -640,8 +596,7 @@
 
     <p>The API documentation provides a complete reference, but here is a simple example creating a <code>Schema</code> and <code>Struct</code>:</p>
 
-    <pre class="brush: java;">
-    Schema schema = SchemaBuilder.struct().name(NAME)
+    <pre class="line-numbers"><code class="language-java">    Schema schema = SchemaBuilder.struct().name(NAME)
         .field("name", Schema.STRING_SCHEMA)
         .field("age", Schema.INT_SCHEMA)
         .field("admin", SchemaBuilder.bool().defaultValue(false).build())
@@ -649,8 +604,7 @@
 
     Struct struct = new Struct(schema)
         .put("name", "Barbara Liskov")
-        .put("age", 75);
-    </pre>
+        .put("age", 75);</code></pre>
 
     <p>If you are implementing a source connector, you'll need to decide when and how to create schemas. Where possible, you should avoid recomputing them as much as possible. For example, if your connector is guaranteed to have a fixed schema, create it statically and reuse a single instance.</p>
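+    <p>A minimal sketch of that pattern (the schema name and fields here are invented for the example):</p>
+
+    <pre class="line-numbers"><code class="language-java">    // Built once when the class is loaded; every record emitted by the task reuses this instance
+    private static final Schema USER_SCHEMA = SchemaBuilder.struct().name("com.example.User")
+        .field("name", Schema.STRING_SCHEMA)
+        .field("age", Schema.INT32_SCHEMA)
+        .build();
+
+    private Struct toStruct(String name, int age) {
+        return new Struct(USER_SCHEMA).put("name", name).put("age", age);
+    }</code></pre>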
 
@@ -658,7 +612,7 @@
 
     <p>Sink connectors are usually simpler because they are consuming data and therefore do not need to create schemas. However, they should take just as much care to validate that the schemas they receive have the expected format. When the schema does not match -- usually indicating the upstream producer is generating invalid data that cannot be correctly translated to the destination system -- sink connectors should throw an exception to indicate this error to the system.</p>
 
-    <h4><a id="connect_administration" href="#connect_administration">Kafka Connect Administration</a></h4>
+    <h4 class="anchor-heading"><a id="connect_administration" class="anchor-link"></a><a href="#connect_administration">Kafka Connect Administration</a></h4>
 
     <p>
     Kafka Connect's <a href="#connect_rest">REST layer</a> provides a set of APIs to enable administration of the cluster. This includes APIs to view the configuration of connectors and the status of their tasks, as well as to alter their current behavior (e.g. changing configuration and restarting tasks).
@@ -700,8 +654,7 @@
     You can use the REST API to view the current status of a connector and its tasks, including the ID of the worker to which each was assigned. For example, the <code>GET /connectors/file-source/status</code> request shows the status of a connector named <code>file-source</code>:
     </p>
 
-    <pre class="brush: json;">
-    {
+    <pre class="line-numbers"><code class="language-json">    {
     "name": "file-source",
     "connector": {
         "state": "RUNNING",
@@ -714,8 +667,7 @@
         "worker_id": "192.168.1.209:8083"
         }
     ]
-    }
-    </pre>
+    }</code></pre>
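+    <p>The same endpoint can be called from code as well. The following sketch uses the JDK 11 HTTP client against a hypothetical worker at <code>localhost:8083</code>; error handling is omitted:</p>
+
+    <pre class="line-numbers"><code class="language-java">    HttpClient client = HttpClient.newHttpClient();
+    HttpRequest request = HttpRequest.newBuilder(
+            URI.create("http://localhost:8083/connectors/file-source/status"))
+        .GET()
+        .build();
+    // The response body is the JSON document shown above
+    String status = client.send(request, HttpResponse.BodyHandlers.ofString()).body();</code></pre>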
 
     <p>
     Connectors and their tasks publish status updates to a shared topic (configured with <code>status.storage.topic</code>) which all workers in the cluster monitor. Because the workers consume this topic asynchronously, there is typically a (short) delay before a state change is visible through the status API. The following states are possible for a connector or one of its tasks:
diff --git a/docs/design.html b/docs/design.html
index 3745ab5..e6edc2f 100644
--- a/docs/design.html
+++ b/docs/design.html
@@ -16,7 +16,7 @@
 -->
 
 <script id="design-template" type="text/x-handlebars-template">
-    <h3><a id="majordesignelements" href="#majordesignelements">4.1 Motivation</a></h3>
+    <h3 class="anchor-heading"><a id="majordesignelements" class="anchor-link"></a><a href="#majordesignelements">4.1 Motivation</a></h3>
     <p>
     We designed Kafka to be able to act as a unified platform for handling all the real-time data feeds <a href="#introduction">a large company might have</a>. To do this we had to think through a fairly broad set of use cases.
     <p>
@@ -32,7 +32,7 @@
     <p>
     Supporting these uses led us to a design with a number of unique elements, more akin to a database log than a traditional messaging system. We will outline some elements of the design in the following sections.
 
-    <h3><a id="persistence" href="#persistence">4.2 Persistence</a></h3>
+    <h3 class="anchor-heading"><a id="persistence" class="anchor-link"></a><a href="#persistence">4.2 Persistence</a></h3>
     <h4><a id="design_filesystem" href="#design_filesystem">Don't fear the filesystem!</a></h4>
     <p>
     Kafka relies heavily on the filesystem for storing and caching messages. There is a general perception that "disks are slow" which makes people skeptical that a persistent structure can offer competitive performance.
@@ -66,7 +66,7 @@
     <p>
     This style of pagecache-centric design is described in an <a href="http://varnish-cache.org/wiki/ArchitectNotes">article</a> on the design of Varnish here (along with a healthy dose of arrogance).
 
-    <h4><a id="design_constanttime" href="#design_constanttime">Constant Time Suffices</a></h4>
+    <h4 class="anchor-heading"><a id="design_constanttime" class="anchor-link"></a><a href="#design_constanttime">Constant Time Suffices</a></h4>
     <p>
     The persistent data structure used in messaging systems is often a per-consumer queue with an associated BTree or other general-purpose random access data structure to maintain metadata about messages.
     BTrees are the most versatile data structure available, and make it possible to support a wide variety of transactional and non-transactional semantics in the messaging system.
@@ -82,7 +82,7 @@
     Having access to virtually unlimited disk space without any performance penalty means that we can provide some features not usually found in a messaging system. For example, in Kafka, instead of attempting to
     delete messages as soon as they are consumed, we can retain messages for a relatively long period (say a week). This leads to a great deal of flexibility for consumers, as we will describe.
 
-    <h3><a id="maximizingefficiency" href="#maximizingefficiency">4.3 Efficiency</a></h3>
+    <h3 class="anchor-heading"><a id="maximizingefficiency" class="anchor-link"></a><a href="#maximizingefficiency">4.3 Efficiency</a></h3>
     <p>
     We have put significant effort into efficiency. One of our primary use cases is handling web activity data, which is very high volume: each page view may generate dozens of writes. Furthermore, we assume each
     message published is read by at least one consumer (often many), hence we strive to make consumption as cheap as possible.
@@ -127,7 +127,7 @@
     <p>
     For more background on the sendfile and zero-copy support in Java, see this <a href="https://developer.ibm.com/articles/j-zerocopy/">article</a>.
 
-    <h4><a id="design_compression" href="#design_compression">End-to-end Batch Compression</a></h4>
+    <h4 class="anchor-heading"><a id="design_compression" class="anchor-link"></a><a href="#design_compression">End-to-end Batch Compression</a></h4>
     <p>
     In some cases the bottleneck is actually not CPU or disk but network bandwidth. This is particularly true for a data pipeline that needs to send messages between data centers over a wide-area network. Of course,
     the user can always compress its messages one at a time without any support needed from Kafka, but this can lead to very poor compression ratios as much of the redundancy is due to repetition between messages of
@@ -138,9 +138,9 @@
     <p>
     Kafka supports GZIP, Snappy, LZ4 and ZStandard compression protocols. More details on compression can be found <a href="https://cwiki.apache.org/confluence/display/KAFKA/Compression">here</a>.
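+    <p>
+    As a sketch (the choice of zstd below is arbitrary), enabling end-to-end batch compression is a single producer setting:
+    <pre class="line-numbers"><code class="language-java">    Properties props = new Properties();
+    props.put("bootstrap.servers", "localhost:9092");
+    // Batches are compressed on the producer and stay compressed on the broker and in fetch responses
+    props.put("compression.type", "zstd");</code></pre>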
 
-    <h3><a id="theproducer" href="#theproducer">4.4 The Producer</a></h3>
+    <h3 class="anchor-heading"><a id="theproducer" class="anchor-link"></a><a href="#theproducer">4.4 The Producer</a></h3>
 
-    <h4><a id="design_loadbalancing" href="#design_loadbalancing">Load balancing</a></h4>
+    <h4 class="anchor-heading"><a id="design_loadbalancing" class="anchor-link"></a><a href="#design_loadbalancing">Load balancing</a></h4>
     <p>
     The producer sends data directly to the broker that is the leader for the partition without any intervening routing tier. To help the producer do this all Kafka nodes can answer a request for metadata about which
     servers are alive and where the leaders for the partitions of a topic are at any given time to allow the producer to appropriately direct its requests.
@@ -150,7 +150,7 @@
     chosen was a user id then all data for a given user would be sent to the same partition. This in turn will allow consumers to make locality assumptions about their consumption. This style of partitioning is explicitly
     designed to allow locality-sensitive processing in consumers.
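+    <p>
+    For example, given an existing producer whose keys and values are strings, a semantic (keyed) send looks like the following sketch; the topic name and user id are made up:
+    <pre class="line-numbers"><code class="language-java">    // Records with the same key hash to the same partition, preserving per-user ordering
+    producer.send(new ProducerRecord&lt;&gt;("page-views", "user-42", "viewed /home"));</code></pre>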
 
-    <h4><a id="design_asyncsend" href="#design_asyncsend">Asynchronous send</a></h4>
+    <h4 class="anchor-heading"><a id="design_asyncsend" class="anchor-link"></a><a href="#design_asyncsend">Asynchronous send</a></h4>
     <p>
     Batching is one of the big drivers of efficiency, and to enable batching the Kafka producer will attempt to accumulate data in memory and to send out larger batches in a single request. The batching can be configured
     to accumulate no more than a fixed number of messages and to wait no longer than some fixed latency bound (say 64k or 10 ms). This allows the accumulation of more bytes to send, and fewer, larger I/O operations on the
@@ -159,12 +159,12 @@
     Details on <a href="#producerconfigs">configuration</a> and the <a href="http://kafka.apache.org/082/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html">api</a> for the producer can be found
     elsewhere in the documentation.
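+    <p>
+    For instance, the batching behaviour described above maps onto two producer settings (the values here are illustrative, not recommendations):
+    <pre class="line-numbers"><code class="language-java">    Properties props = new Properties();
+    props.put("bootstrap.servers", "localhost:9092");
+    // Send a batch once 64 KB has accumulated for a partition, or after 10 ms, whichever comes first
+    props.put("batch.size", 65536);
+    props.put("linger.ms", 10);</code></pre>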
 
-    <h3><a id="theconsumer" href="#theconsumer">4.5 The Consumer</a></h3>
+    <h3 class="anchor-heading"><a id="theconsumer" class="anchor-link"></a><a href="#theconsumer">4.5 The Consumer</a></h3>
 
     The Kafka consumer works by issuing "fetch" requests to the brokers leading the partitions it wants to consume. The consumer specifies its offset in the log with each request and receives back a chunk of log
     beginning from that position. The consumer thus has significant control over this position and can rewind it to re-consume data if need be.
 
-    <h4><a id="design_pull" href="#design_pull">Push vs. pull</a></h4>
+    <h4 class="anchor-heading"><a id="design_pull" class="anchor-link"></a><a href="#design_pull">Push vs. pull</a></h4>
     <p>
     An initial question we considered is whether consumers should pull data from brokers or brokers should push data to the consumer. In this respect Kafka follows a more traditional design, shared by most messaging
     systems, where data is pushed to the broker from the producer and pulled from the broker by the consumer. Some logging-centric systems, such as <a href="http://github.com/facebook/scribe">Scribe</a> and
@@ -187,7 +187,7 @@
     scale led us to feel that involving thousands of disks in the system across many applications would not actually make things more reliable and would be a nightmare to operate. And in practice we have found that we
     can run a pipeline with strong SLAs at large scale without a need for producer persistence.
 
-    <h4><a id="design_consumerposition" href="#design_consumerposition">Consumer Position</a></h4>
+    <h4 class="anchor-heading"><a id="design_consumerposition" class="anchor-link"></a><a href="#design_consumerposition">Consumer Position</a></h4>
     Keeping track of <i>what</i> has been consumed is, surprisingly, one of the key performance points of a messaging system.
     <p>
     Most messaging systems keep metadata about what messages have been consumed on the broker. That is, as a message is handed out to a consumer, the broker either records that fact locally immediately or it may wait
@@ -208,7 +208,7 @@
     There is a side benefit of this decision. A consumer can deliberately <i>rewind</i> back to an old offset and re-consume data. This violates the common contract of a queue, but turns out to be an essential feature
     for many consumers. For example, if the consumer code has a bug and is discovered after some messages are consumed, the consumer can re-consume those messages once the bug is fixed.
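+    <p>
+    Given an already constructed Java consumer, such a rewind is a single call; the topic, partition and offset below are placeholders:
+    <pre class="line-numbers"><code class="language-java">    TopicPartition tp = new TopicPartition("orders", 0);
+    consumer.assign(Collections.singletonList(tp));
+    // Move the fetch position back to offset 42; subsequent polls re-consume from there
+    consumer.seek(tp, 42L);</code></pre>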
 
-    <h4><a id="design_offlineload" href="#design_offlineload">Offline Data Load</a></h4>
+    <h4 class="anchor-heading"><a id="design_offlineload" class="anchor-link"></a><a href="#design_offlineload">Offline Data Load</a></h4>
 
     Scalable persistence allows for the possibility of consumers that only periodically consume such as batch data loads that periodically bulk-load data into an offline system such as Hadoop or a relational data
     warehouse.
@@ -216,7 +216,7 @@
     In the case of Hadoop we parallelize the data load by splitting the load over individual map tasks, one for each node/topic/partition combination, allowing full parallelism in the loading. Hadoop provides the task
     management, and tasks which fail can restart without danger of duplicate data&mdash;they simply restart from their original position.
 
-    <h4><a id="static_membership" href="#static_membership">Static Membership</a></h4>
+    <h4 class="anchor-heading"><a id="static_membership" class="anchor-link"></a><a href="#static_membership">Static Membership</a></h4>
     Static membership aims to improve the availability of stream applications, consumer groups and other applications built on top of the group rebalance protocol.
     The rebalance protocol relies on the group coordinator to allocate entity ids to group members. These generated ids are ephemeral and will change when members restart and rejoin.
     For consumer based apps, this "dynamic membership" can cause a large percentage of tasks re-assigned to different instances during administrative operations
@@ -238,7 +238,7 @@
     For more details, see
     <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances">KIP-345</a>
 
-    <h3><a id="semantics" href="#semantics">4.6 Message Delivery Semantics</a></h3>
+    <h3 class="anchor-heading"><a id="semantics" class="anchor-link"></a><a href="#semantics">4.6 Message Delivery Semantics</a></h3>
     <p>
     Now that we understand a little about how producers and consumers work, let's discuss the semantic guarantees Kafka provides between producer and consumer. Clearly there are multiple possible message delivery
     guarantees that could be provided:
@@ -303,7 +303,7 @@
     offset which makes implementing this feasible (see also <a href="https://kafka.apache.org/documentation/#connect">Kafka Connect</a>). Otherwise, Kafka guarantees at-least-once delivery by default, and allows
     the user to implement at-most-once delivery by disabling retries on the producer and committing offsets in the consumer prior to processing a batch of messages.
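+    <p>
+    The consumer side of that at-most-once pattern looks roughly like the following sketch, where <code>process()</code> is a placeholder for application logic:
+    <pre class="line-numbers"><code class="language-java">    while (true) {
+        ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofMillis(100));
+        // Committing before processing means a crash may skip records but never reprocesses them
+        consumer.commitSync();
+        for (ConsumerRecord&lt;String, String&gt; record : records) {
+            process(record);
+        }
+    }</code></pre>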
 
-    <h3><a id="replication" href="#replication">4.7 Replication</a></h3>
+    <h3 class="anchor-heading"><a id="replication" class="anchor-link"></a><a href="#replication">4.7 Replication</a></h3>
     <p>
     Kafka replicates the log for each topic's partitions across a configurable number of servers (you can set this replication factor on a topic-by-topic basis). This allows automatic failover to these replicas when a
     server in the cluster fails so messages remain available in the presence of failures.
@@ -413,7 +413,7 @@
     your data or violate consistency by taking what remains on an existing server as your new source of truth.
 
 
-    <h4><a id="design_ha" href="#design_ha">Availability and Durability Guarantees</a></h4>
+    <h4 class="anchor-heading"><a id="design_ha" class="anchor-link"></a><a href="#design_ha">Availability and Durability Guarantees</a></h4>
 
     When writing to Kafka, producers can choose whether they wait for the message to be acknowledged by 0, 1 or all (-1) replicas.
     Note that "acknowledgement by all replicas" does not guarantee that the full set of assigned replicas have received the message. By default, when acks=all, acknowledgement happens as soon as all the current in-sync
@@ -432,7 +432,7 @@
     </ol>
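+    <p>
+    On the producer side this choice is the <code>acks</code> setting; a minimal sketch, building on an existing producer <code>Properties</code> object:
+    <pre class="line-numbers"><code class="language-java">    // Wait for acknowledgement from the full current in-sync replica set
+    props.put("acks", "all");
+    // The topic/broker setting min.insync.replicas bounds how small that in-sync set may shrink
+    // before produce requests with acks=all are rejected</code></pre>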
 
 
-    <h4><a id="design_replicamanagment" href="#design_replicamanagment">Replica Management</a></h4>
+    <h4 class="anchor-heading"><a id="design_replicamanagment" class="anchor-link"></a><a href="#design_replicamanagment">Replica Management</a></h4>
 
     The above discussion on replicated logs really covers only a single log, i.e. one topic partition. However a Kafka cluster will manage hundreds or thousands of these partitions. We attempt to balance partitions
     within a cluster in a round-robin fashion to avoid clustering all partitions for high-volume topics on a small number of nodes. Likewise we try to balance leadership so that each node is the leader for a proportional
@@ -443,7 +443,7 @@
     affected partitions in a failed broker. The result is that we are able to batch together many of the required leadership change notifications which makes the election process far cheaper and faster for a large number
     of partitions. If the controller fails, one of the surviving brokers will become the new controller.
 
-    <h3><a id="compaction" href="#compaction">4.8 Log Compaction</a></h3>
+    <h3 class="anchor-heading"><a id="compaction" class="anchor-link"></a><a href="#compaction">4.8 Log Compaction</a></h3>
 
     Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition.  It addresses use cases and scenarios such as restoring
     state after application crashes or system failure, or reloading caches after application restarts during operational maintenance. Let's dive into these use cases in more detail and then describe how compaction works.
@@ -453,8 +453,7 @@
     <p>
     Let's discuss a concrete example of such a stream. Say we have a topic containing user email addresses; every time a user updates their email address we send a message to this topic using their user id as the
     primary key. Now say we send the following messages over some time period for a user with id 123, each message corresponding to a change in email address (messages for other ids are omitted):
-    <pre class="brush: text;">
-        123 => bill@microsoft.com
+    <pre class="line-numbers"><code class="language-text">        123 => bill@microsoft.com
                 .
                 .
                 .
@@ -462,8 +461,7 @@
                 .
                 .
                 .
-        123 => bill@gmail.com
-    </pre>
+        123 => bill@gmail.com</code></pre>
     Log compaction gives us a more granular retention mechanism so that we are guaranteed to retain at least the last update for each primary key (e.g. <code>bill@gmail.com</code>). By doing this we guarantee that the
     log contains a full snapshot of the final value for every key not just keys that changed recently. This means downstream consumers can restore their own state off this topic without us having to retain a complete
     log of all changes.
@@ -497,7 +495,7 @@
     Unlike most log-structured storage systems Kafka is built for subscription and organizes data for fast linear reads and writes. Unlike Databus, Kafka acts as a source-of-truth store so it is useful even in
     situations where the upstream data source would not otherwise be replayable.
 
-    <h4><a id="design_compactionbasics" href="#design_compactionbasics">Log Compaction Basics</a></h4>
+    <h4 class="anchor-heading"><a id="design_compactionbasics" class="anchor-link"></a><a href="#design_compactionbasics">Log Compaction Basics</a></h4>
 
     Here is a high-level picture that shows the logical structure of a Kafka log with the offset for each message.
     <p>
@@ -517,7 +515,10 @@
     <p>
     <img class="centered" src="/{{version}}/images/log_compaction.png">
     <p>
-    <h4><a id="design_compactionguarantees" href="#design_compactionguarantees">What guarantees does log compaction provide?</a></h4>
+    <h4 class="anchor-heading">
+        <a class="anchor-link" id="design_compactionguarantees" href="#design_compactionguarantees"></a>
+        <a href="#design_compactionguarantees">What guarantees does log compaction provide</a>?
+    </h4>
 
     Log compaction guarantees the following:
     <ol>
@@ -531,7 +532,7 @@
     concurrently with reads, it is possible for a consumer to miss delete markers if it lags by more than <code>delete.retention.ms</code>.
     </ol>
 
-    <h4><a id="design_compactiondetails" href="#design_compactiondetails">Log Compaction Details</a></h4>
+    <h4 class="anchor-heading"><a id="design_compactiondetails" class="anchor-link"></a><a href="#design_compactiondetails">Log Compaction Details</a></h4>
 
     Log compaction is handled by the log cleaner, a pool of background threads that recopy log segment files, removing records whose key appears in the head of the log. Each compactor thread works as follows:
     <ol>
@@ -543,11 +544,11 @@
     (assuming 1k messages).
     </ol>
     <p>
-    <h4><a id="design_compactionconfig" href="#design_compactionconfig">Configuring The Log Cleaner</a></h4>
+    <h4 class="anchor-heading"><a id="design_compactionconfig" class="anchor-link"></a><a href="#design_compactionconfig">Configuring The Log Cleaner</a></h4>
 
     The log cleaner is enabled by default. This will start the pool of cleaner threads.
     To enable log cleaning on a particular topic, add the log-specific property
-    <pre class="brush: text;"> log.cleanup.policy=compact</pre>
+    <pre class="language-text"> log.cleanup.policy=compact</code></pre>
 
     The <code>log.cleanup.policy</code> property is a broker configuration setting defined
     in the broker's <code>server.properties</code> file; it affects all of the topics
@@ -555,13 +556,13 @@
     <a href="/documentation.html#brokerconfigs">here</a>.
 
     The log cleaner can be configured to retain a minimum amount of the uncompacted "head" of the log. This is enabled by setting the compaction time lag.
-    <pre class="brush: text;">  log.cleaner.min.compaction.lag.ms</pre>
+    <pre class="language-text">  log.cleaner.min.compaction.lag.ms</code></pre>
 
     This can be used to prevent messages newer than a minimum message age from being subject to compaction. If not set, all log segments are eligible for compaction except for the last segment, i.e. the one currently
     being written to. The active segment will not be compacted even if all of its messages are older than the minimum compaction time lag.
 
     The log cleaner can be configured to ensure a maximum delay after which the uncompacted "head" of the log becomes eligible for log compaction.
-    <pre class="brush: text;">  log.cleaner.max.compaction.lag.ms</pre>
+    <pre class="language-text">  log.cleaner.max.compaction.lag.ms</code></pre>
 
     This can be used to prevent logs with a low produce rate from remaining ineligible for compaction for an unbounded duration. If not set, logs that do not exceed min.cleanable.dirty.ratio are not compacted.
     Note that this compaction deadline is not a hard guarantee since it is still subjected to the availability of log cleaner threads and the actual compaction time.
@@ -570,7 +571,7 @@
     <p>
     Further cleaner configurations are described <a href="/documentation.html#brokerconfigs">here</a>.
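+    <p>
+    The per-topic compaction setting can also be changed at runtime with the Admin client. A sketch, assuming a topic named <code>user-emails</code> and omitting error handling:
+    <pre class="line-numbers"><code class="language-java">    Properties props = new Properties();
+    props.put("bootstrap.servers", "localhost:9092");
+    try (AdminClient admin = AdminClient.create(props)) {
+        ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "user-emails");
+        AlterConfigOp op = new AlterConfigOp(
+            new ConfigEntry("cleanup.policy", "compact"), AlterConfigOp.OpType.SET);
+        // Applies log.cleanup.policy=compact to this topic only
+        admin.incrementalAlterConfigs(Collections.singletonMap(topic, Collections.singletonList(op)))
+            .all().get();
+    }</code></pre>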
 
-    <h3><a id="design_quotas" href="#design_quotas">4.9 Quotas</a></h3>
+    <h3 class="anchor-heading"><a id="design_quotas" class="anchor-link"></a><a href="#design_quotas">4.9 Quotas</a></h3>
     <p>
    A Kafka cluster has the ability to enforce quotas on requests to control the broker resources used by clients. Two types
     of client quotas can be enforced by Kafka brokers for each group of clients sharing a quota:
@@ -580,14 +581,17 @@
     </ol>
     </p>
 
-    <h4><a id="design_quotasnecessary" href="#design_quotasnecessary">Why are quotas necessary?</a></h4>
+    <h4 class="anchor-heading">
+        <a class="anchor-link" id="design_quotasnecessary" href="#design_quotasnecessary"></a>
+        <a href="#design_quotasnecessary">Why are quotas necessary</a>?
+    </h4>
     <p>
     It is possible for producers and consumers to produce/consume very high volumes of data or generate requests at a very high
     rate and thus monopolize broker resources, cause network saturation and generally DoS other clients and the brokers themselves.
     Having quotas protects against these issues and is all the more important in large multi-tenant clusters where a small set of badly behaved clients can degrade user experience for the well behaved ones.
     In fact, when running Kafka as a service this even makes it possible to enforce API limits according to an agreed upon contract.
     </p>
-    <h4><a id="design_quotasgroups" href="#design_quotasgroups">Client groups</a></h4>
+    <h4 class="anchor-heading"><a id="design_quotasgroups" class="anchor-link"></a><a href="#design_quotasgroups">Client groups</a></h4>
         The identity of Kafka clients is the user principal which represents an authenticated user in a secure cluster. In a cluster that supports unauthenticated clients, user principal is a grouping of unauthenticated
         users
         chosen by the broker using a configurable <code>PrincipalBuilder</code>. Client-id is a logical grouping of clients with a meaningful name chosen by the client application. The tuple (user, client-id) defines
@@ -596,7 +600,7 @@
         Quotas can be applied to (user, client-id), user or client-id groups. For a given connection, the most specific quota matching the connection is applied. All connections of a quota group share the quota configured for the group.
         For example, if (user="test-user", client-id="test-client") has a produce quota of 10MB/sec, this is shared across all producer instances of user "test-user" with the client-id "test-client".
     </p>
-    <h4><a id="design_quotasconfig" href="#design_quotasconfig">Quota Configuration</a></h4>
+    <h4 class="anchor-heading"><a id="design_quotasconfig" class="anchor-link"></a><a href="#design_quotasconfig">Quota Configuration</a></h4>
     <p>
         Quota configuration may be defined for (user, client-id), user and client-id groups. It is possible to override the default quota at any of the quota levels that needs a higher (or even lower) quota.
         The mechanism is similar to the per-topic log config overrides.
@@ -620,14 +624,14 @@
         Broker properties (quota.producer.default, quota.consumer.default) can also be used to set defaults of network bandwidth quotas for client-id groups. These properties are being deprecated and will be removed in a later release.
         Default quotas for client-id can be set in Zookeeper similar to the other quota overrides and defaults.
     </p>
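+    <p>
+        Quota overrides can also be managed programmatically with the Admin client APIs introduced in Kafka 2.6. A sketch, given an existing <code>AdminClient</code> and using illustrative entity names and limits:
+    </p>
+    <pre class="line-numbers"><code class="language-java">    Map&lt;String, String&gt; entity = new HashMap&lt;&gt;();
+    entity.put(ClientQuotaEntity.USER, "test-user");
+    entity.put(ClientQuotaEntity.CLIENT_ID, "test-client");
+    // 10 MB/sec produce quota for (user=test-user, client-id=test-client), applied per broker
+    ClientQuotaAlteration alteration = new ClientQuotaAlteration(
+        new ClientQuotaEntity(entity),
+        Collections.singletonList(new ClientQuotaAlteration.Op("producer_byte_rate", 10485760.0)));
+    admin.alterClientQuotas(Collections.singletonList(alteration)).all().get();</code></pre>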
-    <h4><a id="design_quotasbandwidth" href="#design_quotasbandwidth">Network Bandwidth Quotas</a></h4>
+    <h4 class="anchor-heading"><a id="design_quotasbandwidth" class="anchor-link"></a><a href="#design_quotasbandwidth">Network Bandwidth Quotas</a></h4>
     <p>
         Network bandwidth quotas are defined as the byte rate threshold for each group of clients sharing a quota.
         By default, each unique client group receives a fixed quota in bytes/sec as configured by the cluster.
         This quota is defined on a per-broker basis. Each group of clients can publish/fetch a maximum of X bytes/sec
         per broker before clients are throttled.
     </p>
-    <h4><a id="design_quotascpu" href="#design_quotascpu">Request Rate Quotas</a></h4>
+    <h4 class="anchor-heading"><a id="design_quotascpu" class="anchor-link"></a><a href="#design_quotascpu">Request Rate Quotas</a></h4>
     <p>
         Request rate quotas are defined as the percentage of time a client can utilize on request handler I/O
         threads and network threads of each broker within a quota window. A quota of <tt>n%</tt> represents
@@ -637,7 +641,7 @@
         on the number of cores available on the broker host, request rate quotas represent the total percentage of CPU
         that may be used by each group of clients sharing the quota.
     </p>
-    <h4><a id="design_quotasenforcement" href="#design_quotasenforcement">Enforcement</a></h4>
+    <h4 class="anchor-heading"><a id="design_quotasenforcement" class="anchor-link"></a><a href="#design_quotasenforcement">Enforcement</a></h4>
     <p>
         By default, each unique client group receives a fixed quota as configured by the cluster.
         This quota is defined on a per-broker basis. Each client can utilize this quota per broker before it gets throttled. We decided that defining these quotas per broker is much better than
diff --git a/docs/documentation.html b/docs/documentation.html
index ee914f2..5d4b5c5 100644
--- a/docs/documentation.html
+++ b/docs/documentation.html
@@ -23,50 +23,54 @@
 
 <div class="content documentation documentation--current">
 	<!--#include virtual="../includes/_nav.htm" -->
+  <div class="toc-handle-container">
+    <div class="toc-handle">&lt;</div>
+  </div>
+  <div class="docs-nav">
+    <!--#include virtual="toc.html" -->
+  </div>
 	<div class="right">
-		<!--#include virtual="../includes/_docs_banner.htm" -->
+		<!--//#include virtual="../includes/_docs_banner.htm" -->
     <h1>Documentation</h1>
     <h3>Kafka 2.6 Documentation</h3>
     Prior releases: <a href="/07/documentation.html">0.7.x</a>, <a href="/08/documentation.html">0.8.0</a>, <a href="/081/documentation.html">0.8.1.X</a>, <a href="/082/documentation.html">0.8.2.X</a>, <a href="/090/documentation.html">0.9.0.X</a>, <a href="/0100/documentation.html">0.10.0.X</a>, <a href="/0101/documentation.html">0.10.1.X</a>, <a href="/0102/documentation.html">0.10.2.X</a>, <a href="/0110/documentation.html">0.11.0.X</a>, <a href="/10/documentation.html">1.0.X</a>, <a  [...]
 
-    <!--#include virtual="toc.html" -->
-
-    <h2><a id="gettingStarted" href="#gettingStarted">1. Getting Started</a></h2>
-      <h3><a id="introduction" href="#introduction">1.1 Introduction</a></h3>
+    <h2 class="anchor-heading"><a id="gettingStarted" class="anchor-link"></a><a href="#gettingStarted">1. Getting Started</a></h2>
+      <h3 class="anchor-heading"><a id="introduction" class="anchor-link"></a><a href="#introduction">1.1 Introduction</a></h3>
       <!--#include virtual="introduction.html" -->
-      <h3><a id="uses" href="#uses">1.2 Use Cases</a></h3>
+      <h3 class="anchor-heading"><a id="uses" class="anchor-link"></a><a href="#uses">1.2 Use Cases</a></h3>
       <!--#include virtual="uses.html" -->
-      <h3><a id="quickstart" href="#quickstart">1.3 Quick Start</a></h3>
-      <!--#include virtual="quickstart.html" -->
-      <h3><a id="ecosystem" href="#ecosystem">1.4 Ecosystem</a></h3>
+      <h3 class="anchor-heading"><a id="quickstart" class="anchor-link"></a><a href="#quickstart">1.3 Quick Start</a></h3>
+      <!--#include virtual="quickstart-zookeeper.html" -->
+      <h3 class="anchor-heading"><a id="ecosystem" class="anchor-link"></a><a href="#ecosystem">1.4 Ecosystem</a></h3>
       <!--#include virtual="ecosystem.html" -->
-      <h3><a id="upgrade" href="#upgrade">1.5 Upgrading From Previous Versions</a></h3>
+      <h3 class="anchor-heading"><a id="upgrade" class="anchor-link"></a><a href="#upgrade">1.5 Upgrading From Previous Versions</a></h3>
       <!--#include virtual="upgrade.html" -->
 
-    <h2><a id="api" href="#api">2. APIs</a></h2>
+    <h2 class="anchor-heading"><a id="api" class="anchor-link"></a><a href="#api">2. APIs</a></h2>
 
     <!--#include virtual="api.html" -->
 
-    <h2><a id="configuration" href="#configuration">3. Configuration</a></h2>
+    <h2 class="anchor-heading"><a id="configuration" class="anchor-link"></a><a href="#configuration">3. Configuration</a></h2>
 
     <!--#include virtual="configuration.html" -->
 
-    <h2><a id="design" href="#design">4. Design</a></h2>
+    <h2 class="anchor-heading"><a id="design" class="anchor-link"></a><a href="#design">4. Design</a></h2>
 
     <!--#include virtual="design.html" -->
 
-    <h2><a id="implementation" href="#implementation">5. Implementation</a></h2>
+    <h2 class="anchor-heading"><a id="implementation" class="anchor-link"></a><a href="#implementation">5. Implementation</a></h2>
 
     <!--#include virtual="implementation.html" -->
 
-    <h2><a id="operations" href="#operations">6. Operations</a></h2>
+    <h2 class="anchor-heading"><a id="operations" class="anchor-link"></a><a href="#operations">6. Operations</a></h2>
 
     <!--#include virtual="ops.html" -->
 
-    <h2><a id="security" href="#security">7. Security</a></h2>
+    <h2 class="anchor-heading"><a id="security" class="anchor-link"></a><a href="#security">7. Security</a></h2>
     <!--#include virtual="security.html" -->
 
-    <h2><a id="connect" href="#connect">8. Kafka Connect</a></h2>
+    <h2 class="anchor-heading"><a id="connect" class="anchor-link"></a><a href="#connect">8. Kafka Connect</a></h2>
     <!--#include virtual="connect.html" -->
 
     <h2><a id="streams" href="/documentation/streams">9. Kafka Streams</a></h2>
diff --git a/docs/documentation/streams/developer-guide/dsl-topology-naming.html b/docs/documentation/streams/developer-guide/dsl-topology-naming.html
index db5eee3..9f42a04 100644
--- a/docs/documentation/streams/developer-guide/dsl-topology-naming.html
+++ b/docs/documentation/streams/developer-guide/dsl-topology-naming.html
@@ -16,4 +16,4 @@
 -->
 
 <!-- should always link the latest release's documentation -->
-<!--#include virtual="../../../streams/dsl-topology-naming.html" -->
+<!--#include virtual="../../../streams/developer-guide/dsl-topology-naming.html" -->
diff --git a/docs/implementation.html b/docs/implementation.html
index c28c33b..8c89546 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -16,24 +16,23 @@
 -->
 
 <script id="implementation-template" type="text/x-handlebars-template">
-    <h3><a id="networklayer" href="#networklayer">5.1 Network Layer</a></h3>
+    <h3 class="anchor-heading"><a id="networklayer" class="anchor-link"></a><a href="#networklayer">5.1 Network Layer</a></h3>
     <p>
     The network layer is a fairly straight-forward NIO server, and will not be described in great detail. The sendfile implementation is done by giving the <code>MessageSet</code> interface a <code>writeTo</code> method. This allows the file-backed message set to use the more efficient <code>transferTo</code> implementation instead of an in-process buffered write. The threading model is a single acceptor thread and <i>N</i> processor threads which handle a fixed number of connections eac [...]
     </p>
-    <h3><a id="messages" href="#messages">5.2 Messages</a></h3>
+    <h3 class="anchor-heading"><a id="messages" class="anchor-link"></a><a href="#messages">5.2 Messages</a></h3>
     <p>
     Messages consist of a variable-length header, a variable-length opaque key byte array and a variable-length opaque value byte array. The format of the header is described in the following section.
     Leaving the key and value opaque is the right decision: there is a great deal of progress being made on serialization libraries right now, and any particular choice is unlikely to be right for all uses. Needless to say a particular application using Kafka would likely mandate a particular serialization type as part of its usage. The <code>RecordBatch</code> interface is simply an iterator over messages with specialized methods for bulk reading and writing to an NIO <code>Channel</code>.</p>
 
-    <h3><a id="messageformat" href="#messageformat">5.3 Message Format</a></h3>
+    <h3 class="anchor-heading"><a id="messageformat" class="anchor-link"></a><a href="#messageformat">5.3 Message Format</a></h3>
     <p>
     Messages (aka Records) are always written in batches. The technical term for a batch of messages is a record batch, and a record batch contains one or more records. In the degenerate case, we could have a record batch containing a single record.
     Record batches and records have their own headers. The format of each is described below. </p>
 
-    <h4><a id="recordbatch" href="#recordbatch">5.3.1 Record Batch</a></h4>
+    <h4 class="anchor-heading"><a id="recordbatch" class="anchor-link"></a><a href="#recordbatch">5.3.1 Record Batch</a></h4>
 	<p> The following is the on-disk format of a RecordBatch. </p>
-	<p><pre class="brush: java;">
-		baseOffset: int64
+	<p><pre class="line-numbers"><code class="language-java">		baseOffset: int64
 		batchLength: int32
 		partitionLeaderEpoch: int32
 		magic: int8 (current magic value is 2)
@@ -55,8 +54,7 @@
 		producerId: int64
 		producerEpoch: int16
 		baseSequence: int32
-		records: [Record]
-	</pre></p>
+		records: [Record]</code></pre></p>
     <p> Note that when compression is enabled, the compressed record data is serialized directly following the count of the number of records. </p>
 
     <p>The CRC covers the data from the attributes to the end of the batch (i.e. all the bytes that follow the CRC). It is located after the magic byte, which
@@ -70,19 +68,16 @@
     it is possible to have empty batches in the log when all the records in the batch are cleaned but batch is still retained in order to preserve a producer's last sequence number. One oddity here is that the firstTimestamp
     field is not preserved during compaction, so it will change if the first record in the batch is compacted away.</p>
 
-    <h5><a id="controlbatch" href="#controlbatch">5.3.1.1 Control Batches</a></h5>
+    <h5 class="anchor-heading"><a id="controlbatch" class="anchor-link"></a><a href="#controlbatch">5.3.1.1 Control Batches</a></h5>
     <p>A control batch contains a single record called the control record. Control records should not be passed on to applications. Instead, they are used by consumers to filter out aborted transactional messages.</p>
     <p> The key of a control record conforms to the following schema: </p>
-    <p><pre class="brush: java">
-       version: int16 (current version is 0)
-       type: int16 (0 indicates an abort marker, 1 indicates a commit)
-    </pre></p>
+    <p><pre class="line-numbers"><code class="language-java">       version: int16 (current version is 0)
+       type: int16 (0 indicates an abort marker, 1 indicates a commit)</code></pre></p>
     <p>The schema for the value of a control record is dependent on the type. The value is opaque to clients.</p>
 
-    <h4><a id="record" href="#record">5.3.2 Record</a></h4>
+    <h4 class="anchor-heading"><a id="record" class="anchor-link"></a><a href="#record">5.3.2 Record</a></h4>
 	<p>Record level headers were introduced in Kafka 0.11.0. The on-disk format of a record with Headers is delineated below. </p>
-	<p><pre class="brush: java;">
-		length: varint
+	<p><pre class="line-numbers"><code class="language-java">		length: varint
 		attributes: int8
 			bit 0~7: unused
 		timestampDelta: varint
@@ -91,27 +86,23 @@
 		key: byte[]
 		valueLen: varint
 		value: byte[]
-		Headers => [Header]
-	</pre></p>
-	<h5><a id="recordheader" href="#recordheader">5.3.2.1 Record Header</a></h5>
-	<p><pre class="brush: java;">
-		headerKeyLength: varint
+		Headers => [Header]</code></pre></p>
+	<h5 class="anchor-heading"><a id="recordheader" class="anchor-link"></a><a href="#recordheader">5.3.2.1 Record Header</a></h5>
+	<p><pre class="line-numbers"><code class="language-java">		headerKeyLength: varint
 		headerKey: String
 		headerValueLength: varint
-		Value: byte[]
-	</pre></p>
+		Value: byte[]</code></pre></p>
     <p>We use the same varint encoding as Protobuf. More information on the latter can be found <a href="https://developers.google.com/protocol-buffers/docs/encoding#varints">here</a>. The count of headers in a record
     is also encoded as a varint.</p>
 
-    <h4><a id="messageset" href="#messageset">5.3.3 Old Message Format</a></h4>
+    <h4 class="anchor-heading"><a id="messageset" class="anchor-link"></a><a href="#messageset">5.3.3 Old Message Format</a></h4>
     <p>
         Prior to Kafka 0.11, messages were transferred and stored in <i>message sets</i>. In a message set, each message has its own metadata. Note that although message sets are represented as an array,
         they are not preceded by an int32 array size like other array elements in the protocol.
     </p>
 
     <b>Message Set:</b><br>
-    <p><pre class="brush: java;">
-    MessageSet (Version: 0) => [offset message_size message]
+    <p><pre class="line-numbers"><code class="language-java">    MessageSet (Version: 0) => [offset message_size message]
         offset => INT64
         message_size => INT32
         message => crc magic_byte attributes key value
@@ -124,10 +115,8 @@
                     2: snappy
                 bit 3~7: unused
             key => BYTES
-            value => BYTES
-    </pre></p>
-    <p><pre class="brush: java;">
-    MessageSet (Version: 1) => [offset message_size message]
+            value => BYTES</code></pre></p>
+    <p><pre class="line-numbers"><code class="language-java">    MessageSet (Version: 1) => [offset message_size message]
         offset => INT64
         message_size => INT32
         message => crc magic_byte attributes timestamp key value
@@ -145,8 +134,7 @@
                 bit 4~7: unused
             timestamp => INT64
             key => BYTES
-            value => BYTES
-    </pre></p>
+            value => BYTES</code></pre></p>
     <p>
         In versions prior to Kafka 0.10, the only supported message format version (which is indicated in the magic value) was 0. Message format version 1 was introduced with timestamp support in version 0.10.
     <ul>
@@ -170,7 +158,7 @@
 
     <p>The crc field contains the CRC32 (and not CRC-32C) of the subsequent message bytes (i.e. from magic byte to the value).</p>
 
-    <h3><a id="log" href="#log">5.4 Log</a></h3>
+    <h3 class="anchor-heading"><a id="log" class="anchor-link"></a><a href="#log">5.4 Log</a></h3>
     <p>
     A log for a topic named "my_topic" with two partitions consists of two directories (namely <code>my_topic_0</code> and <code>my_topic_1</code>) populated with data files containing the messages for that topic. The format of the log files is a sequence of "log entries"; each log entry is a 4 byte integer <i>N</i> storing the message length which is followed by the <i>N</i> message bytes. Each message is uniquely identified by a 64-bit integer <i>offset</i> giving the byte position of [...]
     </p>
@@ -181,11 +169,11 @@
     The use of the message offset as the message id is unusual. Our original idea was to use a GUID generated by the producer, and maintain a mapping from GUID to offset on each broker. But since a consumer must maintain an ID for each server, the global uniqueness of the GUID provides no value. Furthermore, the complexity of maintaining the mapping from a random id to an offset requires a heavy weight index structure which must be synchronized with disk, essentially requiring a full per [...]
     </p>
     <img class="centered" src="/{{version}}/images/kafka_log.png">
-    <h4><a id="impl_writes" href="#impl_writes">Writes</a></h4>
+    <h4 class="anchor-heading"><a id="impl_writes" class="anchor-link"></a><a href="#impl_writes">Writes</a></h4>
     <p>
     The log allows serial appends which always go to the last file. This file is rolled over to a fresh file when it reaches a configurable size (say 1GB). The log takes two configuration parameters: <i>M</i>, which gives the number of messages to write before forcing the OS to flush the file to disk, and <i>S</i>, which gives a number of seconds after which a flush is forced. This gives a durability guarantee of losing at most <i>M</i> messages or <i>S</i> seconds of data in the event o [...]
     </p>
-    <h4><a id="impl_reads" href="#impl_reads">Reads</a></h4>
+    <h4 class="anchor-heading"><a id="impl_reads" class="anchor-link"></a><a href="#impl_reads">Reads</a></h4>
     <p>
     Reads are done by giving the 64-bit logical offset of a message and an <i>S</i>-byte max chunk size. This will return an iterator over the messages contained in the <i>S</i>-byte buffer. <i>S</i> is intended to be larger than any single message, but in the event of an abnormally large message, the read can be retried multiple times, each time doubling the buffer size, until the message is read successfully. A maximum message and buffer size can be specified to make the server reject  [...]
     </p>
@@ -198,26 +186,22 @@
 
     <p> The following is the format of the results sent to the consumer.
 
-    <pre class="brush: text;">
-    MessageSetSend (fetch result)
+    <pre class="line-numbers"><code class="language-text">    MessageSetSend (fetch result)
 
     total length     : 4 bytes
     error code       : 2 bytes
     message 1        : x bytes
     ...
-    message n        : x bytes
-    </pre>
+    message n        : x bytes</code></pre>
 
-    <pre class="brush: text;">
-    MultiMessageSetSend (multiFetch result)
+    <pre class="line-numbers"><code class="language-text">    MultiMessageSetSend (multiFetch result)
 
     total length       : 4 bytes
     error code         : 2 bytes
     messageSetSend 1
     ...
-    messageSetSend n
-    </pre>
-    <h4><a id="impl_deletes" href="#impl_deletes">Deletes</a></h4>
+    messageSetSend n</code></pre>
+    <h4 class="anchor-heading"><a id="impl_deletes" class="anchor-link"></a><a href="#impl_deletes">Deletes</a></h4>
     <p>
     Data is deleted one log segment at a time. The log manager applies two metrics to identify segments which are
         eligible for deletion: time and size. For time-based policies, the record timestamps are considered, with the
@@ -229,7 +213,7 @@
         style segment list implementation that provides consistent views to allow a binary search to proceed on an
         immutable static snapshot view of the log segments while deletes are progressing.
     </p>
-    <h4><a id="impl_guarantees" href="#impl_guarantees">Guarantees</a></h4>
+    <h4 class="anchor-heading"><a id="impl_guarantees" class="anchor-link"></a><a href="#impl_guarantees">Guarantees</a></h4>
     <p>
     The log provides a configuration parameter <i>M</i> which controls the maximum number of messages that are written before forcing a flush to disk. On startup a log recovery process is run that iterates over all messages in the newest log segment and verifies that each message entry is valid. A message entry is valid if the sum of its size and offset are less than the length of the file AND the CRC32 of the message payload matches the CRC stored with the message. In the event corrupti [...]
     </p>
@@ -237,8 +221,8 @@
     Note that two kinds of corruption must be handled: truncation in which an unwritten block is lost due to a crash, and corruption in which a nonsense block is ADDED to the file. The reason for this is that in general the OS makes no guarantee of the write order between the file inode and the actual block data so in addition to losing written data the file can gain nonsense data if the inode is updated with a new size but a crash occurs before the block containing that data is written. [...]
     </p>
 
-    <h3><a id="distributionimpl" href="#distributionimpl">5.5 Distribution</a></h3>
-    <h4><a id="impl_offsettracking" href="#impl_offsettracking">Consumer Offset Tracking</a></h4>
+    <h3 class="anchor-heading"><a id="distributionimpl" class="anchor-link"></a><a href="#distributionimpl">5.5 Distribution</a></h3>
+    <h4 class="anchor-heading"><a id="impl_offsettracking" class="anchor-link"></a><a href="#impl_offsettracking">Consumer Offset Tracking</a></h4>
     <p>
     The Kafka consumer tracks the maximum offset it has consumed in each partition and has the capability to commit offsets so
         that it can resume from those offsets in the event of a restart. Kafka provides the option to store all the offsets for
@@ -265,36 +249,32 @@
         CoordinatorLoadInProgressException and the consumer may retry the OffsetFetchRequest after backing off.
     </p>
 
-    <h4><a id="impl_zookeeper" href="#impl_zookeeper">ZooKeeper Directories</a></h4>
+    <h4 class="anchor-heading"><a id="impl_zookeeper" class="anchor-link"></a><a href="#impl_zookeeper">ZooKeeper Directories</a></h4>
     <p>
     The following gives the ZooKeeper structures and algorithms used for co-ordination between consumers and brokers.
     </p>
 
-    <h4><a id="impl_zknotation" href="#impl_zknotation">Notation</a></h4>
+    <h4 class="anchor-heading"><a id="impl_zknotation" class="anchor-link"></a><a href="#impl_zknotation">Notation</a></h4>
     <p>
     When an element in a path is denoted <code>[xyz]</code>, that means that the value of xyz is not fixed and there is in fact a ZooKeeper znode for each possible value of xyz. For example <code>/topics/[topic]</code> would be a directory named /topics containing a sub-directory for each topic name. Numerical ranges are also given such as <code>[0...5]</code> to indicate the subdirectories 0, 1, 2, 3, 4. An arrow <code>-></code> is used to indicate the contents of a znode. For example < [...]
     </p>
 
-    <h4><a id="impl_zkbroker" href="#impl_zkbroker">Broker Node Registry</a></h4>
-    <pre class="brush: json;">
-    /brokers/ids/[0...N] --> {"jmx_port":...,"timestamp":...,"endpoints":[...],"host":...,"version":...,"port":...} (ephemeral node)
-    </pre>
+    <h4 class="anchor-heading"><a id="impl_zkbroker" class="anchor-link"></a><a href="#impl_zkbroker">Broker Node Registry</a></h4>
+    <pre class="line-numbers"><code class="language-json">    /brokers/ids/[0...N] --> {"jmx_port":...,"timestamp":...,"endpoints":[...],"host":...,"version":...,"port":...} (ephemeral node)</code></pre>
     <p>
     This is a list of all present broker nodes, each of which provides a unique logical broker id which identifies it to consumers (which must be given as part of its configuration). On startup, a broker node registers itself by creating a znode with the logical broker id under /brokers/ids. The purpose of the logical broker id is to allow a broker to be moved to a different physical machine without affecting consumers. An attempt to register a broker id that is already in use (say becau [...]
     </p>
     <p>
     Since the broker registers itself in ZooKeeper using ephemeral znodes, this registration is dynamic and will disappear if the broker is shut down or dies (thus notifying consumers it is no longer available).
     </p>
-    <h4><a id="impl_zktopic" href="#impl_zktopic">Broker Topic Registry</a></h4>
-    <pre class="brush: json;">
-    /brokers/topics/[topic]/partitions/[0...N]/state --> {"controller_epoch":...,"leader":...,"version":...,"leader_epoch":...,"isr":[...]} (ephemeral node)
-    </pre>
+    <h4 class="anchor-heading"><a id="impl_zktopic" class="anchor-link"></a><a href="#impl_zktopic">Broker Topic Registry</a></h4>
+    <pre class="line-numbers"><code class="language-json">    /brokers/topics/[topic]/partitions/[0...N]/state --> {"controller_epoch":...,"leader":...,"version":...,"leader_epoch":...,"isr":[...]} (ephemeral node)</code></pre>
 
     <p>
     Each broker registers itself under the topics it maintains and stores the number of partitions for that topic.
     </p>
 
-    <h4><a id="impl_clusterid" href="#impl_clusterid">Cluster Id</a></h4>
+    <h4 class="anchor-heading"><a id="impl_clusterid" class="anchor-link"></a><a href="#impl_clusterid">Cluster Id</a></h4>
 
     <p>
         The cluster id is a unique and immutable identifier assigned to a Kafka cluster. The cluster id can have a maximum of 22 characters and the allowed characters are defined by the regular expression [a-zA-Z0-9_\-]+, which corresponds to the characters used by the URL-safe Base64 variant with no padding. Conceptually, it is auto-generated when a cluster is started for the first time.
@@ -303,7 +283,7 @@
         Implementation-wise, it is generated when a broker with version 0.10.1 or later is successfully started for the first time. The broker tries to get the cluster id from the <code>/cluster/id</code> znode during startup. If the znode does not exist, the broker generates a new cluster id and creates the znode with this cluster id.
     </p>
 
-    <h4><a id="impl_brokerregistration" href="#impl_brokerregistration">Broker node registration</a></h4>
+    <h4 class="anchor-heading"><a id="impl_brokerregistration" class="anchor-link"></a><a href="#impl_brokerregistration">Broker node registration</a></h4>
 
     <p>
     The broker nodes are basically independent, so they only publish information about what they have. When a broker joins, it registers itself under the broker node registry directory and writes information about its host name and port. The broker also registers the list of existing topics and their logical partitions in the broker topic registry. New topics are registered dynamically when they are created on the broker.
diff --git a/docs/introduction.html b/docs/introduction.html
index 4b19079..da79386 100644
--- a/docs/introduction.html
+++ b/docs/introduction.html
@@ -18,198 +18,203 @@
 <script><!--#include virtual="js/templateData.js" --></script>
 
 <script id="introduction-template" type="text/x-handlebars-template">
-  <h3> Apache Kafka&reg; is <i>a distributed streaming platform</i>. What exactly does that mean?</h3>
-  <p>A streaming platform has three key capabilities:</p>
-  <ul>
-    <li>Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
-    <li>Store streams of records in a fault-tolerant durable way.
-    <li>Process streams of records as they occur.
-  </ul>
-  <p>Kafka is generally used for two broad classes of applications:</p>
-  <ul>
-    <li>Building real-time streaming data pipelines that reliably get data between systems or applications
-    <li>Building real-time streaming applications that transform or react to the streams of data
-  </ul>
-  <p>To understand how Kafka does these things, let's dive in and explore Kafka's capabilities from the bottom up.</p>
-  <p>First a few concepts:</p>
-  <ul>
-    <li>Kafka is run as a cluster on one or more servers that can span multiple datacenters.
-      <li>The Kafka cluster stores streams of <i>records</i> in categories called <i>topics</i>.
-    <li>Each record consists of a key, a value, and a timestamp.
-  </ul>
-  <p>Kafka has five core APIs:</p>
-  <div style="overflow: hidden;">
-      <ul style="float: left; width: 40%;">
-      <li>The <a href="/documentation.html#producerapi">Producer API</a> allows an application to publish a stream of records to one or more Kafka topics.
-      <li>The <a href="/documentation.html#consumerapi">Consumer API</a> allows an application to subscribe to one or more topics and process the stream of records produced to them.
-    <li>The <a href="/documentation/streams">Streams API</a> allows an application to act as a <i>stream processor</i>, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
-    <li>The <a href="/documentation.html#connect">Connector API</a> allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
-    <li>The <a href="/documentation.html#adminapi">Admin API</a> allows managing and inspecting topics, brokers and other Kafka objects.
-  </ul>
-      <img src="/{{version}}/images/kafka-apis.png" style="float: right; width: 50%;">
-      </div>
-  <p>
-  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
-
-  <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
-  <p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.</p>
-  <p>A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.</p>
-  <p> For each topic, the Kafka cluster maintains a partitioned log that looks like this: </p>
-  <img class="centered" src="/{{version}}/images/log_anatomy.png">
-
-  <p> Each partition is an ordered, immutable sequence of records that is continually appended to&mdash;a structured commit log. The records in the partitions are each assigned a sequential id number called the <i>offset</i> that uniquely identifies each record within the partition.
-  </p>
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_streaming" href="#intro_streaming"></a>
+    <a href="#intro_streaming">What is event streaming?</a>
+  </h4>
   <p>
-  The Kafka cluster durably persists all published records&mdash;whether or not they have been consumed&mdash;using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.
+    Event streaming is the digital equivalent of the human body's central nervous system. It is the
+    technological foundation for the 'always-on' world where businesses are increasingly software-defined 
+    and automated, and where the user of software is more software.
   </p>
-  <img class="centered" src="/{{version}}/images/log_consumer.png" style="width:400px">
   <p>
-  In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".
+    Technically speaking, event streaming is the practice of capturing data in real-time from event sources
+    like databases, sensors, mobile devices, cloud services, and software applications in the form of streams
+    of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting
+    to the event streams in real-time as well as retrospectively; and routing the event streams to different
+    destination technologies as needed. Event streaming thus ensures a continuous flow and interpretation of
+    data so that the right information is at the right place, at the right time.
   </p>
-  <p>
-  This combination of features means that Kafka consumers are very cheap&mdash;they can come and go without much impact on the cluster or on other consumers. For example, you can use our command line tools to "tail" the contents of any topic without changing what is consumed by any existing consumers.
-  </p>
-  <p>
-  The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second they act as the unit of parallelism&mdash;more on that in a bit.
-  </p>
-
-  <h4><a id="intro_distribution" href="#intro_distribution">Distribution</a></h4>
 
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_usage" href="#intro_usage"></a>
+    <a href="#intro_usage">What can I use event streaming for?</a>
+  </h4>
   <p>
-  The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.
-  </p>
-  <p>
-  Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.
-  </p>
-
-  <h4><a id="intro_geo-replication" href="#intro_geo-replication">Geo-Replication</a></h4>
-
-  <p>Kafka MirrorMaker provides geo-replication support for your clusters. With MirrorMaker, messages are replicated across multiple datacenters or cloud regions. You can use this in active/passive scenarios for backup and recovery; or in active/active scenarios to place data closer to your users, or support data locality requirements. </p>
-
-  <h4><a id="intro_producers" href="#intro_producers">Producers</a></h4>
-  <p>
-  Producers publish data to the topics of their choice. The producer is responsible for choosing which record to assign to which partition within the topic. This can be done in a round-robin fashion simply to balance load or it can be done according to some semantic partition function (say based on some key in the record). More on the use of partitioning in a second!
-  </p>
-
-  <h4><a id="intro_consumers" href="#intro_consumers">Consumers</a></h4>
-
-  <p>
-  Consumers label themselves with a <i>consumer group</i> name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
-  </p>
-  <p>
-  If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.</p>
-  <p>
-  If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
-  </p>
-  <img class="centered" src="/{{version}}/images/consumer-groups.png">
-  <p>
-    A two server Kafka cluster hosting four partitions (P0-P3) with two consumer groups. Consumer group A has two consumer instances and group B has four.
+    Event streaming is applied to a <a href="/powered-by">wide variety of use cases</a>
+    across a plethora of industries and organizations. Examples include:
   </p>
+  <ul>
+    <li>
+      To process payments and financial transactions in real-time, such as in stock exchanges, banks, and insurance companies.
+    </li>
+    <li>
+      To track and monitor cars, trucks, fleets, and shipments in real-time, such as in logistics and the automotive industry.
+    </li>
+    <li>
+      To continuously capture and analyze sensor data from IoT devices or other equipment, such as in factories and wind farms.
+    </li>
+    <li>
+      To collect and immediately react to customer interactions and orders, such as in retail, the hotel and travel industry, and mobile applications.
+    </li>
+    <li>
+      To monitor patients in hospital care and predict changes in condition to ensure timely treatment in emergencies.
+    </li>
+    <li>
+      To connect, store, and make available data produced by different divisions of a company.
+    </li>
+    <li>
+      To serve as the foundation for data platforms, event-driven architectures, and microservices.
+    </li>
+  </ul>
 
-  <p>
-  More commonly, however, we have found that topics have a small number of consumer groups, one for each "logical subscriber". Each group is composed of many consumer instances for scalability and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber is a cluster of consumers instead of a single process.
-  </p>
-  <p>
-  The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining  [...]
-  </p>
-  <p>
-  Kafka only provides a total order over records <i>within</i> a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
-  </p>
-  <h4><a id="intro_multi-tenancy" href="#intro_multi-tenancy">Multi-tenancy</a></h4>
-  <p>You can deploy Kafka as a multi-tenant solution. Multi-tenancy is enabled by configuring which topics can produce or consume data. There is also operations support for quotas.  Administrators can define and enforce quotas on requests to control the broker resources that are used by clients.  For more information, see the <a href="https://kafka.apache.org/documentation/#security">security documentation</a>. </p>
-  <h4><a id="intro_guarantees" href="#intro_guarantees">Guarantees</a></h4>
-  <p>
-  At a high-level Kafka gives the following guarantees:
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_platform" href="#intro_platform"></a>
+    <a href="#intro_platform">Apache Kafka&reg; is an event streaming platform. What does that mean?</a>
+  </h4>
+  <p>
+    Kafka combines three key capabilities so you can implement
+    <a href="/powered-by">your use cases</a>
+    for event streaming end-to-end with a single battle-tested solution:
+  </p>
+  <ol>
+    <li>
+      To <strong>publish</strong> (write) and <strong>subscribe to</strong> (read) streams of events, including continuous import/export of
+      your data from other systems.
+    </li>
+    <li>
+      To <strong>store</strong> streams of events durably and reliably for as long as you want.
+    </li>
+    <li>
+      To <strong>process</strong> streams of events as they occur or retrospectively.
+    </li>
+  </ol>
+  <p>
+    And all this functionality is provided in a distributed, highly scalable, elastic, fault-tolerant, and
+    secure manner. Kafka can be deployed on bare-metal hardware, virtual machines, and containers, and on-premises
+    as well as in the cloud. You can choose between self-managing your Kafka environments and using fully managed
+    services offered by a variety of vendors.
+  </p>
+
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_nutshell" href="#intro_nutshell"></a>
+    <a href="#intro_nutshell">How does Kafka work in a nutshell?</a>
+  </h4>
+  <p>
+    Kafka is a distributed system consisting of <strong>servers</strong> and <strong>clients</strong> that
+    communicate via a high-performance <a href="/protocol.html">TCP network protocol</a>.
+    It can be deployed on bare-metal hardware, virtual machines, and containers in on-premises as well as cloud
+    environments.
+  </p>
+  <p>
+    <strong>Servers</strong>: Kafka is run as a cluster of one or more servers that can span multiple datacenters
+    or cloud regions. Some of these servers form the storage layer, called the brokers. Other servers run
+    <a href="/documentation/#connect">Kafka Connect</a> to continuously import and export
+    data as event streams to integrate Kafka with your existing systems such as relational databases as well as
+    other Kafka clusters. To let you implement mission-critical use cases, a Kafka cluster is highly scalable
+    and fault-tolerant: if any of its servers fails, the other servers will take over their work to ensure
+    continuous operations without any data loss.
+  </p>
+  <p>
+    <strong>Clients</strong>: They allow you to write distributed applications and microservices that read, write,
+    and process streams of events in parallel, at scale, and in a fault-tolerant manner even in the case of network
+    problems or machine failures. Kafka ships with some such clients included, which are augmented by
+    <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">dozens of clients</a> provided by the Kafka
+    community: clients are available for Java and Scala including the higher-level
+    <a href="/documentation/streams/">Kafka Streams</a> library, for Go, Python, C/C++, and
+    many other programming languages as well as REST APIs.
+  </p>
+
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_concepts_and_terms" href="#intro_concepts_and_terms"></a>
+    <a href="#intro_concepts_and_terms">Main Concepts and Terminology</a>
+  </h4>
+  <p>
+    An <strong>event</strong> records the fact that "something happened" in the world or in your business. It is also called a record or message in the documentation. When you read or write data to Kafka, you do this in the form of events. Conceptually, an event has a key, value, timestamp, and optional metadata headers. Here's an example event:
   </p>
   <ul>
-    <li>Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.
-    <li>A consumer instance sees records in the order they are stored in the log.
-    <li>For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log.
+    <li>
+      Event key: "Alice"
+    </li>
+    <li>
+      Event value: "Made a payment of $200 to Bob"
+    </li>
+    <li>
+      Event timestamp: "Jun. 25, 2020 at 2:06 p.m."
+    </li>
   </ul>
   <p>
-  More details on these guarantees are given in the design section of the documentation.
-  </p>
-  <h4><a id="kafka_mq" href="#kafka_mq">Kafka as a Messaging System</a></h4>
-  <p>
-  How does Kafka's notion of streams compare to a traditional enterprise messaging system?
+    <strong>Producers</strong> are those client applications that publish (write) events to Kafka, and <strong>consumers</strong> are those that subscribe to (read and process) these events. In Kafka, producers and consumers are fully decoupled and agnostic of each other, which is a key design element to achieve the high scalability that Kafka is known for. For example, producers never need to wait for consumers. Kafka provides various <a href="/documentation/#intro_guarantees">guarantee [...]
   </p>
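+  <p>
+    For illustration only, here is a minimal Java sketch of a producer publishing an event like the one above. The broker address
+    <code>localhost:9092</code> and the topic name "payments" are assumptions for this example, not part of any default setup:
+  </p>
+  <pre class="line-numbers"><code class="language-java">import java.util.Properties;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+
+public class PaymentProducer {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
+        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+        // Publish the example event; close() flushes any buffered sends.
+        try (Producer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props)) {
+            producer.send(new ProducerRecord&lt;&gt;("payments", "Alice", "Made a payment of $200 to Bob"));
+        }
+    }
+}</code></pre>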
   <p>
-  Messaging traditionally has two models: <a href="http://en.wikipedia.org/wiki/Message_queue">queuing</a> and <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern">publish-subscribe</a>. In a queue, a pool of consumers may read from a server and each record goes to one of them; in publish-subscribe the record is broadcast to all consumers. Each of these two models has a strength and a weakness. The strength of queuing is that it allows you to divide up the processing  [...]
+    Events are organized and durably stored in <strong>topics</strong>. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder. An example topic name could be "payments". Topics in Kafka are always multi-producer and multi-subscriber: a topic can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to these events. Events in a topic can be read as often as needed—unlike trad [...]
   </p>
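+  <p>
+    As a companion to the producer sketch above, here is an equally illustrative consumer that reads the "payments" topic. The
+    consumer group name and the offset-reset setting are assumptions for this example:
+  </p>
+  <pre class="line-numbers"><code class="language-java">import java.time.Duration;
+import java.util.Collections;
+import java.util.Properties;
+
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+public class PaymentConsumer {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
+        props.put("group.id", "payment-readers");         // assumed consumer group name
+        props.put("auto.offset.reset", "earliest");       // read the topic from the beginning
+        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+        try (KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
+            consumer.subscribe(Collections.singletonList("payments"));
+            // Events are retained, so they can be re-read as often as needed.
+            ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofSeconds(5));
+            for (ConsumerRecord&lt;String, String&gt; record : records) {
+                System.out.printf("%s: %s%n", record.key(), record.value());
+            }
+        }
+    }
+}</code></pre>
+  <p>
+    Running the producer sketch above and then this consumer should print the example event once it has been polled.
+  </p>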
   <p>
-  The consumer group concept in Kafka generalizes these two concepts. As with a queue the consumer group allows you to divide up processing over a collection of processes (the members of the consumer group). As with publish-subscribe, Kafka allows you to broadcast messages to multiple consumer groups.
+    Topics are <strong>partitioned</strong>, meaning a topic is spread over a number of "buckets" located on different Kafka brokers. This distributed placement of your data is very important for scalability because it allows client applications to both read and write the data from/to many brokers at the same time. When a new event is published to a topic, it is actually appended to one of the topic's partitions. Events with the same event key (e.g., a customer or vehicle ID) are written [...]
   </p>
+  <figure class="figure">
+    <img src="/images/streams-and-tables-p1_p4.png" class="figure-image" />
+    <figcaption class="figure-caption">
+      Figure: This example topic has four partitions P1–P4. Two different producer clients are publishing,
+      independently from each other, new events to the topic by writing events over the network to the topic's
+      partitions. Events with the same key (denoted by their color in the figure) are written to the same
+      partition. Note that both producers can write to the same partition if appropriate.
+    </figcaption>
+  </figure>
   <p>
-  The advantage of Kafka's model is that every topic has both these properties&mdash;it can scale processing and is also multi-subscriber&mdash;there is no need to choose one or the other.
+    To make your data fault-tolerant and highly-available, every topic can be <strong>replicated</strong>, even across geo-regions or datacenters, so that there are always multiple brokers that have a copy of the data just in case things go wrong, you want to do maintenance on the brokers, and so on. A common production setting is a replication factor of 3, i.e., there will always be three copies of your data. This replication is performed at the level of topic-partitions.
   </p>
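+  <p>
+    To tie partitioning and replication together, the following sketch creates the example topic programmatically with the Java
+    admin client. The topic name, partition count, and replication factor are assumptions for illustration, not recommendations,
+    and a replication factor of 3 requires at least three brokers:
+  </p>
+  <pre class="line-numbers"><code class="language-java">import java.util.Collections;
+import java.util.Properties;
+
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.NewTopic;
+
+public class CreatePaymentsTopic {
+    public static void main(String[] args) throws Exception {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
+
+        try (AdminClient admin = AdminClient.create(props)) {
+            // Four partitions, three replicas per partition (requires at least three brokers).
+            NewTopic payments = new NewTopic("payments", 4, (short) 3);
+            admin.createTopics(Collections.singleton(payments)).all().get();
+        }
+    }
+}</code></pre>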
   <p>
-  Kafka has stronger ordering guarantees than a traditional messaging system, too.
-  </p>
-  <p>
-  A traditional queue retains records in-order on the server, and if multiple consumers consume from the queue then the server hands out records in the order they are stored. However, although the server hands out records in order, the records are delivered asynchronously to consumers, so they may arrive out of order on different consumers. This effectively means the ordering of the records is lost in the presence of parallel consumption. Messaging systems often work around this by havin [...]
+    This primer should be sufficient for an introduction. The <a href="/documentation/#design">Design</a> section of the documentation explains Kafka's various concepts in full detail, if you are interested.
   </p>
+
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_apis" href="#intro_apis"></a>
+    <a href="#intro_apis">Kafka APIs</a>
+  </h4>
   <p>
-  Kafka does it better. By having a notion of parallelism&mdash;the partition&mdash;within the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool of consumer processes. This is achieved by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. By doing this we ensure that the consumer is the only reader of that partition and consumes the data in order. Sinc [...]
+    In addition to command line tooling for management and administration tasks, Kafka has five core APIs for Java and Scala:
   </p>
+  <ul>
+    <li>
+      The <a href="/documentation.html#adminapi">Admin API</a> to manage and inspect topics, brokers, and other Kafka objects.
+    </li>
+    <li>
+      The <a href="/documentation.html#producerapi">Producer API</a> to publish (write) a stream of events to one or more Kafka topics.
+    </li>
+    <li>
+      The <a href="/documentation.html#consumerapi">Consumer API</a> to subscribe to (read) one or more topics and to process the stream of events produced to them.
+    </li>
+    <li>
+      The <a href="/documentation/streams">Kafka Streams API</a> to implement stream processing applications and microservices. It provides higher-level functions to process event streams, including transformations, stateful operations like aggregations and joins, windowing, processing based on event-time, and more. Input is read from one or more topics in order to generate output to one or more topics, effectively transforming the input streams to output streams.
+    </li>
+    <li>
+      The <a href="/documentation.html#connect">Kafka Connect API</a> to build and run reusable data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications so they can integrate with Kafka. For example, a connector to a relational database like PostgreSQL might capture every change to a set of tables. However, in practice, you typically don't need to implement your own connectors because the Kafka community already  [...]
+    </li>
+  </ul>
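+  <p>
+    As a small, hedged taste of the Kafka Streams API listed above, the following sketch reads the "payments" topic, upper-cases
+    each event value, and writes the result to an assumed output topic named "payments-uppercase". It is only a sketch: error
+    handling and a shutdown hook are omitted, and the application id and topic names are hypothetical:
+  </p>
+  <pre class="line-numbers"><code class="language-java">import java.util.Properties;
+
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.StreamsConfig;
+
+public class PaymentsUppercase {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-uppercase-demo"); // assumed application id
+        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");       // assumed broker address
+        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+        StreamsBuilder builder = new StreamsBuilder();
+        // Read "payments", upper-case each value, and write to the assumed output topic.
+        builder.&lt;String, String&gt;stream("payments")
+               .mapValues(value -&gt; value.toUpperCase())
+               .to("payments-uppercase");
+
+        KafkaStreams streams = new KafkaStreams(builder.build(), props);
+        streams.start();
+    }
+}</code></pre>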
 
-  <h4 id="kafka_storage">Kafka as a Storage System</h4>
+  <!-- TODO: add new section once supporting page is written -->
 
-  <p>
-  Any message queue that allows publishing messages decoupled from consuming them is effectively acting as a storage system for the in-flight messages. What is different about Kafka is that it is a very good storage system.
-  </p>
-  <p>
-  Data written to Kafka is written to disk and replicated for fault-tolerance. Kafka allows producers to wait on acknowledgement so that a write isn't considered complete until it is fully replicated and guaranteed to persist even if the server written to fails.
-  </p>
-  <p>
-  The disk structures Kafka uses scale well&mdash;Kafka will perform the same whether you have 50 KB or 50 TB of persistent data on the server.
-  </p>
-  <p>
-  As a result of taking storage seriously and allowing the clients to control their read position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance, low-latency commit log storage, replication, and propagation.
-  </p>
-  <p>
-  For details about the Kafka's commit log storage and replication design, please read <a href="https://kafka.apache.org/documentation/#design">this</a> page.
-  </p>
-  <h4>Kafka for Stream Processing</h4>
-  <p>
-  It isn't enough to just read, write, and store streams of data, the purpose is to enable real-time processing of streams.
-  </p>
-  <p>
-  In Kafka a stream processor is anything that takes continual streams of  data from input topics, performs some processing on this input, and produces continual streams of data to output topics.
-  </p>
-  <p>
-  For example, a retail application might take in input streams of sales and shipments, and output a stream of reorders and price adjustments computed off this data.
-  </p>
-  <p>
-  It is possible to do simple processing directly using the producer and consumer APIs. However for more complex transformations Kafka provides a fully integrated <a href="/documentation/streams">Streams API</a>. This allows building applications that do non-trivial processing that compute aggregations off of streams or join streams together.
-  </p>
-  <p>
-  This facility helps solve the hard problems this type of application faces: handling out-of-order data, reprocessing input as code changes, performing stateful computations, etc.
-  </p>
-  <p>
-  The streams API builds on the core primitives Kafka provides: it uses the producer and consumer APIs for input, uses Kafka for stateful storage, and uses the same group mechanism for fault tolerance among the stream processor instances.
-  </p>
-  <h4>Putting the Pieces Together</h4>
-  <p>
-  This combination of messaging, storage, and stream processing may seem unusual but it is essential to Kafka's role as a streaming platform.
-  </p>
-  <p>
-  A distributed file system like HDFS allows storing static files for batch processing. Effectively a system like this allows storing and processing <i>historical</i> data from the past.
-  </p>
-  <p>
-  A traditional enterprise messaging system allows processing future messages that will arrive after you subscribe. Applications built in this way process future data as it arrives.
-  </p>
-  <p>
-  Kafka combines both of these capabilities, and the combination is critical both for Kafka usage as a platform for streaming applications as well as for streaming data pipelines.
-  </p>
-  <p>
-  By combining storage and low-latency subscriptions, streaming applications can treat both past and future data the same way. That is a single application can process historical, stored data but rather than ending when it reaches the last record it can keep processing as future data arrives. This is a generalized notion of stream processing that subsumes batch processing as well as message-driven applications.
-  </p>
-  <p>
-  Likewise for streaming data pipelines the combination of subscription to real-time events make it possible to use Kafka for very low-latency pipelines; but the ability to store data reliably make it possible to use it for critical data where the delivery of data must be guaranteed or for integration with offline systems that load data only periodically or may go down for extended periods of time for maintenance. The stream processing facilities make it possible to transform data as it  [...]
-  </p>
-  <p>
-  For more information on the guarantees, APIs, and capabilities Kafka provides see the rest of the <a href="/documentation.html">documentation</a>.
-  </p>
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_more" href="#intro_more"></a>
+    <a href="#intro_more">Where to go from here</a>
+  </h4>
+  <ul>
+    <li>
+      To get hands-on experience with Kafka, follow the <a href="/quickstart">Quickstart</a>.
+    </li>
+    <li>
+      To understand Kafka in more detail, read the <a href="/documentation/">Documentation</a>.
+      You also have your choice of <a href="/books-and-papers">Kafka books and academic papers</a>.
+    </li>
+    <li>
+      Browse through the <a href="/powered-by">Use Cases</a> to learn how other users in our world-wide community are getting value out of Kafka.
+    </li>
+    <li>
+      Join a <a href="/events">local Kafka meetup group</a> and
+      <a href="https://kafka-summit.org/past-events/">watch talks from Kafka Summit</a>, the main conference of the Kafka community.
+    </li>
+  </ul>
 </script>
 
 <div class="p-introduction"></div>
diff --git a/docs/migration.html b/docs/migration.html
index 08a6271..95fc87f 100644
--- a/docs/migration.html
+++ b/docs/migration.html
@@ -16,11 +16,11 @@
 -->
 
 <!--#include virtual="../includes/_header.htm" -->
-<h2><a id="migration" href="#migration">Migrating from 0.7.x to 0.8</a></h2>
+<h2 class="anchor-heading"><a id="migration" class="anchor-link"></a><a href="#migration">Migrating from 0.7.x to 0.8</a></h2>
 
 0.8 is our first (and hopefully last) release with a non-backwards-compatible wire protocol, ZooKeeper layout, and on-disk data format. This was a chance for us to clean up a lot of cruft and start fresh. This means performing a no-downtime upgrade is more painful than normal&mdash;you cannot just swap in the new code in-place.
 
-<h3><a id="migration_steps" href="#migration_steps">Migration Steps</a></h3>
+<h3 class="anchor-heading"><a id="migration_steps" class="anchor-link"></a><a href="#migration_steps">Migration Steps</a></h3>
 
 <ol>
     <li>Set up a new cluster running 0.8.
diff --git a/docs/ops.html b/docs/ops.html
index 931d2b2..e835341 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -18,19 +18,17 @@
 
   Here is some information on actually running Kafka as a production system based on usage and experience at LinkedIn. Please send us any additional tips you know of.
 
-  <h3><a id="basic_ops" href="#basic_ops">6.1 Basic Kafka Operations</a></h3>
+  <h3 class="anchor-heading"><a id="basic_ops" class="anchor-link"></a><a href="#basic_ops">6.1 Basic Kafka Operations</a></h3>
 
   This section will review the most common operations you will perform on your Kafka cluster. All of the tools reviewed in this section are available under the <code>bin/</code> directory of the Kafka distribution and each tool will print details on all possible command line options if it is run with no arguments.
 
-  <h4><a id="basic_ops_add_topic" href="#basic_ops_add_topic">Adding and removing topics</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_add_topic" class="anchor-link"></a><a href="#basic_ops_add_topic">Adding and removing topics</a></h4>
 
   You have the option of either adding topics manually or having them be created automatically when data is first published to a non-existent topic. If topics are auto-created then you may want to tune the default <a href="#topicconfigs">topic configurations</a> used for auto-created topics.
   <p>
   Topics are added and modified using the topic tool:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --bootstrap-server broker_host:port --create --topic my_topic_name \
-        --partitions 20 --replication-factor 3 --config x=y
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --bootstrap-server broker_host:port --create --topic my_topic_name \
+        --partitions 20 --replication-factor 3 --config x=y</code></pre>
   The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3 then up to 2 servers can fail before you will lose access to your data. We recommend you use a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption.
   <p>
   The partition count controls how many logs the topic will be sharded into. There are several impacts of the partition count. First, each partition must fit entirely on a single server. So if you have 20 partitions the full data set (and read and write load) will be handled by no more than 20 servers (not counting replicas). Finally, the partition count impacts the maximum parallelism of your consumers. This is discussed in greater detail in the <a href="#intro_consumers">concepts section</a>.
@@ -39,35 +37,27 @@
   <p>
   The configurations added on the command line override the default settings the server has for things like the length of time data should be retained. The complete set of per-topic configurations is documented <a href="#topicconfigs">here</a>.
 
-  <h4><a id="basic_ops_modify_topic" href="#basic_ops_modify_topic">Modifying topics</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_modify_topic" class="anchor-link"></a><a href="#basic_ops_modify_topic">Modifying topics</a></h4>
 
   You can change the configuration or partitioning of a topic using the same topic tool.
   <p>
   To add partitions you can do
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --bootstrap-server broker_host:port --alter --topic my_topic_name \
-        --partitions 40
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --bootstrap-server broker_host:port --alter --topic my_topic_name \
+        --partitions 40</code></pre>
   Be aware that one use case for partitions is to semantically partition data, and adding partitions doesn't change the partitioning of existing data, so this may disturb consumers if they rely on that partition. That is, if data is partitioned by <code>hash(key) % number_of_partitions</code> then this partitioning will potentially be shuffled by adding partitions, but Kafka will not attempt to automatically redistribute data in any way.
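+  <p>
+  To see the effect, consider the following illustrative Java sketch. It is not Kafka's actual partitioner (the default partitioner hashes the serialized key bytes with murmur2), but any hash-mod scheme behaves the same way when the number of partitions changes:
+  <pre class="line-numbers"><code class="language-java">// Illustrative only: shows why a key's partition can change when partitions are added.
+public class PartitionShuffleDemo {
+    static int partitionFor(String key, int numPartitions) {
+        return (key.hashCode() &amp; 0x7fffffff) % numPartitions;
+    }
+
+    public static void main(String[] args) {
+        String key = "user-42"; // hypothetical record key
+        // The same key can map to a different partition once partitions are added,
+        // which is why Kafka does not re-shuffle existing data automatically.
+        System.out.println(partitionFor(key, 20)); // partition chosen with 20 partitions
+        System.out.println(partitionFor(key, 40)); // possibly different with 40 partitions
+    }
+}</code></pre>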
   <p>
   To add configs:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --add-config x=y
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --add-config x=y</code></pre>
   To remove a config:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --delete-config x
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --delete-config x</code></pre>
   And finally deleting a topic:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --bootstrap-server broker_host:port --delete --topic my_topic_name
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --bootstrap-server broker_host:port --delete --topic my_topic_name</code></pre>
   <p>
   Kafka does not currently support reducing the number of partitions for a topic.
   <p>
   Instructions for changing the replication factor of a topic can be found <a href="#basic_ops_increase_replication_factor">here</a>.
 
-  <h4><a id="basic_ops_restarting" href="#basic_ops_restarting">Graceful shutdown</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_restarting" class="anchor-link"></a><a href="#basic_ops_restarting">Graceful shutdown</a></h4>
 
   The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter cases Kafka supports a more graceful mechanism for stopping a server than just killing it.
 
@@ -78,37 +68,31 @@
   </ol>
 
   Syncing the logs will happen automatically whenever the server is stopped other than by a hard kill, but the controlled leadership migration requires using a special setting:
-  <pre class="brush: text;">
-      controlled.shutdown.enable=true
-  </pre>
+  <pre class="line-numbers"><code class="language-text">      controlled.shutdown.enable=true</code></pre>
   Note that controlled shutdown will only succeed if <i>all</i> the partitions hosted on the broker have replicas (i.e. the replication factor is greater than 1 <i>and</i> at least one of these replicas is alive). This is generally what you want since shutting down the last replica would make that topic partition unavailable.
 
-  <h4><a id="basic_ops_leader_balancing" href="#basic_ops_leader_balancing">Balancing leadership</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_leader_balancing" class="anchor-link"></a><a href="#basic_ops_leader_balancing">Balancing leadership</a></h4>
 
   Whenever a broker stops or crashes, leadership for that broker's partitions transfers to other replicas. When the broker is restarted it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.
   <p>
   To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to either node 5 or 9 because it is earlier in the replica list. By default the Kafka cluster will try to restore leadership to the restored replicas.  This behaviour is configured with:
 
-  <pre class="brush: text;">
-      auto.leader.rebalance.enable=true
-  </pre>
+  <pre class="line-numbers"><code class="language-text">      auto.leader.rebalance.enable=true</code></pre>
     You can also set this to false, but you will then need to manually restore leadership to the restored replicas by running the command:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-preferred-replica-election.sh --bootstrap-server broker_host:port
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-preferred-replica-election.sh --bootstrap-server broker_host:port</code></pre>
 
-  <h4><a id="basic_ops_racks" href="#basic_ops_racks">Balancing Replicas Across Racks</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_racks" class="anchor-link"></a><a href="#basic_ops_racks">Balancing Replicas Across Racks</a></h4>
   The rack awareness feature spreads replicas of the same partition across different racks. This extends the guarantees Kafka provides for broker-failure to cover rack-failure, limiting the risk of data loss should all the brokers on a rack fail at once. The feature can also be applied to other broker groupings such as availability zones in EC2.
   <p></p>
   You can specify that a broker belongs to a particular rack by adding a property to the broker config:
-  <pre class="brush: text;">   broker.rack=my-rack-id</pre>
+  <pre class="language-text">   broker.rack=my-rack-id</code></pre>
   When a topic is <a href="#basic_ops_add_topic">created</a>, <a href="#basic_ops_modify_topic">modified</a> or replicas are <a href="#basic_ops_cluster_expansion">redistributed</a>, the rack constraint will be honoured, ensuring replicas span as many racks as they can (a partition will span min(#racks, replication-factor) different racks).
   <p></p>
   The algorithm used to assign replicas to brokers ensures that the number of leaders per broker will be constant, regardless of how brokers are distributed across racks. This ensures balanced throughput.
   <p></p>
   However if racks are assigned different numbers of brokers, the assignment of replicas will not be even. Racks with fewer brokers will get more replicas, meaning they will use more storage and put more resources into replication. Hence it is sensible to configure an equal number of brokers per rack.
 
-  <h4><a id="basic_ops_mirror_maker" href="#basic_ops_mirror_maker">Mirroring data between clusters</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_mirror_maker" class="anchor-link"></a><a href="#basic_ops_mirror_maker">Mirroring data between clusters</a></h4>
 
   We refer to the process of replicating data <i>between</i> Kafka clusters as "mirroring" to avoid confusion with the replication that happens amongst the nodes in a single cluster. Kafka comes with a tool for mirroring data between Kafka clusters. The tool consumes from a source cluster and produces to a destination cluster.
 
@@ -121,42 +105,35 @@
   The source and destination clusters are completely independent entities: they can have different numbers of partitions and the offsets will not be the same. For this reason the mirror cluster is not really intended as a fault-tolerance mechanism (as the consumer position will be different); for that we recommend using normal in-cluster replication. The mirror maker process will, however, retain and use the message key for partitioning so order is preserved on a per-key basis.
   <p>
   Here is an example showing how to mirror a single topic (named <i>my-topic</i>) from an input cluster:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-mirror-maker.sh
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-mirror-maker.sh
         --consumer.config consumer.properties
-        --producer.config producer.properties --whitelist my-topic
-  </pre>
+        --producer.config producer.properties --whitelist my-topic</code></pre>
   Note that we specify the list of topics with the <code>--whitelist</code> option. This option allows any regular expression using <a href="http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html">Java-style regular expressions</a>. So you could mirror two topics named <i>A</i> and <i>B</i> using <code>--whitelist 'A|B'</code>. Or you could mirror <i>all</i> topics using <code>--whitelist '*'</code>. Make sure to quote any regular expression to ensure the shell doesn't try [...]
 
   Combining mirroring with the configuration <code>auto.create.topics.enable=true</code> makes it possible to have a replica cluster that will automatically create and replicate all data in a source cluster even as new topics are added.
 
-  <h4><a id="basic_ops_consumer_lag" href="#basic_ops_consumer_lag">Checking consumer position</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_consumer_lag" class="anchor-link"></a><a href="#basic_ops_consumer_lag">Checking consumer position</a></h4>
   Sometimes it's useful to see the position of your consumers. We have a tool that will show the position of all consumers in a consumer group as well as how far behind the end of the log they are. To run this tool on a consumer group named <i>my-group</i> consuming a topic named <i>my-topic</i> would look like this:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
 
   TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG        CONSUMER-ID                                       HOST                           CLIENT-ID
   my-topic                       0          2               4               2          consumer-1-029af89c-873c-4751-a720-cefd41a669d6   /127.0.0.1                     consumer-1
   my-topic                       1          2               3               1          consumer-1-029af89c-873c-4751-a720-cefd41a669d6   /127.0.0.1                     consumer-1
-  my-topic                       2          2               3               1          consumer-2-42c1abd4-e3b2-425d-a8bb-e1ea49b29bb2   /127.0.0.1                     consumer-2
-  </pre>
+  my-topic                       2          2               3               1          consumer-2-42c1abd4-e3b2-425d-a8bb-e1ea49b29bb2   /127.0.0.1                     consumer-2</code></pre>
 
-  <h4><a id="basic_ops_consumer_group" href="#basic_ops_consumer_group">Managing Consumer Groups</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_consumer_group" class="anchor-link"></a><a href="#basic_ops_consumer_group">Managing Consumer Groups</a></h4>
 
   With the ConsumerGroupCommand tool, we can list, describe, or delete the consumer groups. The consumer group can be deleted manually, or automatically when the last committed offset for that group expires. Manual deletion works only if the group does not have any active members.
 
   For example, to list all consumer groups across all topics:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
 
-  test-consumer-group
-  </pre>
+  test-consumer-group</code></pre>
 
   To view offsets, as mentioned earlier, we "describe" the consumer group like this:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
 
   TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                    HOST            CLIENT-ID
   topic3          0          241019          395308          154289          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
@@ -164,50 +141,41 @@
   topic3          1          241018          398817          157799          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
   topic1          0          854144          855809          1665            consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1
   topic2          0          460537          803290          342753          consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1
-  topic3          2          243655          398812          155157          consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4
-  </pre>
+  topic3          2          243655          398812          155157          consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4</code></pre>
 
   There are a number of additional "describe" options that can be used to provide more detailed information about a consumer group:
   <ul>
     <li>--members: This option provides the list of all active members in the consumer group.
-      <pre class="brush: bash;">
-      &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members
+      <pre class="line-numbers"><code class="language-bash">      &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members
 
       CONSUMER-ID                                    HOST            CLIENT-ID       #PARTITIONS
       consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1       2
       consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4       1
       consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2       3
-      consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee /127.0.0.1      consumer3       0
-      </pre>
+      consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee /127.0.0.1      consumer3       0</code></pre>
     </li>
     <li>--members --verbose: On top of the information reported by the "--members" options above, this option also provides the partitions assigned to each member.
-      <pre class="brush: bash;">
-      &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members --verbose
+      <pre class="line-numbers"><code class="language-bash">      &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members --verbose
 
       CONSUMER-ID                                    HOST            CLIENT-ID       #PARTITIONS     ASSIGNMENT
       consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1       2               topic1(0), topic2(0)
       consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4       1               topic3(2)
       consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2       3               topic2(1), topic3(0,1)
-      consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee /127.0.0.1      consumer3       0               -
-      </pre>
+      consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee /127.0.0.1      consumer3       0               -</code></pre>
     </li>
     <li>--offsets: This is the default describe option and provides the same output as the "--describe" option.</li>
     <li>--state: This option provides useful group-level information.
-      <pre class="brush: bash;">
-      &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --state
+      <pre class="line-numbers"><code class="language-bash">      &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --state
 
       COORDINATOR (ID)          ASSIGNMENT-STRATEGY       STATE                #MEMBERS
-      localhost:9092 (0)        range                     Stable               4
-      </pre>
+      localhost:9092 (0)        range                     Stable               4</code></pre>
     </li>
   </ul>
 
   To manually delete one or multiple consumer groups, the "--delete" option can be used:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group my-group --group my-other-group
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group my-group --group my-other-group
 
-  Deletion of requested consumer groups ('my-group', 'my-other-group') was successful.
-  </pre>
+  Deletion of requested consumer groups ('my-group', 'my-other-group') was successful.</code></pre>
 
   <p>
   To reset offsets of a consumer group, "--reset-offsets" option can be used.
@@ -263,23 +231,19 @@
   <p>
   For example, to reset offsets of a consumer group to the latest offset:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --reset-offsets --group consumergroup1 --topic topic1 --to-latest
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --reset-offsets --group consumergroup1 --topic topic1 --to-latest
 
   TOPIC                          PARTITION  NEW-OFFSET
-  topic1                         0          0
-  </pre>
+  topic1                         0          0</code></pre>
 
   <p>
 
   If you are using the old high-level consumer and storing the group metadata in ZooKeeper (i.e. <code>offsets.storage=zookeeper</code>), pass
   <code>--zookeeper</code> instead of <code>--bootstrap-server</code>:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list</code></pre>
 
-  <h4><a id="basic_ops_cluster_expansion" href="#basic_ops_cluster_expansion">Expanding your cluster</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_cluster_expansion" class="anchor-link"></a><a href="#basic_ops_cluster_expansion">Expanding your cluster</a></h4>
 
   Adding servers to a Kafka cluster is easy: just assign them a unique broker id and start up Kafka on your new servers. However, these new servers will not automatically be assigned any data partitions, so unless partitions are moved to them they won't be doing any work until new topics are created. So usually when you add machines to your cluster you will want to migrate some existing data to these machines.
   <p>
@@ -293,22 +257,19 @@
   <li>--execute: In this mode, the tool kicks off the reassignment of partitions based on the user provided reassignment plan. (using the --reassignment-json-file option). This can either be a custom reassignment plan hand crafted by the admin or provided by using the --generate option</li>
   <li>--verify: In this mode, the tool verifies the status of the reassignment for all partitions listed during the last --execute. The status can be either of successfully completed, failed or in progress</li>
   </ul>
-  <h5><a id="basic_ops_automigrate" href="#basic_ops_automigrate">Automatically migrating data to new machines</a></h5>
+  <h5 class="anchor-heading"><a id="basic_ops_automigrate" class="anchor-link"></a><a href="#basic_ops_automigrate">Automatically migrating data to new machines</a></h5>
   The partition reassignment tool can be used to move some topics off of the current set of brokers to the newly added brokers. This is typically useful while expanding an existing cluster since it is easier to move entire topics to the new set of brokers, than moving one partition at a time. When used to do this, the user should provide a list of topics that should be moved to the new set of brokers and a target list of new brokers. The tool then evenly distributes all partitions for th [...]
   <p>
   For instance, the following example will move all partitions for topics foo1,foo2 to the new set of brokers 5,6. At the end of this move, all partitions for topics foo1 and foo2 will <i>only</i> exist on brokers 5,6.
   <p>
   Since the tool accepts the input list of topics as a json file, you first need to identify the topics you want to move and create the json file as follows:
-  <pre class="brush: bash;">
-  > cat topics-to-move.json
+  <pre class="line-numbers"><code class="language-bash">  > cat topics-to-move.json
   {"topics": [{"topic": "foo1"},
               {"topic": "foo2"}],
   "version":1
-  }
-  </pre>
+  }</code></pre>
   Once the json file is ready, use the partition reassignment tool to generate a candidate assignment:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
   Current partition replica assignment
 
   {"version":1,
@@ -329,12 +290,10 @@
                 {"topic":"foo2","partition":0,"replicas":[5,6]},
                 {"topic":"foo1","partition":1,"replicas":[5,6]},
                 {"topic":"foo2","partition":1,"replicas":[5,6]}]
-  }
-  </pre>
+  }</code></pre>
   <p>
   The tool generates a candidate assignment that will move all partitions from topics foo1,foo2 to brokers 5,6. Note, however, that at this point, the partition movement has not started, it merely tells you the current assignment and the proposed new assignment. The current assignment should be saved in case you want to rollback to it. The new assignment should be saved in a json file (e.g. expand-cluster-reassignment.json) to be input to the tool with the --execute option as follows:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --execute
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -355,34 +314,28 @@
                 {"topic":"foo2","partition":0,"replicas":[5,6]},
                 {"topic":"foo1","partition":1,"replicas":[5,6]},
                 {"topic":"foo2","partition":1,"replicas":[5,6]}]
-  }
-  </pre>
+  }</code></pre>
   <p>
   Finally, the --verify option can be used with the tool to check the status of the partition reassignment. Note that the same expand-cluster-reassignment.json (used with the --execute option) should be used with the --verify option:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --verify
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo1,0] completed successfully
   Reassignment of partition [foo1,1] is in progress
   Reassignment of partition [foo1,2] is in progress
   Reassignment of partition [foo2,0] completed successfully
   Reassignment of partition [foo2,1] completed successfully
-  Reassignment of partition [foo2,2] completed successfully
-  </pre>
+  Reassignment of partition [foo2,2] completed successfully</code></pre>
 
-  <h5><a id="basic_ops_partitionassignment" href="#basic_ops_partitionassignment">Custom partition assignment and migration</a></h5>
+  <h5 class="anchor-heading"><a id="basic_ops_partitionassignment" class="anchor-link"></a><a href="#basic_ops_partitionassignment">Custom partition assignment and migration</a></h5>
   The partition reassignment tool can also be used to selectively move replicas of a partition to a specific set of brokers. When used in this manner, it is assumed that the user knows the reassignment plan and does not require the tool to generate a candidate reassignment, effectively skipping the --generate step and moving straight to the --execute step.
   <p>
   For instance, the following example moves partition 0 of topic foo1 to brokers 5,6 and partition 1 of topic foo2 to brokers 2,3:
   <p>
   The first step is to hand craft the custom reassignment plan in a json file:
-  <pre class="brush: bash;">
-  > cat custom-reassignment.json
-  {"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > cat custom-reassignment.json
+  {"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}</code></pre>
   Then, use the json file with the --execute option to start the reassignment process:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --execute
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -395,34 +348,28 @@
   {"version":1,
   "partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
                 {"topic":"foo2","partition":1,"replicas":[2,3]}]
-  }
-  </pre>
+  }</code></pre>
   <p>
   The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same custom-reassignment.json (used with the --execute option) should be used with the --verify option:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --verify
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo1,0] completed successfully
-  Reassignment of partition [foo2,1] completed successfully
-  </pre>
+  Reassignment of partition [foo2,1] completed successfully</code></pre>
 
-  <h4><a id="basic_ops_decommissioning_brokers" href="#basic_ops_decommissioning_brokers">Decommissioning brokers</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_decommissioning_brokers" class="anchor-link"></a><a href="#basic_ops_decommissioning_brokers">Decommissioning brokers</a></h4>
   The partition reassignment tool does not have the ability to automatically generate a reassignment plan for decommissioning brokers yet. As such, the admin has to come up with a reassignment plan to move the replica for all partitions hosted on the broker to be decommissioned, to the rest of the brokers. This can be relatively tedious as the reassignment needs to ensure that all the replicas are not moved from the decommissioned broker to only one other broker. To make this process eff [...]
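  For example, a minimal sketch of such a plan (the topic names, partitions and broker IDs are purely illustrative) that drains a broker with ID 5 by spreading its replicas over the remaining brokers, run with the same --execute/--verify flow shown above:
  <pre class="line-numbers"><code class="language-bash">  > cat drain-broker-5.json
  {"version":1,
  "partitions":[{"topic":"foo1","partition":0,"replicas":[2,3]},
                {"topic":"foo2","partition":1,"replicas":[3,4]}]}
  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file drain-broker-5.json --execute</code></pre>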
 
-  <h4><a id="basic_ops_increase_replication_factor" href="#basic_ops_increase_replication_factor">Increasing replication factor</a></h4>
+  <h4 class="anchor-heading"><a id="basic_ops_increase_replication_factor" class="anchor-link"></a><a href="#basic_ops_increase_replication_factor">Increasing replication factor</a></h4>
   Increasing the replication factor of an existing partition is easy. Just specify the extra replicas in the custom reassignment json file and use it with the --execute option to increase the replication factor of the specified partitions.
   <p>
   For instance, the following example increases the replication factor of partition 0 of topic foo from 1 to 3. Before increasing the replication factor, the partition's only replica existed on broker 5. As part of increasing the replication factor, we will add more replicas on brokers 6 and 7.
   <p>
   The first step is to hand craft the custom reassignment plan in a json file:
-  <pre class="brush: bash;">
-  > cat increase-replication-factor.json
+  <pre class="line-numbers"><code class="language-bash">  > cat increase-replication-factor.json
   {"version":1,
-  "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
-  </pre>
+  "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}</code></pre>
   Then, use the json file with the --execute option to start the reassignment process:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --execute
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -431,37 +378,31 @@
   Save this to use as the --reassignment-json-file option during rollback
   Successfully started reassignment of partitions
   {"version":1,
-  "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
-  </pre>
+  "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}</code></pre>
   <p>
   The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same increase-replication-factor.json (used with the --execute option) should be used with the --verify option:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --verify
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --verify
   Status of partition reassignment:
-  Reassignment of partition [foo,0] completed successfully
-  </pre>
+  Reassignment of partition [foo,0] completed successfully</code></pre>
   You can also verify the increase in replication factor with the kafka-topics tool:
-  <pre class="brush: bash;">
-  > bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic foo --describe
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic foo --describe
   Topic:foo	PartitionCount:1	ReplicationFactor:3	Configs:
-    Topic: foo	Partition: 0	Leader: 5	Replicas: 5,6,7	Isr: 5,6,7
-  </pre>
+    Topic: foo	Partition: 0	Leader: 5	Replicas: 5,6,7	Isr: 5,6,7</code></pre>
 
-  <h4><a id="rep-throttle" href="#rep-throttle">Limiting Bandwidth Usage during Data Migration</a></h4>
+  <h4 class="anchor-heading"><a id="rep-throttle" class="anchor-link"></a><a href="#rep-throttle">Limiting Bandwidth Usage during Data Migration</a></h4>
   Kafka lets you apply a throttle to replication traffic, setting an upper bound on the bandwidth used to move replicas from machine to machine. This is useful when rebalancing a cluster, bootstrapping a new broker or adding or removing brokers, as it limits the impact these data-intensive operations will have on users.
   <p></p>
   There are two interfaces that can be used to engage a throttle. The simplest, and safest, is to apply a throttle when invoking kafka-reassign-partitions.sh, but kafka-configs.sh can also be used to view and alter the throttle values directly.
   <p></p>
   So for example, if you were to execute a rebalance with the below command, it would move partitions at no more than 50MB/s.
-  <pre class="brush: bash;">$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000</pre>
+  <pre class="language-bash">$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000</code></pre>
   When you execute this script you will see the throttle engage:
-  <pre class="brush: bash;">
-  The throttle limit was set to 50000000 B/s
-  Successfully started reassignment of partitions.</pre>
+  <pre class="line-numbers"><code class="language-bash">  The throttle limit was set to 50000000 B/s
+  Successfully started reassignment of partitions.</code></pre>
   <p>Should you wish to alter the throttle during a rebalance, say to increase the throughput so that it completes more quickly, you can do so by re-running the execute command with the same reassignment-json-file:</p>
+  <pre class="line-numbers"><code class="language-bash">$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
+  <pre class="language-bash">$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
   There is an existing assignment running.
-  The throttle limit was set to 700000000 B/s</pre>
+  The throttle limit was set to 700000000 B/s</code></pre>
 
   <p>Once the rebalance completes the administrator can check the status of the rebalance using the --verify option.
       If the rebalance has completed, the throttle will be removed via the --verify command. It is important that
@@ -469,28 +410,23 @@
       the --verify option. Failure to do so could cause regular replication traffic to be throttled. </p>
   <p>When the --verify option is executed, and the reassignment has completed, the script will confirm that the throttle was removed:</p>
 
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --verify --reassignment-json-file bigger-cluster.json
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --verify --reassignment-json-file bigger-cluster.json
   Status of partition reassignment:
   Reassignment of partition [my-topic,1] completed successfully
   Reassignment of partition [mytopic,0] completed successfully
-  Throttle was removed.</pre>
+  Throttle was removed.</code></pre>
 
   <p>The administrator can also validate the assigned configs using kafka-configs.sh. There are two pairs of throttle
       configurations used to manage the throttling process. The first pair refers to the throttle value itself. This is configured, at a broker
       level, using the dynamic properties: </p>
 
-  <pre class="brush: text;">
-    leader.replication.throttled.rate
-    follower.replication.throttled.rate
-  </pre>
+  <pre class="line-numbers"><code class="language-text">    leader.replication.throttled.rate
+    follower.replication.throttled.rate</code></pre>
 
   <p>Then there is the configuration pair of enumerated sets of throttled replicas: </p>
 
-  <pre class="brush: text;">
-    leader.replication.throttled.replicas
-    follower.replication.throttled.replicas
-  </pre>
+  <pre class="line-numbers"><code class="language-text">    leader.replication.throttled.replicas
+    follower.replication.throttled.replicas</code></pre>
 
   <p>These are configured per topic. </p>
 
@@ -498,20 +434,18 @@
 
   <p>To view the throttle limit configuration:</p>
 
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers
   Configs for brokers '2' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000
-  Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000</pre>
+  Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000</code></pre>
 
   <p>This shows the throttle applied to both the leader and follower sides of the replication protocol. By default both sides
       are assigned the same throttled throughput value. </p>
 
   <p>To view the list of throttled replicas:</p>
 
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type topics
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type topics
   Configs for topic 'my-topic' are leader.replication.throttled.replicas=1:102,0:101,
-      follower.replication.throttled.replicas=1:101,0:102</pre>
+      follower.replication.throttled.replicas=1:101,0:102</code></pre>
 
   <p>Here we see the leader throttle is applied to partition 1 on broker 102 and partition 0 on broker 101. Likewise the
       follower throttle is applied to partition 1 on
@@ -538,98 +472,74 @@
   <p><i>(2) Ensuring Progress:</i></p>
   <p>If the throttle is set too low, in comparison to the incoming write rate, it is possible for replication to not
       make progress. This occurs when:</p>
-  <pre>max(BytesInPerSec) > throttle</pre>
+  <pre><code>max(BytesInPerSec) > throttle</code></pre>
   <p>
       Where BytesInPerSec is the metric that monitors the write throughput of producers into each broker. </p>
   <p>The administrator can monitor whether replication is making progress, during the rebalance, using the metric:</p>
 
-  <pre>kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)</pre>
+  <pre><code>kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)</code></pre>
 
   <p>The lag should decrease steadily during replication. If the metric does not decrease, the administrator should
       increase the
       throttle throughput as described above. </p>
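  If you would rather not re-run the reassignment with a larger --throttle value, the limit can also be raised directly with kafka-configs.sh. A sketch (the broker ID and rate below are illustrative) that raises both throttled rates on broker 1:
  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type brokers --entity-name 1 --add-config leader.replication.throttled.rate=100000000,follower.replication.throttled.rate=100000000</code></pre>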
 
 
-  <h4><a id="quotas" href="#quotas">Setting quotas</a></h4>
+  <h4 class="anchor-heading"><a id="quotas" class="anchor-link"></a><a href="#quotas">Setting quotas</a></h4>
   Quota overrides and defaults may be configured at (user, client-id), user or client-id levels as described <a href="#design_quotas">here</a>.
   By default, clients receive an unlimited quota.
 
   It is possible to set custom quotas for each (user, client-id), user or client-id group.
   <p>
   Configure custom quota for (user=user1, client-id=clientA):
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
-  Updated config for entity: user-principal 'user1', client-id 'clientA'.
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
+  Updated config for entity: user-principal 'user1', client-id 'clientA'.</code></pre>
 
   Configure custom quota for user=user1:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
-  Updated config for entity: user-principal 'user1'.
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
+  Updated config for entity: user-principal 'user1'.</code></pre>
 
   Configure custom quota for client-id=clientA:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
-  Updated config for entity: client-id 'clientA'.
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
+  Updated config for entity: client-id 'clientA'.</code></pre>
 
   It is possible to set default quotas for each (user, client-id), user or client-id group by specifying the <i>--entity-default</i> option instead of <i>--entity-name</i>.
   <p>
   Configure default client-id quota for user=user1:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
-  Updated config for entity: user-principal 'user1', default client-id.
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
+  Updated config for entity: user-principal 'user1', default client-id.</code></pre>
 
   Configure default quota for user:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
-  Updated config for entity: default user-principal.
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
+  Updated config for entity: default user-principal.</code></pre>
 
   Configure default quota for client-id:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
-  Updated config for entity: default client-id.
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
+  Updated config for entity: default client-id.</code></pre>
 
   Here's how to describe the quota for a given (user, client-id):
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
-  Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
+  Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200</code></pre>
   Describe quota for a given user:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1
-  Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1
+  Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200</code></pre>
   Describe quota for a given client-id:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type clients --entity-name clientA
-  Configs for client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type clients --entity-name clientA
+  Configs for client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200</code></pre>
   If entity name is not specified, all entities of the specified type are described. For example, describe all users:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users
   Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  Configs for default user-principal are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  Configs for default user-principal are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200</code></pre>
   Similarly for (user, client):
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-type clients
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-type clients
   Configs for user-principal 'user1', default client-id are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200</code></pre>
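  A quota override can also be removed once it is no longer needed. For example, a sketch (clientA as above) using the --delete-config option:
  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --delete-config 'producer_byte_rate,consumer_byte_rate,request_percentage' --entity-type clients --entity-name clientA</code></pre>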
   <p>
   It is possible to set default quotas that apply to all client-ids by setting these configs on the brokers. These properties are applied only if quota overrides or defaults are not configured in ZooKeeper. By default, each client-id receives an unlimited quota. The following sets the default quota per producer and consumer client-id to 10MB/sec.
-  <pre class="brush: text;">
-    quota.producer.default=10485760
-    quota.consumer.default=10485760
-  </pre>
+  <pre class="line-numbers"><code class="language-text">    quota.producer.default=10485760
+    quota.consumer.default=10485760</code></pre>
   Note that these properties are being deprecated and may be removed in a future release. Defaults configured using kafka-configs.sh take precedence over these properties.
 
-  <h3><a id="datacenters" href="#datacenters">6.2 Datacenters</a></h3>
+  <h3 class="anchor-heading"><a id="datacenters" class="anchor-link"></a><a href="#datacenters">6.2 Datacenters</a></h3>
 
   Some deployments will need to manage a data pipeline that spans multiple datacenters. Our recommended approach to this is to deploy a local Kafka cluster in each datacenter with application instances in each datacenter interacting only with their local cluster and mirroring between clusters (see the documentation on the <a href="#basic_ops_mirror_maker">mirror maker tool</a> for how to do this).
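  As a minimal sketch of such a mirroring setup (the property file names and topic are placeholders; the consumer config points at the source cluster and the producer config at the local destination cluster), MirrorMaker can be started with:
  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-mirror-maker.sh --consumer.config source-cluster-consumer.properties --producer.config destination-cluster-producer.properties --whitelist my-topic</code></pre>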
   <p>
@@ -643,9 +553,9 @@
   <p>
   It is generally <i>not</i> advisable to run a <i>single</i> Kafka cluster that spans multiple datacenters over a high-latency link. This will incur very high replication latency both for Kafka writes and ZooKeeper writes, and neither Kafka nor ZooKeeper will remain available in all locations if the network between locations is unavailable.
 
-  <h3><a id="config" href="#config">6.3 Kafka Configuration</a></h3>
+  <h3 class="anchor-heading"><a id="config" class="anchor-link"></a><a href="#config">6.3 Kafka Configuration</a></h3>
 
-  <h4><a id="clientconfig" href="#clientconfig">Important Client Configurations</a></h4>
+  <h4 class="anchor-heading"><a id="clientconfig" class="anchor-link"></a><a href="#clientconfig">Important Client Configurations</a></h4>
 
   The most important producer configurations are:
   <ul>
@@ -657,10 +567,9 @@
   <p>
   All configurations are documented in the <a href="#configuration">configuration</a> section.
   <p>
-  <h4><a id="prodconfig" href="#prodconfig">A Production Server Config</a></h4>
+  <h4 class="anchor-heading"><a id="prodconfig" class="anchor-link"></a><a href="#prodconfig">A Production Server Config</a></h4>
   Here is an example production server configuration:
-  <pre class="brush: text;">
-  # ZooKeeper
+  <pre class="line-numbers"><code class="language-text">  # ZooKeeper
   zookeeper.connect=[list of ZooKeeper servers]
 
   # Log configuration
@@ -673,12 +582,11 @@
   listeners=[list of listeners]
   auto.create.topics.enable=false
   min.insync.replicas=2
-  queued.max.requests=[number of concurrent requests]
-  </pre>
+  queued.max.requests=[number of concurrent requests]</code></pre>
 
   Our client configuration varies a fair amount between different use cases.
 
-  <h3><a id="java" href="#java">6.4 Java Version</a></h3>
+  <h3 class="anchor-heading"><a id="java" class="anchor-link"></a><a href="#java">6.4 Java Version</a></h3>
 
   Java 8 and Java 11 are supported. Java 11 performs significantly better if TLS is enabled, so it is highly recommended (it also includes a number of other
   performance improvements: G1GC, CRC32C, Compact Strings, Thread-Local Handshakes and more).
@@ -687,11 +595,9 @@
 
   Typical arguments for running Kafka with OpenJDK-based Java implementations (including Oracle JDK) are:
   
-  <pre class="brush: text;">
-  -Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
+  <pre class="line-numbers"><code class="language-text">  -Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
   -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
-  -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent
-  </pre>
+  -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent</code></pre>
 
   For reference, here are the stats for one of LinkedIn's busiest clusters (at peak) that uses said Java arguments:
   <ul>
@@ -703,14 +609,14 @@
 
   All of the brokers in that cluster have a 90% GC pause time of about 21ms with less than 1 young GC per second.
 
-  <h3><a id="hwandos" href="#hwandos">6.5 Hardware and OS</a></h3>
+  <h3 class="anchor-heading"><a id="hwandos" class="anchor-link"></a><a href="#hwandos">6.5 Hardware and OS</a></h3>
   We are using dual quad-core Intel Xeon machines with 24GB of memory.
   <p>
   You need sufficient memory to buffer active readers and writers. You can do a back-of-the-envelope estimate of memory needs by assuming you want to be able to buffer for 30 seconds and compute your memory need as write_throughput*30.
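  For example, assuming a sustained write throughput of 50 MB/s, that rule of thumb works out to roughly:
  <pre class="line-numbers"><code class="language-text">  50 MB/s * 30 s = 1500 MB, i.e. about 1.5 GB of memory for buffering</code></pre>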
   <p>
   The disk throughput is important. We have 8x7200 rpm SATA drives. In general, disk throughput is the performance bottleneck, and more disks are better. Depending on how you configure flush behavior you may or may not benefit from more expensive disks (if you force flush often then higher RPM SAS drives may be better).
 
-  <h4><a id="os" href="#os">OS</a></h4>
+  <h4 class="anchor-heading"><a id="os" class="anchor-link"></a><a href="#os">OS</a></h4>
   Kafka should run well on any unix system and has been tested on Linux and Solaris.
   <p>
   We have seen a few issues running on Windows, and Windows is not currently a well-supported platform, though we would be happy to change that.
@@ -723,7 +629,7 @@
   </ul>
   <p>
 
-  <h4><a id="diskandfs" href="#diskandfs">Disks and Filesystem</a></h4>
+  <h4 class="anchor-heading"><a id="diskandfs" class="anchor-link"></a><a href="#diskandfs">Disks and Filesystem</a></h4>
   We recommend using multiple drives to get good throughput, and, to ensure good latency, not sharing the drives used for Kafka data with application logs or other OS filesystem activity. You can either RAID these drives together into a single volume or format and mount each drive as its own directory. Since Kafka has replication, the redundancy provided by RAID can also be provided at the application level. This choice has several tradeoffs.
   <p>
   If you configure multiple data directories, partitions will be assigned round-robin to data directories. Each partition will be entirely in one of the data directories. If data is not well balanced among partitions, this can lead to load imbalance between disks.
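  For example, a broker spreading its partitions across three drives (the mount points are illustrative) would list them in log.dirs:
  <pre class="line-numbers"><code class="language-text">  log.dirs=/mnt/disk1/kafka-logs,/mnt/disk2/kafka-logs,/mnt/disk3/kafka-logs</code></pre>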
@@ -732,7 +638,7 @@
   <p>
   Another potential benefit of RAID is the ability to tolerate disk failures. However our experience has been that rebuilding the RAID array is so I/O intensive that it effectively disables the server, so this does not provide much real availability improvement.
 
-  <h4><a id="appvsosflush" href="#appvsosflush">Application vs. OS Flush Management</a></h4>
+  <h4 class="anchor-heading"><a id="appvsosflush" class="anchor-link"></a><a href="#appvsosflush">Application vs. OS Flush Management</a></h4>
   Kafka always immediately writes all data to the filesystem and supports configuring the flush policy that controls when data is forced out of the OS cache and onto disk. The flush policy can force data to disk after a period of time or after a certain number of messages have been written. There are several choices in this configuration.
   <p>
   Kafka must eventually call fsync to know that data was flushed. When recovering from a crash, for any log segment not known to be fsync'd, Kafka will check the integrity of each message by checking its CRC and also rebuild the accompanying offset index file as part of the recovery process executed on startup.
@@ -745,7 +651,7 @@
   <p>
   In general you don't need to do any low-level tuning of the filesystem, but in the next few sections we will go over some of this in case it is useful.
 
-  <h4><a id="linuxflush" href="#linuxflush">Understanding Linux OS Flush Behavior</a></h4>
+  <h4 class="anchor-heading"><a id="linuxflush" class="anchor-link"></a><a href="#linuxflush">Understanding Linux OS Flush Behavior</a></h4>
 
   In Linux, data written to the filesystem is maintained in <a href="http://en.wikipedia.org/wiki/Page_cache">pagecache</a> until it must be written out to disk (due to an application-level fsync or the OS's own flush policy). The flushing of data is done by a set of background threads called pdflush (or in post 2.6.32 kernels "flusher threads").
   <p>
@@ -754,7 +660,7 @@
   When pdflush cannot keep up with the rate of data being written, it will eventually cause the writing process to block, incurring latency in the writes to slow down the accumulation of data.
   <p>
   You can see the current state of OS memory usage by doing
-  <pre class="brush: bash;"> &gt; cat /proc/meminfo </pre>
+  <pre class="language-bash"> &gt; cat /proc/meminfo </code></pre>
   The meaning of these values is described in the link above.
   <p>
   Using pagecache has several advantages over an in-process cache for storing data that will be written out to disk:
@@ -764,21 +670,21 @@
     <li>It automatically uses all the free memory on the machine
   </ul>
 
-  <h4><a id="filesystems" href="#filesystems">Filesystem Selection</a></h4>
+  <h4 class="anchor-heading"><a id="filesystems" class="anchor-link"></a><a href="#filesystems">Filesystem Selection</a></h4>
   <p>Kafka uses regular files on disk, and as such it has no hard dependency on a specific filesystem. The two filesystems which have the most usage, however, are EXT4 and XFS. Historically, EXT4 has had more usage, but recent improvements to the XFS filesystem have shown it to have better performance characteristics for Kafka's workload with no compromise in stability.</p>
   <p>Comparison testing was performed on a cluster with significant message loads, using a variety of filesystem creation and mount options. The primary metric in Kafka that was monitored was the "Request Local Time", indicating the amount of time append operations were taking. XFS resulted in much better local times (160ms vs. 250ms+ for the best EXT4 configuration), as well as lower average wait times. The XFS performance also showed less variability in disk performance.</p>
-  <h5><a id="generalfs" href="#generalfs">General Filesystem Notes</a></h5>
+  <h5 class="anchor-heading"><a id="generalfs" class="anchor-link"></a><a href="#generalfs">General Filesystem Notes</a></h5>
   For any filesystem used for data directories, on Linux systems, the following options are recommended to be used at mount time:
   <ul>
     <li>noatime: This option disables updating of a file's atime (last access time) attribute when the file is read. This can eliminate a significant number of filesystem writes, especially in the case of bootstrapping consumers. Kafka does not rely on the atime attributes at all, so it is safe to disable this.</li>
   </ul>
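  As a sketch (the device and mount point are placeholders), the option can be applied on an ad-hoc mount or persisted in /etc/fstab:
  <pre class="line-numbers"><code class="language-bash">  > mount -o noatime /dev/sdb1 /mnt/kafka-data
  # or, persistently, via an /etc/fstab entry such as:
  # /dev/sdb1  /mnt/kafka-data  xfs  defaults,noatime  0 0</code></pre>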
-  <h5><a id="xfs" href="#xfs">XFS Notes</a></h5>
+  <h5 class="anchor-heading"><a id="xfs" class="anchor-link"></a><a href="#xfs">XFS Notes</a></h5>
   The XFS filesystem has a significant amount of auto-tuning in place, so it does not require any change in the default settings, either at filesystem creation time or at mount. The only tuning parameters worth considering are:
   <ul>
     <li>largeio: This affects the preferred I/O size reported by the stat call. While this can allow for higher performance on larger disk writes, in practice it had minimal or no effect on performance.</li>
     <li>nobarrier: For underlying devices that have battery-backed cache, this option can provide a little more performance by disabling periodic write flushes. However, if the underlying device is well-behaved, it will report to the filesystem that it does not require flushes, and this option will have no effect.</li>
   </ul>
-  <h5><a id="ext4" href="#ext4">EXT4 Notes</a></h5>
+  <h5 class="anchor-heading"><a id="ext4" class="anchor-link"></a><a href="#ext4">EXT4 Notes</a></h5>
   EXT4 is a serviceable choice of filesystem for the Kafka data directories, however getting the most performance out of it will require adjusting several mount options. In addition, these options are generally unsafe in a failure scenario, and will result in much more data loss and corruption. For a single broker failure, this is not much of a concern as the disk can be wiped and the replicas rebuilt from the cluster. In a multiple-failure scenario, such as a power outage, this can mean [...]
   <ul>
     <li>data=writeback: Ext4 defaults to data=ordered which puts a strong order on some writes. Kafka does not require this ordering as it does very paranoid data recovery on all unflushed log. This setting removes the ordering constraint and seems to significantly reduce latency.
@@ -788,7 +694,7 @@
     <li>delalloc: Delayed allocation means that the filesystem avoids allocating any blocks until the physical write occurs. This allows ext4 to allocate a large extent instead of smaller pages and helps ensure the data is written sequentially. This feature is great for throughput. It does seem to involve some locking in the filesystem, which adds a bit of latency variance.
   </ul>
 
-  <h3><a id="monitoring" href="#monitoring">6.6 Monitoring</a></h3>
+  <h3 class="anchor-heading"><a id="monitoring" class="anchor-link"></a><a href="#monitoring">6.6 Monitoring</a></h3>
 
   Kafka uses Yammer Metrics for metrics reporting in the server. The Java clients use Kafka Metrics, a built-in metrics registry that minimizes transitive dependencies pulled into client applications. Both expose metrics via JMX and can be configured to report stats using pluggable stats reporters to hook up to your monitoring system.
   <p>
@@ -797,7 +703,7 @@
   <p>
   The easiest way to see the available metrics is to fire up jconsole and point it at a running Kafka client or server; this will allow browsing all metrics with JMX.
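  For example, a sketch (the port number is arbitrary) that exposes JMX locally when starting a broker and then attaches jconsole to it:
  <pre class="line-numbers"><code class="language-bash">  > JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties
  > jconsole localhost:9999</code></pre>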
 
-  <h4><a id="remote_jmx" href="#remote_jmx">Security Considerations for Remote Monitoring using JMX</a></h4>
+  <h4 class="anchor-heading"><a id="remote_jmx" class="anchor-link"></a><a href="#remote_jmx">Security Considerations for Remote Monitoring using JMX</a></h4>
   Apache Kafka disables remote JMX by default. You can enable remote monitoring using JMX by setting the environment variable
   <code>JMX_PORT</code> for processes started using the CLI or standard Java system properties to enable remote JMX programmatically.
   You must enable security when enabling remote JMX in production scenarios to ensure that unauthorized users cannot monitor or
@@ -1389,7 +1295,7 @@
     </tbody>
   </table>
 
-  <h4><a id="producer_monitoring" href="#producer_monitoring">Producer monitoring</a></h4>
+  <h4 class="anchor-heading"><a id="producer_monitoring" class="anchor-link"></a><a href="#producer_monitoring">Producer monitoring</a></h4>
 
   The following metrics are available on producer instances.
 
@@ -1422,12 +1328,12 @@
 
   </tbody></table>
 
-  <h5><a id="producer_sender_monitoring" href="#producer_sender_monitoring">Producer Sender Metrics</a></h5>
+  <h5 class="anchor-heading"><a id="producer_sender_monitoring" class="anchor-link"></a><a href="#producer_sender_monitoring">Producer Sender Metrics</a></h5>
 
   <!--#include virtual="generated/producer_metrics.html" -->
 
 
-  <h4><a id="consumer_monitoring" href="#consumer_monitoring">consumer monitoring</a></h4>
+  <h4 class="anchor-heading"><a id="consumer_monitoring" class="anchor-link"></a><a href="#consumer_monitoring">consumer monitoring</a></h4>
 
   The following metrics are available on consumer instances.
 
@@ -1461,7 +1367,7 @@
     </tbody>
   </table>
 
-  <h5><a id="consumer_group_monitoring" href="#consumer_group_monitoring">Consumer Group Metrics</a></h5>
+  <h5 class="anchor-heading"><a id="consumer_group_monitoring" class="anchor-link"></a><a href="#consumer_group_monitoring">Consumer Group Metrics</a></h5>
   <table class="data-table">
     <tbody>
       <tr>
@@ -1627,18 +1533,18 @@
     </tbody>
   </table>
 
-  <h5><a id="consumer_fetch_monitoring" href="#consumer_fetch_monitoring">Consumer Fetch Metrics</a></h5>
+  <h5 class="anchor-heading"><a id="consumer_fetch_monitoring" class="anchor-link"></a><a href="#consumer_fetch_monitoring">Consumer Fetch Metrics</a></h5>
 
   <!--#include virtual="generated/consumer_metrics.html" -->
 
-  <h4><a id="connect_monitoring" href="#connect_monitoring">Connect Monitoring</a></h4>
+  <h4 class="anchor-heading"><a id="connect_monitoring" class="anchor-link"></a><a href="#connect_monitoring">Connect Monitoring</a></h4>
 
   A Connect worker process contains all the producer and consumer metrics as well as metrics specific to Connect.
   The worker process itself has a number of metrics, while each connector and task has additional metrics.
 
   <!--#include virtual="generated/connect_metrics.html" -->
 
-  <h4><a id="kafka_streams_monitoring" href="#kafka_streams_monitoring">Streams Monitoring</a></h4>
+  <h4 class="anchor-heading"><a id="kafka_streams_monitoring" class="anchor-link"></a><a href="#kafka_streams_monitoring">Streams Monitoring</a></h4>
 
   A Kafka Streams instance contains all the producer and consumer metrics as well as additional metrics specific to Streams.
   By default Kafka Streams has metrics with two recording levels: <code>debug</code> and <code>info</code>.
@@ -1653,9 +1559,9 @@
   Use the following configuration option to specify which metrics
   you want collected:
 
-<pre>metrics.recording.level="info"</pre>
+<pre><code>metrics.recording.level="info"</code></pre>
 
-<h5><a id="kafka_streams_client_monitoring" href="#kafka_streams_client_monitoring">Client Metrics</a></h5>
+<h5 class="anchor-heading"><a id="kafka_streams_client_monitoring" class="anchor-link"></a><a href="#kafka_streams_client_monitoring">Client Metrics</a></h5>
 All of the following metrics have a recording level of <code>info</code>:
 <table class="data-table">
   <tbody>
@@ -1692,7 +1598,7 @@ All of the following metrics have a recording level of <code>info</code>:
   </tbody>
 </table>
 
-<h5><a id="kafka_streams_thread_monitoring" href="#kafka_streams_thread_monitoring">Thread Metrics</a></h5>
+<h5 class="anchor-heading"><a id="kafka_streams_thread_monitoring" class="anchor-link"></a><a href="#kafka_streams_thread_monitoring">Thread Metrics</a></h5>
 All of the following metrics have a recording level of <code>info</code>:
 <table class="data-table">
     <tbody>
@@ -1804,7 +1710,7 @@ All of the following metrics have a recording level of <code>info</code>:
  </tbody>
 </table>
 
-<h5><a id="kafka_streams_task_monitoring" href="#kafka_streams_task_monitoring">Task Metrics</a></h5>
+<h5 class="anchor-heading"><a id="kafka_streams_task_monitoring" class="anchor-link"></a><a href="#kafka_streams_task_monitoring">Task Metrics</a></h5>
 All of the following metrics have a recording level of <code>debug</code>, except for metrics
 dropped-records-rate and dropped-records-total which have a recording level of <code>info</code>:
  <table class="data-table">
@@ -1887,7 +1793,7 @@ dropped-records-rate and dropped-records-total which have a recording level of <
  </tbody>
 </table>
 
- <h5><a id="kafka_streams_node_monitoring" href="#kafka_streams_node_monitoring">Processor Node Metrics</a></h5>
+ <h5 class="anchor-heading"><a id="kafka_streams_node_monitoring" class="anchor-link"></a><a href="#kafka_streams_node_monitoring">Processor Node Metrics</a></h5>
  The following metrics are only available on certain types of nodes, i.e., process-rate and process-total are
  only available for source processor nodes and suppression-emit-rate and suppression-emit-total are only available
  for suppression operation nodes. All of the metrics have a recording level of <code>debug</code>:
@@ -1921,7 +1827,7 @@ dropped-records-rate and dropped-records-total which have a recording level of <
  </tbody>
  </table>
 
- <h5><a id="kafka_streams_store_monitoring" href="#kafka_streams_store_monitoring">State Store Metrics</a></h5>
+ <h5 class="anchor-heading"><a id="kafka_streams_store_monitoring" class="anchor-link"></a><a href="#kafka_streams_store_monitoring">State Store Metrics</a></h5>
  All of the following metrics have a recording level of <code>debug</code>. Note that the <code>store-scope</code> value is specified in <code>StoreSupplier#metricsScope()</code> for user's customized
  state stores; for built-in state stores, currently we have:
   <ul>
@@ -2101,7 +2007,7 @@ dropped-records-rate and dropped-records-total which have a recording level of <
     </tbody>
  </table>
 
-  <h5><a id="kafka_streams_rocksdb_monitoring" href="#kafka_streams_rocksdb_monitoring">RocksDB Metrics</a></h5>
+  <h5 class="anchor-heading"><a id="kafka_streams_rocksdb_monitoring" class="anchor-link"></a><a href="#kafka_streams_rocksdb_monitoring">RocksDB Metrics</a></h5>
   All of the following metrics have a recording level of <code>debug</code>.
   The metrics are collected every minute from the RocksDB state stores.
   If a state store consists of multiple RocksDB instances, as is the case for aggregations over time and session windows,
@@ -2203,7 +2109,7 @@ dropped-records-rate and dropped-records-total which have a recording level of <
     </tbody>
   </table>
 
-  <h5><a id="kafka_streams_cache_monitoring" href="#kafka_streams_cache_monitoring">Record Cache Metrics</a></h5>
+  <h5 class="anchor-heading"><a id="kafka_streams_cache_monitoring" class="anchor-link"></a><a href="#kafka_streams_cache_monitoring">Record Cache Metrics</a></h5>
   All of the following metrics have a recording level of <code>debug</code>:
 
   <table class="data-table">
@@ -2231,18 +2137,18 @@ dropped-records-rate and dropped-records-total which have a recording level of <
     </tbody>
  </table>
 
-  <h4><a id="others_monitoring" href="#others_monitoring">Others</a></h4>
+  <h4 class="anchor-heading"><a id="others_monitoring" class="anchor-link"></a><a href="#others_monitoring">Others</a></h4>
 
   We recommend monitoring GC time and other stats and various server stats such as CPU utilization, I/O service time, etc.
 
   On the client side, we recommend monitoring the message/byte rate (global and per topic), request rate/size/time, and on the consumer side, max lag in messages among all partitions and min fetch request rate. For a consumer to keep up, max lag needs to be less than a threshold and min fetch rate needs to be larger than 0.
 
-  <h3><a id="zk" href="#zk">6.7 ZooKeeper</a></h3>
+  <h3 class="anchor-heading"><a id="zk" class="anchor-link"></a><a href="#zk">6.7 ZooKeeper</a></h3>
 
-  <h4><a id="zkversion" href="#zkversion">Stable version</a></h4>
+  <h4 class="anchor-heading"><a id="zkversion" class="anchor-link"></a><a href="#zkversion">Stable version</a></h4>
   The current stable branch is 3.5. Kafka is regularly updated to include the latest release in the 3.5 series.
 
-  <h4><a id="zkops" href="#zkops">Operationalizing ZooKeeper</a></h4>
+  <h4 class="anchor-heading"><a id="zkops" class="anchor-link"></a><a href="#zkops">Operationalizing ZooKeeper</a></h4>
   Operationally, we do the following for a healthy ZooKeeper installation:
   <ul>
     <li>Redundancy in the physical/hardware/network layout: try not to put them all in the same rack, decent (but don't go nuts) hardware, try to keep redundant power and network paths, etc. A typical ZooKeeper ensemble has 5 or 7 servers, which tolerates 2 and 3 servers down, respectively. If you have a small deployment, then using 3 servers is acceptable, but keep in mind that you'll only be able to tolerate 1 server down in this case. </li>
diff --git a/docs/protocol.html b/docs/protocol.html
index 5759aa3..29811a2 100644
--- a/docs/protocol.html
+++ b/docs/protocol.html
@@ -59,9 +59,9 @@
     <li><a href="#protocol_philosophy">Some Common Philosophical Questions</a></li>
 </ul>
 
-<h4><a id="protocol_preliminaries" href="#protocol_preliminaries">Preliminaries</a></h4>
+<h4 class="anchor-heading"><a id="protocol_preliminaries" class="anchor-link"></a><a href="#protocol_preliminaries">Preliminaries</a></h4>
 
-<h5><a id="protocol_network" href="#protocol_network">Network</a></h5>
+<h5 class="anchor-heading"><a id="protocol_network" class="anchor-link"></a><a href="#protocol_network">Network</a></h5>
 
 <p>Kafka uses a binary protocol over TCP. The protocol defines all APIs as request response message pairs. All messages are size delimited and are made up of the following primitive types.</p>
 
@@ -73,7 +73,7 @@
 
 <p>The server has a configurable maximum limit on request size and any request that exceeds this limit will result in the socket being disconnected.</p>
 
-<h5><a id="protocol_partitioning" href="#protocol_partitioning">Partitioning and bootstrapping</a></h5>
+<h5 class="anchor-heading"><a id="protocol_partitioning" class="anchor-link"></a><a href="#protocol_partitioning">Partitioning and bootstrapping</a></h5>
 
 <p>Kafka is a partitioned system so not all servers have the complete data set. Instead recall that topics are split into a pre-defined number of partitions, P, and each partition is replicated with some replication factor, N. Topic partitions themselves are just ordered "commit logs" numbered 0, 1, ..., P-1.</p>
 
@@ -92,7 +92,7 @@
     <li>If we get an appropriate error, refresh the metadata and try again.</li>
 </ol>
 
-<h5><a id="protocol_partitioning_strategies" href="#protocol_partitioning_strategies">Partitioning Strategies</a></h5>
+<h5 class="anchor-heading"><a id="protocol_partitioning_strategies" class="anchor-link"></a><a href="#protocol_partitioning_strategies">Partitioning Strategies</a></h5>
 
 <p>As mentioned above the assignment of messages to partitions is something the producing client controls. That said, how should this functionality be exposed to the end-user?</p>
 
@@ -108,13 +108,13 @@
 
 <p>Semantic partitioning means using some key in the message to assign messages to partitions. For example if you were processing a click message stream you might want to partition the stream by the user id so that all data for a particular user would go to a single consumer. To accomplish this the client can take a key associated with the message and use some hash of this key to choose the partition to which to deliver the message.</p>
 
-<h5><a id="protocol_batching" href="#protocol_batching">Batching</a></h5>
+<h5 class="anchor-heading"><a id="protocol_batching" class="anchor-link"></a><a href="#protocol_batching">Batching</a></h5>
 
 <p>Our APIs encourage batching small things together for efficiency. We have found this is a very significant performance win. Both our API to send messages and our API to fetch messages always work with a sequence of messages not a single message to encourage this. A clever client can make use of this and support an "asynchronous" mode in which it batches together messages sent individually and sends them in larger clumps. We go even further with this and allow the batching across multi [...]
 
 <p>The client implementer can choose to ignore this and send everything one at a time if they like.</p>
 
-<h5><a id="protocol_compatibility" href="#protocol_compatibility">Compatibility</a></h5>
+<h5 class="anchor-heading"><a id="protocol_compatibility" class="anchor-link"></a><a href="#protocol_compatibility">Compatibility</a></h5>
 
 <p>Kafka has a "bidirectional" client compatibility policy.  In other words, new clients can talk to old servers, and old clients can talk to new servers.  This allows users to upgrade either clients or servers without experiencing any downtime.
 
@@ -128,7 +128,7 @@
 
 <p>Note that <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields">KIP-482 tagged fields</a> can be added to a request without incrementing the version number.  This offers an additional way of evolving the message schema without breaking compatibility.  Tagged fields do not take up any space when the field is not set.  Therefore, if a field is rarely used, it is more efficient to make it a tagged field than to put [...]
 
-<h5><a id="api_versions" href="#api_versions">Retrieving Supported API versions</a></h5>
+<h5 class="anchor-heading"><a id="api_versions" class="anchor-link"></a><a href="#api_versions">Retrieving Supported API versions</a></h5>
 <p>In order to work against multiple broker versions, clients need to know what versions of various APIs a
     broker supports. The broker has exposed this information since 0.10.0.0, as described in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version">KIP-35</a>.
     Clients should use the supported API versions information to choose the highest API version supported by both client and broker. If no such version
@@ -151,7 +151,7 @@
         upgraded/downgraded in the mean time.</li>
 </ol>
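<p>For reference, this exchange can also be exercised from the command line with the kafka-broker-api-versions.sh tool that ships with Kafka, shown here against a local broker:</p>
<pre class="line-numbers"><code class="language-bash">> bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092</code></pre>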
 
-<h5><a id="sasl_handshake" href="#sasl_handshake">SASL Authentication Sequence</a></h5>
+<h5 class="anchor-heading"><a id="sasl_handshake" class="anchor-link"></a><a href="#sasl_handshake">SASL Authentication Sequence</a></h5>
 <p>The following sequence is used for SASL authentication:
 <ol>
   <li>Kafka <code>ApiVersionsRequest</code> may be sent by the client to obtain the version ranges of requests supported by the broker. This is optional.</li>
@@ -167,50 +167,48 @@
 Kafka request. SASL/GSSAPI authentication is performed starting with this packet, skipping the first two steps above.</p>
 
 
-<h4><a id="protocol_details" href="#protocol_details">The Protocol</a></h4>
+<h4 class="anchor-heading"><a id="protocol_details" class="anchor-link"></a><a href="#protocol_details">The Protocol</a></h4>
 
-<h5><a id="protocol_types" href="#protocol_types">Protocol Primitive Types</a></h5>
+<h5 class="anchor-heading"><a id="protocol_types" class="anchor-link"></a><a href="#protocol_types">Protocol Primitive Types</a></h5>
 
 <p>The protocol is built out of the following primitive types.</p>
 <!--#include virtual="generated/protocol_types.html" -->
 
-<h5><a id="protocol_grammar" href="#protocol_grammar">Notes on reading the request format grammars</a></h5>
+<h5 class="anchor-heading"><a id="protocol_grammar" class="anchor-link"></a><a href="#protocol_grammar">Notes on reading the request format grammars</a></h5>
 
 <p>The <a href="https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form">BNF</a>s below give an exact context free grammar for the request and response binary format. The BNF is intentionally not compact in order to give human-readable name. As always in a BNF a sequence of productions indicates concatenation. When there are multiple possible productions these are separated with '|' and may be enclosed in parenthesis for grouping. The top-level definition is always given first and subsequ [...]
 
-<h5><a id="protocol_common" href="#protocol_common">Common Request and Response Structure</a></h5>
+<h5 class="anchor-heading"><a id="protocol_common" class="anchor-link"></a><a href="#protocol_common">Common Request and Response Structure</a></h5>
 
 <p>All requests and responses originate from the following grammar which will be incrementally described through the rest of this document:</p>
 
-<pre>
-RequestOrResponse => Size (RequestMessage | ResponseMessage)
-  Size => int32
-</pre>
+<pre class="line-numbers"><code class="language-text">RequestOrResponse => Size (RequestMessage | ResponseMessage)
+  Size => int32</code></pre>
 
 <table class="data-table"><tbody>
 <tr><th>Field</th><th>Description</th></tr>
 <tr><td>message_size</td><td>The message_size field gives the size of the subsequent request or response message in bytes. The client can read requests by first reading this 4 byte size as an integer N, and then reading and parsing the subsequent N bytes of the request.</td></tr>
 </table>
 
-<h5><a id="protocol_recordbatch" href="#protocol_recordbatch">Record Batch</a></h5>
+<h5 class="anchor-heading"><a id="protocol_recordbatch" class="anchor-link"></a><a href="#protocol_recordbatch">Record Batch</a></h5>
 <p>A description of the record batch format can be found <a href="/documentation/#recordbatch">here</a>.</p>
 
-<h4><a id="protocol_constants" href="#protocol_constants">Constants</a></h4>
+<h4 class="anchor-heading"><a id="protocol_constants" class="anchor-link"></a><a href="#protocol_constants">Constants</a></h4>
 
-<h5><a id="protocol_error_codes" href="#protocol_error_codes">Error Codes</a></h5>
+<h5 class="anchor-heading"><a id="protocol_error_codes" class="anchor-link"></a><a href="#protocol_error_codes">Error Codes</a></h5>
 <p>We use numeric codes to indicate what problem occurred on the server. These can be translated by the client into exceptions or whatever error-handling mechanism is appropriate in the client language. Here is a table of the error codes currently in use:</p>
 <!--#include virtual="generated/protocol_errors.html" -->
 
-<h5><a id="protocol_api_keys" href="#protocol_api_keys">Api Keys</a></h5>
+<h5 class="anchor-heading"><a id="protocol_api_keys" class="anchor-link"></a><a href="#protocol_api_keys">Api Keys</a></h5>
 <p>The following are the numeric codes that the ApiKey in the request can take for each of the below request types.</p>
 <!--#include virtual="generated/protocol_api_keys.html" -->
 
-<h4><a id="protocol_messages" href="#protocol_messages">The Messages</a></h4>
+<h4 class="anchor-heading"><a id="protocol_messages" class="anchor-link"></a><a href="#protocol_messages">The Messages</a></h4>
 
 <p>This section gives details on each of the individual API Messages, their usage, their binary format, and the meaning of their fields.</p>
 <!--#include virtual="generated/protocol_messages.html" -->
 
-<h4><a id="protocol_philosophy" href="#protocol_philosophy">Some Common Philosophical Questions</a></h4>
+<h4 class="anchor-heading"><a id="protocol_philosophy" class="anchor-link"></a><a href="#protocol_philosophy">Some Common Philosophical Questions</a></h4>
 
 <p>Some people have asked why we don't use HTTP. There are a number of reasons; the best is that client implementors can make use of some of the more advanced TCP features--the ability to multiplex requests, the ability to simultaneously poll many connections, etc. We have also found HTTP libraries in many languages to be surprisingly shabby.</p>
 
diff --git a/docs/quickstart-docker.html b/docs/quickstart-docker.html
new file mode 100644
index 0000000..d8816ba
--- /dev/null
+++ b/docs/quickstart-docker.html
@@ -0,0 +1,204 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="js/templateData.js" --></script>
+
+<script id="quickstart-docker-template" type="text/x-handlebars-template">
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-1-get-kafka" href="#step-1-get-kafka"></a>
+    <a href="#step-1-get-kafka">Step 1: Get Kafka</a>
+</h4>
+
+<p>
+    This docker-compose file will run everything for you via <a href="https://www.docker.com/" rel="nofollow">Docker</a>.
+    Copy and paste it into a file named <code>docker-compose.yml</code> on your local filesystem.
+</p>
+<pre class="line-numbers"><code class="language-bash">---
+    version: '2'
+    
+    services:
+      broker:
+        image: apache-kafka/broker:2.5.0
+        hostname: kafka-broker
+        container_name: kafka-broker
+    
+    # ...rest omitted...</code></pre>
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-2-start-kafka" href="#step-2-start-kafka"></a>
+    <a href="#step-2-start-kafka">Step 2: Start the Kafka environment</a>
+</h4>
+
+<p>
+    From the directory containing the <code>docker-compose.yml</code> file created in the previous step, run this
+    command in order to start all services in the correct order:
+</p>
+<pre class="line-numbers"><code class="language-bash">$ docker-compose up</code></pre>
+<p>
+    Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
+</p>
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-3-create-a-topic" href="#step-3-create-a-topic"></a>
+    <a href="#step-3-create-a-topic">Step 3: Create a topic to store your events</a>
+</h4>
+<p>Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
+<a href="/documentation/#messages" rel="nofollow"><em>events</em></a> (also called <em>records</em> or <em>messages</em> in the documentation)
+across many machines.
+Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
+from IoT devices or medical equipment, and much more.
+These events are organized and stored in <a href="/documentation/#intro_topics" rel="nofollow"><em>topics</em></a>.
+Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.</p>
+<p>So before you can write your first events, you must create a topic:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --create --topic quickstart-events</code></pre>
+<p>All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
+arguments to display usage information.
+For example, it can also show you
+<a href="/documentation/#intro_topics" rel="nofollow">details such as the partition count</a> of the new topic:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --describe --topic quickstart-events
+    Topic:quickstart-events  PartitionCount:1    ReplicationFactor:1 Configs:
+    Topic: quickstart-events Partition: 0    Leader: 0   Replicas: 0 Isr: 0</code></pre>
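+<p>For example, you could also create a second topic with an explicit partition count and replication factor.
+This is an optional illustration; the topic name below is arbitrary:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --create --topic quickstart-orders \
+    --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092</code></pre>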
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-4-write-events" href="#step-4-write-events"></a>
+    <a href="#step-4-write-events">Step 4: Write some events into the topic</a>
+</h4>
+<p>A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
+Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
+need—even forever.</p>
+<p>Run the console producer client to write a few events into your topic.
+By default, each line you enter will result in a separate event being written to the topic.</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-producer.sh --topic quickstart-events
+This is my first event
+This is my second event</code></pre>
+<p>You can stop the producer client with <code>Ctrl-C</code> at any time.</p>
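+<p>By default these events are written without a key. If you want to see keyed events, you can ask the console
+producer to split each input line into a key and a value at a separator character. This is an optional variation;
+the separator and the example lines below are arbitrary:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-producer.sh --topic quickstart-events \
+    --bootstrap-server localhost:9092 \
+    --property parse.key=true --property key.separator=:
+user1:my first keyed event
+user2:my second keyed event</code></pre>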
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
+    <a href="#step-5-read-the-events">Step 5: Read the events</a>
+</h4>
+<p>Open another terminal session and run the console consumer client to read the events you just created:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning
+This is my first event
+This is my second event</code></pre>
+<p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
+<p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
+additional events, and see how the events immediately show up in your consumer terminal.</p>
+<p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
+You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
+    <a href="#step-5-read-the-events">Step 6: Import/export your data as streams of events with Kafka Connect</a>
+</h4>
+<p>You probably have lots of data in existing systems like relational databases or traditional messaging systems, along
+with many applications that already use these systems.
+<a href="/documentation/#connect" rel="nofollow">Kafka Connect</a> allows you to continuously ingest data from external
+systems into Kafka, and vice versa.  It is thus
+very easy to integrate existing systems with Kafka. To make this process even easier, there are hundreds of such
+connectors readily available.</p>
+<p>Take a look at the <a href="/documentation/#connect" rel="nofollow">Kafka Connect section</a> in the documentation to
+learn more about how to continuously import/export your data into and out of Kafka.</p>
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-7-process-events" href="#step-7-process-events"></a>
+    <a href="#step-7-process-events">Step 7: Process your events with Kafka Streams</a>
+</h4>
+
+<p>Once your data is stored in Kafka as events, you can process the data with the
+<a href="/documentation/streams" rel="nofollow">Kafka Streams</a> client library for Java/Scala.
+It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data
+is stored in Kafka topics.  Kafka Streams combines the simplicity of writing and deploying standard Java and Scala
+applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications
+highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful
+operations and aggregations, windowing, joins, processing based on event-time, and much more.</p>
+<p>To give you a first taste, here's how one would implement the popular <code>WordCount</code> algorithm:</p>
+<pre class="line-numbers"><code class="language-java">KStream<String, String> textLines = builder.stream("quickstart-events");
+
+KTable<String, Long> wordCounts = textLines
+            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
+            .groupBy((keyIgnored, word) -> word)
+            .count();
+
+wordCounts.toStream().to("output-topic"), Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
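+<p>The fragment above assumes a <code>StreamsBuilder</code> named <code>builder</code> plus the usual configuration
+and startup boilerplate. A minimal, self-contained sketch of that surrounding code could look as follows; the
+application id, bootstrap address and class name are illustrative assumptions, not part of the snippet above:</p>
+<pre class="line-numbers"><code class="language-java">import java.util.Properties;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.StreamsConfig;
+
+public class WordCountApp {
+    public static void main(String[] args) {
+        // Basic configuration: an application id and the broker to connect to (example values)
+        Properties props = new Properties();
+        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-quickstart");
+        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+        // Build the topology; the WordCount fragment shown above goes here
+        StreamsBuilder builder = new StreamsBuilder();
+        // ... KStream / KTable definitions from the snippet above ...
+
+        // Start the application and close it cleanly on shutdown
+        KafkaStreams streams = new KafkaStreams(builder.build(), props);
+        streams.start();
+        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
+    }
+}</code></pre>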
+<p>The <a href="/25/documentation/streams/quickstart" rel="nofollow">Kafka Streams demo</a> and the
+<a href="/25/documentation/streams/tutorial" rel="nofollow">app development tutorial</a> demonstrate how to code and run
+such a streaming application from start to finish.</p>
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="step-8-terminate" href="#step-8-terminate"></a>
+    <a href="#step-8-terminate">Step 8: Terminate the Kafka environment</a>
+</h4>
+<p>Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment—or continue playing around.</p>
+<p>Run the following command to tear down the environment, which also deletes any events you have created along the way:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker-compose down</code></pre>
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+    <a class="anchor-link" id="quickstart_kafkacongrats" href="#quickstart_kafkacongrats"></a>
+    <a href="#quickstart_kafkacongrats">Congratulations!</a>
+  </h4>
+  
+  <p>You have successfully finished the Apache Kafka quickstart.</p>
+  
+  <p>To learn more, we suggest the following next steps:</p>
+  
+  <ul>
+      <li>
+          Read through the brief <a href="/intro">Introduction</a> to learn how Kafka works at a high level, its
+          main concepts, and how it compares to other technologies. To understand Kafka in more detail, head over to the
+          <a href="/documentation/">Documentation</a>.
+      </li>
+      <li>
+          Browse through the <a href="/powered-by">Use Cases</a> to learn how other users in our world-wide
+          community are getting value out of Kafka.
+      </li>
+      <!--
+      <li>
+          Learn how _Kafka compares to other technologies_ [note to design team: this new page is not yet written] you might be familiar with.
+      </li>
+      -->
+      <li>
+          Join a <a href="/events">local Kafka meetup group</a> and
+          <a href="https://kafka-summit.org/past-events/">watch talks from Kafka Summit</a>,
+          the main conference of the Kafka community.
+      </li>
+  </ul>
+</div>
+</script>
+
+<div class="p-quickstart-docker"></div>
diff --git a/docs/quickstart-zookeeper.html b/docs/quickstart-zookeeper.html
new file mode 100644
index 0000000..33cd9bd
--- /dev/null
+++ b/docs/quickstart-zookeeper.html
@@ -0,0 +1,277 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script>
+  <!--#include virtual="js/templateData.js" -->
+</script>
+
+<script id="quickstart-template" type="text/x-handlebars-template">
+
+      <div class="quickstart-step">
+      <h4 class="anchor-heading">
+          <a class="anchor-link" id="quickstart_download" href="#quickstart_download"></a>
+          <a href="#quickstart_download">Step 1: Get Kafka</a>
+      </h4>
+
+      <p>
+          <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/2.6.1/kafka_2.13-2.6.1.tgz">Download</a>
+          the latest Kafka release and extract it:
+      </p>
+
+<pre class="line-numbers"><code class="language-bash">$ tar -xzf kafka_2.13-2.6.1.tgz
+$ cd kafka_2.13-2.6.1</code></pre>
+  </div>
+
+  <div class="quickstart-step">
+      <h4 class="anchor-heading">
+          <a class="anchor-link" id="quickstart_startserver" href="#quickstart_startserver"></a>
+          <a href="#quickstart_startserver">Step 2: Start the Kafka environment</a>
+      </h4>
+
+      <p class="note">
+        NOTE: Your local environment must have Java 8+ installed.
+      </p>
+
+      <p>
+          Run the following commands so that all services start in the correct order:
+      </p>
+
+<pre class="line-numbers"><code class="language-bash"># Start the ZooKeeper service
+# Note: Soon, ZooKeeper will no longer be required by Apache Kafka.
+$ bin/zookeeper-server-start.sh config/zookeeper.properties</code></pre>
+
+      <p>
+          Open another terminal session and run:
+      </p>
+
+<pre class="line-numbers"><code class="language-bash"># Start the Kafka broker service
+$ bin/kafka-server-start.sh config/server.properties</code></pre>
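+
+      <p>
+          If you prefer not to keep two terminal sessions occupied, both scripts also accept a <code>-daemon</code> flag
+          that starts the service in the background, with output written to the <code>logs</code> directory. This is a
+          small optional variation of the commands above:
+      </p>
+
+<pre class="line-numbers"><code class="language-bash"># Same services, started in the background instead
+$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
+$ bin/kafka-server-start.sh -daemon config/server.properties</code></pre>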
+
+      <p>
+          Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
+      </p>
+  </div>
+
+  <div class="quickstart-step">
+      <h4 class="anchor-heading">
+          <a class="anchor-link" id="quickstart_createtopic" href="#quickstart_createtopic"></a>
+          <a href="#quickstart_createtopic">Step 3: Create a topic to store your events</a>
+      </h4>
+
+      <p>
+          Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
+          <a href="/documentation/#messages"><em>events</em></a> (also called <em>records</em> or
+          <em>messages</em> in the documentation)
+          across many machines.
+      </p>
+
+      <p>
+          Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
+          from IoT devices or medical equipment, and much more. These events are organized and stored in
+          <a href="/documentation/#intro_topics"><em>topics</em></a>.
+          Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.
+      </p>
+
+      <p>
+          So before you can write your first events, you must create a topic.  Open another terminal session and run:
+      </p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092</code></pre>
+
+      <p>
+          All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
+          arguments to display usage information. For example, it can also show you
+          <a href="/documentation/#intro_topics">details such as the partition count</a>
+          of the new topic:
+      </p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
+Topic:quickstart-events  PartitionCount:1    ReplicationFactor:1 Configs:
+    Topic: quickstart-events Partition: 0    Leader: 0   Replicas: 0 Isr: 0</code></pre>
+  </div>
+
+  <div class="quickstart-step">
+      <h4 class="anchor-heading">
+          <a class="anchor-link" id="quickstart_send" href="#quickstart_send"></a>
+          <a href="#quickstart_send">Step 4: Write some events into the topic</a>
+      </h4>
+
+      <p>
+          A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
+          Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
+          need—even forever.
+      </p>
+
+      <p>
+          Run the console producer client to write a few events into your topic.
+          By default, each line you enter will result in a separate event being written to the topic.
+      </p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
+This is my first event
+This is my second event</code></pre>
+
+      <p>
+          You can stop the producer client with <code>Ctrl-C</code> at any time.
+      </p>
+  </div>
+
+  <div class="quickstart-step">
+      <h4 class="anchor-heading">
+          <a class="anchor-link" id="quickstart_consume" href="#quickstart_consume"></a>
+          <a href="#quickstart_consume">Step 5: Read the events</a>
+      </h4>
+
+  <p>Open another terminal session and run the console consumer client to read the events you just created:</p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
+This is my first event
+This is my second event</code></pre>
+
+  <p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
+
+  <p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
+  additional events, and see how the events immediately show up in your consumer terminal.</p>
+
+  <p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
+  You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
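+
+  <p>As a further variation, you can read the topic as part of a consumer group, which is how multiple consumers
+  typically share the work of reading a topic; the group name below is just an example:</p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning \
+    --group quickstart-group --bootstrap-server localhost:9092</code></pre>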
+  </div>
+
+  <div class="quickstart-step">
+  <h4 class="anchor-heading">
+      <a class="anchor-link" id="quickstart_kafkaconnect" href="#quickstart_kafkaconnect"></a>
+      <a href="#quickstart_kafkaconnect">Step 6: Import/export your data as streams of events with Kafka Connect</a>
+  </h4>
+
+  <p>
+      You probably have lots of data in existing systems like relational databases or traditional messaging systems,
+      along with many applications that already use these systems.
+      <a href="/documentation/#connect">Kafka Connect</a> allows you to continuously ingest
+      data from external systems into Kafka, and vice versa.  It is thus very easy to integrate existing systems with
+      Kafka. To make this process even easier, there are hundreds of such connectors readily available.
+  </p>
+
+  <p>Take a look at the <a href="/documentation/#connect">Kafka Connect section</a>
+  to learn more about how to continuously import/export your data into and out of Kafka.</p>
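+
+  <p>As a concrete, optional example: the Kafka distribution ships with simple file connectors, and the standalone
+  Connect worker can run them using the sample configuration files from the <code>config</code> directory. The bundled
+  file source is configured to read <code>test.txt</code> from the current directory, and the file sink writes what it
+  reads back out to <code>test.sink.txt</code>:</p>
+
+<pre class="line-numbers"><code class="language-bash"># Create some seed data for the file source connector to pick up
+$ echo -e "foo\nbar" > test.txt
+
+# Run a standalone Connect worker with the bundled file source and file sink connectors
+$ bin/connect-standalone.sh config/connect-standalone.properties \
+    config/connect-file-source.properties config/connect-file-sink.properties</code></pre>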
+
+  </div>
+
+  <div class="quickstart-step">
+  <h4 class="anchor-heading">
+      <a class="anchor-link" id="quickstart_kafkastreams" href="#quickstart_kafkastreams"></a>
+      <a href="#quickstart_kafkastreams">Step 7: Process your events with Kafka Streams</a>
+  </h4>
+
+  <p>
+      Once your data is stored in Kafka as events, you can process the data with the
+      <a href="/documentation/streams">Kafka Streams</a> client library for Java/Scala.
+      It allows you to implement mission-critical real-time applications and microservices, where the input
+      and/or output data is stored in Kafka topics.  Kafka Streams combines the simplicity of writing and deploying
+      standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster
+      technology to make these applications highly scalable, elastic, fault-tolerant, and distributed. The library
+      supports exactly-once processing, stateful operations and aggregations, windowing, joins, processing based
+      on event-time, and much more.
+  </p>
+
+  <p>To give you a first taste, here's how one would implement the popular <code>WordCount</code> algorithm:</p>
+
+<pre class="line-numbers"><code class="language-bash">KStream&lt;String, String&gt; textLines = builder.stream("quickstart-events");
+
+KTable&lt;String, Long&gt; wordCounts = textLines
+            .flatMapValues(line -&gt; Arrays.asList(line.toLowerCase().split(" ")))
+            .groupBy((keyIgnored, word) -&gt; word)
+            .count();
+
+wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
+
+  <p>
+      The <a href="/25/documentation/streams/quickstart">Kafka Streams demo</a>
+      and the <a href="/25/documentation/streams/tutorial">app development tutorial</a> 
+      demonstrate how to code and run such a streaming application from start to finish.
+  </p>
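+
+  <p>
+      If you would like to see a Streams application in action before writing any code, the distribution also bundles
+      a runnable WordCount example. It expects an input topic named <code>streams-plaintext-input</code> to exist and
+      writes its results to <code>streams-wordcount-output</code>, as walked through in the Kafka Streams demo linked
+      above:
+  </p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo</code></pre>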
+
+  </div>
+
+  <div class="quickstart-step">
+  <h4 class="anchor-heading">
+      <a class="anchor-link" id="quickstart_kafkaterminate" href="#quickstart_kafkaterminate"></a>
+      <a href="#quickstart_kafkaterminate">Step 8: Terminate the Kafka environment</a>
+  </h4>
+
+  <p>
+      Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment—or
+      continue playing around.
+  </p>
+
+  <ol>
+      <li>
+          Stop the producer and consumer clients with <code>Ctrl-C</code>, if you haven't done so already.
+      </li>
+      <li>
+          Stop the Kafka broker with <code>Ctrl-C</code>.
+      </li>
+      <li>
+          Lastly, stop the ZooKeeper server with <code>Ctrl-C</code>.
+      </li>
+  </ol>
+
+  <p>
+      If you also want to delete any data of your local Kafka environment including any events you have created
+      along the way, run the command:
+  </p>
+
+<pre class="line-numbers"><code class="language-bash">$ rm -rf /tmp/kafka-logs /tmp/zookeeper</code></pre>
+
+  </div>
+
+  <div class="quickstart-step">
+  <h4 class="anchor-heading">
+      <a class="anchor-link" id="quickstart_kafkacongrats" href="#quickstart_kafkacongrats"></a>
+      <a href="#quickstart_kafkacongrats">Congratulations!</a>
+    </h4>
+
+    <p>You have successfully finished the Apache Kafka quickstart.</p>
+
+    <p>To learn more, we suggest the following next steps:</p>
+
+    <ul>
+        <li>
+            Read through the brief <a href="/intro">Introduction</a>
+            to learn how Kafka works at a high level, its main concepts, and how it compares to other
+            technologies. To understand Kafka in more detail, head over to the
+            <a href="/documentation/">Documentation</a>.
+        </li>
+        <li>
+            Browse through the <a href="/powered-by">Use Cases</a> to learn how 
+            other users in our world-wide community are getting value out of Kafka.
+        </li>
+        <!--
+        <li>
+            Learn how _Kafka compares to other technologies_ you might be familiar with.
+            [note to design team: this new page is not yet written] 
+        </li>
+        -->
+        <li>
+            Join a <a href="/events">local Kafka meetup group</a> and
+            <a href="https://kafka-summit.org/past-events/">watch talks from Kafka Summit</a>,
+            the main conference of the Kafka community.
+        </li>
+    </ul>
+  </div>
+</script>
+
+<div class="p-quickstart"></div>
diff --git a/docs/security.html b/docs/security.html
index f0a1e5f..0ea7b6d 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -16,7 +16,7 @@
 -->
 
 <script id="security-template" type="text/x-handlebars-template">
-    <h3><a id="security_overview" href="#security_overview">7.1 Security Overview</a></h3>
+    <h3 class="anchor-heading"><a id="security_overview" class="anchor-link"></a><a href="#security_overview">7.1 Security Overview</a></h3>
     In release 0.9.0.0, the Kafka community added a number of features that, used either separately or together, increases security in a Kafka cluster. The following security measures are currently supported:
     <ol>
         <li>Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL. Kafka supports the following SASL mechanisms:
@@ -36,20 +36,18 @@
 
     The guides below explain how to configure and use the security features in both clients and brokers.
 
-    <h3><a id="security_ssl" href="#security_ssl">7.2 Encryption and Authentication using SSL</a></h3>
+    <h3 class="anchor-heading"><a id="security_ssl" class="anchor-link"></a><a href="#security_ssl">7.2 Encryption and Authentication using SSL</a></h3>
     Apache Kafka allows clients to use SSL for encryption of traffic as well as authentication. By default, SSL is disabled but can be turned on if needed.
     The following paragraphs explain in detail how to set up your own PKI infrastructure, use it to create certificates and configure Kafka to use these.
 
     <ol>
-        <li><h4><a id="security_ssl_key" href="#security_ssl_key">Generate SSL key and certificate for each Kafka broker</a></h4>
+        <li><h4 class="anchor-heading"><a id="security_ssl_key" class="anchor-link"></a><a href="#security_ssl_key">Generate SSL key and certificate for each Kafka broker</a></h4>
             The first step of deploying one or more brokers with SSL support is to generate a public/private keypair for every server.
             Since Kafka expects all keys and certificates to be stored in keystores we will use Java's keytool command for this task.
             The tool supports two keystore formats: the Java-specific JKS format, which has since been deprecated, and PKCS12.
             PKCS12 is the default format as of Java 9; to ensure this format is used regardless of the Java version in use, all of the following
             commands explicitly specify the PKCS12 format.
-            <pre class="brush: bash;">
-                keytool -keystore {keystorefile} -alias localhost -validity {validity} -genkey -keyalg RSA -storetype pkcs12
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">                keytool -keystore {keystorefile} -alias localhost -validity {validity} -genkey -keyalg RSA -storetype pkcs12</code></pre>
             You need to specify two parameters in the above command:
             <ol>
                 <li>keystorefile: the keystore file that stores the keys (and later the certificate) for this broker. The keystore file contains the private
@@ -65,9 +63,7 @@
             authentication purposes.<br>
             To generate certificate signing requests run the following command for all server keystores created so far.
 
-            <pre class="brush: bash;">
-                keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -destkeystoretype pkcs12 -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">                keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -destkeystoretype pkcs12 -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}</code></pre>
             This command assumes that you want to add hostname information to the certificate; if this is not the case, you can omit the extension parameter <code>-ext SAN=DNS:{FQDN},IP:{IPADDRESS1}</code>. Please see below for more information on this.
 
             <h5>Host Name Verification</h5>
@@ -80,9 +76,7 @@
             Server host name verification may be disabled by setting <code>ssl.endpoint.identification.algorithm</code> to an empty string.<br>
             For dynamically configured broker listeners, hostname verification may be disabled using <code>kafka-configs.sh</code>:<br>
 
-            <pre class="brush: text;">
-                bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="
-            </pre>
+            <pre class="line-numbers"><code class="language-text">                bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="</code></pre>
 
             <p><b>Note:</b></p>
             Normally there is no good reason to disable hostname verification apart from being the quickest way to "just get it to work" followed
@@ -105,12 +99,10 @@
 
 
             To add a SAN field append the following argument <code> -ext SAN=DNS:{FQDN},IP:{IPADDRESS} </code> to the keytool command:
-            <pre class="brush: bash;">
-                keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -destkeystoretype pkcs12 -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">                keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -destkeystoretype pkcs12 -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}</code></pre>
         </li>
 
-        <li><h4><a id="security_ssl_ca" href="#security_ssl_ca">Creating your own CA</a></h4>
+        <li><h4 class="anchor-heading"><a id="security_ssl_ca" class="anchor-link"></a><a href="#security_ssl_ca">Creating your own CA</a></h4>
             After this step each machine in the cluster has a public/private key pair which can already be used to encrypt traffic and a certificate
             signing request, which is the basis for creating a certificate. To add authentication capabilities this signing request needs to be signed
             by a trusted authority, which will be created in this step.
@@ -131,8 +123,7 @@
             CA keypair.<br>
 
             Save the following listing into a file called openssl-ca.cnf and adjust the values for validity and common attributes as necessary.
-            <pre class="brush: bash">
-HOME            = .
+            <pre class="line-numbers"><code class="language-bash">HOME            = .
 RANDFILE        = $ENV::HOME/.rnd
 
 ####################################################################
@@ -212,39 +203,30 @@ emailAddress           = optional
 subjectKeyIdentifier   = hash
 authorityKeyIdentifier = keyid,issuer
 basicConstraints       = CA:FALSE
-keyUsage               = digitalSignature, keyEncipherment
-            </pre>
+keyUsage               = digitalSignature, keyEncipherment</code></pre>
 
             Then create a database and serial number file; these will be used to keep track of which certificates were signed with this CA. Both of
             these are simply text files that reside in the same directory as your CA keys.
 
-            <pre class="brush: bash;">
-                echo 01 > serial.txt
-                touch index.txt
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">                echo 01 > serial.txt
+                touch index.txt</code></pre>
 
             With these steps done you are now ready to generate your CA that will be used to sign certificates later.
 
-            <pre class="brush: bash;">
-            openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">            openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM</code></pre>
 
             The CA is simply a public/private key pair and certificate that is signed by itself, and is only intended to sign other certificates.<br>
             This keypair should be kept very safe; if someone gains access to it, they can create and sign certificates that will be trusted by your
             infrastructure, which means they will be able to impersonate anybody when connecting to any service that trusts this CA.<br>
 
             The next step is to add the generated CA to the <b>clients' truststore</b> so that the clients can trust this CA:
-            <pre class="brush: bash;">
-                keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">                keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert</code></pre>
 
             <b>Note:</b>
             If you configure the Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" in the
             <a href="#brokerconfigs">Kafka brokers config</a> then you must provide a truststore for the Kafka brokers as well and it should have
             all the CA certificates that clients' keys were signed by.
-            <pre class="brush: bash;">
-                keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">                keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert</code></pre>
 
             In contrast to the keystore in step 1 that stores each machine's own identity, the truststore of a client stores all the certificates
             that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed by that
@@ -253,17 +235,13 @@ keyUsage               = digitalSignature, keyEncipherment
             in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all
             other machines.
         </li>
-        <li><h4><a id="security_ssl_signing" href="#security_ssl_signing">Signing the certificate</a></h4>
+        <li><h4 class="anchor-heading"><a id="security_ssl_signing" class="anchor-link"></a><a href="#security_ssl_signing">Signing the certificate</a></h4>
             Then sign it with the CA:
-            <pre class="brush: bash;">
-                openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out {server certificate} -infiles {certificate signing request}
-           </pre>
+            <pre class="line-numbers"><code class="language-bash">                openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out {server certificate} -infiles {certificate signing request}</code></pre>
 
             Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:
-            <pre class="brush: bash;">
-                keytool -keystore {keystore} -alias CARoot -import -file {CA certificate}
-                keytool -keystore {keystore} -alias localhost -import -file cert-signed
-            </pre>
+            <pre class="line-numbers"><code class="language-bash">                keytool -keystore {keystore} -alias CARoot -import -file {CA certificate}
+                keytool -keystore {keystore} -alias localhost -import -file cert-signed</code></pre>
 
             The definitions of the parameters are the following:
             <ol>
@@ -283,7 +261,7 @@ keyUsage               = digitalSignature, keyEncipherment
                 extensive scripting in place to help with these steps.</p>
 
         </li>
-        <li><h4><a id="security_ssl_production" href="#security_ssl_production">Common Pitfalls in Production</a></h4>
+        <li><h4 class="anchor-heading"><a id="security_ssl_production" class="anchor-link"></a><a href="#security_ssl_production">Common Pitfalls in Production</a></h4>
             The above paragraphs show the process to create your own CA and use it to sign certificates for your cluster.
             While very useful for sandbox, dev, test, and similar systems, this is usually not the correct process to create certificates for a production
             cluster in a corporate environment.
@@ -316,28 +294,24 @@ keyUsage               = digitalSignature, keyEncipherment
                     harder for a malicious party to obtain certificates with potentially misleading or fraudulent values.
                    It is advisable to double-check that signed certificates contain all requested SAN fields, to enable proper hostname verification.
                     The following command can be used to print certificate details to the console, which should be compared with what was originally requested:
-                    <pre class="brush: bash;">
-                        openssl x509 -in certificate.crt -text -noout
-                    </pre>
+                    <pre class="line-numbers"><code class="language-bash">                        openssl x509 -in certificate.crt -text -noout</code></pre>
                 </li>
             </ol>
         </li>
-        <li><h4><a id="security_configbroker" href="#security_configbroker">Configuring Kafka Brokers</a></h4>
+        <li><h4 class="anchor-heading"><a id="security_configbroker" class="anchor-link"></a><a href="#security_configbroker">Configuring Kafka Brokers</a></h4>
             Kafka Brokers support listening for connections on multiple ports.
             We need to configure the following property in server.properties, which must have one or more comma-separated values:
-            <pre>listeners</pre>
+            <pre><code>listeners</code></pre>
 
             If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports will be necessary.
-            <pre class="brush: text;">
-            listeners=PLAINTEXT://host.name:port,SSL://host.name:port</pre>
+            <pre class="line-numbers"><code class="language-text">            listeners=PLAINTEXT://host.name:port,SSL://host.name:port</code></pre>
 
             Following SSL configs are needed on the broker side
-            <pre class="brush: text;">
-            ssl.keystore.location=/var/private/ssl/server.keystore.jks
+            <pre class="line-numbers"><code class="language-text">            ssl.keystore.location=/var/private/ssl/server.keystore.jks
             ssl.keystore.password=test1234
             ssl.key.password=test1234
             ssl.truststore.location=/var/private/ssl/server.truststore.jks
-            ssl.truststore.password=test1234</pre>
+            ssl.truststore.password=test1234</code></pre>
 
             Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set access to the truststore is still available, but integrity checking is disabled.
 
@@ -351,8 +325,7 @@ keyUsage               = digitalSignature, keyEncipherment
                 <li>ssl.secure.random.implementation=SHA1PRNG</li>
             </ol>
             If you want to enable SSL for inter-broker communication, add the following to the server.properties file (it defaults to PLAINTEXT)
-            <pre>
-            security.inter.broker.protocol=SSL</pre>
+            <pre class="line-numbers"><code class="language-text">            security.inter.broker.protocol=SSL</code></pre>
 
             <p>
             Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the <a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">JCE Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed in the JDK/JRE. See the
@@ -368,35 +341,31 @@ keyUsage               = digitalSignature, keyEncipherment
             </p>
 
             Once you start the broker you should be able to see in the server.log
-            <pre>
-            with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)</pre>
+            <pre class="line-numbers"><code class="language-text">            with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)</code></pre>
 
             To quickly check whether the server keystore and truststore are set up properly, you can run the following command:
-            <pre>openssl s_client -debug -connect localhost:9093 -tls1</pre> (Note: TLSv1 should be listed under ssl.enabled.protocols)<br>
+            <pre class="line-numbers"><code class="language-bash">openssl s_client -debug -connect localhost:9093 -tls1</code></pre> (Note: TLSv1 should be listed under ssl.enabled.protocols)<br>
             In the output of this command you should see server's certificate:
-            <pre>
-            -----BEGIN CERTIFICATE-----
+            <pre class="line-numbers"><code class="language-text">            -----BEGIN CERTIFICATE-----
             {variable sized random bytes}
             -----END CERTIFICATE-----
             subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Sriharsha Chintalapani
-            issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com</pre>
+            issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com</code></pre>
             If the certificate does not show up or if there are any other error messages then your keystore is not setup properly.</li>
 
-        <li><h4><a id="security_configclients" href="#security_configclients">Configuring Kafka Clients</a></h4>
+        <li><h4 class="anchor-heading"><a id="security_configclients" class="anchor-link"></a><a href="#security_configclients">Configuring Kafka Clients</a></h4>
             SSL is supported only for the new Kafka Producer and Consumer, the older API is not supported. The configs for SSL will be the same for both producer and consumer.<br>
             If client authentication is not required in the broker, then the following is a minimal configuration example:
-            <pre class="brush: text;">
-            security.protocol=SSL
+            <pre class="line-numbers"><code class="language-text">            security.protocol=SSL
             ssl.truststore.location=/var/private/ssl/client.truststore.jks
-            ssl.truststore.password=test1234</pre>
+            ssl.truststore.password=test1234</code></pre>
 
             Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set access to the truststore is still available, but integrity checking is disabled.
 
             If client authentication is required, then a keystore must be created like in step 1 and the following must also be configured:
-            <pre class="brush: text;">
-            ssl.keystore.location=/var/private/ssl/client.keystore.jks
+            <pre class="line-numbers"><code class="language-text">            ssl.keystore.location=/var/private/ssl/client.keystore.jks
             ssl.keystore.password=test1234
-            ssl.key.password=test1234</pre>
+            ssl.key.password=test1234</code></pre>
 
             Other configuration settings that may also be needed depending on our requirements and the broker configuration:
                 <ol>
@@ -408,15 +377,14 @@ keyUsage               = digitalSignature, keyEncipherment
                 </ol>
     <br>
             Examples using console-producer and console-consumer:
-            <pre class="brush: bash;">
-            kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-ssl.properties
-            kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</pre>
+            <pre class="line-numbers"><code class="language-bash">            kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-ssl.properties
+            kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</code></pre>
         </li>
     </ol>
-    <h3><a id="security_sasl" href="#security_sasl">7.3 Authentication using SASL</a></h3>
+    <h3 class="anchor-heading"><a id="security_sasl" class="anchor-link"></a><a href="#security_sasl">7.3 Authentication using SASL</a></h3>
 
     <ol>
-    <li><h4><a id="security_sasl_jaasconfig" href="#security_sasl_jaasconfig">JAAS configuration</a></h4>
+    <li><h4 class="anchor-heading"><a id="security_sasl_jaasconfig" class="anchor-link"></a><a href="#security_sasl_jaasconfig">JAAS configuration</a></h4>
     <p>Kafka uses the Java Authentication and Authorization Service
     (<a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jaas/JAASRefGuide.html">JAAS</a>)
     for SASL configuration.</p>
@@ -450,15 +418,14 @@ keyUsage               = digitalSignature, keyEncipherment
             login module may be specified in the config value. If multiple mechanisms are configured on a
             listener, configs must be provided for each mechanism using the listener and mechanism prefix.
             For example,
-                    <pre class="brush: text;">
-        listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
+                    <pre class="line-numbers"><code class="language-text">        listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
             username="admin" \
             password="admin-secret";
         listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
             username="admin" \
             password="admin-secret" \
             user_admin="admin-secret" \
-            user_alice="alice-secret";</pre>
+            user_alice="alice-secret";</code></pre>
 
             If JAAS configuration is defined at different levels, the order of precedence used is:
             <ul>
@@ -497,7 +464,7 @@ keyUsage               = digitalSignature, keyEncipherment
                 <a href="#security_sasl_scram_clientconfig">SCRAM</a> or
                 <a href="#security_sasl_oauthbearer_clientconfig">OAUTHBEARER</a> for example configurations.</p></li>
 
-                <li><h6><a id="security_client_staticjaas" href="#security_client_staticjaas">JAAS configuration using static config file</a></h6>
+                <li><h6 class="anchor-heading"><a id="security_client_staticjaas" class="anchor-link"></a><a href="#security_client_staticjaas">JAAS configuration using static config file</a></h6>
                 To configure SASL authentication on the clients using static JAAS config file:
                 <ol>
                 <li>Add a JAAS config file with a client login section named <tt>KafkaClient</tt>. Configure
@@ -508,17 +475,16 @@ keyUsage               = digitalSignature, keyEncipherment
                     <a href="#security_sasl_oauthbearer_clientconfig">OAUTHBEARER</a>.
                     For example, <a href="#security_sasl_gssapi_clientconfig">GSSAPI</a>
                     credentials may be configured as:
-                    <pre class="brush: text;">
-        KafkaClient {
+                    <pre class="line-numbers"><code class="language-text">        KafkaClient {
         com.sun.security.auth.module.Krb5LoginModule required
         useKeyTab=true
         storeKey=true
         keyTab="/etc/security/keytabs/kafka_client.keytab"
         principal="kafka-client-1@EXAMPLE.COM";
-    };</pre>
+    };</code></pre>
                 </li>
                 <li>Pass the JAAS config file location as JVM parameter to each client JVM. For example:
-                    <pre class="brush: bash;">    -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</pre></li>
+                    <pre class="language-bash">    -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</code></pre></li>
                 </ol>
                 </li>
             </ol>
@@ -550,11 +516,11 @@ keyUsage               = digitalSignature, keyEncipherment
             <li>Configure a SASL port in server.properties, by adding at least one of
                 SASL_PLAINTEXT or SASL_SSL to the <i>listeners</i> parameter, which
                 contains one or more comma-separated values:
-                <pre>    listeners=SASL_PLAINTEXT://host.name:port</pre>
+                <pre>    listeners=SASL_PLAINTEXT://host.name:port</code></pre>
                 If you are only configuring a SASL port (or if you want
                 the Kafka brokers to authenticate each other using SASL) then make sure
                 you set the same SASL protocol for inter-broker communication:
-                <pre>    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)</pre></li>
+                <pre>    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)</code></pre></li>
             <li>Select one or more  <a href="#security_sasl_mechanism">supported mechanisms</a>
                 to enable in the broker and follow the steps to configure SASL for the mechanism.
                 To enable multiple mechanisms in the broker, follow the steps
@@ -574,23 +540,21 @@ keyUsage               = digitalSignature, keyEncipherment
     </li>
     <li><h4><a id="security_sasl_kerberos" href="#security_sasl_kerberos">Authentication using SASL/Kerberos</a></h4>
         <ol>
-        <li><h5><a id="security_sasl_kerberos_prereq" href="#security_sasl_kerberos_prereq">Prerequisites</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_kerberos_prereq" class="anchor-link"></a><a href="#security_sasl_kerberos_prereq">Prerequisites</a></h5>
         <ol>
             <li><b>Kerberos</b><br>
             If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one, your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (<a href="https://help.ubuntu.com/community/Kerberos">Ubuntu</a>, <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/install [...]
             <li><b>Create Kerberos Principals</b><br>
             If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).</br>
             If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:
-                <pre class="brush: bash;">
-        sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
-        sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"</pre></li>
+                <pre class="line-numbers"><code class="language-bash">        sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
+        sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"</code></pre></li>
             <li><b>Make sure all hosts can be reachable using hostnames</b> - it is a Kerberos requirement that all your hosts can be resolved with their FQDNs.</li>
         </ol>
-        <li><h5><a id="security_sasl_kerberos_brokerconfig" href="#security_sasl_kerberos_brokerconfig">Configuring Kafka Brokers</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_kerberos_brokerconfig" class="anchor-link"></a><a href="#security_sasl_kerberos_brokerconfig">Configuring Kafka Brokers</a></h5>
         <ol>
             <li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab):
-            <pre class="brush: text;">
-        KafkaServer {
+            <pre class="line-numbers"><code class="language-text">        KafkaServer {
             com.sun.security.auth.module.Krb5LoginModule required
             useKeyTab=true
             storeKey=true
@@ -605,27 +569,26 @@ keyUsage               = digitalSignature, keyEncipherment
         storeKey=true
         keyTab="/etc/security/keytabs/kafka_server.keytab"
         principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
-        };</pre>
+        };</code></pre>
 
             </li>
             <tt>KafkaServer</tt> section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
             allows the broker to login using the keytab specified in this section. See <a href="#security_jaas_broker">notes</a> for more details on Zookeeper SASL configuration.
             <li>Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
                 <pre>    -Djava.security.krb5.conf=/etc/kafka/krb5.conf
-        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre>
+        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre>
             </li>
             <li>Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting kafka broker.</li>
             <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
             <pre>    listeners=SASL_PLAINTEXT://host.name:port
         security.inter.broker.protocol=SASL_PLAINTEXT
         sasl.mechanism.inter.broker.protocol=GSSAPI
-        sasl.enabled.mechanisms=GSSAPI
-            </pre>
+        sasl.enabled.mechanisms=GSSAPI</code></pre>
             </li>We must also configure the service name in server.properties, which should match the principal name of the kafka brokers. In the above example, principal is "kafka/kafka1.hostname.com@EXAMPLE.com", so:
-            <pre>    sasl.kerberos.service.name=kafka</pre>
+            <pre>    sasl.kerberos.service.name=kafka</code></pre>
 
         </ol></li>
-        <li><h5><a id="security_sasl_kerberos_clientconfig" href="#security_sasl_kerberos_clientconfig">Configuring Kafka Clients</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_kerberos_clientconfig" class="anchor-link"></a><a href="#security_sasl_kerberos_clientconfig">Configuring Kafka Clients</a></h5>
             To configure SASL authentication on the clients:
             <ol>
                 <li>
@@ -636,30 +599,27 @@ keyUsage               = digitalSignature, keyEncipherment
                     The property <code>sasl.jaas.config</code> in producer.properties or consumer.properties describes
                     how clients like producer and consumer can connect to the Kafka Broker. The following is an example
                     configuration for a client using a keytab (recommended for long-running processes):
-                <pre>
-    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
+                <pre class="line-numbers"><code class="language-text">    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
         useKeyTab=true \
         storeKey=true  \
         keyTab="/etc/security/keytabs/kafka_client.keytab" \
-        principal="kafka-client-1@EXAMPLE.COM";</pre>
+        principal="kafka-client-1@EXAMPLE.COM";</code></pre>
 
                    For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used
                    along with "useTicketCache=true" as in:
-                <pre>
-    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
-        useTicketCache=true;</pre>
+                <pre class="line-numbers"><code class="language-text">    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
+        useTicketCache=true;</code></pre>
 
                    JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
                    as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
                    <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</li>
                 <li>Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting kafka client.</li>
                 <li>Optionally pass the krb5 file locations as JVM parameters to each client JVM (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
-                <pre>    -Djava.security.krb5.conf=/etc/kafka/krb5.conf</pre></li>
+                <pre>    -Djava.security.krb5.conf=/etc/kafka/krb5.conf</code></pre></li>
                 <li>Configure the following properties in producer.properties or consumer.properties:
-                <pre>
-    security.protocol=SASL_PLAINTEXT (or SASL_SSL)
+                <pre class="line-numbers"><code class="language-text">    security.protocol=SASL_PLAINTEXT (or SASL_SSL)
     sasl.mechanism=GSSAPI
-    sasl.kerberos.service.name=kafka</pre></li>
+    sasl.kerberos.service.name=kafka</code></pre></li>
             </ol>
         </li>
         </ol>
@@ -670,42 +630,40 @@ keyUsage               = digitalSignature, keyEncipherment
         Kafka supports a default implementation for SASL/PLAIN which can be extended for production use as described <a href="#security_sasl_plain_production">here</a>.</p>
         The username is used as the authenticated <code>Principal</code> for configuration of ACLs etc.
         <ol>
-        <li><h5><a id="security_sasl_plain_brokerconfig" href="#security_sasl_plain_brokerconfig">Configuring Kafka Brokers</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_plain_brokerconfig" class="anchor-link"></a><a href="#security_sasl_plain_brokerconfig">Configuring Kafka Brokers</a></h5>
             <ol>
             <li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
-                <pre class="brush: text;">
-        KafkaServer {
+                <pre class="line-numbers"><code class="language-text">        KafkaServer {
             org.apache.kafka.common.security.plain.PlainLoginModule required
             username="admin"
             password="admin-secret"
             user_admin="admin-secret"
             user_alice="alice-secret";
-        };</pre>
+        };</code></pre>
                 This configuration defines two users (<i>admin</i> and <i>alice</i>). The properties <tt>username</tt> and <tt>password</tt>
                 in the <tt>KafkaServer</tt> section are used by the broker to initiate connections to other brokers. In this example,
                 <i>admin</i> is the user for inter-broker communication. The set of properties <tt>user_<i>userName</i></tt> defines
                 the passwords for all users that connect to the broker and the broker validates all client connections including
                 those from other brokers using these properties.</li>
             <li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
-                <pre>    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre></li>
+                <pre class="line-numbers"><code class="language-text">    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
             <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
-                <pre>    listeners=SASL_SSL://host.name:port
+                <pre class="line-numbers"><code class="language-text">    listeners=SASL_SSL://host.name:port
         security.inter.broker.protocol=SASL_SSL
         sasl.mechanism.inter.broker.protocol=PLAIN
-        sasl.enabled.mechanisms=PLAIN</pre></li>
+        sasl.enabled.mechanisms=PLAIN</code></pre></li>
             </ol>
         </li>
 
-        <li><h5><a id="security_sasl_plain_clientconfig" href="#security_sasl_plain_clientconfig">Configuring Kafka Clients</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_plain_clientconfig" class="anchor-link"></a><a href="#security_sasl_plain_clientconfig">Configuring Kafka Clients</a></h5>
             To configure SASL authentication on the clients:
             <ol>
             <li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
                 The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
                 The following is an example configuration for a client for the PLAIN mechanism:
-                <pre class="brush: text;">
-    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
+                <pre class="line-numbers"><code class="language-text">    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
         username="alice" \
-        password="alice-secret";</pre>
+        password="alice-secret";</code></pre>
                 <p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure
                 the user for client connections. In this example, clients connect to the broker as user <i>alice</i>.
                 Different clients within a JVM may connect as different users by specifying different user names
@@ -715,9 +673,8 @@ keyUsage               = digitalSignature, keyEncipherment
                 as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
                 <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
             <li>Configure the following properties in producer.properties or consumer.properties:
-                <pre>
-    security.protocol=SASL_SSL
-    sasl.mechanism=PLAIN</pre></li>
+                <pre class="line-numbers"><code class="language-text">    security.protocol=SASL_SSL
+    sasl.mechanism=PLAIN</code></pre></li>
             </ol>
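+                As a sketch of the per-connection override described above, two clients running in the same JVM could each be given their own <tt>sasl.jaas.config</tt> in their respective configuration (the user <i>bob</i> is illustrative and would also need a corresponding <tt>user_bob</tt> entry in the broker JAAS configuration):
+                <pre class="line-numbers"><code class="language-text">    # producer configuration: connects as alice
+    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
+        username="alice" \
+        password="alice-secret";
+
+    # consumer configuration: connects as bob
+    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
+        username="bob" \
+        password="bob-secret";</code></pre>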
         </li>
         <li><h5><a id="security_sasl_plain_production" href="#security_sasl_plain_production">Use of SASL/PLAIN in production</a></h5>
@@ -746,65 +703,54 @@ keyUsage               = digitalSignature, keyEncipherment
         is on a private network. Refer to <a href="#security_sasl_scram_security">Security Considerations</a>
         for more details.</p>
         <ol>
-        <li><h5><a id="security_sasl_scram_credentials" href="#security_sasl_scram_credentials">Creating SCRAM Credentials</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_scram_credentials" class="anchor-link"></a><a href="#security_sasl_scram_credentials">Creating SCRAM Credentials</a></h5>
         <p>The SCRAM implementation in Kafka uses Zookeeper as credential store. Credentials can be created in
         Zookeeper using <tt>kafka-configs.sh</tt>. For each SCRAM mechanism enabled, credentials must be created
         by adding a config with the mechanism name. Credentials for inter-broker communication must be created
         before Kafka brokers are started. Client credentials may be created and updated dynamically and updated
         credentials will be used to authenticate new connections.</p>
         <p>Create SCRAM credentials for user <i>alice</i> with password <i>alice-secret</i>:
-        <pre class="brush: bash;">
-    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice</code></pre>
         <p>The default iteration count of 4096 is used if iterations are not specified. A random salt is created
         and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey is stored in Zookeeper.
         See <a href="https://tools.ietf.org/html/rfc5802">RFC 5802</a> for details on SCRAM identity and the individual fields.
         <p>The following examples also require a user <i>admin</i> for inter-broker communication which can be created using:
-        <pre class="brush: bash;">
-    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin</code></pre>
         <p>Existing credentials may be listed using the <i>--describe</i> option:
-        <pre class="brush: bash;">
-    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice</code></pre>
         <p>Credentials may be deleted for one or more SCRAM mechanisms using the <i>--alter --delete-config</i> option:
-        <pre class="brush: bash;">
-    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice</code></pre>
         </li>
-        <li><h5><a id="security_sasl_scram_brokerconfig" href="#security_sasl_scram_brokerconfig">Configuring Kafka Brokers</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_scram_brokerconfig" class="anchor-link"></a><a href="#security_sasl_scram_brokerconfig">Configuring Kafka Brokers</a></h5>
             <ol>
             <li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
-                <pre>
-    KafkaServer {
+                <pre class="line-numbers"><code class="language-text">    KafkaServer {
         org.apache.kafka.common.security.scram.ScramLoginModule required
         username="admin"
         password="admin-secret";
-    };</pre>
+    };</code></pre>
                 The properties <tt>username</tt> and <tt>password</tt> in the <tt>KafkaServer</tt> section are used by
                 the broker to initiate connections to other brokers. In this example, <i>admin</i> is the user for
                 inter-broker communication.</li>
             <li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
-                <pre>    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre></li>
-            <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>.</pre> For example:
-                <pre>
-    listeners=SASL_SSL://host.name:port
+                <pre class="line-numbers"><code class="language-text">    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
+            <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
+                <pre class="line-numbers"><code class="language-text">    listeners=SASL_SSL://host.name:port
     security.inter.broker.protocol=SASL_SSL
     sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256 (or SCRAM-SHA-512)
-    sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)</pre></li>
+    sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
             </ol>
         </li>
 
-        <li><h5><a id="security_sasl_scram_clientconfig" href="#security_sasl_scram_clientconfig">Configuring Kafka Clients</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_scram_clientconfig" class="anchor-link"></a><a href="#security_sasl_scram_clientconfig">Configuring Kafka Clients</a></h5>
             To configure SASL authentication on the clients:
             <ol>
             <li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
                 The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
                 The following is an example configuration for a client for the SCRAM mechanisms:
-                <pre class="brush: text;">
-   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
+                <pre class="line-numbers"><code class="language-text">   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
         username="alice" \
-        password="alice-secret";</pre>
+        password="alice-secret";</code></pre>
 
                 <p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure
                 the user for client connections. In this example, clients connect to the broker as user <i>alice</i>.
@@ -815,9 +761,8 @@ keyUsage               = digitalSignature, keyEncipherment
                 as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
                 <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
             <li>Configure the following properties in producer.properties or consumer.properties:
-                <pre>
-    security.protocol=SASL_SSL
-    sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)</pre></li>
+                <pre class="line-numbers"><code class="language-text">    security.protocol=SASL_SSL
+    sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
             </ol>
         </li>
         <li><h5><a id="security_sasl_scram_security" href="#security_sasl_scram_security">Security Considerations for SASL/SCRAM</a></h5>
@@ -847,37 +792,34 @@ keyUsage               = digitalSignature, keyEncipherment
         and is only suitable for use in non-production Kafka installations. Refer to <a href="#security_sasl_oauthbearer_security">Security Considerations</a>
         for more details.</p>
         <ol>
-        <li><h5><a id="security_sasl_oauthbearer_brokerconfig" href="#security_sasl_oauthbearer_brokerconfig">Configuring Kafka Brokers</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_oauthbearer_brokerconfig" class="anchor-link"></a><a href="#security_sasl_oauthbearer_brokerconfig">Configuring Kafka Brokers</a></h5>
             <ol>
             <li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
-                <pre>
-    KafkaServer {
+                <pre class="line-numbers"><code class="language-text">    KafkaServer {
         org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
         unsecuredLoginStringClaim_sub="admin";
-    };</pre>
+    };</code></pre>
                 The property <tt>unsecuredLoginStringClaim_sub</tt> in the <tt>KafkaServer</tt> section is used by
                 the broker when it initiates connections to other brokers. In this example, <i>admin</i> will appear in the
                 subject (<tt>sub</tt>) claim and will be the user for inter-broker communication.</li>
             <li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
-                <pre>    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre></li>
-            <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>.</pre> For example:
-                <pre>
-    listeners=SASL_SSL://host.name:port (or SASL_PLAINTEXT if non-production)
+                <pre class="line-numbers"><code class="language-text">    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
+            <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
+                <pre class="line-numbers"><code class="language-text">    listeners=SASL_SSL://host.name:port (or SASL_PLAINTEXT if non-production)
     security.inter.broker.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
     sasl.mechanism.inter.broker.protocol=OAUTHBEARER
-    sasl.enabled.mechanisms=OAUTHBEARER</pre></li>
+    sasl.enabled.mechanisms=OAUTHBEARER</code></pre></li>
             </ol>
         </li>
 
-        <li><h5><a id="security_sasl_oauthbearer_clientconfig" href="#security_sasl_oauthbearer_clientconfig">Configuring Kafka Clients</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_oauthbearer_clientconfig" class="anchor-link"></a><a href="#security_sasl_oauthbearer_clientconfig">Configuring Kafka Clients</a></h5>
             To configure SASL authentication on the clients:
             <ol>
 	    <li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
                 The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
 	        The following is an example configuration for a client for the OAUTHBEARER mechanisms:
-                <pre class="brush: text;">
-   sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
-        unsecuredLoginStringClaim_sub="alice";</pre>
+                <pre class="line-numbers"><code class="language-text">   sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
+        unsecuredLoginStringClaim_sub="alice";</code></pre>
 
                 <p>The option <tt>unsecuredLoginStringClaim_sub</tt> is used by clients to configure
                 the subject (<tt>sub</tt>) claim, which determines the user for client connections.
@@ -889,9 +831,8 @@ keyUsage               = digitalSignature, keyEncipherment
                 as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
                 <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
             <li>Configure the following properties in producer.properties or consumer.properties:
-                <pre>
-    security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
-    sasl.mechanism=OAUTHBEARER</pre></li>
+                <pre class="line-numbers"><code class="language-text">    security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
+    sasl.mechanism=OAUTHBEARER</code></pre></li>
              <li>The default implementation of SASL/OAUTHBEARER depends on the jackson-databind library.
                  Since it's an optional dependency, users have to configure it as a dependency via their build tool.</li>
             </ol>
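+                For example, with Gradle the dependency could be declared as follows (the version shown is only illustrative; choose one that matches your environment):
+                <pre class="line-numbers"><code class="language-text">    dependencies {
+        implementation 'com.fasterxml.jackson.core:jackson-databind:2.10.5.1'
+    }</code></pre>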
@@ -1046,11 +987,10 @@ keyUsage               = digitalSignature, keyEncipherment
         </ol>
     </li>
 
-    <li><h4><a id="security_sasl_multimechanism" href="#security_sasl_multimechanism">Enabling multiple SASL mechanisms in a broker</a></h4>
+    <li><h4 class="anchor-heading"><a id="security_sasl_multimechanism" class="anchor-link"></a><a href="#security_sasl_multimechanism">Enabling multiple SASL mechanisms in a broker</a></h4>
         <ol>
         <li>Specify configuration for the login modules of all enabled mechanisms in the <tt>KafkaServer</tt> section of the JAAS config file. For example:
-            <pre>
-        KafkaServer {
+            <pre class="line-numbers"><code class="language-text">        KafkaServer {
             com.sun.security.auth.module.Krb5LoginModule required
             useKeyTab=true
             storeKey=true
@@ -1062,19 +1002,18 @@ keyUsage               = digitalSignature, keyEncipherment
             password="admin-secret"
             user_admin="admin-secret"
             user_alice="alice-secret";
-        };</pre></li>
-        <li>Enable the SASL mechanisms in server.properties: <pre>    sasl.enabled.mechanisms=GSSAPI,PLAIN,SCRAM-SHA-256,SCRAM-SHA-512,OAUTHBEARER</pre></li>
+        };</code></pre></li>
+        <li>Enable the SASL mechanisms in server.properties: <pre class="line-numbers"><code class="language-text">    sasl.enabled.mechanisms=GSSAPI,PLAIN,SCRAM-SHA-256,SCRAM-SHA-512,OAUTHBEARER</code></pre></li>
         <li>Specify the SASL security protocol and mechanism for inter-broker communication in server.properties if required:
-            <pre>
-    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
-    sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechanisms)</pre></li>
+            <pre class="line-numbers"><code class="language-text">    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
+    sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechanisms)</code></pre></li>
         <li>Follow the mechanism-specific steps in <a href="#security_sasl_kerberos_brokerconfig">GSSAPI (Kerberos)</a>,
             <a href="#security_sasl_plain_brokerconfig">PLAIN</a>,
             <a href="#security_sasl_scram_brokerconfig">SCRAM</a> and <a href="#security_sasl_oauthbearer_brokerconfig">OAUTHBEARER</a>
             to configure SASL for the enabled mechanisms.</li>
         </ol>
     </li>
-    <li><h4><a id="saslmechanism_rolling_upgrade" href="#saslmechanism_rolling_upgrade">Modifying SASL mechanism in a Running Cluster</a></h4>
+    <li><h4 class="anchor-heading"><a id="saslmechanism_rolling_upgrade" class="anchor-link"></a><a href="#saslmechanism_rolling_upgrade">Modifying SASL mechanism in a Running Cluster</a></h4>
         <p>SASL mechanism can be modified in a running cluster using the following sequence:</p>
         <ol>
         <li>Enable new SASL mechanism by adding the mechanism to <tt>sasl.enabled.mechanisms</tt> in server.properties for each broker. Update JAAS config file to include both
@@ -1087,7 +1026,7 @@ keyUsage               = digitalSignature, keyEncipherment
         </ol>
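+        For example, during the first rolling bounce a broker being migrated from GSSAPI to SCRAM-SHA-256 (the mechanism names here are only illustrative) might enable both mechanisms while keeping the existing mechanism for inter-broker communication:
+        <pre class="line-numbers"><code class="language-text">    sasl.enabled.mechanisms=GSSAPI,SCRAM-SHA-256
+    sasl.mechanism.inter.broker.protocol=GSSAPI</code></pre>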
     </li>
 
-    <li><h4><a id="security_delegation_token" href="#security_delegation_token">Authentication using Delegation Tokens</a></h4>
+    <li><h4 class="anchor-heading"><a id="security_delegation_token" class="anchor-link"></a><a href="#security_delegation_token">Authentication using Delegation Tokens</a></h4>
         <p>Delegation token based authentication is a lightweight authentication mechanism to complement existing SASL/SSL
             methods. Delegation tokens are shared secrets between kafka brokers and clients. Delegation tokens will help processing
             frameworks to distribute the workload to available workers in a secure environment without the added cost of distributing
@@ -1103,7 +1042,7 @@ keyUsage               = digitalSignature, keyEncipherment
         </ol>
 
         <ol>
-        <li><h5><a id="security_token_management" href="#security_token_management">Token Management</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_token_management" class="anchor-link"></a><a href="#security_token_management">Token Management</a></h5>
         <p>A master key/secret is used to generate and verify delegation tokens. This is supplied using the config
             option <tt>delegation.token.master.key</tt>. The same secret key must be configured across all the brokers.
             If the secret is not set, or is set to an empty string, brokers will disable delegation token authentication.</p>
@@ -1120,29 +1059,21 @@ keyUsage               = digitalSignature, keyEncipherment
             is beyond the max life time, it will be deleted from all broker caches as well as from zookeeper.</p>
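+        <p>As a sketch, enabling delegation tokens on the brokers might look like the following in server.properties (the secret and the lifetime overrides are illustrative; the two lifetime properties are optional and default to 7 days and 24 hours respectively):</p>
+        <pre class="line-numbers"><code class="language-text">    delegation.token.master.key=a-long-random-secret
+    delegation.token.max.lifetime.ms=604800000
+    delegation.token.expiry.time.ms=86400000</code></pre>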
         </li>
 
-        <li><h5><a id="security_sasl_create_tokens" href="#security_sasl_create_tokens">Creating Delegation Tokens</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_sasl_create_tokens" class="anchor-link"></a><a href="#security_sasl_create_tokens">Creating Delegation Tokens</a></h5>
         <p>Tokens can be created by using Admin APIs or using <tt>kafka-delegation-tokens.sh</tt> script.
             Delegation token requests (create/renew/expire/describe) should be issued only on SASL or SSL authenticated channels.
             Tokens cannot be requested if the initial authentication is done through a delegation token.
             <tt>kafka-delegation-tokens.sh</tt> script examples are given below.</p>
         <p>Create a delegation token:
-        <pre class="brush: bash;">
-    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create   --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create   --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1</code></pre>
         <p>Renew a delegation token:
-        <pre class="brush: bash;">
-    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --renew    --renew-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --renew    --renew-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK</code></pre>
         <p>Expire a delegation token:
-        <pre class="brush: bash;">
-    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --expire   --expiry-time-period -1   --command-config client.properties  --hmac ABCDEFGHIJK
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --expire   --expiry-time-period -1   --command-config client.properties  --hmac ABCDEFGHIJK</code></pre>
         <p>Existing tokens can be described using the --describe option:
-        <pre class="brush: bash;">
-    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --describe --command-config client.properties  --owner-principal User:user1
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">    > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --describe --command-config client.properties  --owner-principal User:user1</code></pre>
         </li>
-        <li><h5><a id="security_token_authentication" href="#security_token_authentication">Token Authentication</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_token_authentication" class="anchor-link"></a><a href="#security_token_authentication">Token Authentication</a></h5>
             <p>Delegation token authentication piggybacks on the current SASL/SCRAM authentication mechanism. We must enable
                 the SASL/SCRAM mechanism on the Kafka cluster as described <a href="#security_sasl_scram">here</a>.</p>
 
@@ -1151,11 +1082,10 @@ keyUsage               = digitalSignature, keyEncipherment
                     <li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
                 The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
                 The following is an example configuration for a client for the token authentication:
-                <pre class="brush: text;">
-   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
+                <pre class="line-numbers"><code class="language-text">   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
         username="tokenID123" \
         password="lAYYSFmLs4bTjf+lTZ1LCHR/ZZFNA==" \
-        tokenauth="true";</pre>
+        tokenauth="true";</code></pre>
 
                 <p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure the token id and
                     token HMAC, and the option <tt>tokenauth</tt> is used to indicate to the server that token authentication is being used.
@@ -1180,7 +1110,7 @@ keyUsage               = digitalSignature, keyEncipherment
             <p>We intend to automate this in a future Kafka release.</p>
         </li>
 
-        <li><h5><a id="security_token_notes" href="#security_token_notes">Notes on Delegation Tokens</a></h5>
+        <li><h5 class="anchor-heading"><a id="security_token_notes" class="anchor-link"></a><a href="#security_token_notes">Notes on Delegation Tokens</a></h5>
             <ul>
             <li>Currently, a user can create delegation tokens only for themselves. Owners/renewers can renew or expire tokens.
                 Owners/renewers can always describe their own tokens. To describe other users' tokens, a DESCRIBE permission on the Token resource needs to be added.</li>
@@ -1190,15 +1120,15 @@ keyUsage               = digitalSignature, keyEncipherment
     </li>
     </ol>
 
-    <h3><a id="security_authz" href="#security_authz">7.4 Authorization and ACLs</a></h3>
+    <h3 class="anchor-heading"><a id="security_authz" class="anchor-link"></a><a href="#security_authz">7.4 Authorization and ACLs</a></h3>
     Kafka ships with a pluggable Authorizer and an out-of-the-box authorizer implementation that uses zookeeper to store all the acls. The Authorizer is configured by setting <tt>authorizer.class.name</tt> in server.properties. To enable the out-of-the-box implementation use:
-    <pre>authorizer.class.name=kafka.security.authorizer.AclAuthorizer</pre>
+    <pre class="line-numbers"><code class="language-text">authorizer.class.name=kafka.security.authorizer.AclAuthorizer</code></pre>
     Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface">KIP-11</a> and resource patterns in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs">KIP-290</a>. In order to add, remove or list acls you can use th [...]
-    <pre>allow.everyone.if.no.acl.found=true</pre>
+    <pre class="line-numbers"><code class="language-text">allow.everyone.if.no.acl.found=true</code></pre>
     One can also add super users in server.properties like the following (note that the delimiter is semicolon since SSL user names may contain comma). Default PrincipalType string "User" is case sensitive.
-    <pre>super.users=User:Bob;User:Alice</pre>
+    <pre class="line-numbers"><code class="language-text">super.users=User:Bob;User:Alice</code></pre>
 
-    <h5><a id="security_authz_ssl" href="#security_authz_ssl">Customizing SSL User Name</a></h5>
+    <h5 class="anchor-heading"><a id="security_authz_ssl" class="anchor-link"></a><a href="#security_authz_ssl">Customizing SSL User Name</a></h5>
 
     By default, the SSL user name will be of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". One can change that by setting <code>ssl.principal.mapping.rules</code> to a customized rule in server.properties.
     This config allows a list of rules for mapping X.500 distinguished name to short name. The rules are evaluated in order and the first rule that matches a distinguished name is used to map it to a short name. Any later rules in the list are ignored.
@@ -1207,43 +1137,37 @@ keyUsage               = digitalSignature, keyEncipherment
     string representation of the X.500 certificate distinguished name. If the distinguished name matches the pattern, then the replacement command will be run over the name.
     This also supports lowercase/uppercase options, to force the translated result to be all lowercase or uppercase. This is done by adding a "/L" or "/U" to the end of the rule.
 
-    <pre>
-        RULE:pattern/replacement/
-        RULE:pattern/replacement/[LU]
-    </pre>
+    <pre class="line-numbers"><code class="language-text">        RULE:pattern/replacement/
+        RULE:pattern/replacement/[LU]</code></pre>
 
     Example <code>ssl.principal.mapping.rules</code> values are:
-    <pre>
-        RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,
+    <pre class="line-numbers"><code class="language-text">        RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,
         RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L,
         RULE:^.*[Cc][Nn]=([a-zA-Z0-9.]*).*$/$1/L,
-        DEFAULT
-    </pre>
+        DEFAULT</code></pre>
 
     The above rules translate the distinguished name "CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "serviceuser"
     and "CN=adminUser,OU=Admin,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "adminuser@admin".
 
     <br>For advanced use cases, one can customize the name by setting a customized PrincipalBuilder in server.properties like the following.
-    <pre>principal.builder.class=CustomizedPrincipalBuilderClass</pre>
+    <pre class="line-numbers"><code class="language-text">principal.builder.class=CustomizedPrincipalBuilderClass</code></pre>
 
-    <h5><a id="security_authz_sasl" href="#security_authz_sasl">Customizing SASL User Name</a></h5>
+    <h5 class="anchor-heading"><a id="security_authz_sasl" class="anchor-link"></a><a href="#security_authz_sasl">Customizing SASL User Name</a></h5>
 
     By default, the SASL user name will be the primary part of the Kerberos principal. One can change that by setting <code>sasl.kerberos.principal.to.local.rules</code> to a customized rule in server.properties.
     The format of <code>sasl.kerberos.principal.to.local.rules</code> is a list where each rule works in the same way as the auth_to_local in the <a href="http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos configuration file (krb5.conf)</a>. This also supports an additional lowercase/uppercase rule, to force the translated result to be all lowercase or uppercase. This is done by adding a "/L" or "/U" to the end of the rule; check the formats below for the syntax.
     Each rule starts with RULE: and contains an expression in one of the following formats. See the Kerberos documentation for more details.
-    <pre>
-        RULE:[n:string](regexp)s/pattern/replacement/
+    <pre class="line-numbers"><code class="language-text">        RULE:[n:string](regexp)s/pattern/replacement/
         RULE:[n:string](regexp)s/pattern/replacement/g
         RULE:[n:string](regexp)s/pattern/replacement//L
         RULE:[n:string](regexp)s/pattern/replacement/g/L
         RULE:[n:string](regexp)s/pattern/replacement//U
-        RULE:[n:string](regexp)s/pattern/replacement/g/U
-    </pre>
+        RULE:[n:string](regexp)s/pattern/replacement/g/U</code></pre>
 
     An example of adding a rule to properly translate user@MYDOMAIN.COM to user while also keeping the default rule in place is:
-    <pre>sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT</pre>
+    <pre class="line-numbers"><code class="language-text">sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT</code></pre>
 
-    <h4><a id="security_authz_cli" href="#security_authz_cli">Command Line Interface</a></h4>
+    <h4 class="anchor-heading"><a id="security_authz_cli" class="anchor-link"></a><a href="#security_authz_cli">Command Line Interface</a></h4>
     Kafka Authorization management CLI can be found under bin directory with all the other CLIs. The CLI script is called <b>kafka-acls.sh</b>. Following lists all the options that the script supports:
     <p></p>
     <table class="data-table">
@@ -1431,45 +1355,45 @@ keyUsage               = digitalSignature, keyEncipherment
         </tr>
     </tbody></table>
 
-    <h4><a id="security_authz_examples" href="#security_authz_examples">Examples</a></h4>
+    <h4 class="anchor-heading"><a id="security_authz_examples" class="anchor-link"></a><a href="#security_authz_examples">Examples</a></h4>
     <ul>
         <li><b>Adding Acls</b><br>
     Suppose you want to add an acl "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with following options:
-            <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic</pre>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic</code></pre>
             By default, all principals that don't have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal we will have to use the --deny-principal and --deny-host option. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from IP 198.51.100.3 we can do so using following commands:
-            <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host * --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic</pre>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host * --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic</code></pre>
             Note that <code>--allow-host</code> and <code>--deny-host</code> only support IP addresses (hostnames are not supported).
             The above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly, users can add acls to the cluster by specifying --cluster and to a consumer group by specifying --group [group-name].
             You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl "Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.1"
             You can do that by using the wildcard resource '*', e.g. by executing the CLI with following options:
-            <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Peter --allow-host 198.51.200.1 --producer --topic *</pre>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Peter --allow-host 198.51.200.1 --producer --topic *</code></pre>
             You can add acls on prefixed resource patterns, e.g. suppose you want to add an acl "Principal User:Jane is allowed to produce to any Topic whose name starts with 'Test-' from any host".
             You can do that by executing the CLI with following options:
-            <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed</pre>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed</code></pre>
             Note, --resource-pattern-type defaults to 'literal', which only affects resources with the exact same name or, in the case of the wildcard resource name '*', a resource with any name.</li>
 
         <li><b>Removing Acls</b><br>
                 Removing acls is pretty much the same. The only difference is instead of --add option users will have to specify --remove option. To remove the acls added by the first example above we can execute the CLI with following options:
-            <pre class="brush: bash;"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic </pre>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic</code></pre>
             If you want to remove the acl added to the prefixed resource pattern above we can execute the CLI with following options:
-            <pre class="brush: bash;"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type Prefixed</pre></li>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type Prefixed</code></pre></li>
 
         <li><b>List Acls</b><br>
                 We can list acls for any resource by specifying the --list option with the resource. To list all acls on the literal resource pattern Test-topic, we can execute the CLI with following options:
-                <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic</pre>
+                <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic</code></pre>
                 However, this will only return the acls that have been added to this exact resource pattern. Other acls can exist that affect access to the topic,
                 e.g. any acls on the topic wildcard '*', or any acls on prefixed resource patterns. Acls on the wildcard resource pattern can be queried explicitly:
-                <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic *</pre>
+                <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic *</code></pre>
                 However, it is not necessarily possible to explicitly query for acls on prefixed resource patterns that match Test-topic as the name of such patterns may not be known.
                 We can list <i>all</i> acls affecting Test-topic by using '--resource-pattern-type match', e.g.
-                <pre class="brush: bash;">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic --resource-pattern-type match</pre>
+                <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic --resource-pattern-type match</code></pre>
                 This will list acls on all matching literal, wildcard and prefixed resource patterns.</li>
 
         <li><b>Adding or removing a principal as producer or consumer</b><br>
                 The most common use case for acl management is adding/removing a principal as producer or consumer, so we added convenience options to handle these cases. In order to add User:Bob as a producer of Test-topic we can execute the following command:
-            <pre class="brush: bash;"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic Test-topic</pre>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic Test-topic</code></pre>
                 Similarly, to add Bob as a consumer of Test-topic with consumer group Group-1 we just have to pass the --consumer option:
-            <pre class="brush: bash;"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --consumer --topic Test-topic --group Group-1 </pre>
+            <pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --consumer --topic Test-topic --group Group-1</code></pre>
                 Note that for consumer option we must also specify the consumer group.
                 In order to remove a principal from producer or consumer role we just need to pass --remove option. </li>
 
@@ -1477,18 +1401,17 @@ keyUsage               = digitalSignature, keyEncipherment
             Users having Alter permission on ClusterResource can use Admin API for ACL management. kafka-acls.sh script supports AdminClient API to manage ACLs without interacting with zookeeper/authorizer directly.
             All the above examples can be executed by using <b>--bootstrap-server</b> option. For example:
 
-            <pre class="brush: bash;">
-            bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --producer --topic Test-topic
+            <pre class="line-numbers"><code class="language-bash">            bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --producer --topic Test-topic
             bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --consumer --topic Test-topic --group Group-1
-            bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --list --topic Test-topic</pre></li>
+            bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --list --topic Test-topic</code></pre></li>
 
     </ul>
 
-    <h4><a id="security_authz_primitives" href="#security_authz_primitives">Authorization Primitives</a></h4>
+    <h4 class="anchor-heading"><a id="security_authz_primitives" class="anchor-link"></a><a href="#security_authz_primitives">Authorization Primitives</a></h4>
     <p>Protocol calls are usually performing some operations on certain resources in Kafka. It is required to know the
         operations and resources to set up effective protection. In this section we'll list these operations and
         resources, then list the combination of these with the protocols to see the valid scenarios.</p>
-    <h5><a id="operations_in_kafka" href="#operations_in_kafka">Operations in Kafka</a></h5>
+    <h5 class="anchor-heading"><a id="operations_in_kafka" class="anchor-link"></a><a href="#operations_in_kafka">Operations in Kafka</a></h5>
     <p>There are a few operation primitives that can be used to build up privileges. These can be matched up with
         certain resources to allow specific protocol calls for a given user. These are:</p>
     <ul>
@@ -1504,7 +1427,7 @@ keyUsage               = digitalSignature, keyEncipherment
         <li>IdempotentWrite</li>
         <li>All</li>
     </ul>
-    <h5><a id="resources_in_kafka" href="#resources_in_kafka">Resources in Kafka</a></h5>
+    <h5 class="anchor-heading"><a id="resources_in_kafka" class="anchor-link"></a><a href="#resources_in_kafka">Resources in Kafka</a></h5>
     <p>The operations above can be applied on certain resources which are described below.</p>
     <ul>
         <li><b>Topic:</b> this simply represents a Topic. All protocol calls that are acting on topics (such as reading,
@@ -1524,7 +1447,7 @@ keyUsage               = digitalSignature, keyEncipherment
             <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka#KIP-48DelegationtokensupportforKafka-DescribeDelegationTokenRequest">KIP-48</a>
             and the related upstream documentation at <a href="#security_delegation_token">Authentication using Delegation Tokens</a>.</li>
     </ul>
-    <h5><a id="operations_resources_and_protocols" href="#operations_resources_and_protocols">Operations and Resources on Protocols</a></h5>
+    <h5 class="anchor-heading"><a id="operations_resources_and_protocols" class="anchor-link"></a><a href="#operations_resources_and_protocols">Operations and Resources on Protocols</a></h5>
     <p>In the below table we'll list the valid operations on resources that are executed by the Kafka API protocols.</p>
     <table class="data-table">
         <thead>
@@ -1970,7 +1893,7 @@ keyUsage               = digitalSignature, keyEncipherment
         </tbody>
     </table>
 
-    <h3><a id="security_rolling_upgrade" href="#security_rolling_upgrade">7.5 Incorporating Security Features in a Running Cluster</a></h3>
+    <h3 class="anchor-heading"><a id="security_rolling_upgrade" class="anchor-link"></a><a href="#security_rolling_upgrade">7.5 Incorporating Security Features in a Running Cluster</a></h3>
         You can secure a running cluster via one or more of the supported protocols discussed previously. This is done in phases:
         <p></p>
         <ul>
@@ -1990,64 +1913,48 @@ keyUsage               = digitalSignature, keyEncipherment
         When performing an incremental bounce, stop the brokers cleanly via a SIGTERM. It's also good practice to wait for restarted replicas to return to the ISR list before moving on to the next node.
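+        For instance, a clean shutdown can be triggered with the bundled script, which sends a SIGTERM to the broker process:
+            <pre class="line-numbers"><code class="language-bash">    > bin/kafka-server-stop.sh</code></pre>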
         <p></p>
         As an example, say we wish to encrypt both broker-client and broker-broker communication with SSL. In the first incremental bounce, an SSL port is opened on each node:
-            <pre>
-            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
-            </pre>
+            <pre class="line-numbers"><code class="language-text">            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092</code></pre>
 
         We then restart the clients, changing their config to point at the newly opened, secured port:
 
-            <pre>
-            bootstrap.servers = [broker1:9092,...]
+            <pre class="line-numbers"><code class="language-text">            bootstrap.servers = [broker1:9092,...]
             security.protocol = SSL
-            ...etc
-            </pre>
+            ...etc</code></pre>
 
         In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker protocol (which will use the same SSL port):
 
-            <pre>
-            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
-            security.inter.broker.protocol=SSL
-            </pre>
+            <pre class="line-numbers"><code class="language-text">            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
+            security.inter.broker.protocol=SSL</code></pre>
 
         In the final bounce we secure the cluster by closing the PLAINTEXT port:
 
-            <pre>
-            listeners=SSL://broker1:9092
-            security.inter.broker.protocol=SSL
-            </pre>
+            <pre class="line-numbers"><code class="language-text">            listeners=SSL://broker1:9092
+            security.inter.broker.protocol=SSL</code></pre>
 
         Alternatively we might choose to open multiple ports so that different protocols can be used for broker-broker and broker-client communication. Say we wished to use SSL encryption throughout (i.e. for broker-broker and broker-client communication) but we'd like to add SASL authentication to the broker-client connection also. We would achieve this by opening two additional ports during the first bounce:
 
-            <pre>
-            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
-            </pre>
+            <pre class="line-numbers"><code class="language-text">            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093</code></pre>
 
         We would then restart the clients, changing their config to point at the newly opened, SASL & SSL secured port:
 
-            <pre>
-            bootstrap.servers = [broker1:9093,...]
+            <pre class="line-numbers"><code class="language-text">            bootstrap.servers = [broker1:9093,...]
             security.protocol = SASL_SSL
-            ...etc
-            </pre>
+            ...etc</code></pre>
 
         The second server bounce would switch the cluster to use encrypted broker-broker communication via the SSL port we previously opened on port 9092:
 
-            <pre>
-            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
-            security.inter.broker.protocol=SSL
-            </pre>
+            <pre class="line-numbers"><code class="language-text">            listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
+            security.inter.broker.protocol=SSL</code></pre>
 
         The final bounce secures the cluster by closing the PLAINTEXT port.
 
-            <pre>
-        listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
-        security.inter.broker.protocol=SSL
-            </pre>
+            <pre class="line-numbers"><code class="language-text">        listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
+        security.inter.broker.protocol=SSL</code></pre>
 
         ZooKeeper can be secured independently of the Kafka cluster. The steps for doing this are covered in section <a href="#zk_authz_migration">7.6.2</a>.
 
 
-    <h3><a id="zk_authz" href="#zk_authz">7.6 ZooKeeper Authentication</a></h3>
+    <h3 class="anchor-heading"><a id="zk_authz" class="anchor-link"></a><a href="#zk_authz">7.6 ZooKeeper Authentication</a></h3>
     ZooKeeper supports mutual TLS (mTLS) authentication beginning with the 3.5.x versions.
     Kafka supports authenticating to ZooKeeper with SASL and mTLS -- either individually or both together --
     beginning with version 2.5. See
@@ -2079,8 +1986,8 @@ keyUsage               = digitalSignature, keyEncipherment
         Use the <tt>-zk-tls-config-file &lt;file&gt;</tt> option (note the single-dash rather than double-dash)
         to set TLS configs for the <tt>zookeeper-shell.sh</tt> CLI tool.
     </p>
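+    <p>For example, to connect to a TLS-enabled ZooKeeper port with the shell (the port and file name are only illustrative):</p>
+    <pre class="line-numbers"><code class="language-bash">    > bin/zookeeper-shell.sh localhost:2182 -zk-tls-config-file zk_tls_config.properties</code></pre>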
-    <h4><a id="zk_authz_new" href="#zk_authz_new">7.6.1 New clusters</a></h4>
-    <h5><a id="zk_authz_new_sasl" href="#zk_authz_new_sasl">7.6.1.1 ZooKeeper SASL Authentication</a></h5>
+    <h4 class="anchor-heading"><a id="zk_authz_new" class="anchor-link"></a><a href="#zk_authz_new">7.6.1 New clusters</a></h4>
+    <h5 class="anchor-heading"><a id="zk_authz_new_sasl" class="anchor-link"></a><a href="#zk_authz_new_sasl">7.6.1.1 ZooKeeper SASL Authentication</a></h5>
     To enable ZooKeeper SASL authentication on brokers, there are two necessary steps:
     <ol>
         <li> Create a JAAS login file and set the appropriate system property to point to it as described above</li>
@@ -2089,7 +1996,7 @@ keyUsage               = digitalSignature, keyEncipherment
 
     The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper).
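+    <p>As a sketch, the JAAS login file referenced in step 1 could contain a <tt>Client</tt> section like the following, using ZooKeeper's <tt>DigestLoginModule</tt> (the credentials are illustrative and must match what the ZooKeeper ensemble is configured to accept):</p>
+    <pre class="line-numbers"><code class="language-text">    Client {
+        org.apache.zookeeper.server.auth.DigestLoginModule required
+        username="kafka"
+        password="kafka-secret";
+    };</code></pre>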
 
-    <h5><a id="zk_authz_new_mtls" href="#zk_authz_new_mtls">7.6.1.2 ZooKeeper Mutual TLS Authentication</a></h5>
+    <h5 class="anchor-heading"><a id="zk_authz_new_mtls" class="anchor-link"></a><a href="#zk_authz_new_mtls">7.6.1.2 ZooKeeper Mutual TLS Authentication</a></h5>
     ZooKeeper mTLS authentication can be enabled with or without SASL authentication.  As mentioned above,
     when using mTLS alone, every broker and any CLI tools (such as the <a href="#zk_authz_migration">ZooKeeper Security Migration Tool</a>)
     must generally identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed, which means
@@ -2105,15 +2012,13 @@ keyUsage               = digitalSignature, keyEncipherment
     Here is a sample (partial) ZooKeeper configuration for enabling TLS authentication.
     These configurations are described in the
     <a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#sc_authOptions">ZooKeeper Admin Guide</a>.
-    <pre>
-        secureClientPort=2182
+    <pre class="line-numbers"><code class="language-text">        secureClientPort=2182
         serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
         authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
         ssl.keyStore.location=/path/to/zk/keystore.jks
         ssl.keyStore.password=zk-ks-passwd
         ssl.trustStore.location=/path/to/zk/truststore.jks
-        ssl.trustStore.password=zk-ts-passwd
-    </pre>
+        ssl.trustStore.password=zk-ts-passwd</code></pre>
     <strong>IMPORTANT</strong>: ZooKeeper does not support setting the key password in the ZooKeeper server keystore
     to a value different from the keystore password itself.
     Be sure to set the key password to be the same as the keystore password.
@@ -2121,8 +2026,7 @@ keyUsage               = digitalSignature, keyEncipherment
     <p>Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with mTLS authentication.
     These configurations are described above in <a href="#brokerconfigs">Broker Configs</a>.
     </p>
-    <pre>
-        # connect to the ZooKeeper port configured for TLS
+    <pre class="line-numbers"><code class="language-text">        # connect to the ZooKeeper port configured for TLS
         zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
         # required to use TLS to ZooKeeper (default is false)
         zookeeper.ssl.client.enable=true
@@ -2134,26 +2038,23 @@ keyUsage               = digitalSignature, keyEncipherment
         zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
         zookeeper.ssl.truststore.password=kafka-ts-passwd
         # tell broker to create ACLs on znodes
-        zookeeper.set.acl=true
-    </pre>
+        zookeeper.set.acl=true</code></pre>
     <strong>IMPORTANT</strong>: ZooKeeper does not support setting the key password in the ZooKeeper client (i.e. broker) keystore
     to a value different from the keystore password itself.
     Be sure to set the key password to be the same as the keystore password.
 
-    <h4><a id="zk_authz_migration" href="#zk_authz_migration">7.6.2 Migrating clusters</a></h4>
+    <h4 class="anchor-heading"><a id="zk_authz_migration" class="anchor-link"></a><a href="#zk_authz_migration">7.6.2 Migrating clusters</a></h4>
     If you are running a version of Kafka that does not support security, or simply have security disabled, and you want to make the cluster secure, then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations:
     <ol>
         <li>Enable SASL and/or mTLS authentication on ZooKeeper.  If enabling mTLS, you would now have both a non-TLS port and a TLS port, like this:
-            <pre>
-    clientPort=2181
+            <pre class="line-numbers"><code class="language-text">    clientPort=2181
     secureClientPort=2182
     serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
     authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
     ssl.keyStore.location=/path/to/zk/keystore.jks
     ssl.keyStore.password=zk-ks-passwd
     ssl.trustStore.location=/path/to/zk/truststore.jks
-    ssl.trustStore.password=zk-ts-passwd
-            </pre>
+    ssl.trustStore.password=zk-ts-passwd</code></pre>
         </li>
         <li>Perform a rolling restart of brokers, setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations (including connecting to the TLS-enabled ZooKeeper port) as required, which enables brokers to authenticate to ZooKeeper. At the end of the rolling restart, brokers are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs.</li>
         <li>If you enabled mTLS, disable the non-TLS port in ZooKeeper</li>
@@ -2169,32 +2070,27 @@ keyUsage               = digitalSignature, keyEncipherment
         <li>If you are disabling mTLS, disable the TLS port in ZooKeeper</li>
     </ol>
     Here is an example of how to run the migration tool:
-    <pre class="brush: bash;">
-    bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181
-    </pre>
+    <pre class="line-numbers"><code class="language-bash">    bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181</code></pre>
     <p>Run this to see the full list of parameters:</p>
-    <pre class="brush: bash;">
-    bin/zookeeper-security-migration.sh --help
-    </pre>
-    <h4><a id="zk_authz_ensemble" href="#zk_authz_ensemble">7.6.3 Migrating the ZooKeeper ensemble</a></h4>
+    <pre class="line-numbers"><code class="language-bash">    bin/zookeeper-security-migration.sh --help</code></pre>
+    <h4 class="anchor-heading"><a id="zk_authz_ensemble" class="anchor-link"></a><a href="#zk_authz_ensemble">7.6.3 Migrating the ZooKeeper ensemble</a></h4>
     It is also necessary to enable SASL and/or mTLS authentication on the ZooKeeper ensemble. To do so, perform a rolling restart of the servers and set a few properties. See above for mTLS information.  Please refer to the ZooKeeper documentation for more detail:
     <ol>
         <li><a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperProgrammers.html#sc_ZooKeeperAccessControl">Apache ZooKeeper documentation</a></li>
         <li><a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL">Apache ZooKeeper wiki</a></li>
     </ol>
-    <h4><a id="zk_authz_quorum" href="#zk_authz_quorum">7.6.4 ZooKeeper Quorum Mutual TLS Authentication</a></h4>
+    <h4 class="anchor-heading"><a id="zk_authz_quorum" class="anchor-link"></a><a href="#zk_authz_quorum">7.6.4 ZooKeeper Quorum Mutual TLS Authentication</a></h4>
     It is possible to enable mTLS authentication between the ZooKeeper servers themselves.
     Please refer to the <a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#Quorum+TLS">ZooKeeper documentation</a> for more detail.
 
-    <h3><a id="zk_encryption" href="#zk_encryption">7.7 ZooKeeper Encryption</a></h3>
+    <h3 class="anchor-heading"><a id="zk_encryption" class="anchor-link"></a><a href="#zk_encryption">7.7 ZooKeeper Encryption</a></h3>
     ZooKeeper connections that use mutual TLS are encrypted.
     Beginning with ZooKeeper version 3.5.7 (the version shipped with Kafka version 2.5), ZooKeeper supports a server-side config
     <tt>ssl.clientAuth</tt> (case-insensitively: <tt>want</tt>/<tt>need</tt>/<tt>none</tt> are the valid options, the default is <tt>need</tt>),
     and setting this value to <tt>none</tt> in ZooKeeper allows clients to connect via a TLS-encrypted connection
     without presenting their own certificate.  Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with just TLS encryption.
     These configurations are described above in <a href="#brokerconfigs">Broker Configs</a>.
-    <pre>
-        # connect to the ZooKeeper port configured for TLS
+    <pre class="line-numbers"><code class="language-text">        # connect to the ZooKeeper port configured for TLS
         zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
         # required to use TLS to ZooKeeper (default is false)
         zookeeper.ssl.client.enable=true
@@ -2205,8 +2101,7 @@ keyUsage               = digitalSignature, keyEncipherment
         zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
         zookeeper.ssl.truststore.password=kafka-ts-passwd
         # tell broker to create ACLs on znodes (if using SASL authentication, otherwise do not set this)
-        zookeeper.set.acl=true
-    </pre>
+        zookeeper.set.acl=true</code></pre>
 </script>
 
 <div class="p-security"></div>
diff --git a/docs/streams/architecture.html b/docs/streams/architecture.html
index 43de9e7..67594c2 100644
--- a/docs/streams/architecture.html
+++ b/docs/streams/architecture.html
@@ -41,7 +41,7 @@
     </p>
     <img class="centered" src="/{{version}}/images/streams-architecture-overview.jpg" style="width:750px">
 
-    <h3><a id="streams_architecture_tasks" href="#streams_architecture_tasks">Stream Partitions and Tasks</a></h3>
+    <h3 class="anchor-heading"><a id="streams_architecture_tasks" class="anchor-link"></a><a href="#streams_architecture_tasks">Stream Partitions and Tasks</a></h3>
 
     <p>
         The messaging layer of Kafka partitions data for storing and transporting it. Kafka Streams partitions data for processing it.
@@ -91,7 +91,7 @@
     <img class="centered" src="/{{version}}/images/streams-architecture-tasks.jpg" style="width:400px">
     <br>
 
-    <h3><a id="streams_architecture_threads" href="#streams_architecture_threads">Threading Model</a></h3>
+    <h3 class="anchor-heading"><a id="streams_architecture_threads" class="anchor-link"></a><a href="#streams_architecture_threads">Threading Model</a></h3>
 
     <p>
         Kafka Streams allows the user to configure the number of <b>threads</b> that the library can use to parallelize processing within an application instance.
@@ -112,7 +112,7 @@
     </p>
     <br>
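    <p>As a rough illustration of the threading configuration described above (the application id, bootstrap servers, and thread count below are placeholders), the number of stream threads is set like any other Streams property:</p>
    <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ThreadingConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");        // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092"); // placeholder broker
        // Run four stream threads in this instance; each thread executes one or more tasks.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
        // Pass props to new KafkaStreams(topology, props) when starting the application.
    }
}</code></pre>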
 
-    <h3><a id="streams_architecture_state" href="#streams_architecture_state">Local State Stores</a></h3>
+    <h3 class="anchor-heading"><a id="streams_architecture_state" class="anchor-link"></a><a href="#streams_architecture_state">Local State Stores</a></h3>
 
     <p>
         Kafka Streams provides so-called <b>state stores</b>, which can be used by stream processing applications to store and query data,
@@ -131,7 +131,7 @@
     <img class="centered" src="/{{version}}/images/streams-architecture-states.jpg" style="width:400px">
     <br>
 
-    <h3><a id="streams_architecture_recovery" href="#streams_architecture_recovery">Fault Tolerance</a></h3>
+    <h3 class="anchor-heading"><a id="streams_architecture_recovery" class="anchor-link"></a><a href="#streams_architecture_recovery">Fault Tolerance</a></h3>
 
     <p>
         Kafka Streams builds on fault-tolerance capabilities integrated natively within Kafka. Kafka partitions are highly available and replicated; so when stream data is persisted to Kafka it is available
@@ -165,10 +165,10 @@
 
 <!--#include virtual="../../includes/_header.htm" -->
 <!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
     <!--#include virtual="../../includes/_nav.htm" -->
     <div class="right">
-        <!--#include virtual="../../includes/_docs_banner.htm" -->
+        <!--//#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
             <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/core-concepts.html b/docs/streams/core-concepts.html
index d8cbd7a..5049ccd 100644
--- a/docs/streams/core-concepts.html
+++ b/docs/streams/core-concepts.html
@@ -58,7 +58,7 @@
         We first summarize the key concepts of Kafka Streams.
     </p>
 
-    <h3><a id="streams_topology" href="#streams_topology">Stream Processing Topology</a></h3>
+    <h3 class="anchor-heading"><a id="streams_topology" class="anchor-link"></a><a href="#streams_topology">Stream Processing Topology</a></h3>
 
     <ul>
         <li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
@@ -88,7 +88,7 @@
         At runtime, the logical topology is instantiated and replicated inside the application for parallel processing (see <a href="/{{version}}/documentation/streams/architecture#streams_architecture_tasks"><b>Stream Partitions and Tasks</b></a> for details).
     </p>
 
-    <h3><a id="streams_time" href="#streams_time">Time</a></h3>
+    <h3 class="anchor-heading"><a id="streams_time" class="anchor-link"></a><a href="#streams_time">Time</a></h3>
 
     <p>
         A critical aspect in stream processing is the notion of <b>time</b>, and how it is modeled and integrated.
@@ -157,7 +157,7 @@
 
     </p>
 
-    <h3><a id="streams_concepts_aggregations" href="#streams_concepts_aggregations">Aggregations</a></h3>
+    <h3 class="anchor-heading"><a id="streams_concepts_aggregations" class="anchor-link"></a><a href="#streams_concepts_aggregations">Aggregations</a></h3>
     <p>
         An <strong>aggregation</strong> operation takes one input stream or table, and yields a new table by combining multiple input records into a single output record. Examples of aggregations are computing a count or a sum.
     </p>
@@ -166,7 +166,7 @@
         In the <code>Kafka Streams DSL</code>, an input stream of an <code>aggregation</code> can be a KStream or a KTable, but the output stream will always be a KTable. This allows Kafka Streams to update an aggregate value upon the out-of-order arrival of further records after the value was produced and emitted. When such out-of-order arrival happens, the aggregating KStream or KTable emits a new aggregate value. Because the output is a KTable, the new value is considered to overwrite [...]
     </p>
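    <p>As a minimal sketch of such an aggregation in the Java DSL (the topic names and serdes below are assumptions for illustration), counting records per key turns a KStream into a continuously updated KTable:</p>
    <pre class="line-numbers"><code class="language-java">import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class AggregationSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // "words" is a hypothetical input topic keyed by word
        KStream&lt;String, String&gt; words =
            builder.stream("words", Consumed.with(Serdes.String(), Serdes.String()));
        // count() is an aggregation: its output is a table that is updated as new records arrive
        KTable&lt;String, Long&gt; counts = words.groupByKey().count();
        // "word-counts" is a hypothetical output topic
        counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));
    }
}</code></pre>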
 
-    <h3> <a id="streams_concepts_windowing" href="#streams_concepts_windowing">Windowing</a></h3>
+    <h3 class="anchor-heading"><a id="streams_concepts_windowing" class="anchor-link"></a><a href="#streams_concepts_windowing">Windowing</a></h3>
     <p>
         Windowing lets you control how to <em>group records that have the same key</em> for stateful operations such as <code>aggregations</code> or <code>joins</code> into so-called <em>windows</em>. Windows are tracked per record key.
     </p>
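    <p>For illustration, a small sketch (with a hypothetical topic name and window size) of windowing a grouped stream into 5-minute tumbling windows before counting; the windows are tracked per record key:</p>
    <pre class="line-numbers"><code class="language-java">import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowingSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // "page-views" is a hypothetical input topic
        KStream&lt;String, String&gt; views =
            builder.stream("page-views", Consumed.with(Serdes.String(), Serdes.String()));
        // One series of 5-minute tumbling windows is maintained per record key
        KTable&lt;Windowed&lt;String&gt;, Long&gt; windowedCounts = views
            .groupByKey()
            .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
            .count();
    }
}</code></pre>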
@@ -177,7 +177,7 @@
         Out-of-order records are always possible in the real world and should be properly accounted for in your applications. How out-of-order records are handled depends on the effective <code>time semantics</code>. In the case of processing-time, the semantics are &quot;when the record is being processed&quot;, which means that the notion of out-of-order records is not applicable as, by definition, no record can be out-of-order. Hence, out-of-order records can only be considered as [...]
     </p>
 
-    <h3><a id="streams_concepts_duality" href="#streams-concepts-duality">Duality of Streams and Tables</a></h3>
+    <h3 class="anchor-heading"><a id="streams_concepts_duality" class="anchor-link"></a><a href="#streams_concepts_duality">Duality of Streams and Tables</a></h3>
     <p>
         When implementing stream processing use cases in practice, you typically need both <strong>streams</strong> and also <strong>databases</strong>.
         An example use case that is very common in practice is an e-commerce application that enriches an incoming <em>stream</em> of customer
@@ -204,7 +204,7 @@
       Essentially, this duality means that a stream can be viewed as a table, and a table can be viewed as a stream.
   </p>
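    <p>A small sketch of this duality in the Java DSL (topic names are hypothetical, and default String serdes are assumed): a stream can be materialized into a table, and a table can be read back out as its changelog stream:</p>
    <pre class="line-numbers"><code class="language-java">import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class StreamTableDualitySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Table view of a stream: per key, the latest value wins ("user-profile-updates" is hypothetical)
        KStream&lt;String, String&gt; updates = builder.stream("user-profile-updates");
        KTable&lt;String, String&gt; profiles = updates.toTable();
        // Stream view of a table: every update to the table becomes a record in the changelog stream
        KStream&lt;String, String&gt; changelog = profiles.toStream();
    }
}</code></pre>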
 
-    <h3><a id="streams_state" href="#streams_state">States</a></h3>
+    <h3 class="anchor-heading"><a id="streams_state" class="anchor-link"></a><a href="#streams_state">States</a></h3>
 
     <p>
         Some stream processing applications don't require state, which means the processing of a message is independent from
@@ -224,7 +224,7 @@
     </p>
     <br>
 
-    <h2><a id="streams_processing_guarantee" href="#streams_processing_guarantee">Processing Guarantees</a></h2>
+    <h2 class="anchor-heading"><a id="streams_processing_guarantee" class="anchor-link"></a><a href="#streams_processing_guarantee">Processing Guarantees</a></h2>
 
     <p>
         In stream processing, one of the most frequently asked questions is "does my stream processing system guarantee that each record is processed once and only once, even if some failures are encountered in the middle of processing?"
@@ -254,7 +254,7 @@
         For more information, see the <a href="/{{version}}/documentation/streams/developer-guide/config-streams.html">Kafka Streams Configs</a> section.
     </p>
 
-    <h3><a id="streams_out_of_ordering" href="#streams_out_of_ordering">Out-of-Order Handling</a></h3>
+    <h3 class="anchor-heading"><a id="streams_out_of_ordering" class="anchor-link"></a><a href="#streams_out_of_ordering">Out-of-Order Handling</a></h3>
 
     <p>
         Besides the guarantee that each record will be processed exactly-once, another issue that many stream processing applications will face is how to
@@ -293,10 +293,10 @@
 
 <!--#include virtual="../../includes/_header.htm" -->
 <!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
     <!--#include virtual="../../includes/_nav.htm" -->
     <div class="right">
-        <!--#include virtual="../../includes/_docs_banner.htm" -->
+        <!--//#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
             <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/app-reset-tool.html b/docs/streams/developer-guide/app-reset-tool.html
index e875a23..246f8c4 100644
--- a/docs/streams/developer-guide/app-reset-tool.html
+++ b/docs/streams/developer-guide/app-reset-tool.html
@@ -77,8 +77,7 @@
         <div class="section" id="step-1-run-the-application-reset-tool">
             <h2>Step 1: Run the application reset tool<a class="headerlink" href="#step-1-run-the-application-reset-tool" title="Permalink to this headline"></a></h2>
             <p>Invoke the application reset tool from the command line</p>
-            <div class="highlight-bash"><div class="highlight"><pre><span></span>&lt;path-to-kafka&gt;/bin/kafka-streams-application-reset
-</pre></div>
+            <div class="highlight-bash"><div class="highlight"><pre><span></span>&lt;path-to-kafka&gt;/bin/kafka-streams-application-reset</pre></div>
             </div>
             <p>The tool accepts the following parameters:</p>
             <div class="highlight-bash"><div class="highlight"><pre><span></span>Option <span class="o">(</span>* <span class="o">=</span> required<span class="o">)</span>                 Description
@@ -120,8 +119,7 @@
                                         directly.
 --force                               Force removing members of the consumer group
                                       (intended to remove left-over members if
-                                      long session timeout was configured).
-</pre></div>
+                                      long session timeout was configured).</pre></div>
             </div>
             <p>Consider the following as reset-offset scenarios for <code>input-topics</code>:</p>
             <ul>
@@ -161,10 +159,10 @@
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
   <!--#include virtual="../../../includes/_nav.htm" -->
   <div class="right">
-    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
     <ul class="breadcrumbs">
       <li><a href="/documentation">Documentation</a></li>
       <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/config-streams.html b/docs/streams/developer-guide/config-streams.html
index a1674b0..d6164f0 100644
--- a/docs/streams/developer-guide/config-streams.html
+++ b/docs/streams/developer-guide/config-streams.html
@@ -47,8 +47,7 @@
 <span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">APPLICATION_ID_CONFIG</span><span class="o">,</span> <span class="s">&quot;my-first-streams-application&quot;</span><span class="o">);</span>
 <span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">BOOTSTRAP_SERVERS_CONFIG</span><span class="o">,</span> <span class="s">&quot;kafka-broker1:9092&quot;</span><span class="o">);</span>
 <span class="c1">// Any further settings</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(...</span> <span class="o">,</span> <span class="o">...);</span>
-</pre></div>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(...</span> <span class="o">,</span> <span class="o">...);</span></code></pre></div>
         </div>
       </li>
     </ol>
@@ -309,14 +308,14 @@
             <td>1</td>
           </tr>
           <tr class="row-odd"><td>retries</td>
-              <td>Medium</td>
-              <td colspan="2">The number of retries for broker requests that return a retryable error. </td>
-              <td>0</td>
+            <td>Medium</td>
+            <td colspan="2">The number of retries for broker requests that return a retryable error. </td>
+            <td>0</td>
           </tr>
           <tr class="row-even"><td>retry.backoff.ms</td>
-              <td>Medium</td>
-              <td colspan="2">The amount of time in milliseconds, before a request is retried. This applies if the <code class="docutils literal"><span class="pre">retries</span></code> parameter is configured to be greater than 0. </td>
-              <td>100</td>
+            <td>Medium</td>
+            <td colspan="2">The amount of time in milliseconds, before a request is retried. This applies if the <code class="docutils literal"><span class="pre">retries</span></code> parameter is configured to be greater than 0. </td>
+            <td>100</td>
           </tr>
           <tr class="row-odd"><td>rocksdb.config.setter</td>
             <td>Medium</td>
@@ -355,9 +354,9 @@
           <blockquote>
             <div>
               <p>
-              The maximum acceptable lag (total number of offsets to catch up from the changelog) for an instance to be considered caught-up and able to receive an active task. Streams will only assign
-              stateful active tasks to instances whose state stores are within the acceptable recovery lag, if any exist, and assign warmup replicas to restore state in the background for instances
-              that are not yet caught up. Should correspond to a recovery time of well under a minute for a given workload. Must be at least 0.
+                The maximum acceptable lag (total number of offsets to catch up from the changelog) for an instance to be considered caught-up and able to receive an active task. Streams will only assign
+                stateful active tasks to instances whose state stores are within the acceptable recovery lag, if any exist, and assign warmup replicas to restore state in the background for instances
+                that are not yet caught up. Should correspond to a recovery time of well under a minute for a given workload. Must be at least 0.
               </p>
               <p>
                 Note: if you set this to <code>Long.MAX_VALUE</code> it effectively disables the warmup replicas and task high availability, allowing Streams to immediately produce a balanced
@@ -390,8 +389,7 @@
                 The drawback of this approach is that "manual" writes are side effects that are invisible to the Kafka Streams runtime library,
                 so they do not benefit from the end-to-end processing guarantees of the Streams API:</p>
 
-              <pre class="brush: java;">
-              public class SendToDeadLetterQueueExceptionHandler implements DeserializationExceptionHandler {
+              <pre class="line-numbers"><code class="language-java">              public class SendToDeadLetterQueueExceptionHandler implements DeserializationExceptionHandler {
                   KafkaProducer&lt;byte[], byte[]&gt; dlqProducer;
                   String dlqTopic;
 
@@ -415,8 +413,7 @@
                       dlqProducer = .. // get a producer from the configs map
                       dlqTopic = .. // get the topic name from the configs map
                   }
-              }
-              </pre>
+              }</code></pre>
 
             </div></blockquote>
         </div>
@@ -427,10 +424,10 @@
               such as attempting to produce a record that is too large. By default, Kafka provides and uses the <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/streams/errors/DefaultProductionExceptionHandler.html">DefaultProductionExceptionHandler</a>
               that always fails when these exceptions occur.</p>
 
-            <p>Each exception handler can return a <code>FAIL</code> or <code>CONTINUE</code> depending on the record and the exception thrown. Returning <code>FAIL</code> will signal that Streams should shut down and <code>CONTINUE</code> will signal that Streams
-            should ignore the issue and continue processing. If you want to provide an exception handler that always ignores records that are too large, you could implement something like the following:</p>
+              <p>Each exception handler can return a <code>FAIL</code> or <code>CONTINUE</code> depending on the record and the exception thrown. Returning <code>FAIL</code> will signal that Streams should shut down and <code>CONTINUE</code> will signal that Streams
+                should ignore the issue and continue processing. If you want to provide an exception handler that always ignores records that are too large, you could implement something like the following:</p>
 
-            <pre class="brush: java;">
+            <pre class="line-numbers"><code class="language-java">
             import java.util.Properties;
             import org.apache.kafka.streams.StreamsConfig;
             import org.apache.kafka.common.errors.RecordTooLargeException;
@@ -455,7 +452,7 @@
             // other various kafka streams settings, e.g. bootstrap servers, application id, etc
 
             settings.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
-                         IgnoreRecordTooLargeHandler.class);</pre></div>
+                         IgnoreRecordTooLargeHandler.class);</code></pre></div>
           </blockquote>
         </div>
         <div class="section" id="timestamp-extractor">
@@ -569,7 +566,7 @@
                 <li>Whenever data is read from or written to a <em>Kafka topic</em> (e.g., via the <code class="docutils literal"><span class="pre">StreamsBuilder#stream()</span></code> and <code class="docutils literal"><span class="pre">KStream#to()</span></code> methods).</li>
                 <li>Whenever data is read from or written to a <em>state store</em>.</li>
               </ul>
-                <p>This is discussed in more detail in <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data types and serialization</span></a>.</p>
+              <p>This is discussed in more detail in <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data types and serialization</span></a>.</p>
             </div></blockquote>
         </div>
         <div class="section" id="default-windowed-key-serde-inner">
@@ -581,7 +578,7 @@
                   <li>Whenever data is read from or written to a <em>Kafka topic</em> (e.g., via the <code class="docutils literal"><span class="pre">StreamsBuilder#stream()</span></code> and <code class="docutils literal"><span class="pre">KStream#to()</span></code> methods).</li>
                   <li>Whenever data is read from or written to a <em>state store</em>.</li>
                 </ul>
-                  <p>This is discussed in more detail in <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data types and serialization</span></a>.</p>
+                <p>This is discussed in more detail in <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data types and serialization</span></a>.</p>
                 </div>
             </div></blockquote>
         </div>
@@ -629,11 +626,11 @@
             </div>
           </blockquote>
         </div>
-          <div class="admonition note">
-              <p class="first admonition-title">Note</p>
-              <p class="last">If you enable <cite>n</cite> standby tasks, you need to provision <cite>n+1</cite> <code class="docutils literal"><span class="pre">KafkaStreams</span></code>
-              instances.</p>
-          </div>
+        <div class="admonition note">
+          <p class="first admonition-title">Note</p>
+          <p class="last">If you enable <cite>n</cite> standby tasks, you need to provision <cite>n+1</cite> <code class="docutils literal"><span class="pre">KafkaStreams</span></code>
+            instances.</p>
+        </div>
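    <p>A minimal sketch of enabling one standby task per stateful task (per the note above, you would then run at least two application instances):</p>
    <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StandbyReplicasSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // n = 1 standby replica per task, so provision n + 1 = 2 KafkaStreams instances
        props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
    }
}</code></pre>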
         <div class="section" id="num-stream-threads">
           <h4><a class="toc-backref" href="#id11">num.stream.threads</a><a class="headerlink" href="#num-stream-threads" title="Permalink to this headline"></a></h4>
           <blockquote>
@@ -664,22 +661,22 @@
           <span id="streams-developer-guide-processing-guarantee"></span><h4><a class="toc-backref" href="#id25">processing.guarantee</a><a class="headerlink" href="#processing-guarantee" title="Permalink to this headline"></a></h4>
           <blockquote>
             <div>The processing guarantee that should be used.
-                 Possible values are <code class="docutils literal"><span class="pre">"at_least_once"</span></code> (default),
-                 <code class="docutils literal"><span class="pre">"exactly_once"</span></code>,
-                 and <code class="docutils literal"><span class="pre">"exactly_once_beta"</span></code>.
-                 Using <code class="docutils literal"><span class="pre">"exactly_once"</span></code> requires broker
-                 version 0.11.0 or newer, while using <code class="docutils literal"><span class="pre">"exactly_once_beta"</span></code>
-                 requires broker version 2.5 or newer.
-                 Note that if exactly-once processing is enabled, the default for parameter
-                 <code class="docutils literal"><span class="pre">commit.interval.ms</span></code> changes to 100ms.
-                 Additionally, consumers are configured with <code class="docutils literal"><span class="pre">isolation.level="read_committed"</span></code>
-                 and producers are configured with <code class="docutils literal"><span class="pre">enable.idempotence=true</span></code> per default.
-                 Note that by default exactly-once processing requires a cluster of at least three brokers what is the recommended setting for production.
-                 For development, you can change this configuration by adjusting broker setting
-                 <code class="docutils literal"><span class="pre">transaction.state.log.replication.factor</span></code>
-                 and <code class="docutils literal"><span class="pre">transaction.state.log.min.isr</span></code>
-                 to the number of brokers you want to use.
-                 For more details see <a href="../core-concepts#streams_processing_guarantee">Processing Guarantees</a>.
+              Possible values are <code class="docutils literal"><span class="pre">"at_least_once"</span></code> (default),
+              <code class="docutils literal"><span class="pre">"exactly_once"</span></code>,
+              and <code class="docutils literal"><span class="pre">"exactly_once_beta"</span></code>.
+              Using <code class="docutils literal"><span class="pre">"exactly_once"</span></code> requires broker
+              version 0.11.0 or newer, while using <code class="docutils literal"><span class="pre">"exactly_once_beta"</span></code>
+              requires broker version 2.5 or newer.
+              Note that if exactly-once processing is enabled, the default for parameter
+              <code class="docutils literal"><span class="pre">commit.interval.ms</span></code> changes to 100ms.
+              Additionally, consumers are configured with <code class="docutils literal"><span class="pre">isolation.level="read_committed"</span></code>
+              and producers are configured with <code class="docutils literal"><span class="pre">enable.idempotence=true</span></code> by default.
+              Note that by default exactly-once processing requires a cluster of at least three brokers, which is the recommended setting for production.
+              For development, you can change this configuration by adjusting the broker settings
+              <code class="docutils literal"><span class="pre">transaction.state.log.replication.factor</span></code>
+              and <code class="docutils literal"><span class="pre">transaction.state.log.min.isr</span></code>
+              to the number of brokers you want to use.
+              For more details see <a href="../core-concepts#streams_processing_guarantee">Processing Guarantees</a>.
             </div></blockquote>
         </div>
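    <p>As a sketch of enabling exactly-once processing as described above (the broker-side settings in the comment only matter for small development clusters):</p>
    <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ProcessingGuaranteeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // "exactly_once" requires brokers 0.11.0 or newer; "exactly_once_beta" requires brokers 2.5 or newer
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        // For a development cluster with fewer than three brokers, also lower the broker settings
        // transaction.state.log.replication.factor and transaction.state.log.min.isr accordingly.
    }
}</code></pre>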
         <div class="section" id="replication-factor">
@@ -729,79 +726,79 @@
                     <span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
                     <span class="n">streamsConfig</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">ROCKSDB_CONFIG_SETTER_CLASS_CONFIG</span><span class="o">,</span> <span class="n">CustomRocksDBConfig</span><span class="o">.</span><span class="na">class</span><span class="o">);</span>
                     </pre></div>
-                    </div>
-                    <dl class="docutils">
-                      <dt>Notes for example:</dt>
-                      <dd><ol class="first last arabic simple">
-                        <li><code class="docutils literal"><span class="pre">BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();</span></code> Get a reference to the existing table config rather than create a new one, so you don't accidentally overwrite defaults such as the <code class="docutils literal"><span class="pre">BloomFilter</span></code>, which is an important optimization.
-                        <li><code class="docutils literal"><span class="pre">tableConfig.setBlockSize(16</span> <span class="pre">*</span> <span class="pre">1024L);</span></code> Modify the default <a class="reference external" href="https://github.com/apache/kafka/blob/2.3/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java#L79">block size</a> per these instructions from the <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Memory- [...]
-                        <li><code class="docutils literal"><span class="pre">tableConfig.setCacheIndexAndFilterBlocks(true);</span></code> Do not let the index and filter blocks grow unbounded. For more information, see the <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks">RocksDB GitHub</a>.</li>
-                        <li><code class="docutils literal"><span class="pre">options.setMaxWriteBufferNumber(2);</span></code> See the advanced options in the <a class="reference external" href="https://github.com/facebook/rocksdb/blob/8dee8cad9ee6b70fd6e1a5989a8156650a70c04f/include/rocksdb/advanced_options.h#L103">RocksDB GitHub</a>.</li>
-                        <li><code class="docutils literal"><span class="pre">cache.close();</span></code> To avoid memory leaks, you must close any objects you constructed that extend org.rocksdb.RocksObject. See  <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/RocksJava-Basics#memory-management">RocksJava docs</a> for more details.</li>
-                      </ol>
-                      </dd>
-                    </dl>
-                  </div></blockquote>
               </div>
-            </div>
-          </blockquote>
-        </div>
-        <div class="section" id="state-dir">
-          <h4><a class="toc-backref" href="#id14">state.dir</a><a class="headerlink" href="#state-dir" title="Permalink to this headline"></a></h4>
-          <blockquote>
-            <div>The state directory. Kafka Streams persists local states under the state directory. Each application has a subdirectory on its hosting
-              machine that is located under the state directory. The name of the subdirectory is the application ID. The state stores associated
-              with the application are created under this subdirectory. When running multiple instances of the same application on a single machine,
-              this path must be unique for each such instance.</div>
-          </blockquote>
-        </div>
-        <div class="section" id="topology-optimization">
-          <h4><a class="toc-backref" href="#id31">topology.optimization</a><a class="headerlink" href="#topology-optimization" title="Permalink to this headline"></a></h4>
-          <blockquote>
-            <div>
-              <p>
-                You can tell Streams to apply topology optimizations by setting this config. The optimizations are currently all or none and disabled by default.
-                These optimizations include moving/reducing repartition topics and reusing the source topic as the changelog for source KTables. It is recommended to enable this.
-              </p>
-              <p>
-                Note that as of 2.3, you need to do two things to enable optimizations. In addition to setting this config to <code>StreamsConfig.OPTIMIZE</code>, you'll need to pass in your
-                configuration properties when building your topology by using the overloaded <code>StreamsBuilder.build(Properties)</code> method.
-                For example <code>KafkaStreams myStream = new KafkaStreams(streamsBuilder.build(properties), properties)</code>.
-              </p>
-          </div></blockquote>
-        </div>
-        <div class="section" id="upgrade-from">
-          <h4><a class="toc-backref" href="#id14">upgrade.from</a><a class="headerlink" href="#upgrade-from" title="Permalink to this headline"></a></h4>
-          <blockquote>
-            <div>
-              The version you are upgrading from. It is important to set this config when performing a rolling upgrade to certain versions, as described in the upgrade guide.
-              You should set this config to the appropriate version before bouncing your instances and upgrading them to the newer version. Once everyone is on the
-              newer version, you should remove this config and do a second rolling bounce. It is only necessary to set this config and follow the two-bounce upgrade path
-              when upgrading from below version 2.0, or when upgrading to 2.4+ from any version lower than 2.4.
-            </div>
-          </blockquote>
+              <dl class="docutils">
+                <dt>Notes for example:</dt>
+                <dd><ol class="first last arabic simple">
+                  <li><code class="docutils literal"><span class="pre">BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();</span></code> Get a reference to the existing table config rather than create a new one, so you don't accidentally overwrite defaults such as the <code class="docutils literal"><span class="pre">BloomFilter</span></code>, which is an important optimization.
+                  <li><code class="docutils literal"><span class="pre">tableConfig.setBlockSize(16</span> <span class="pre">*</span> <span class="pre">1024L);</span></code> Modify the default <a class="reference external" href="https://github.com/apache/kafka/blob/2.3/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java#L79">block size</a> per these instructions from the <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Memory-usage- [...]
+                  <li><code class="docutils literal"><span class="pre">tableConfig.setCacheIndexAndFilterBlocks(true);</span></code> Do not let the index and filter blocks grow unbounded. For more information, see the <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks">RocksDB GitHub</a>.</li>
+                  <li><code class="docutils literal"><span class="pre">options.setMaxWriteBufferNumber(2);</span></code> See the advanced options in the <a class="reference external" href="https://github.com/facebook/rocksdb/blob/8dee8cad9ee6b70fd6e1a5989a8156650a70c04f/include/rocksdb/advanced_options.h#L103">RocksDB GitHub</a>.</li>
+                  <li><code class="docutils literal"><span class="pre">cache.close();</span></code> To avoid memory leaks, you must close any objects you constructed that extend org.rocksdb.RocksObject. See  <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/RocksJava-Basics#memory-management">RocksJava docs</a> for more details.</li>
+                </ol>
+                </dd>
+              </dl>
+            </div></blockquote>
         </div>
       </div>
-      <div class="section" id="kafka-consumers-and-producer-configuration-parameters">
-        <h3><a class="toc-backref" href="#id16">Kafka consumers, producer and admin client configuration parameters</a><a class="headerlink" href="#kafka-consumers-and-producer-configuration-parameters" title="Permalink to this headline"></a></h3>
-        <p>You can specify parameters for the Kafka <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/clients/consumer/package-summary.html">consumers</a>, <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/clients/producer/package-summary.html">producers</a>,
-            and <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/kafka/clients/admin/package-summary.html">admin client</a> that are used internally.
-            The consumer, producer and admin client settings are defined by specifying parameters in a <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance.</p>
-        <p>In this example, the Kafka <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/clients/consumer/ConsumerConfig.html#SESSION_TIMEOUT_MS_CONFIG">consumer session timeout</a> is configured to be 60000 milliseconds in the Streams settings:</p>
-        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+      </blockquote>
+    </div>
+    <div class="section" id="state-dir">
+      <h4><a class="toc-backref" href="#id14">state.dir</a><a class="headerlink" href="#state-dir" title="Permalink to this headline"></a></h4>
+      <blockquote>
+        <div>The state directory. Kafka Streams persists local states under the state directory. Each application has a subdirectory on its hosting
+          machine that is located under the state directory. The name of the subdirectory is the application ID. The state stores associated
+          with the application are created under this subdirectory. When running multiple instances of the same application on a single machine,
+          this path must be unique for each such instance.</div>
+      </blockquote>
+    </div>
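    <p>A small sketch of overriding the state directory (the path is purely an example) so that two instances of the same application on one machine do not collide:</p>
    <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StateDirSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Must be unique per instance when several instances of the same application share a machine;
        // "/var/lib/kafka-streams/instance-1" is a hypothetical path.
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams/instance-1");
    }
}</code></pre>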
+    <div class="section" id="topology-optimization">
+      <h4><a class="toc-backref" href="#id31">topology.optimization</a><a class="headerlink" href="#topology-optimization" title="Permalink to this headline"></a></h4>
+      <blockquote>
+        <div>
+          <p>
+            You can tell Streams to apply topology optimizations by setting this config. The optimizations are currently all or none and disabled by default.
+            These optimizations include moving/reducing repartition topics and reusing the source topic as the changelog for source KTables. It is recommended to enable this.
+          </p>
+          <p>
+            Note that as of 2.3, you need to do two things to enable optimizations. In addition to setting this config to <code>StreamsConfig.OPTIMIZE</code>, you'll need to pass in your
+            configuration properties when building your topology by using the overloaded <code>StreamsBuilder.build(Properties)</code> method.
+            For example <code>KafkaStreams myStream = new KafkaStreams(streamsBuilder.build(properties), properties)</code>.
+          </p>
+        </div></blockquote>
+    </div>
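    <p>A sketch of the two steps described above (application id and bootstrap servers are placeholders): set the config to <code>StreamsConfig.OPTIMIZE</code> and pass the same properties into <code>StreamsBuilder.build(Properties)</code>:</p>
    <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class TopologyOptimizationSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");        // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092"); // placeholder broker
        // Step 1: request topology optimizations ("topology.optimization")
        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);

        StreamsBuilder builder = new StreamsBuilder();
        // ... define the topology here ...

        // Step 2: pass the same properties into build() so the optimizer can see them
        Topology topology = builder.build(props);
        KafkaStreams streams = new KafkaStreams(topology, props);
    }
}</code></pre>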
+    <div class="section" id="upgrade-from">
+      <h4><a class="toc-backref" href="#id14">upgrade.from</a><a class="headerlink" href="#upgrade-from" title="Permalink to this headline"></a></h4>
+      <blockquote>
+        <div>
+          The version you are upgrading from. It is important to set this config when performing a rolling upgrade to certain versions, as described in the upgrade guide.
+          You should set this config to the appropriate version before bouncing your instances and upgrading them to the newer version. Once everyone is on the
+          newer version, you should remove this config and do a second rolling bounce. It is only necessary to set this config and follow the two-bounce upgrade path
+          when upgrading from below version 2.0, or when upgrading to 2.4+ from any version lower than 2.4.
+        </div>
+      </blockquote>
+    </div>
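    <p>For illustration only, a sketch of the first rolling bounce of such an upgrade ("2.3" is a placeholder for whatever version you are actually upgrading from; remove the config again for the second bounce):</p>
    <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class UpgradeFromSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // First rolling bounce: tell the upgraded binaries which older version the rest of the group runs.
        // "2.3" is a placeholder version; unset this config for the second rolling bounce.
        props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "2.3");
    }
}</code></pre>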
+  </div>
+  <div class="section" id="kafka-consumers-and-producer-configuration-parameters">
+    <h3><a class="toc-backref" href="#id16">Kafka consumer, producer, and admin client configuration parameters</a><a class="headerlink" href="#kafka-consumers-and-producer-configuration-parameters" title="Permalink to this headline"></a></h3>
+    <p>You can specify parameters for the Kafka <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/clients/consumer/package-summary.html">consumers</a>, <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/clients/producer/package-summary.html">producers</a>,
+      and <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/clients/admin/package-summary.html">admin client</a> that are used internally.
+      The consumer, producer and admin client settings are defined by specifying parameters in a <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance.</p>
+    <p>In this example, the Kafka <a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/clients/consumer/ConsumerConfig.html#SESSION_TIMEOUT_MS_CONFIG">consumer session timeout</a> is configured to be 60000 milliseconds in the Streams settings:</p>
+    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
 <span class="c1">// Example of a &quot;normal&quot; setting for Kafka Streams</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">BOOTSTRAP_SERVERS_CONFIG</span><span class="o">,</span> <span class="s">&quot;kafka-broker-01:9092&quot;</span><span class="o">);</span>
 <span class="c1">// Customize the Kafka consumer settings of your Streams application</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">ConsumerConfig</span><span class="o">.</span><span class="na">SESSION_TIMEOUT_MS_CONFIG</span><span class="o">,</span> <span class="mi">60000</span><span class="o">);</span>
 </pre></div>
-        </div>
-        <div class="section" id="naming">
-          <h4><a class="toc-backref" href="#id17">Naming</a><a class="headerlink" href="#naming" title="Permalink to this headline"></a></h4>
-          <p>Some consumer, producer and admin client configuration parameters use the same parameter name, and Kafka Streams library itself also uses some parameters that share the same name with its embedded client. For example, <code class="docutils literal"><span class="pre">send.buffer.bytes</span></code> and
-              <code class="docutils literal"><span class="pre">receive.buffer.bytes</span></code> are used to configure TCP buffers; <code class="docutils literal"><span class="pre">request.timeout.ms</span></code> and <code class="docutils literal"><span class="pre">retry.backoff.ms</span></code> control retries for client request;
-              <code class="docutils literal"><span class="pre">retries</span></code> are used to configure how many retries are allowed when handling retriable errors from broker request responses.
-              You can avoid duplicate names by prefix parameter names with <code class="docutils literal"><span class="pre">consumer.</span></code>, <code class="docutils literal"><span class="pre">producer.</span></code>, or <code class="docutils literal"><span class="pre">admin.</span></code> (e.g., <code class="docutils literal"><span class="pre">consumer.send.buffer.bytes</span></code> and <code class="docutils literal"><span class="pre">producer.send.buffer.bytes</span></code>).</p>
-          <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+    </div>
+    <div class="section" id="naming">
+      <h4><a class="toc-backref" href="#id17">Naming</a><a class="headerlink" href="#naming" title="Permalink to this headline"></a></h4>
+      <p>Some consumer, producer, and admin client configuration parameters use the same parameter name, and the Kafka Streams library itself also uses some parameters that share the same name with its embedded clients. For example, <code class="docutils literal"><span class="pre">send.buffer.bytes</span></code> and
+        <code class="docutils literal"><span class="pre">receive.buffer.bytes</span></code> are used to configure TCP buffers; <code class="docutils literal"><span class="pre">request.timeout.ms</span></code> and <code class="docutils literal"><span class="pre">retry.backoff.ms</span></code> control retries for client requests;
+        <code class="docutils literal"><span class="pre">retries</span></code> is used to configure how many retries are allowed when handling retriable errors from broker request responses.
+        You can avoid duplicate names by prefixing parameter names with <code class="docutils literal"><span class="pre">consumer.</span></code>, <code class="docutils literal"><span class="pre">producer.</span></code>, or <code class="docutils literal"><span class="pre">admin.</span></code> (e.g., <code class="docutils literal"><span class="pre">consumer.send.buffer.bytes</span></code> and <code class="docutils literal"><span class="pre">producer.send.buffer.bytes</span></code>).</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
 <span class="c1">// same value for consumer, producer, and admin client</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">,</span> <span class="s">&quot;value&quot;</span><span class="o">);</span>
 <span class="c1">// different values for consumer and producer</span>
@@ -813,14 +810,14 @@
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">producerPrefix</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">),</span> <span class="s">&quot;producer-value&quot;</span><span class="o">);</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">adminClientPrefix</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">),</span> <span class="s">&quot;admin-value&quot;</span><span class="o">);</span>
 </pre></div>
-          <p>You could further separate consumer configuration by adding different prefixes:</p>
-          <ul class="simple">
-            <li><code class="docutils literal"><span class="pre">main.consumer.</span></code> for main consumer which is the default consumer of stream source.</li>
-            <li><code class="docutils literal"><span class="pre">restore.consumer.</span></code> for restore consumer which is in charge of state store recovery.</li>
-            <li><code class="docutils literal"><span class="pre">global.consumer.</span></code> for global consumer which is used in global KTable construction.</li>
-          </ul>
-          <p>For example, if you only want to set restore consumer config without touching other consumers' settings, you could simply use <code class="docutils literal"><span class="pre">restore.consumer.</span></code> to set the config.</p>
-          <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+        <p>You could further separate consumer configuration by adding different prefixes:</p>
+        <ul class="simple">
+          <li><code class="docutils literal"><span class="pre">main.consumer.</span></code> for the main consumer, which is the default consumer of a stream source.</li>
+          <li><code class="docutils literal"><span class="pre">restore.consumer.</span></code> for the restore consumer, which is in charge of state store recovery.</li>
+          <li><code class="docutils literal"><span class="pre">global.consumer.</span></code> for the global consumer, which is used in global KTable construction.</li>
+        </ul>
+        <p>For example, if you only want to set restore consumer config without touching other consumers' settings, you could simply use <code class="docutils literal"><span class="pre">restore.consumer.</span></code> to set the config.</p>
+        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
 <span class="c1">// same config value for all consumer types</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;consumer.PARAMETER_NAME&quot;</span><span class="o">,</span> <span class="s">&quot;general-consumer-value&quot;</span><span class="o">);</span>
 <span class="c1">// set a different restore consumer config. This would make restore consumer take restore-consumer-value,</span>
@@ -829,103 +826,103 @@
 <span class="c1">// alternatively, you can use</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">restoreConsumerPrefix</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">),</span> <span class="s">&quot;restore-consumer-value&quot;</span><span class="o">);</span>
 </pre></div>
-          </div>
-          <p> Same applied to <code class="docutils literal"><span class="pre">main.consumer.</span></code> and <code class="docutils literal"><span class="pre">main.consumer.</span></code>, if you only want to specify one consumer type config.</p>
-          <p> Additionally, to configure the internal repartition/changelog topics, you could use the <code class="docutils literal"><span class="pre">topic.</span></code> prefix, followed by any of the standard topic configs.</p>
-            <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+        </div>
+        <p>The same applies to the other consumer prefixes (<code class="docutils literal"><span class="pre">main.consumer.</span></code> and <code class="docutils literal"><span class="pre">global.consumer.</span></code>), if you only want to specify the config for one consumer type.</p>
+        <p>Additionally, to configure the internal repartition/changelog topics, you could use the <code class="docutils literal"><span class="pre">topic.</span></code> prefix, followed by any of the standard topic configs.</p>
+        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
 <span class="c1">// Override default for both changelog and repartition topics</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;topic.PARAMETER_NAME&quot;</span><span class="o">,</span> <span class="s">&quot;topic-value&quot;</span><span class="o">);</span>
 <span class="c1">// alternatively, you can use</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">topicPrefix</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">),</span> <span class="s">&quot;topic-value&quot;</span><span class="o">);</span>
 </pre></div>
-            </div>
-          </div>
-        </div>
-        <div class="section" id="default-values">
-          <h4><a class="toc-backref" href="#id18">Default Values</a><a class="headerlink" href="#default-values" title="Permalink to this headline"></a></h4>
-          <p>Kafka Streams uses different default values for some of the underlying client configs, which are summarized below. For detailed descriptions
-            of these configs, see <a class="reference external" href="http://kafka.apache.org/0100/documentation.html#producerconfigs">Producer Configs</a>
-            and <a class="reference external" href="http://kafka.apache.org/0100/documentation.html#newconsumerconfigs">Consumer Configs</a>.</p>
-          <table border="1" class="non-scrolling-table docutils">
-            <thead valign="bottom">
-            <tr class="row-odd"><th class="head">Parameter Name</th>
-              <th class="head">Corresponding Client</th>
-              <th class="head">Streams Default</th>
-            </tr>
-            </thead>
-            <tbody valign="top">
-            <tr class="row-even"><td>auto.offset.reset</td>
-              <td>Consumer</td>
-              <td>earliest</td>
-            </tr>
-            <tr class="row-even"><td>linger.ms</td>
-              <td>Producer</td>
-              <td>100</td>
-            </tr>
-            <tr class="row-odd"><td>max.poll.interval.ms</td>
-              <td>Consumer</td>
-              <td>Integer.MAX_VALUE</td>
-            </tr>
-            <tr class="row-even"><td>max.poll.records</td>
-              <td>Consumer</td>
-              <td>1000</td>
-            </tr>
-            </tbody>
-          </table>
-        </div>
-        <div class="section" id="parameters-controlled-by-kafka-streams">
-          <h3><a class="toc-backref" href="#id26">Parameters controlled by Kafka Streams</a><a class="headerlink" href="#parameters-controlled-by-kafka-streams" title="Permalink to this headline"></a></h3>
-          <p>Kafka Streams assigns the following configuration parameters. If you try to change
-            <code class="docutils literal"><span class="pre">allow.auto.create.topics</span></code>, your value
-            is ignored and setting it has no effect in a Kafka Streams application. You can set the other parameters.
-            Kafka Streams sets them to different default values than a plain
-            <code class="docutils literal"><span class="pre">KafkaConsumer</span></code>.
-          <p>Kafka Streams uses the <code class="docutils literal"><span class="pre">client.id</span></code>
-            parameter to compute derived client IDs for internal clients. If you don't set
-            <code class="docutils literal"><span class="pre">client.id</span></code>, Kafka Streams sets it to
-            <code class="docutils literal"><span class="pre">&lt;application.id&gt;-&lt;random-UUID&gt;</span></code>.
-            <table border="1" class="non-scrolling-table docutils">
-              <colgroup>
-              <col width="50%">
-              <col width="19%">
-              <col width="31%">
-              </colgroup>
-              <thead valign="bottom">
-              <tr class="row-odd"><th class="head">Parameter Name</th>
-              <th class="head">Corresponding Client</th>
-              <th class="head">Streams Default</th>
-              </tr>
-              </thead>
-              <tbody valign="top">
-              <tr class="row-odd"><td>allow.auto.create.topics</td>
-              <td>Consumer</td>
-              <td>false</td>
-              </tr>
-              <tr class="row-even"><td>auto.offset.reset</td>
-              <td>Consumer</td>
-              <td>earliest</td>
-              </tr>
-              <tr class="row-odd"><td>linger.ms</td>
-              <td>Producer</td>
-              <td>100</td>
-              </tr>
-              <tr class="row-even"><td>max.poll.interval.ms</td>
-              <td>Consumer</td>
-              <td>300000</td>
-              </tr>
-              <tr class="row-odd"><td>max.poll.records</td>
-              <td>Consumer</td>
-              <td>1000</td>
-              </tr>
-              </tbody>
-              </table>
-        <div class="section" id="enable-auto-commit">
-          <span id="streams-developer-guide-consumer-auto-commit"></span><h4><a class="toc-backref" href="#id19">enable.auto.commit</a><a class="headerlink" href="#enable-auto-commit" title="Permalink to this headline"></a></h4>
-          <blockquote>
-            <div>The consumer auto commit. To guarantee at-least-once processing semantics and turn off auto commits, Kafka Streams overrides this consumer config
-              value to <code class="docutils literal"><span class="pre">false</span></code>.  Consumers will only commit explicitly via <em>commitSync</em> calls when the Kafka Streams library or a user decides
-              to commit the current processing state.</div></blockquote>
         </div>
+      </div>
+    </div>
+    <div class="section" id="default-values">
+      <h4><a class="toc-backref" href="#id18">Default Values</a><a class="headerlink" href="#default-values" title="Permalink to this headline"></a></h4>
+      <p>Kafka Streams uses different default values for some of the underlying client configs, which are summarized below. For detailed descriptions
+        of these configs, see <a class="reference external" href="http://kafka.apache.org/0100/documentation.html#producerconfigs">Producer Configs</a>
+        and <a class="reference external" href="http://kafka.apache.org/0100/documentation.html#newconsumerconfigs">Consumer Configs</a>.</p>
+      <table border="1" class="non-scrolling-table docutils">
+        <thead valign="bottom">
+        <tr class="row-odd"><th class="head">Parameter Name</th>
+          <th class="head">Corresponding Client</th>
+          <th class="head">Streams Default</th>
+        </tr>
+        </thead>
+        <tbody valign="top">
+        <tr class="row-even"><td>auto.offset.reset</td>
+          <td>Consumer</td>
+          <td>earliest</td>
+        </tr>
+        <tr class="row-even"><td>linger.ms</td>
+          <td>Producer</td>
+          <td>100</td>
+        </tr>
+        <tr class="row-odd"><td>max.poll.interval.ms</td>
+          <td>Consumer</td>
+          <td>300000</td>
+        </tr>
+        <tr class="row-even"><td>max.poll.records</td>
+          <td>Consumer</td>
+          <td>1000</td>
+        </tr>
+        </tbody>
+      </table>
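      <p>For illustration only (not an excerpt from the upstream docs): any of these Streams-specific defaults can be overridden through the corresponding client prefix. The parameter and value below are examples.</p>
      <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties streamsSettings = new Properties();
// Example only: lower the Streams default of 1000 for the consumers' max.poll.records
streamsSettings.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 100);</code></pre>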
+    </div>
+    <div class="section" id="parameters-controlled-by-kafka-streams">
+      <h3><a class="toc-backref" href="#id26">Parameters controlled by Kafka Streams</a><a class="headerlink" href="#parameters-controlled-by-kafka-streams" title="Permalink to this headline"></a></h3>
+      <p>Kafka Streams assigns the following configuration parameters. If you try to change
+        <code class="docutils literal"><span class="pre">allow.auto.create.topics</span></code>, your value
+        is ignored and setting it has no effect in a Kafka Streams application. You can set the other parameters.
+        Kafka Streams sets them to different default values than a plain
+        <code class="docutils literal"><span class="pre">KafkaConsumer</span></code>.
+      <p>Kafka Streams uses the <code class="docutils literal"><span class="pre">client.id</span></code>
+        parameter to compute derived client IDs for internal clients. If you don't set
+        <code class="docutils literal"><span class="pre">client.id</span></code>, Kafka Streams sets it to
+        <code class="docutils literal"><span class="pre">&lt;application.id&gt;-&lt;random-UUID&gt;</span></code>.
+      <table border="1" class="non-scrolling-table docutils">
+        <colgroup>
+          <col width="50%">
+          <col width="19%">
+          <col width="31%">
+        </colgroup>
+        <thead valign="bottom">
+        <tr class="row-odd"><th class="head">Parameter Name</th>
+          <th class="head">Corresponding Client</th>
+          <th class="head">Streams Default</th>
+        </tr>
+        </thead>
+        <tbody valign="top">
+        <tr class="row-odd"><td>allow.auto.create.topics</td>
+          <td>Consumer</td>
+          <td>false</td>
+        </tr>
+        <tr class="row-even"><td>auto.offset.reset</td>
+          <td>Consumer</td>
+          <td>earliest</td>
+        </tr>
+        <tr class="row-odd"><td>linger.ms</td>
+          <td>Producer</td>
+          <td>100</td>
+        </tr>
+        <tr class="row-even"><td>max.poll.interval.ms</td>
+          <td>Consumer</td>
+          <td>300000</td>
+        </tr>
+        <tr class="row-odd"><td>max.poll.records</td>
+          <td>Consumer</td>
+          <td>1000</td>
+        </tr>
+        </tbody>
+      </table>
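      <p>A minimal sketch of the <code class="docutils literal"><span class="pre">client.id</span></code> behavior described above; the application id and client id values are placeholders.</p>
      <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties streamsSettings = new Properties();
streamsSettings.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
// Optional: if client.id is not set, Streams falls back to "&lt;application.id&gt;-&lt;random-UUID&gt;"
streamsSettings.put(StreamsConfig.CLIENT_ID_CONFIG, "my-streams-client");
// Internal consumers, producers and admin clients derive their own client IDs from this value.</code></pre>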
+      <div class="section" id="enable-auto-commit">
+        <span id="streams-developer-guide-consumer-auto-commit"></span><h4><a class="toc-backref" href="#id19">enable.auto.commit</a><a class="headerlink" href="#enable-auto-commit" title="Permalink to this headline"></a></h4>
+        <blockquote>
+          <div>Controls the consumer's automatic offset commits. To guarantee at-least-once processing semantics and turn off auto commits, Kafka Streams overrides this consumer config
+            value to <code class="docutils literal"><span class="pre">false</span></code>. Consumers commit offsets only via explicit <em>commitSync</em> calls, whenever the Kafka Streams library or a user decides
+            to commit the current processing state.</div></blockquote>
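        <p>A short sketch of the commit behavior described above, assuming you want to tune how often Streams commits rather than re-enable auto commit; the interval value is an example.</p>
        <pre class="line-numbers"><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties streamsSettings = new Properties();
// Kafka Streams commits on its own cadence; tune commit.interval.ms instead of enable.auto.commit
streamsSettings.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30000);
// From inside a Processor, ProcessorContext#commit() can additionally request an earlier commit.</code></pre>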
+      </div>
       <div class="section" id="recommended-configuration-parameters-for-resiliency">
         <h3><a class="toc-backref" href="#id21">Recommended configuration parameters for resiliency</a><a class="headerlink" href="#recommended-configuration-parameters-for-resiliency" title="Permalink to this headline"></a></h3>
        <p>There are several Kafka and Kafka Streams configuration options that need to be configured explicitly for resiliency in the face of broker failures:</p>
@@ -978,13 +975,12 @@
           <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">REPLICATION_FACTOR_CONFIG</span><span class="o">,</span> <span class="mi">3</span><span class="o">);</span>
 <span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">topicPrefix</span><span class="o">(</span><span class="n">TopicConfig</span><span class="o">.</span><span class="na">MIN_IN_SYNC_REPLICAS_CONFIG</span><span class="o">),</span> <span class="mi">2</span><span class="o">);</span>
-<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">producerPrefix</span><span class="o">(</span><span class="n">ProducerConfig</span><span class="o">.</span><span class="na">ACKS_CONFIG</span><span class="o">),</span> <span class="s">&quot;all&quot;</span><span class="o">);</span>
-</pre></div>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">producerPrefix</span><span class="o">(</span><span class="n">ProducerConfig</span><span class="o">.</span><span class="na">ACKS_CONFIG</span><span class="o">),</span> <span class="s">&quot;all&quot;</span><span class="o">);</span></code></pre></div>
           </div>
-</div>
-</div>
-</div>
-</div>
+        </div>
+      </div>
+    </div>
+  </div>
 
 
                </div>
@@ -997,10 +993,10 @@
 
                 <!--#include virtual="../../../includes/_header.htm" -->
                 <!--#include virtual="../../../includes/_top.htm" -->
-                    <div class="content documentation documentation--current">
+                    <div class="content documentation">
                     <!--#include virtual="../../../includes/_nav.htm" -->
                     <div class="right">
-                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
                     <ul class="breadcrumbs">
                     <li><a href="/documentation">Documentation</a></li>
                     <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/datatypes.html b/docs/streams/developer-guide/datatypes.html
index 95aa43a..945b680 100644
--- a/docs/streams/developer-guide/datatypes.html
+++ b/docs/streams/developer-guide/datatypes.html
@@ -62,8 +62,7 @@
 <span class="c1">// Default serde for keys of data records (here: built-in serde for String type)</span>
 <span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">DEFAULT_KEY_SERDE_CLASS_CONFIG</span><span class="o">,</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">().</span><span class="na">getClass</span><span class="o">().</span><span class="na">getName</span><span class="o">());</span>
 <span class="c1">// Default serde for values of data records (here: built-in serde for Long type)</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">DEFAULT_VALUE_SERDE_CLASS_CONFIG</span><span class="o">,</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">().</span><span class="na">getClass</span><span class="o">().</span><span class="na">getName</span><span class="o">());</span>
-</pre></div>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">DEFAULT_VALUE_SERDE_CLASS_CONFIG</span><span class="o">,</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">().</span><span class="na">getClass</span><span class="o">().</span><span class="na">getName</span><span class="o">());</span></code></pre></div>
       </div>
     </div>
     <div class="section" id="overriding-default-serdes">
@@ -78,8 +77,7 @@
 <span class="c1">// The stream userCountByRegion has type `String` for record keys (for region)</span>
 <span class="c1">// and type `Long` for record values (for user counts).</span>
 <span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">userCountByRegion</span> <span class="o">=</span> <span class="o">...;</span>
-<span class="n">userCountByRegion</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;RegionCountsTopic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">stringSerde</span><span class="o">,</span> <span class="n">longSerde</span><span class="o">));</span>
-</pre></div>
+<span class="n">userCountByRegion</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;RegionCountsTopic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">stringSerde</span><span class="o">,</span> <span class="n">longSerde</span><span class="o">));</span></code></pre></div>
       </div>
       <p>If you want to override serdes selectively, i.e., keep the defaults for some fields, then don&#8217;t specify the serde whenever you want to leverage the default settings:</p>
       <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serde</span><span class="o">;</span>
@@ -89,8 +87,7 @@
 <span class="c1">// but override the default serializer for record values (here: userCount as Long).</span>
 <span class="kd">final</span> <span class="n">Serde</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">longSerde</span> <span class="o">=</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">();</span>
 <span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">userCountByRegion</span> <span class="o">=</span> <span class="o">...;</span>
-<span class="n">userCountByRegion</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;RegionCountsTopic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">valueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span>
-</pre></div>
+<span class="n">userCountByRegion</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;RegionCountsTopic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">valueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span></code></pre></div>
       </div>
       <p>If some of your incoming records are corrupted or ill-formatted, they will cause the deserializer class to report an error.
         Since 1.0.x we have introduced a <code>DeserializationExceptionHandler</code> interface which allows
@@ -108,8 +105,7 @@
     <span class="nt">&lt;groupId&gt;</span>org.apache.kafka<span class="nt">&lt;/groupId&gt;</span>
     <span class="nt">&lt;artifactId&gt;</span>kafka-clients<span class="nt">&lt;/artifactId&gt;</span>
     <span class="nt">&lt;version&gt;</span>{{fullDotVersion}}<span class="nt">&lt;/version&gt;</span>
-<span class="nt">&lt;/dependency&gt;</span>
-</pre></div>
+<span class="nt">&lt;/dependency&gt;</span></code></pre></div>
         </div>
         <p>This artifact provides the following serde implementations under the package <a class="reference external" href="https://github.com/apache/kafka/blob/{{dotVersion}}/clients/src/main/java/org/apache/kafka/common/serialization">org.apache.kafka.common.serialization</a>, which you can leverage when e.g., defining default serializers in your Streams configuration.</p>
         <table border="1" class="docutils">
@@ -200,10 +196,10 @@
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
   <!--#include virtual="../../../includes/_nav.htm" -->
   <div class="right">
-    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
     <ul class="breadcrumbs">
       <li><a href="/documentation">Documentation</a></li>
       <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/dsl-api.html b/docs/streams/developer-guide/dsl-api.html
index 3b3fc2a..ad2512e 100644
--- a/docs/streams/developer-guide/dsl-api.html
+++ b/docs/streams/developer-guide/dsl-api.html
@@ -110,7 +110,7 @@
         </div>
 
         <div class="section" id="dsl-core-constructs-overview">
-            <h4><a id="streams_concepts_kstream" href="#streams_concepts_kstream">KStream</a></h4>
+            <h4 class="anchor-heading"><a id="streams_concepts_kstream" class="anchor-link"></a><a href="#streams_concepts_kstream">KStream</a></h4>
 
             <p>
                 Only the <strong>Kafka Streams DSL</strong> has the notion of a <code>KStream</code>.
@@ -133,7 +133,7 @@
                 which would return <code>3</code> for <code>alice</code>.
             </p>
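            <p>A minimal sketch of the per-user sum with record stream semantics; the <code>builder</code> (a <code>StreamsBuilder</code>), the topic name and the value type are illustrative, and default serdes are assumed.</p>
            <pre class="line-numbers"><code class="language-java">KStream&lt;String, Integer&gt; purchases = builder.stream("user-purchases");
// Record stream semantics: ("alice", 1) followed by ("alice", 3) sums to 4,
// because the second record is a new event rather than an update of the first.
KTable&lt;String, Integer&gt; sumPerUser = purchases
    .groupByKey()
    .reduce(Integer::sum);</code></pre>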
 
-            <h4><a id="streams_concepts_ktable" href="#streams_concepts_ktable">KTable</a></h4>
+            <h4 class="anchor-heading"><a id="streams_concepts_ktable" class="anchor-link"></a><a href="#streams_concepts_ktable">KTable</a></h4>
 
             <p>
                 Only the <strong>Kafka Streams DSL</strong> has the notion of a <code>KTable</code>.
@@ -172,7 +172,7 @@
                 KTable also provides an ability to look up <em>current</em> values of data records by keys. This table-lookup functionality is available through <strong>join operations</strong> (see also <strong>Joining</strong> in the Developer Guide) as well as through <strong>Interactive Queries</strong>.
             </p>
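            <p>A minimal sketch of the update semantics and table lookups discussed above; the <code>builder</code>, topic name and store name are illustrative.</p>
            <pre class="line-numbers"><code class="language-java">// Table semantics: ("alice", 1) followed by ("alice", 3) is an update, so the current value for alice is 3.
KTable&lt;String, Integer&gt; latestPerUser = builder.table("user-purchases");
// Current values can then be looked up via joins or Interactive Queries, for example:
// streams.store(StoreQueryParameters.fromNameAndType("store-name", QueryableStoreTypes.keyValueStore()))</code></pre>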
 
-            <h4><a id="streams_concepts_globalktable" href="#streams_concepts_globalktable">GlobalKTable</a></h4>
+            <h4 class="anchor-heading"><a id="streams_concepts_globalktable" class="anchor-link"></a><a href="#streams_concepts_globalktable">GlobalKTable</a></h4>
 
             <p>Only the <strong>Kafka Streams DSL</strong> has the notion of a <strong>GlobalKTable</strong>.</p>
 
@@ -253,8 +253,7 @@
     <span class="n">Consumed</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key serde */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()</span>   <span class="cm">/* value serde */</span>
-    <span class="o">);</span>
-</pre></div>
+    <span class="o">);</span></code></pre></div>
                         </div>
                         <p>If you do not specify SerDes explicitly, the default SerDes from the
                             <a class="reference internal" href="config-streams.html#streams-developer-guide-configuration"><span class="std std-ref">configuration</span></a> are used.</p>
@@ -316,8 +315,7 @@
       <span class="s">&quot;word-counts-global-store&quot;</span> <span class="cm">/* table/store name */</span><span class="o">)</span>
       <span class="o">.</span><span class="na">withKeySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key serde */</span>
       <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* value serde */</span>
-    <span class="o">);</span>
-</pre></div>
+    <span class="o">);</span></code></pre></div>
                         </div>
                         <p>You <strong>must specify SerDes explicitly</strong> if the key or value types of the records in the Kafka input
                             topics do not match the configured default SerDes. For information about configuring default SerDes, available
@@ -384,8 +382,7 @@
 <span class="c1">// KStream branches[1] contains all records whose keys start with &quot;B&quot;</span>
 <span class="c1">// KStream branches[2] contains all other records</span>
 
-<span class="c1">// Java 7 example: cf. `filter` for how to create `Predicate` instances</span>
-</pre></div>
+<span class="c1">// Java 7 example: cf. `filter` for how to create `Predicate` instances</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -411,8 +408,7 @@
       <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">test</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">value</span> <span class="o">&gt;</span> <span class="mi">0</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -438,8 +434,7 @@
       <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">test</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">value</span> <span class="o">&lt;=</span> <span class="mi">0</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -467,8 +462,7 @@
     <span class="o">}</span>
   <span class="o">);</span>
 
-<span class="c1">// Java 7 example: cf. `map` for how to create `KeyValueMapper` instances</span>
-</pre></div>
+<span class="c1">// Java 7 example: cf. `map` for how to create `KeyValueMapper` instances</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -486,8 +480,7 @@
 <span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">sentences</span> <span class="o">=</span> <span class="o">...;</span>
 <span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">words</span> <span class="o">=</span> <span class="n">sentences</span><span class="o">.</span><span class="na">flatMapValues</span><span class="o">(</span><span class="n">value</span> <span class="o">-&gt;</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class [...]
 
-<span class="c1">// Java 7 example: cf. `mapValues` for how to create `ValueMapper` instances</span>
-</pre></div>
+<span class="c1">// Java 7 example: cf. `mapValues` for how to create `ValueMapper` instances</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -517,8 +510,7 @@
       <span class="kd">public</span> <span class="kt">void</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">key</span> <span class="o">+</span> <span class="s">&quot; =&gt; &quot;</span> <span class="o">+</span> <span class="n">value</span><span class="o">);</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -559,8 +551,7 @@
     <span class="n">Grouped</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">ByteArray</span><span class="o">(),</span> <span class="cm">/* key */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span>     <span class="cm">/* value */</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -639,8 +630,7 @@
     <span class="n">Grouped</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key (note: type was modified) */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">())</span> <span class="cm">/* value (note: type was modified) */</span>
-  <span class="o">);</span>
-                            </pre></div>
+  <span class="o">);</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -667,8 +657,7 @@
 
 <span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="n">cogroupedStream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(initializer);</span>
 
-<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table2</span> <span class="o">=</span> <span class="n">cogroupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(TimeWindows.duration(500ms))</span>.</span><span class="na">aggregate</span><span class="o">(initializer);</span>
-</pre></div>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table2</span> <span class="o">=</span> <span class="n">cogroupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(TimeWindows.duration(500ms))</span>.</span><span class="na">aggregate</span><span class="o">(initializer);</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -697,8 +686,7 @@
       <span class="kd">public</span> <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="k">new</span> <span class="n">KeyValue</span><span class="o">&lt;&gt;(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">(),</span> <span class="n">value</span><span class="o">.</span><span class="na">length</span><span class="o">());</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -726,8 +714,7 @@
       <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">s</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">s</span><span class="o">.</span><span class="na">toUpperCase</span><span class="o">();</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -743,13 +730,11 @@
                            from different streams in the merged stream. Relative order is preserved within each input stream though (i.e., records within the same input stream are processed in order).</p>
                             <div class="last highlight-java">
                               <div class="highlight">
-                                <pre>
-<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream1</span> <span class="o">=</span> <span class="o">...;</span>
+                                <pre class="line-numbers"><code class="language-text"><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream1</span> <span class="o">=</span> <span class="o">...;</span>
 
 <span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream2</span> <span class="o">=</span> <span class="o">...;</span>
 
-<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">merged</span> <span class="o">=</span> <span class="n">stream1</span><span class="o">.</span><span class="na">merge</span><span class="o">(</span><span class="n">stream2</span><span class="o">);</span>
-                                </pre>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">merged</span> <span class="o">=</span> <span class="n">stream1</span><span class="o">.</span><span class="na">merge</span><span class="o">(</span><span class="n">stream2</span><span class="o">);</span></code></pre>
                               </div>
                             </div>
                         </td>
@@ -780,8 +765,7 @@
       <span class="kd">public</span> <span class="kt">void</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;key=&quot;</span> <span class="o">+</span> <span class="n">key</span> <span class="o">+</span> <span class="s">&quot;, value=&quot;</span> <span class="o">+</span> <span class="n">value</span><span class="o">);</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -800,8 +784,7 @@
 <span class="n">stream</span><span class="o">.</span><span class="na">print</span><span class="o">();</span>
 
 <span class="c1">// print to file with a custom label</span>
-<span class="n">stream</span><span class="o">.</span><span class="na">print</span><span class="o">(</span><span class="n">Printed</span><span class="o">.</span><span class="na">toFile</span><span class="o">(</span><span class="s">&quot;streams.out&quot;</span><span class="o">).</span><span class="na">withLabel</span><span class="o">(</span><span class="s">&quot;streams&quot;</span><span class="o">));</span>
-</pre></div>
+<span class="n">stream</span><span class="o">.</span><span class="na">print</span><span class="o">(</span><span class="n">Printed</span><span class="o">.</span><span class="na">toFile</span><span class="o">(</span><span class="s">&quot;streams.out&quot;</span><span class="o">).</span><span class="na">withLabel</span><span class="o">(</span><span class="s">&quot;streams&quot;</span><span class="o">));</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -828,8 +811,7 @@
       <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">value</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">)[</span><span class="mi">0</span><span class="o">];</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -844,8 +826,7 @@
 
 <span class="c1">// Also, a variant of `toStream` exists that allows you</span>
 <span class="c1">// to select a new key for the resulting stream.</span>
-<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="n">table</span><span class="o">.</span><span class="na">toStream</span><span class="o">();</span>
-</pre></div>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="n">table</span><span class="o">.</span><span class="na">toStream</span><span class="o">();</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -858,8 +839,7 @@
                             (<a class="reference external" href="/{{version}}/javadoc/org/apache/kafka/streams/kstream/KStream.html#toTable--">details</a>)</p>
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
 
-<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">toTable</span><span class="o">();</span>
-</pre></div>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">toTable</span><span class="o">();</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -878,7 +858,7 @@
                             <code><span class="pre">repartition()</span></code> operation always triggers repartitioning of the stream, as a result it can be used with embedded Processor API methods (like <code><span class="pre">transform()</span></code> et al.) that do not trigger auto repartitioning when key changing operation is performed beforehand.
 
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">... ;</span>
-<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">repartitionedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">repartition</span><span class="o">(</span><span class="n">Repartitioned</span><span class="o">.</span><span class="na">numberOfPartitions</span><span class="o">(</span><span class="s">1 [...]
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">repartitionedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">repartition</span><span class="o">(</span><span class="n">Repartitioned</span><span class="o">.</span><span class="na">numberOfPartitions</span><span class="o">(</span><span class="s">1 [...]
                             </div>
                         </td>
                     </tr>
@@ -925,8 +905,7 @@
     <span class="c1">// `KTable&lt;String, Long&gt;` (word -&gt; count).</span>
     <span class="o">.</span><span class="na">count</span><span class="o">()</span>
     <span class="c1">// Convert the `KTable&lt;String, Long&gt;` into a `KStream&lt;String, Long&gt;`.</span>
-    <span class="o">.</span><span class="na">toStream</span><span class="o">();</span>
-</pre></div>
+    <span class="o">.</span><span class="na">toStream</span><span class="o">();</span></code></pre></div>
                 </div>
                 <p>WordCount example in Java 7:</p>
                 <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Code below is equivalent to the previous Java 8+ example above.</span>
@@ -946,8 +925,7 @@
         <span class="o">}</span>
     <span class="o">})</span>
     <span class="o">.</span><span class="na">count</span><span class="o">()</span>
-    <span class="o">.</span><span class="na">toStream</span><span class="o">();</span>
-</pre></div>
+    <span class="o">.</span><span class="na">toStream</span><span class="o">();</span></code></pre></div>
                 </div>
                 <div class="section" id="aggregating">
                     <span id="streams-developer-guide-dsl-aggregating"></span><h4><a class="toc-backref" href="#id12">Aggregating</a><a class="headerlink" href="#aggregating" title="Permalink to this headline"></a></h4>
@@ -1046,8 +1024,7 @@
       <span class="o">}</span>
     <span class="o">},</span>
     <span class="n">Materialized</span><span class="o">.</span><span class="na">as</span><span class="o">(</span><span class="s">&quot;aggregated-stream-store&quot;</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">());</span>
-</pre></div>
+        <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">());</span></code></pre></div>
                                 </div>
                                 <p>Detailed behavior of <code class="docutils literal"><span class="pre">KGroupedStream</span></code>:</p>
                                 <ul class="simple">
@@ -1154,8 +1131,7 @@
             <span class="o">}</span>
         <span class="o">},</span>
         <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">SessionStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;sessionized-aggregated-stream-store&quot;</span><span class="o">)</span>
-          <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span>
-</pre></div>
+          <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span></code></pre></div>
                                 </div>
                                 <p>Detailed behavior:</p>
                                 <ul class="simple">
@@ -1187,8 +1163,7 @@
 <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">count</span><span class="o">();</span>
 
 <span class="c1">// Counting a KGroupedTable</span>
-<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedTable</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">count</span><span class="o">();</span>
-</pre></div>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedTable</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">count</span><span class="o">();</span></code></pre></div>
                                 </div>
                                 <p>Detailed behavior for <code class="docutils literal"><span class="pre">KGroupedStream</span></code>:</p>
                                 <ul class="simple">
@@ -1224,8 +1199,7 @@
 <span class="c1">// Counting a KGroupedStream with session-based windowing (here: with 5-minute inactivity gaps)</span>
 <span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span>
     <span class="n">SessionWindows</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">5</span><span class="o">)))</span> <span class="cm">/* session window */</span>
-    <span class="o">.</span><span class="na">count</span><span class="o">();</span>
-</pre></div>
+    <span class="o">.</span><span class="na">count</span><span class="o">();</span></code></pre></div>
                                 </div>
                                 <p>Detailed behavior:</p>
                                 <ul class="last simple">
@@ -1287,8 +1261,7 @@
       <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">aggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">oldValue</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">aggValue</span> <span class="o">-</span> <span class="n">oldValue</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                                 </div>
                                 <p>Detailed behavior for <code class="docutils literal"><span class="pre">KGroupedStream</span></code>:</p>
                                 <ul class="simple">
@@ -1373,8 +1346,7 @@
       <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">aggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                                 </div>
                                 <p>Detailed behavior:</p>
                                 <ul class="simple">
@@ -1406,8 +1378,7 @@
     <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">,</span> <span class="cm">/* adder */</span>
     <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;aggregated-stream-store&quot;</span> <span class="cm">/* state store name * [...]
       <span class="o">.</span><span class="na">withKeySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key serde */</span>
-      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">());</span> <span class="cm">/* serde for aggregate value */</span>
-</pre></div>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">());</span> <span class="cm">/* serde for aggregate value */</span></code></pre></div>
                     </div>
                     <div class="admonition note">
                         <p><b>Note</b></p>
@@ -1528,8 +1499,7 @@
     <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">oldValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">-</span> <span class="n">oldValue</span><span class="o">,</span> <span class="cm">/* subtractor */</span>
     <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;aggregated-table-store&quot;</span> <span class="cm">/* state store name */ [...]
       <span class="o">.</span><span class="na">withKeySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key serde */</span>
-      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">());</span> <span class="cm">/* serde for aggregate value */</span>
-</pre></div>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">());</span> <span class="cm">/* serde for aggregate value */</span></code></pre></div>
                     </div>
                     <div class="admonition note">
                         <p><b>Note</b></p>
@@ -1802,8 +1772,7 @@
 <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
     <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
     <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                         </div>
                         <table border="1" class="non-scrolling-table width-100-percent docutils">
                             <colgroup>
@@ -1855,8 +1824,7 @@
       <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -1914,8 +1882,7 @@
       <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -1976,8 +1943,7 @@
       <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
       <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -2142,8 +2108,7 @@
 <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
     <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
     <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                         </div>
                         <table border="1" class="non-scrolling-table width-100-percent docutils">
                             <colgroup>
@@ -2181,8 +2146,7 @@
       <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -2227,8 +2191,7 @@
       <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -2276,8 +2239,7 @@
       <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -2496,8 +2458,7 @@
                 <span class="n">KTable</span><span class="o">&lt;Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;<br>//This </span><span class="o"><span class="o"><span class="n">foreignKeyExtractor</span></span> simply uses the left-value to map to the right-key.<br></span><span class="o"><span class="n">Function</span><span class="o">&lt;Long</span><span class="o">,</s [...]
                 <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span><br>    <span class="o"><span class="n">foreignKeyExtractor,</span></span>
                     <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">"left="</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">", right="</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
-                  <span class="o">);</span>
-                </pre>
+                  <span class="o">);</span></code></pre>
                                 </div>
                               </div>
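                              <p>Because the foreign-key join snippet above is truncated in this archive view, here is a minimal, self-contained sketch of the same kind of KTable-KTable foreign-key join. The topic names ("left-topic", "right-topic", "joined-topic") and the identity foreign-key extractor are hypothetical and purely illustrative:</p>

import java.util.function.Function;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();

// Left table: key = String, value = Long (the value doubles as the foreign key).
KTable<String, Long> left =
    builder.table("left-topic", Consumed.with(Serdes.String(), Serdes.Long()));

// Right table: key = Long, value = Double.
KTable<Long, Double> right =
    builder.table("right-topic", Consumed.with(Serdes.Long(), Serdes.Double()));

// Extract the right-hand key from the left-hand value.
Function<Long, Long> foreignKeyExtractor = leftValue -> leftValue;

// Foreign-key join: the output key is the left key, the output value comes from the ValueJoiner.
KTable<String, String> joined = left.join(
    right,
    foreignKeyExtractor,
    (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue);

// Write the continuously updated join result to an output topic.
joined.toStream().to("joined-topic", Produced.with(Serdes.String(), Serdes.String()));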
                               <p>Detailed behavior:</p>
@@ -2505,7 +2466,7 @@
                                 <li>
                                   <p class="first">The join is <em>key-based</em>, i.e.
                                     with the join predicate: </p>
-                                  <pre><code class="docutils literal"><span class="pre">foreignKeyExtractor.apply(leftRecord.value)</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code></pre>
+                                  <pre><code class="docutils literal"><span class="pre">foreignKeyExtractor.apply(leftRecord.value)</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code></pre>
                                 </li>
                                 <li>
                                   <p class="first">The join will be triggered under the
@@ -2556,8 +2517,7 @@
                 <span class="n">KTable</span><span class="o">&lt;Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;<br>//This </span><span class="o"><span class="o"><span class="n">foreignKeyExtractor</span></span> simply uses the left-value to map to the right-key.<br></span><span class="o"><span class="n">Function</span><span class="o">&lt;Long</span><span class="o">,</s [...]
                 <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span><br>    <span class="o"><span class="n">foreignKeyExtractor,</span></span>
                     <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">"left="</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">", right="</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
-                  <span class="o">);</span>
-                </pre>
+                  <span class="o">);</span></code></pre>
                                 </div>
                               </div>
                               <p>Detailed behavior:</p>
@@ -2565,7 +2525,7 @@
                                 <li>
                                   <p class="first">The join is <em>key-based</em>, i.e.
                                     with the join predicate: </p>
-                                  <pre><code class="docutils literal"><span class="pre">foreignKeyExtractor.apply(leftRecord.value)</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code></pre>
+                                  <pre><code class="docutils literal"><span class="pre">foreignKeyExtractor.apply(leftRecord.value)</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code></pre>
                                 </li>
                                 <li>
                                   <p class="first">The join will be triggered under the
@@ -2754,8 +2714,7 @@
 <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
     <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
     <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                         </div>
                         <table border="1" class="non-scrolling-table width-100-percent docutils">
                             <colgroup>
@@ -2799,8 +2758,7 @@
     <span class="o">},</span>
     <span class="n">Joined</span><span class="o">.</span><span class="na">keySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key */</span>
       <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* left value */</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -2852,8 +2810,7 @@
     <span class="o">},</span>
     <span class="n">Joined</span><span class="o">.</span><span class="na">keySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key */</span>
       <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* left value */</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -3025,8 +2982,7 @@
 <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
     <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
     <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
-  <span class="o">);</span>
-</pre></div>
+  <span class="o">);</span></code></pre></div>
                         </div>
                         <table border="1" class="non-scrolling-table width-100-percent docutils">
                             <colgroup>
@@ -3072,8 +3028,7 @@
       <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul class="last">
@@ -3126,8 +3081,7 @@
       <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
       <span class="o">}</span>
-    <span class="o">});</span>
-</pre></div>
+    <span class="o">});</span></code></pre></div>
                                     </div>
                                     <p>Detailed behavior:</p>
                                     <ul class="last">
@@ -3228,8 +3182,7 @@
 <span class="c1">// The window&#39;s name -- the string parameter -- is used to e.g. name the backing state store.</span>
 <span class="kt">Duration</span> <span class="n">windowSizeMs</span> <span class="o">=</span> <span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">5</span><span class="o">);</span>
 <span class="kt">Duration</span> <span class="n">advanceMs</span> <span class="o">=</span>    <span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span>
-<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">advanceMs</span><span class="o">);</span>
-</pre></div>
+<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">advanceMs</span><span class="o">);</span></code></pre></div>
                         </div>
                         <div class="figure align-center" id="id4">
                             <img class="centered" src="/{{version}}/images/streams-time-windows-hopping.png">
@@ -3279,8 +3232,7 @@ become t=300,000).</span></p>
 <span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">grace</span><span class="o">(</span><span class="n">gracePeriodMs</span><span class="o">);</span>
 
 <span class="c1">// The above is equivalent to the following code:</span>
-<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">grace</span><span class="o">(</span><span class="n">gracePeriodMs</span><span class="o">);</span>
-</pre></div>
+<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">grace</span><span class="o">(</span><span class="n">gracePeriodMs</span><span class="o">);</span></code></pre></div>
                         </div>
                     </div>
                     <div class="section" id="sliding-time-windows">
@@ -3314,8 +3266,7 @@ become t=300,000).</span></p>
 <span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.SessionWindows</span><span class="o">;</span>
 
 <span class="c1">// A session window with an inactivity gap of 5 minutes.</span>
-<span class="n">SessionWindows</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">5</span><span class="o">));</span>
-</pre></div>
+<span class="n">SessionWindows</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">5</span><span class="o">));</span></code></pre></div>
                         </div>
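                        <p>The snippet above only defines the session window itself. As a hedged illustration of how such a window might be used in an aggregation (the topic name "user-events" is hypothetical), a session-windowed count could look roughly like this:</p>

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Windowed;

StreamsBuilder builder = new StreamsBuilder();

// Count events per key per session; records that arrive within 5 minutes of each other
// (for the same key) fall into the same session, and overlapping sessions are merged.
KTable<Windowed<String>, Long> sessionCounts = builder
    .stream("user-events", Consumed.with(Serdes.String(), Serdes.String()))
    .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
    .windowedBy(SessionWindows.with(Duration.ofMinutes(5)))
    .count();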
                         <p>Given the previous session window example, here&#8217;s what would happen on an input stream of six records.
                            When the first three records arrive (upper part of the diagram below), we&#8217;d have three sessions (see lower part)
@@ -3361,16 +3312,14 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
 			    </p>
 			    <p>For example:</p>
 			    <div class="highlight-java"><div class="highlight">
-<pre>
-KGroupedStream&lt;UserId, Event&gt; grouped = ...;
+<pre class="line-numbers"><code class="language-text">KGroupedStream&lt;UserId, Event&gt; grouped = ...;
 grouped
     .windowedBy(TimeWindows.of(Duration.ofHours(1)).grace(ofMinutes(10)))
     .count()
     .suppress(Suppressed.untilWindowCloses(unbounded()))
     .filter((windowedUserId, count) -&gt; count &lt; 3)
     .toStream()
-    .foreach((windowedUserId, count) -&gt; sendAlert(windowedUserId.window(), windowedUserId.key(), count));
-</pre>
+    .foreach((windowedUserId, count) -&gt; sendAlert(windowedUserId.window(), windowedUserId.key(), count));</code></pre>
 			    </div></div>
 			    <p>The key parts of this program are:
 			    <dl>
@@ -3527,8 +3476,7 @@ grouped
     <span class="c1">// Any code for clean up would go here.  This processor instance will not be used again after this call.</span>
   <span class="o">}</span>
 
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
                 </div>
                 <div class="admonition tip">
                     <p><b>Tip</b></p>
@@ -3555,8 +3503,7 @@ grouped
          <span class="o">.</span><span class="na">filter</span><span class="o">((</span><span class="n">PageId</span> <span class="n">pageId</span><span class="o">,</span> <span class="n">Long</span> <span class="n">viewCount</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">viewCount</span> <span class="o">==</span> <span class="mi">1000</span><span class="o">)</span>
          <span class="c1">// PopularPageEmailAlert is your custom processor that implements the</span>
          <span class="c1">// `Processor` interface, see further down below.</span>
-         <span class="o">.</span><span class="na">process</span><span class="o">(()</span> <span class="o">-&gt;</span> <span class="k">new</span> <span class="n">PopularPageEmailAlert</span><span class="o">(</span><span class="s">&quot;alerts@yourcompany.com&quot;</span><span class="o">));</span>
-</pre></div>
+         <span class="o">.</span><span class="na">process</span><span class="o">(()</span> <span class="o">-&gt;</span> <span class="k">new</span> <span class="n">PopularPageEmailAlert</span><span class="o">(</span><span class="s">&quot;alerts@yourcompany.com&quot;</span><span class="o">));</span></code></pre></div>
                 </div>
                 <p>In Java 7:</p>
                 <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Send an email notification when the view count of a page reaches one thousand.</span>
@@ -3575,8 +3522,7 @@ grouped
                <span class="c1">// the `Processor` interface, see further down below.</span>
                <span class="k">return</span> <span class="k">new</span> <span class="n">PopularPageEmailAlert</span><span class="o">(</span><span class="s">&quot;alerts@yourcompany.com&quot;</span><span class="o">);</span>
              <span class="o">}</span>
-           <span class="o">});</span>
-</pre></div>
+           <span class="o">});</span></code></pre></div>
                 </div>
             </div>
         </div>
@@ -3602,14 +3548,12 @@ grouped
 	    <p>For example:
 	    </p>
 	    <div class="highlight-java"><div class="highlight">
-<pre>
-KGroupedTable&lt;String, String&gt; groupedTable = ...;
+<pre class="line-numbers"><code class="language-text">KGroupedTable&lt;String, String&gt; groupedTable = ...;
 groupedTable
     .count()
     .suppress(untilTimeLimit(ofMinutes(5), maxBytes(1_000_000L).emitEarlyWhenFull()))
     .toStream()
-    .foreach((key, count) -&gt; updateCountsDatabase(key, count));
-</pre>
+    .foreach((key, count) -&gt; updateCountsDatabase(key, count));</code></pre>
 	    </div></div>
 	    <p>This configuration ensures that <code>updateCountsDatabase</code> gets events for each <code>key</code> no more than once every 5 minutes.
 	       Note that the latest state for each key has to be buffered in memory for that 5-minute period.
@@ -3676,8 +3620,7 @@ groupedTable
 
 <span class="c1">// Write the stream to the output topic, using explicit key and value serdes,</span>
 <span class="c1">// (thus overriding the defaults in the config properties).</span>
-<span class="n">stream</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;my-stream-output-topic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span cla [...]
-</pre></div>
+<span class="n">stream</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;my-stream-output-topic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span cla [...]
                         </div>
                         <p><strong>Causes data re-partitioning if any of the following conditions is true:</strong></p>
                         <ol class="last arabic simple">
@@ -3743,25 +3686,20 @@ groupedTable
               <li><code class="docutils literal"><span class="pre">org.apache.kafka.streams.scala.Serdes</span></code>: Module that contains all primitive SerDes that can be imported as implicits and a helper to create custom SerDes.</li>
             </ul>
             <p>The library is cross-built with Scala 2.12 and 2.13. To reference the library compiled against Scala {{scalaVersion}}, add the following to your maven <code>pom.xml</code>:</p>
-            <pre class="brush: xml;">
-              &lt;dependency&gt;
+            <pre class="line-numbers"><code class="language-xml">              &lt;dependency&gt;
                 &lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
                 &lt;artifactId&gt;kafka-streams-scala_{{scalaVersion}}&lt;/artifactId&gt;
                 &lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
-              &lt;/dependency&gt;
-            </pre>
+              &lt;/dependency&gt;</code></pre>
             <p>To use the library compiled against Scala 2.12 replace the <code class="docutils literal"><span class="pre">artifactId</span></code> with <code class="docutils literal"><span class="pre">kafka-streams-scala_2.12</span></code>.</p>
             <p>When using SBT, you can reference the correct library as follows:</p>
-            <pre class="brush: scala;">
-              libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "{{fullDotVersion}}"
-            </pre>
+            <pre class="line-numbers"><code class="language-scala">              libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "{{fullDotVersion}}"</code></pre>
             <div class="section" id="scala-dsl-sample-usage">
               <span id="streams-developer-guide-dsl-sample-usage"></span><h3><a class="toc-backref" href="#id28">Sample Usage</a><a class="headerlink" href="#scala-dsl-sample-usage" title="Permalink to this headline"></a></h3>
              <p>The library works by wrapping the original Java abstractions of Kafka Streams within a Scala wrapper object and then using implicit conversions between them. All the Scala abstractions are named identically to the corresponding Java abstraction, but they reside in a different package of the library, e.g. the Scala class <code class="docutils literal"><span class="pre">org.apache.kafka.streams.scala.StreamsBuilder</span></code> is a wrapper around <code class="docutils lit [...]
               <p>Here's an example of the classic WordCount program that uses the Scala <code class="docutils literal"><span class="pre">StreamsBuilder</span></code> that builds an instance of <code class="docutils literal"><span class="pre">KStream</span></code> which is a wrapper around Java <code class="docutils literal"><span class="pre">KStream</span></code>. Then we reify to a table and get a <code class="docutils literal"><span class="pre">KTable</span></code>, which, again is a w [...]
               <p>The net result is that the following code is structured just like using the Java API, but with Scala and with far fewer type annotations compared to using the Java API directly from Scala. The difference in type annotation usage is more obvious when given an example.  Below is an example WordCount implementation that will be used to demonstrate the differences between the Scala and Java API.</p>
-              <pre class="brush: scala;">
-import java.time.Duration
+              <pre class="line-numbers"><code class="language-scala">import java.time.Duration
 import java.util.Properties
 
 import org.apache.kafka.streams.kstream.Materialized
@@ -3794,8 +3732,7 @@ object WordCountApplication extends App {
   sys.ShutdownHookThread {
      streams.close(Duration.ofSeconds(10))
   }
-}
-              </pre>
+}</code></pre>
               <p>In the above code snippet, we don't have to provide any SerDes, <code class="docutils literal"><span class="pre">Grouped</span></code>, <code class="docutils literal"><span class="pre">Produced</span></code>, <code class="docutils literal"><span class="pre">Consumed</span></code> or <code class="docutils literal"><span class="pre">Joined</span></code> explicitly. They will also not be dependent on any SerDes specified in the config.  <strong>In fact all SerDes specified  [...]
             </div>
             <div class="section" id="scala-dsl-implicit-serdes">
@@ -3804,8 +3741,7 @@ object WordCountApplication extends App {
               <p>The library uses the power of <a href="https://docs.scala-lang.org/tour/implicit-parameters.html">Scala implicit parameters</a> to alleviate this concern. As a user you can provide implicit SerDes or implicit values of <code class="docutils literal"><span class="pre">Grouped</span></code>, <code class="docutils literal"><span class="pre">Produced</span></code>, <code class="docutils literal"><span class="pre">Repartitioned</span></code>, <code class="docutils literal"><s [...]
               <p>The library also bundles all implicit SerDes of the commonly used primitive types in a Scala module - so just import the module vals and have all SerDes in scope.  A similar strategy of modular implicits can be adopted for any user-defined SerDes as well (User-defined SerDes are discussed in the next section).</p>
               <p>Here's an example:</p>
-              <pre class="brush: scala;">
-// DefaultSerdes brings into scope implicit SerDes (mostly for primitives)
+              <pre class="line-numbers"><code class="language-scala">// DefaultSerdes brings into scope implicit SerDes (mostly for primitives)
 // that will set up all Grouped, Produced, Consumed and Joined instances.
 // So all APIs below that accept Grouped, Produced, Consumed or Joined will
 // get these instances automatically
@@ -3827,8 +3763,7 @@ val clicksPerRegion: KTable[String, Long] =
     .groupByKey
     .reduce(_ + _)
 
-clicksPerRegion.toStream.to(outputTopic)
-              </pre>
+clicksPerRegion.toStream.to(outputTopic)</code></pre>
               <p>Quite a few things are going on in the above code snippet that may warrant a few lines of elaboration:</p>
               <ol>
                 <li>The code snippet does not depend on any config defined SerDes. In fact any SerDes defined as part of the config will be ignored.</li>
@@ -3840,8 +3775,7 @@ clicksPerRegion.toStream.to(outputTopic)
             <div class="section" id="scala-dsl-user-defined-serdes">
               <span id="streams-developer-guide-dsl-scala-dsl-user-defined-serdes"></span><h3><a class="toc-backref" href="#id30">User-Defined SerDes</a><a class="headerlink" href="#scala-dsl-user-defined-serdes" title="Permalink to this headline"></a></h3>
               <p>When the default primitive SerDes are not enough and we need to define custom SerDes, the usage is exactly the same as above. Just define the implicit SerDes and start building the stream transformation. Here's an example with <code class="docutils literal"><span class="pre">AvroSerde</span></code>:</p>
-              <pre class="brush: scala;">
-// domain object as a case class
+              <pre class="line-numbers"><code class="language-scala">// domain object as a case class
 case class UserClicks(clicks: Long)
 
 // An implicit Serde implementation for the values we want to
@@ -3872,8 +3806,7 @@ val clicksPerRegion: KTable[String, Long] =
    .reduce(_ + _)
 
 // Write the (continuously updating) results to the output topic.
-clicksPerRegion.toStream.to(outputTopic)
-              </pre>
+clicksPerRegion.toStream.to(outputTopic)</code></pre>
               <p>A complete example of user-defined SerDes can be found in a test class within the library.</p>
             </div>
         </div>
@@ -3890,10 +3823,10 @@ clicksPerRegion.toStream.to(outputTopic)
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
   <!--#include virtual="../../../includes/_nav.htm" -->
   <div class="right">
-    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
     <ul class="breadcrumbs">
       <li><a href="/documentation">Documentation</a></li>
       <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/dsl-topology-naming.html b/docs/streams/developer-guide/dsl-topology-naming.html
index e0c1e1f..062755f 100644
--- a/docs/streams/developer-guide/dsl-topology-naming.html
+++ b/docs/streams/developer-guide/dsl-topology-naming.html
@@ -71,20 +71,17 @@
 		 	For example, consider the following simple topology:
 
 			<br/>
-		<pre>
-		KStream&lt;String,String&gt; stream = builder.stream("input");
+		<pre class="line-numbers"><code class="language-text">		KStream&lt;String,String&gt; stream = builder.stream("input");
 		stream.filter((k,v) -> !v.equals("invalid_txn"))
 		      .mapValues((v) -> v.substring(0,5))
-		      .to("output")
-	    </pre>
+		      .to("output")</code></pre>
 
 		</p>
 
 		<p>
 		Running <code>Topology#describe()</code> yields this string:
 
-		<pre>
-		Topologies:
+		<pre class="line-numbers"><code class="language-text">		Topologies:
 		   Sub-topology: 0
 		    Source: KSTREAM-SOURCE-0000000000 (topics: [input])
 		      --> KSTREAM-FILTER-0000000001
@@ -95,8 +92,7 @@
 		      --> KSTREAM-SINK-0000000003
 		      <-- KSTREAM-FILTER-0000000001
 		    Sink: KSTREAM-SINK-0000000003 (topic: output)
-		      <-- KSTREAM-MAPVALUES-0000000002
-		 </pre>
+		      <-- KSTREAM-MAPVALUES-0000000002</code></pre>
 
 		 From this report, you can see what the different operators are, but what is the broader context here?
 		 For example, consider <code>KSTREAM-FILTER-0000000001</code>: we can see that it's a
@@ -116,16 +112,13 @@
 		</p>
 		<p>
 		 Now let's take a look at your topology with all the processors named:
-		<pre>
-		KStream&lt;String,String&gt; stream =
+		<pre class="line-numbers"><code class="language-text">		KStream&lt;String,String&gt; stream =
 		builder.stream("input", Consumed.as("Customer_transactions_input_topic"));
 		stream.filter((k,v) -> !v.equals("invalid_txn"), Named.as("filter_out_invalid_txns"))
 		      .mapValues((v) -> v.substring(0,5), Named.as("Map_values_to_first_6_characters"))
-		      .to("output", Produced.as("Mapped_transactions_output_topic"));
-	     </pre>
+		      .to("output", Produced.as("Mapped_transactions_output_topic"));</code></pre>
 
-		 <pre>
-		 Topologies:
+		 <pre class="line-numbers"><code class="language-text">		 Topologies:
 		   Sub-topology: 0
 		    Source: Customer_transactions_input_topic (topics: [input])
 		      --> filter_out_invalid_txns
@@ -136,8 +129,7 @@
 		      --> Mapped_transactions_output_topic
 		      <-- filter_out_invalid_txns
 		    Sink: Mapped_transactions_output_topic (topic: output)
-		      <-- Map_values_to_first_6_characters
-		 </pre>
+		      <-- Map_values_to_first_6_characters</code></pre>
 
 		Now you can look at the topology description and easily understand what role each processor
 		plays in the topology. But there's another reason for naming your processor nodes when you
@@ -159,16 +151,13 @@
 		 shifting does have implications for topologies with stateful operators or repartition topics. 
 
 		 Here's a different topology with some state:
-		 <pre>
-		 KStream&lt;String,String&gt; stream = builder.stream("input");
+		 <pre class="line-numbers"><code class="language-text">		 KStream&lt;String,String&gt; stream = builder.stream("input");
 		 stream.groupByKey()
 		       .count()
 		       .toStream()
-		       .to("output");
-		 </pre>
+		       .to("output");</code></pre>
 		This topology description yields the following:
-		 <pre>
-			 Topologies:
+		 <pre class="line-numbers"><code class="language-text">			 Topologies:
 			   Sub-topology: 0
 			    Source: KSTREAM-SOURCE-0000000000 (topics: [input])
 			     --> KSTREAM-AGGREGATE-0000000002
@@ -179,25 +168,21 @@
 			     --> KSTREAM-SINK-0000000004
 			     <-- KSTREAM-AGGREGATE-0000000002
 			    Sink: KSTREAM-SINK-0000000004 (topic: output)
-			     <-- KTABLE-TOSTREAM-0000000003
-		 </pre>
+			     <-- KTABLE-TOSTREAM-0000000003</code></pre>
 		 </p>
 		 <p>
 		  You can see from the topology description above that the state store is named
 		  <code>KSTREAM-AGGREGATE-STATE-STORE-0000000002</code>.  Here's what happens when you
 		  add a filter to keep some of the records out of the aggregation:
-		  <pre>
-		   KStream&lt;String,String&gt; stream = builder.stream("input");
+		  <pre class="line-numbers"><code class="language-text">		   KStream&lt;String,String&gt; stream = builder.stream("input");
 		   stream.filter((k,v)-> v !=null && v.length() >= 6 )
 		         .groupByKey()
 		         .count()
 		         .toStream()
-		         .to("output");
-		  </pre>
+		         .to("output");</code></pre>
 
 		  And the corresponding topology:
-		  <pre>
-			  Topologies:
+		  <pre class="line-numbers"><code class="language-text">			  Topologies:
 			    Sub-topology: 0
 			     Source: KSTREAM-SOURCE-0000000000 (topics: [input])
 			      --> KSTREAM-FILTER-0000000001
@@ -211,8 +196,7 @@
 			       --> KSTREAM-SINK-0000000005
 			       <-- KSTREAM-AGGREGATE-0000000003
 			      Sink: KSTREAM-SINK-0000000005 (topic: output)
-			       <-- KTABLE-TOSTREAM-0000000004
-		  </pre>
+			       <-- KTABLE-TOSTREAM-0000000004</code></pre>
 		 </p>
 		<p>
 		Notice that since you've added an operation <em>before</em> the <code>count</code> operation, the state
@@ -232,19 +216,16 @@
 		  But it's worth reiterating the importance of naming these DSL topology operations.
 
 		  Here's how your DSL code looks now giving a specific name to your state store:
-		  <pre>
-		  KStream&lt;String,String&gt; stream = builder.stream("input");
+		  <pre class="line-numbers"><code class="language-text">		  KStream&lt;String,String&gt; stream = builder.stream("input");
 		  stream.filter((k, v) -> v != null && v.length() >= 6)
 		        .groupByKey()
 		        .count(Materialized.as("Purchase_count_store"))
 		        .toStream()
-		        .to("output");
-		  </pre>
+		        .to("output");</code></pre>
 
 		  And here's the topology:
 
-		  <pre>
-		  Topologies:
+		  <pre class="line-numbers"><code class="language-text">		  Topologies:
 		   Sub-topology: 0
 		    Source: KSTREAM-SOURCE-0000000000 (topics: [input])
 		      --> KSTREAM-FILTER-0000000001
@@ -258,8 +239,7 @@
 		      --> KSTREAM-SINK-0000000004
 		      <-- KSTREAM-AGGREGATE-0000000002
 		    Sink: KSTREAM-SINK-0000000004 (topic: output)
-		      <-- KTABLE-TOSTREAM-0000000003
-		  </pre>
+		      <-- KTABLE-TOSTREAM-0000000003</code></pre>
 		</p>
 		<p>
 		  Now, even though you've added processors before your state store, the store name and its changelog
@@ -327,10 +307,10 @@
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
 	<!--#include virtual="../../../includes/_nav.htm" -->
 	<div class="right">
-		<!--#include virtual="../../../includes/_docs_banner.htm" -->
+		<!--//#include virtual="../../../includes/_docs_banner.htm" -->
 		<ul class="breadcrumbs">
 			<li><a href="/documentation">Documentation</a></li>
 			<li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/index.html b/docs/streams/developer-guide/index.html
index eb96f7d..19f638e 100644
--- a/docs/streams/developer-guide/index.html
+++ b/docs/streams/developer-guide/index.html
@@ -68,10 +68,10 @@
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
     <!--#include virtual="../../../includes/_nav.htm" -->
     <div class="right">
-        <!--#include virtual="../../../includes/_docs_banner.htm" -->
+        <!--//#include virtual="../../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
             <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/interactive-queries.html b/docs/streams/developer-guide/interactive-queries.html
index 0701517..cf832b9 100644
--- a/docs/streams/developer-guide/interactive-queries.html
+++ b/docs/streams/developer-guide/interactive-queries.html
@@ -143,8 +143,7 @@
 
 <span class="c1">// Start an instance of the topology</span>
 <span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">builder</span><span class="o">,</span> <span class="n">props</span><span class="o">);</span>
-<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span>
-</pre></div>
+<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span></code></pre></div>
                 </div>
                 <p>After the application has started, you can get access to &#8220;CountsKeyValueStore&#8221; and then query it via the <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java">ReadOnlyKeyValueStore</a> API:</p>
                 <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Get the key-value store CountsKeyValueStore</span>
@@ -166,8 +165,7 @@
 <span class="k">while</span> <span class="o">(</span><span class="n">range</span><span class="o">.</span><span class="na">hasNext</span><span class="o">())</span> <span class="o">{</span>
   <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">next</span> <span class="o">=</span> <span class="n">range</span><span class="o">.</span><span class="na">next</span><span class="o">();</span>
   <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;count for &quot;</span> <span class="o">+</span> <span class="n">next</span><span class="o">.</span><span class="na">key</span> <span class="o">+</span> <span class="s">&quot;: &quot;</span> <span class="o">+</span> <span class="n">next</span><span class="o">.</span><span class="na">value</span><span class=" [...]
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
                 </div>
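                <p>Since the hunk above omits the call that actually obtains the store, here is a self-contained sketch of fetching and querying a local key-value store. The store name "CountsKeyValueStore" follows the example above; the range bounds and lookup key are illustrative only:</p>

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

KafkaStreams streams = ...; // the running application instance, as in the example above

// Fetch the local key-value store by name and type.
ReadOnlyKeyValueStore<String, Long> keyValueStore =
    streams.store(StoreQueryParameters.fromNameAndType(
        "CountsKeyValueStore", QueryableStoreTypes.keyValueStore()));

// Point lookup for a single key.
Long countForHello = keyValueStore.get("hello");

// Range scan; close the iterator when done.
try (KeyValueIterator<String, Long> range = keyValueStore.range("all", "streams")) {
    while (range.hasNext()) {
        KeyValue<String, Long> next = range.next();
        System.out.println("count for " + next.key + ": " + next.value);
    }
}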
                 <p>You can also materialize the results of stateless operators by using the overloaded methods that take a <code class="docutils literal"><span class="pre">queryableStoreName</span></code>
                     as shown in the example below:</p>
@@ -182,8 +180,7 @@
 
 <span class="c1">// do not materialize the result of filtering corresponding to even numbers</span>
 <span class="c1">// this means that these results will not be materialized and cannot be queried.</span>
-<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">oddCounts</span> <span class="o">=</span> <span class="n">numberLines</span><span class="o">.</span><span class="na">filter</span><span class="o">((</span><span class="n">region</span><span class="o">,</span> <span class="n">count</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">( [...]
-</pre></div>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">oddCounts</span> <span class="o">=</span> <span class="n">numberLines</span><span class="o">.</span><span class="na">filter</span><span class="o">((</span><span class="n">region</span><span class="o">,</span> <span class="n">count</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">( [...]
                 </div>
             </div>
             <div class="section" id="querying-local-window-stores">
@@ -203,8 +200,7 @@
 
 <span class="c1">// Create a window state store named &quot;CountsWindowStore&quot; that contains the word counts for every minute</span>
 <span class="n">groupedByWord</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span><span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(<span class="n">Duration</span><span class="o">.</span><span class="na">ofSeconds</span><span class="o">(</span><span class="mi">60</span><span class="o">)))</span>
-  <span class="o">.</span><span class="na">count</span><span class="o">(</span><span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">WindowStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;Co [...]
-</pre></div>
+  <span class="o">.</span><span class="na">count</span><span class="o">(</span><span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">WindowStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;Co [...]
                 </div>
                 <p>After the application has started, you can get access to &#8220;CountsWindowStore&#8221; and then query it via the <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyWindowStore.java">ReadOnlyWindowStore</a> API:</p>
                 <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Get the window store named &quot;CountsWindowStore&quot;</span>
@@ -220,8 +216,7 @@
   <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">next</span> <span class="o">=</span> <span class="n">iterator</span><span class="o">.</span><span class="na">next</span><span class="o">();</span>
   <span class="kt">long</span> <span class="n">windowTimestamp</span> <span class="o">=</span> <span class="n">next</span><span class="o">.</span><span class="na">key</span><span class="o">;</span>
   <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Count of &#39;world&#39; @ time &quot;</span> <span class="o">+</span> <span class="n">windowTimestamp</span> <span class="o">+</span> <span class="s">&quot; is &quot;</span> <span class="o">+</span> <span class="n">next</span><span class="o">.</span><span class="na">value</span><span class="o">);</span>
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
                 </div>
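                <p>The window-store query above is likewise cut off before the <code>fetch</code> call. A minimal sketch of fetching and iterating a local window store (store name "CountsWindowStore" as in the example; the key and the one-hour range are illustrative):</p>

import java.time.Instant;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

KafkaStreams streams = ...; // the running application instance, as in the example above

ReadOnlyWindowStore<String, Long> windowStore =
    streams.store(StoreQueryParameters.fromNameAndType(
        "CountsWindowStore", QueryableStoreTypes.windowStore()));

// Fetch all windows for the key "world" over the last hour.
Instant timeTo = Instant.now();
Instant timeFrom = timeTo.minusSeconds(3600);
try (WindowStoreIterator<Long> iterator = windowStore.fetch("world", timeFrom, timeTo)) {
    while (iterator.hasNext()) {
        KeyValue<Long, Long> next = iterator.next();
        long windowStartTimestamp = next.key; // window start time in epoch milliseconds
        System.out.println("Count of 'world' @ time " + windowStartTimestamp + " is " + next.value);
    }
}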
             </div>
             <div class="section" id="querying-local-custom-state-stores">
@@ -254,8 +249,7 @@
 
 <span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyCustomStoreBuilder</span> <span class="kd">implements</span> <span class="n">StoreBuilder</span> <span class="o">{</span>
   <span class="c1">// implementation of the supplier for MyCustomStore</span>
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
                 </div>
                 <p>To make this store queryable you must:</p>
                 <ul class="simple">
@@ -274,8 +268,7 @@
       <span class="k">return</span> <span class="k">new</span> <span class="n">MyCustomStoreTypeWrapper</span><span class="o">(</span><span class="n">storeProvider</span><span class="o">,</span> <span class="n">storeName</span><span class="o">,</span> <span class="k">this</span><span class="o">);</span>
   <span class="o">}</span>
 
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
                 </div>
                 <p>A wrapper class is required because each instance of a Kafka Streams application may run multiple stream tasks and manage
                     multiple local instances of a particular state store.  The wrapper class hides this complexity and lets you query a &#8220;logical&#8221;
@@ -312,8 +305,7 @@
     <span class="k">return</span> <span class="n">value</span><span class="o">.</span><span class="na">orElse</span><span class="o">(</span><span class="kc">null</span><span class="o">);</span>
   <span class="o">}</span>
 
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
                 </div>
                 <p>You can now find and query your custom store:</p>
                 <div class="highlight-java"><div class="highlight"><pre><span></span>
@@ -335,8 +327,7 @@
 <span class="c1">// Get access to the custom store</span>
 <span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">store</span> <span class="o">=</span> <span class="n">streams</span><span class="o">.</span><span class="na">store</span><span class="o">(</span><span class="s">&quot;the-custom-store&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="n">MyCustomStoreType</span><span c [...]
 <span class="c1">// Query the store</span>
-<span class="n">String</span> <span class="n">value</span> <span class="o">=</span> <span class="n">store</span><span class="o">.</span><span class="na">read</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">);</span>
-</pre></div>
+<span class="n">String</span> <span class="n">value</span> <span class="o">=</span> <span class="n">store</span><span class="o">.</span><span class="na">read</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">);</span></code></pre></div>
                 </div>
             </div>
         </div>
@@ -410,8 +401,7 @@ interactive queries</span></p>
 <span class="c1">// fictitious, but we provide end-to-end demo applications (such as KafkaMusicExample)</span>
 <span class="c1">// that showcase how to implement such a service to get you started.</span>
 <span class="n">MyRPCService</span> <span class="n">rpcService</span> <span class="o">=</span> <span class="o">...;</span>
-<span class="n">rpcService</span><span class="o">.</span><span class="na">listenAt</span><span class="o">(</span><span class="n">rpcEndpoint</span><span class="o">);</span>
-</pre></div>
+<span class="n">rpcService</span><span class="o">.</span><span class="na">listenAt</span><span class="o">(</span><span class="n">rpcEndpoint</span><span class="o">);</span></code></pre></div>
                 </div>
             </div>
             <div class="section" id="discovering-and-accessing-application-instances-and-their-local-state-stores">
@@ -460,8 +450,7 @@ interactive queries</span></p>
         <span class="k">return</span> <span class="n">http</span><span class="o">.</span><span class="na">getLong</span><span class="o">(</span><span class="n">url</span><span class="o">);</span>
     <span class="o">})</span>
     <span class="o">.</span><span class="na">filter</span><span class="o">(</span><span class="n">s</span> <span class="o">-&gt;</span> <span class="n">s</span> <span class="o">!=</span> <span class="kc">null</span><span class="o">)</span>
-    <span class="o">.</span><span class="na">findFirst</span><span class="o">();</span>
-</pre></div>
+    <span class="o">.</span><span class="na">findFirst</span><span class="o">();</span></code></pre></div>
                 </div>
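                <p>The discovery snippet above is also truncated here. As a hedged illustration of looking up which application instances host a given store or key (the store name "word-count" and the key "hello" are hypothetical), the metadata APIs can be used roughly as follows:</p>

import java.util.Collection;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.state.StreamsMetadata;

KafkaStreams streams = ...; // the running application instance

// All application instances (local and remote) that host a partition of the "word-count" store.
Collection<StreamsMetadata> wordCountHosts = streams.allMetadataForStore("word-count");
for (StreamsMetadata metadata : wordCountHosts) {
    System.out.println("host=" + metadata.host() + " port=" + metadata.port());
}

// The specific instance that actively hosts the partition holding a given key.
KeyQueryMetadata keyMetadata =
    streams.queryMetadataForKey("word-count", "hello", Serdes.String().serializer());
System.out.println("active host for 'hello': " + keyMetadata.getActiveHost());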
                 <p>At this point the full state of the application is interactively queryable:</p>
                 <ul class="simple">
@@ -490,10 +479,10 @@ interactive queries</span></p>
 
                 <!--#include virtual="../../../includes/_header.htm" -->
                 <!--#include virtual="../../../includes/_top.htm" -->
-                    <div class="content documentation documentation--current">
+                    <div class="content documentation ">
                     <!--#include virtual="../../../includes/_nav.htm" -->
                     <div class="right">
-                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
                     <ul class="breadcrumbs">
                     <li><a href="/documentation">Documentation</a></li>
                     <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/manage-topics.html b/docs/streams/developer-guide/manage-topics.html
index 1dd7cfc..d65e375 100644
--- a/docs/streams/developer-guide/manage-topics.html
+++ b/docs/streams/developer-guide/manage-topics.html
@@ -89,10 +89,10 @@
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
   <!--#include virtual="../../../includes/_nav.htm" -->
   <div class="right">
-    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
     <ul class="breadcrumbs">
       <li><a href="/documentation">Documentation</a></li>
       <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/memory-mgmt.html b/docs/streams/developer-guide/memory-mgmt.html
index d4c2bef..91a53c4 100644
--- a/docs/streams/developer-guide/memory-mgmt.html
+++ b/docs/streams/developer-guide/memory-mgmt.html
@@ -82,8 +82,7 @@
         processing topology:</p>
       <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Enable record cache of size 10 MB.</span>
 <span class="n">Properties</span> <span class="n">props</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
-<span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">10</span> <span class="o">*</span> <span class="mi">1024</span> <span class="o">*</span> <span class="mi">1024L</span><span class="o">);</span>
-</pre></div>
+<span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">10</span> <span class="o">*</span> <span class="mi">1024</span> <span class="o">*</span> <span class="mi">1024L</span><span class="o">);</span></code></pre></div>
       </div>
       <p>This parameter controls the number of bytes allocated for caching. Specifically, for a processor topology instance with
         <code class="docutils literal"><span class="pre">T</span></code> threads and <code class="docutils literal"><span class="pre">C</span></code> bytes allocated for caching, each thread will have an even <code class="docutils literal"><span class="pre">C/T</span></code> bytes to construct its own
@@ -107,8 +106,7 @@
           <blockquote>
             <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Disable record cache</span>
 <span class="n">Properties</span> <span class="n">props</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
-<span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">0</span><span class="o">);</span>
-</pre></div>
+<span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">0</span><span class="o">);</span></code></pre></div>
             </div>
               <p>Turning off caching might result in high write traffic for the underlying RocksDB store.
                 With default settings caching is enabled within Kafka Streams but RocksDB caching is disabled.
@@ -123,8 +121,7 @@
 <span class="c1">// Enable record cache of size 10 MB.</span>
 <span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">10</span> <span class="o">*</span> <span class="mi">1024</span> <span class="o">*</span> <span class="mi">1024L</span><span class="o">);</span>
 <span class="c1">// Set commit interval to 1 second.</span>
-<span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">COMMIT_INTERVAL_MS_CONFIG</span><span class="o">,</span> <span class="mi">1000</span><span class="o">);</span>
-</pre></div>
+<span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">COMMIT_INTERVAL_MS_CONFIG</span><span class="o">,</span> <span class="mi">1000</span><span class="o">);</span></code></pre></div>
             </div>
             </div></blockquote>
         </li>
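
For reference, a minimal sketch (assuming the standard StreamsConfig constants used throughout this guide) that sets the two knobs discussed above together; the thread count, 10 MB cache, and 1 second commit interval are illustrative values only:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class CacheTuningSketch {                       // hypothetical helper class
        public static Properties cacheTunedConfig() {
            Properties props = new Properties();
            // T = 2 stream threads share the record cache evenly, so each gets C / T bytes.
            props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
            // C = 10 MB of record cache for the whole Streams instance.
            props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
            // Flush caches and commit offsets at least every second.
            props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
            return props;
        }
    }

With these values each of the two threads buffers at most 5 MB before records are forwarded downstream or flushed by the commit.
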
@@ -164,8 +161,7 @@
     <span class="n">Stores</span><span class="o">.</span><span class="na">persistentKeyValueStore</span><span class="o">(</span><span class="s">&quot;Counts&quot;</span><span class="o">),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
-  <span class="o">.</span><span class="na">withCachingEnabled</span><span class="o">()</span>
-</pre></div>
+  <span class="o">.</span><span class="na">withCachingEnabled</span><span class="o">()</span></code></pre></div>
       </div>
     </div>
     <div class="section" id="rocksdb">
@@ -207,9 +203,9 @@
     <span class="o">}</span>
       </div>
         <sup id="fn1">1. INDEX_FILTER_BLOCK_RATIO can be used to set a fraction of the block cache to set aside for "high priority" (aka index and filter) blocks, preventing them from being evicted by data blocks. See the full signature of the <a class="reference external" href="https://github.com/facebook/rocksdb/blob/master/java/src/main/java/org/rocksdb/LRUCache.java#L72">LRUCache constructor</a>.
-                        NOTE: the boolean parameter in the cache constructor lets you control whether the cache should enforce a strict memory limit by failing the read or iteration in the rare cases where it might go larger than its capacity. Due to a
-                        <a class="reference external" href="https://github.com/facebook/rocksdb/issues/6247">bug in RocksDB</a>, this option cannot be used
-                        if the write buffer memory is also counted against the cache. If you set this to true, you should NOT pass the cache in to the <code>WriteBufferManager</code> and just control the write buffer and cache memory separately.</sup>
+          NOTE: the boolean parameter in the cache constructor lets you control whether the cache should enforce a strict memory limit by failing the read or iteration in the rare cases where it might go larger than its capacity. Due to a
+          <a class="reference external" href="https://github.com/facebook/rocksdb/issues/6247">bug in RocksDB</a>, this option cannot be used
+          if the write buffer memory is also counted against the cache. If you set this to true, you should NOT pass the cache in to the <code>WriteBufferManager</code> and just control the write buffer and cache memory separately.</sup>
         <br>
         <sup id="fn2">2. This must be set in order for INDEX_FILTER_BLOCK_RATIO to take effect (see footnote 1) as described in the <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks">RocksDB docs</a></sup>
         <br>
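
To put the two footnotes in context, here is a hedged sketch of a bounded-memory RocksDBConfigSetter along the lines this section describes; the class name and the cache/memtable sizes are placeholders, and the strict-capacity flag is deliberately left false because the write buffers are charged to the same cache:

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.Cache;
    import org.rocksdb.LRUCache;
    import org.rocksdb.Options;
    import org.rocksdb.WriteBufferManager;

    public class BoundedMemoryRocksDBConfigSketch implements RocksDBConfigSetter {

        private static final long TOTAL_OFF_HEAP_MEMORY = 64 * 1024 * 1024L;  // placeholder
        private static final long TOTAL_MEMTABLE_MEMORY = 16 * 1024 * 1024L;  // placeholder
        private static final double INDEX_FILTER_BLOCK_RATIO = 0.1;           // see footnote 1

        // Shared across all store instances; strictCapacityLimit = false because the
        // write buffers below are also counted against this cache (see footnote 1).
        private static final Cache CACHE =
            new LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, INDEX_FILTER_BLOCK_RATIO);
        private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(TOTAL_MEMTABLE_MEMORY, CACHE);

        @Override
        public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
            final BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
            tableConfig.setBlockCache(CACHE);
            // Required for INDEX_FILTER_BLOCK_RATIO to take effect (see footnote 2).
            tableConfig.setCacheIndexAndFilterBlocks(true);
            options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
            options.setTableFormatConfig(tableConfig);
        }

        @Override
        public void close(final String storeName, final Options options) {
            // The cache and write buffer manager are static and shared; do not close them per store.
        }
    }

Such a setter is registered through the rocksdb.config.setter Streams configuration.
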
@@ -253,10 +249,10 @@
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
   <!--#include virtual="../../../includes/_nav.htm" -->
   <div class="right">
-    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
     <ul class="breadcrumbs">
       <li><a href="/documentation">Documentation</a></li>
       <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/processor-api.html b/docs/streams/developer-guide/processor-api.html
index d314c7d..a5c1335 100644
--- a/docs/streams/developer-guide/processor-api.html
+++ b/docs/streams/developer-guide/processor-api.html
@@ -158,8 +158,7 @@
       <span class="c1">// Note: Do not close any StateStores as these are managed by the library</span>
   <span class="o">}</span>
 
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
             </div>
             <div class="admonition note">
                 <p><b>Note</b></p>
@@ -246,8 +245,7 @@
     <span class="n">Stores</span><span class="o">.</span><span class="na">persistentKeyValueStore</span><span class="o">(</span><span class="s">&quot;persistent-counts&quot;</span><span class="o">),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">());</span>
-<span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">countStore</span> <span class="o">=</span> <span class="n">countStoreSupplier</span><span class="o">.</span><span class="na">build</span><span class="o">();</span>
-</pre></div>
+<span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">countStore</span> <span class="o">=</span> <span class="n">countStoreSupplier</span><span class="o">.</span><span class="na">build</span><span class="o">();</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -281,8 +279,7 @@
     <span class="n">Stores</span><span class="o">.</span><span class="na">inMemoryKeyValueStore</span><span class="o">(</span><span class="s">&quot;inmemory-counts&quot;</span><span class="o">),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">());</span>
-<span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">countStore</span> <span class="o">=</span> <span class="n">countStoreSupplier</span><span class="o">.</span><span class="na">build</span><span class="o">();</span>
-</pre></div>
+<span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">countStore</span> <span class="o">=</span> <span class="n">countStoreSupplier</span><span class="o">.</span><span class="na">build</span><span class="o">();</span></code></pre></div>
                             </div>
                         </td>
                     </tr>
@@ -327,8 +324,7 @@
   <span class="n">Stores</span><span class="o">.</span><span class="na">persistentKeyValueStore</span><span class="o">(</span><span class="s">&quot;Counts&quot;</span><span class="o">),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
-  <span class="o">.</span><span class="na">withLoggingDisabled</span><span class="o">();</span> <span class="c1">// disable backing up the store to a changelog topic</span>
-</pre></div>
+  <span class="o">.</span><span class="na">withLoggingDisabled</span><span class="o">();</span> <span class="c1">// disable backing up the store to a changelog topic</span></code></pre></div>
                 </div>
                 <div class="admonition attention">
                     <p class="first admonition-title">Attention</p>
@@ -348,8 +344,7 @@
   <span class="n">Stores</span><span class="o">.</span><span class="na">persistentKeyValueStore</span><span class="o">(</span><span class="s">&quot;Counts&quot;</span><span class="o">),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
     <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
-  <span class="o">.</span><span class="na">withLoggingEnabled</span><span class="o">(</span><span class="n">changlogConfig</span><span class="o">);</span> <span class="c1">// enable changelogging, with custom changelog settings</span>
-</pre></div>
+  <span class="o">.</span><span class="na">withLoggingEnabled</span><span class="o">(</span><span class="n">changlogConfig</span><span class="o">);</span> <span class="c1">// enable changelogging, with custom changelog settings</span></code></pre></div>
                 </div>
             </div>
             <div class="section" id="timestamped-state-stores">
@@ -398,8 +393,7 @@
 
     <span class="c1">// add a header to the elements</span>
     <span class="n">context()</span><span class="o">.</span><span class="na">headers</span><span class="o">()</span><span class="o">.</span><span class="na">add</span><span class="o">.</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">,</span> <span class="s">&quot;key&quot;</span>
-<span class="o">}</span>
-</pre></div>
+<span class="o">}</span></code></pre></div>
         </div>
         <div class="section" id="connecting-processors-and-state-stores">
             <h2><a class="toc-backref" href="#id8">Connecting Processors and State Stores</a><a class="headerlink" href="#connecting-processors-and-state-stores" title="Permalink to this headline"></a></h2>
@@ -409,8 +403,7 @@
                 to generate input data streams into the topology, and sink processors with the specified Kafka topics to generate
                 output data streams out of the topology.</p>
             <p>Here is an example implementation:</p>
-            <pre class="brush: java">
-                Topology builder = new Topology();
+            <pre class="line-numbers"><code class="language-java">                Topology builder = new Topology();
                 // add the source processor node that takes Kafka topic "source-topic" as input
                 builder.addSource("Source", "source-topic")
                     // add the WordCountProcessor node which takes the source processor as its upstream processor
@@ -419,8 +412,7 @@
                     .addStateStore(countStoreBuilder, "Process")
                     // add the sink processor node that takes Kafka topic "sink-topic" as output
                     // and the WordCountProcessor node as its upstream processor
-                    .addSink("Sink", "sink-topic", "Process");
-            </pre>
+                    .addSink("Sink", "sink-topic", "Process");</code></pre>
             <p>Here is a quick explanation of this example:</p>
             <ul class="simple">
                 <li>A source processor node named <code class="docutils literal"><span class="pre">&quot;Source&quot;</span></code> is added to the topology using the <code class="docutils literal"><span class="pre">addSource</span></code> method, with one Kafka topic
@@ -437,8 +429,7 @@
                 This can be done by implementing <code class="docutils literal"><span class="pre">ConnectedStoreProvider#stores()</span></code> on the <code class="docutils literal"><span class="pre">ProcessorSupplier</span></code>
                 instead of calling <code class="docutils literal"><span class="pre">Topology#addStateStore()</span></code>, like this:
             </p>
-            <pre class="brush: java">
-                Topology builder = new Topology();
+            <pre class="line-numbers"><code class="language-java">                Topology builder = new Topology();
                 // add the source processor node that takes Kafka "source-topic" as input
                 builder.addSource("Source", "source-topic")
                     // add the WordCountProcessor node which takes the source processor as its upstream processor.
@@ -453,8 +444,7 @@
                     }, "Source")
                     // add the sink processor node that takes Kafka topic "sink-topic" as output
                     // and the WordCountProcessor node as its upstream processor
-                    .addSink("Sink", "sink-topic", "Process");
-            </pre>
+                    .addSink("Sink", "sink-topic", "Process");</code></pre>
             <p>This allows for a processor to "own" state stores, effectively encapsulating their usage from the user wiring the topology.
                 Multiple processors that share a state store may provide the same store with this technique, as long as the <code class="docutils literal"><span class="pre">StoreBuilder</span></code> is the same <code class="docutils literal"><span class="pre">instance</span></code>.</p>
             <p>In these topologies, the <code class="docutils literal"><span class="pre">&quot;Process&quot;</span></code> stream processor node is considered a downstream processor of the <code class="docutils literal"><span class="pre">&quot;Source&quot;</span></code> node, and an
@@ -483,10 +473,10 @@
 
                 <!--#include virtual="../../../includes/_header.htm" -->
                 <!--#include virtual="../../../includes/_top.htm" -->
-                    <div class="content documentation documentation--current">
+                    <div class="content documentation ">
                     <!--#include virtual="../../../includes/_nav.htm" -->
                     <div class="right">
-                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
                     <ul class="breadcrumbs">
                     <li><a href="/documentation">Documentation</a></li>
                     <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/running-app.html b/docs/streams/developer-guide/running-app.html
index fccb5090..6a9128a 100644
--- a/docs/streams/developer-guide/running-app.html
+++ b/docs/streams/developer-guide/running-app.html
@@ -53,8 +53,7 @@
               <p>You can package your Java application as a fat JAR file and then start the application like this:</p>
               <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Start the application in class `com.example.MyStreamsApp`</span>
 <span class="c1"># from the fat JAR named `path-to-app-fatjar.jar`.</span>
-$ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
-</pre></div>
+$ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp</code></pre></div>
               </div>
               <p>When you start your application you are launching a Kafka Streams instance of your application. You can run multiple
                   instances of your application. A common scenario is that there are multiple instances of your application running in
@@ -116,7 +115,7 @@ $ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
                       <a class="reference internal" href="config-streams.html#acceptable-recovery-lag"><span class="std std-ref"><code>acceptable.recovery.lag</code></span></a>, if one exists. This means that
                       most of the time, a task migration will <b>not</b> result in downtime for that task. It will remain active on the instance that's already caught up, while the instance that it's being
                       migrated to works on restoring the state. Streams will <a class="reference internal" href="config-streams.html#probing-rebalance-interval-ms"><span class="std std-ref">regularly probe</span></a> for warmup tasks that have finished restoring and transition them to active tasks when ready.
-                   </p>
+                  </p>
                   <p>
                       Note, the one exception to this task availability is if none of the instances have a caught up version of that task. In that case, we have no choice but to assign the active
                       task to an instance that is not caught up and will have to block further processing on restoration of the task's state from the changelog. If high availability is important
@@ -151,10 +150,10 @@ $ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
 
                 <!--#include virtual="../../../includes/_header.htm" -->
                 <!--#include virtual="../../../includes/_top.htm" -->
-                    <div class="content documentation documentation--current">
+                    <div class="content documentation ">
                     <!--#include virtual="../../../includes/_nav.htm" -->
                     <div class="right">
-                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
                     <ul class="breadcrumbs">
                     <li><a href="/documentation">Documentation</a></li>
                     <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/security.html b/docs/streams/developer-guide/security.html
index bd05fc5..e2411c1 100644
--- a/docs/streams/developer-guide/security.html
+++ b/docs/streams/developer-guide/security.html
@@ -105,8 +105,7 @@ ssl.truststore.location<span class="o">=</span>/etc/security/tls/kafka.client.tr
 ssl.truststore.password<span class="o">=</span>test1234
 ssl.keystore.location<span class="o">=</span>/etc/security/tls/kafka.client.keystore.jks
 ssl.keystore.password<span class="o">=</span>test1234
-ssl.key.password<span class="o">=</span>test1234
-</pre></div>
+ssl.key.password<span class="o">=</span>test1234</code></pre></div>
             </div>
             <p>Configure these settings in the application for your <code class="docutils literal"><span class="pre">Properties</span></code> instance. These settings will encrypt any
                 data-in-transit that is being read from or written to Kafka, and your application will authenticate itself against the
@@ -127,8 +126,7 @@ ssl.key.password<span class="o">=</span>test1234
 <span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_TRUSTSTORE_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
 <span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEYSTORE_LOCATION_CONFIG</span><span class="o">,</span> <span class="s">&quot;/etc/security/tls/kafka.client.keystore.jks&quot;</span><span class="o">);</span>
 <span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEYSTORE_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEY_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
-</pre></div>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEY_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span></code></pre></div>
             </div>
             <p>If you incorrectly configure a security setting in your application, it will fail at runtime, typically right after you
                 start it.  For example, if you enter an incorrect password for the <code class="docutils literal"><span class="pre">ssl.keystore.password</span></code> setting, an error message
@@ -139,8 +137,7 @@ Exception in thread <span class="s2">&quot;main&quot;</span> org.apache.kafka.co
 Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException:
    java.io.IOException: Keystore was tampered with, or password was incorrect
 <span class="o">[</span>...snip...<span class="o">]</span>
-Caused by: java.security.UnrecoverableKeyException: Password verification failed
-</pre></div>
+Caused by: java.security.UnrecoverableKeyException: Password verification failed</code></pre></div>
             </div>
             <p>Monitor your Kafka Streams application log files for such error messages to spot any misconfigured applications quickly.</p>
 </div>
@@ -157,10 +154,10 @@ Caused by: java.security.UnrecoverableKeyException: Password verification failed
 
                 <!--#include virtual="../../../includes/_header.htm" -->
                 <!--#include virtual="../../../includes/_top.htm" -->
-                    <div class="content documentation documentation--current">
+                    <div class="content documentation ">
                     <!--#include virtual="../../../includes/_nav.htm" -->
                     <div class="right">
-                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
                     <ul class="breadcrumbs">
                     <li><a href="/documentation">Documentation</a></li>
                     <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/testing.html b/docs/streams/developer-guide/testing.html
index 5be2255..e886b81 100644
--- a/docs/streams/developer-guide/testing.html
+++ b/docs/streams/developer-guide/testing.html
@@ -51,14 +51,12 @@
                 To test a Kafka Streams application, Kafka provides a test-utils artifact that can be added as regular
                 dependency to your test code base. Example <code>pom.xml</code> snippet when using Maven:
             </p>
-            <pre>
-&lt;dependency&gt;
+            <pre class="line-numbers"><code class="language-text">&lt;dependency&gt;
     &lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
     &lt;artifactId&gt;kafka-streams-test-utils&lt;/artifactId&gt;
     &lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
     &lt;scope&gt;test&lt;/scope&gt;
-&lt;/dependency&gt;
-    </pre>
+&lt;/dependency&gt;</code></pre>
         </div>
         <div class="section" id="testing-topologytestdriver">
             <h2><a class="toc-backref" href="#testing-topologytestdriver" title="Permalink to this headline">Testing a
@@ -73,8 +71,7 @@
                 You can use the test driver to verify that your specified processor topology computes the correct result
                 with the manually piped in data records.
                 The test driver captures the result records and allows you to query its embedded state stores.
-            <pre>
-// Processor API
+            <pre class="line-numbers"><code class="language-text">// Processor API
 Topology topology = new Topology();
 topology.addSource("sourceProcessor", "input-topic");
 topology.addProcessor("processor", ..., "sourceProcessor");
@@ -89,16 +86,13 @@ Topology topology = builder.build();
 Properties props = new Properties();
 props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
 props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
-TopologyTestDriver testDriver = new TopologyTestDriver(topology, props);
-        </pre>
+TopologyTestDriver testDriver = new TopologyTestDriver(topology, props);</code></pre>
             <p>
                 With the test driver you can create <code>TestInputTopic</code> giving topic name and the corresponding serializers.
                 <code>TestInputTopic</code> provides various methods to pipe new message values, keys and values, or list of KeyValue objects.
             </p>
-            <pre>
-TestInputTopic&lt;String, Long&gt; inputTopic = testDriver.createInputTopic("input-topic", stringSerde.serializer(), longSerde.serializer());
-inputTopic.pipeInput("key", 42L);
-        </pre>
+            <pre class="line-numbers"><code class="language-text">TestInputTopic&lt;String, Long&gt; inputTopic = testDriver.createInputTopic("input-topic", stringSerde.serializer(), longSerde.serializer());
+inputTopic.pipeInput("key", 42L);</code></pre>
             <p>
                 To verify the output, you can use <code>TestOutputTopic</code>
                 where you configure the topic and the corresponding deserializers during initialization.
@@ -106,34 +100,26 @@ inputTopic.pipeInput("key", 42L);
                 For example, you can validate returned <code>KeyValue</code> with standard assertions
                 if you only care about the key and value, but not the timestamp of the result record.
             </p>
-            <pre>
-TestOutputTopic&lt;String, Long&gt; outputTopic = testDriver.createOutputTopic("result-topic", stringSerde.deserializer(), longSerde.deserializer());
-assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue&lt;&gt;("key", 42L)));
-        </pre>
+            <pre class="line-numbers"><code class="language-text">TestOutputTopic&lt;String, Long&gt; outputTopic = testDriver.createOutputTopic("result-topic", stringSerde.deserializer(), longSerde.deserializer());
+assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue&lt;&gt;("key", 42L)));</code></pre>
             <p>
                 <code>TopologyTestDriver</code> supports punctuations, too.
                 Event-time punctuations are triggered automatically based on the processed records' timestamps.
                 Wall-clock-time punctuations can also be triggered by advancing the test driver's wall-clock-time (the
                 driver mocks wall-clock-time internally to give users control over it).
             </p>
-            <pre>
-testDriver.advanceWallClockTime(Duration.ofSeconds(20));
-        </pre>
+            <pre class="line-numbers"><code class="language-text">testDriver.advanceWallClockTime(Duration.ofSeconds(20));</code></pre>
             <p>
                 Additionally, you can access state stores via the test driver before or after a test.
                 Accessing stores before a test is useful to pre-populate a store with some initial values.
                 After data was processed, expected updates to the store can be verified.
             </p>
-            <pre>
-KeyValueStore store = testDriver.getKeyValueStore("store-name");
-        </pre>
+            <pre class="line-numbers"><code class="language-text">KeyValueStore store = testDriver.getKeyValueStore("store-name");</code></pre>
             <p>
                Note that you should always close the test driver at the end to make sure all resources are released
                 properly.
             </p>
-            <pre>
-testDriver.close();
-        </pre>
+            <pre class="line-numbers"><code class="language-text">testDriver.close();</code></pre>
 
             <h3>Example</h3>
             <p>
@@ -142,8 +128,7 @@ testDriver.close();
                 While processing, no output is generated, but only the store is updated.
                 Output is only sent downstream based on event-time and wall-clock punctuations.
             </p>
-            <pre>
-private TopologyTestDriver testDriver;
+            <pre class="line-numbers"><code class="language-text">private TopologyTestDriver testDriver;
 private TestInputTopic&lt;String, Long&gt; inputTopic;
 private TestOutputTopic&lt;String, Long&gt; outputTopic;
 private KeyValueStore&lt;String, Long&gt; store;
@@ -277,8 +262,7 @@ public class CustomMaxAggregator implements Processor&lt;String, Long&gt; {
 
     @Override
     public void close() {}
-}
-        </pre>
+}</code></pre>
         </div>
         <div class="section" id="unit-testing-processors">
             <h2>
@@ -296,28 +280,23 @@ public class CustomMaxAggregator implements Processor&lt;String, Long&gt; {
             <b>Construction</b>
             <p>
                 To begin with, instantiate your processor and initialize it with the mock context:
-            <pre>
-final Processor processorUnderTest = ...;
+            <pre class="line-numbers"><code class="language-text">final Processor processorUnderTest = ...;
 final MockProcessorContext context = new MockProcessorContext();
-processorUnderTest.init(context);
-                </pre>
+processorUnderTest.init(context);</code></pre>
             If you need to pass configuration to your processor or set the default serdes, you can create the mock with
             config:
-            <pre>
-final Properties props = new Properties();
+            <pre class="line-numbers"><code class="language-text">final Properties props = new Properties();
 props.put(StreamsConfig.APPLICATION_ID_CONFIG, "unit-test");
 props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "");
 props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
 props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
 props.put("some.other.config", "some config value");
-final MockProcessorContext context = new MockProcessorContext(props);
-                </pre>
+final MockProcessorContext context = new MockProcessorContext(props);</code></pre>
             </p>
             <b>Captured data</b>
             <p>
                 The mock will capture any values that your processor forwards. You can make assertions on them:
-            <pre>
-processorUnderTest.process("key", "value");
+            <pre class="line-numbers"><code class="language-text">processorUnderTest.process("key", "value");
 
 final Iterator&lt;CapturedForward&gt; forwarded = context.forwarded().iterator();
 assertEquals(forwarded.next().keyValue(), new KeyValue&lt;&gt;(..., ...));
@@ -326,34 +305,27 @@ assertFalse(forwarded.hasNext());
 // you can reset forwards to clear the captured data. This may be helpful in constructing longer scenarios.
 context.resetForwards();
 
-assertEquals(context.forwarded().size(), 0);
-            </pre>
+assertEquals(context.forwarded().size(), 0);</code></pre>
             If your processor forwards to specific child processors, you can query the context for captured data by
             child name:
-            <pre>
-final List&lt;CapturedForward&gt; captures = context.forwarded("childProcessorName");
-            </pre>
+            <pre class="line-numbers"><code class="language-text">final List&lt;CapturedForward&gt; captures = context.forwarded("childProcessorName");</code></pre>
             The mock also captures whether your processor has called <code>commit()</code> on the context:
-            <pre>
-assertTrue(context.committed());
+            <pre class="line-numbers"><code class="language-text">assertTrue(context.committed());
 
 // commit captures can also be reset.
 context.resetCommit();
 
-assertFalse(context.committed());
-            </pre>
+assertFalse(context.committed());</code></pre>
             </p>
             <b>Setting record metadata</b>
             <p>
                 In case your processor logic depends on the record metadata (topic, partition, offset, or timestamp),
                 you can set them on the context, either all together or individually:
-            <pre>
-context.setRecordMetadata("topicName", /*partition*/ 0, /*offset*/ 0L, /*timestamp*/ 0L);
+            <pre class="line-numbers"><code class="language-text">context.setRecordMetadata("topicName", /*partition*/ 0, /*offset*/ 0L, /*timestamp*/ 0L);
 context.setTopic("topicName");
 context.setPartition(0);
 context.setOffset(0L);
-context.setTimestamp(0L);
-                </pre>
+context.setTimestamp(0L);</code></pre>
             Once these are set, the context will continue returning the same values, until you set new ones.
             </p>
             <b>State stores</b>
@@ -362,8 +334,7 @@ context.setTimestamp(0L);
                 You're encouraged to use a simple in-memory store of the appropriate type (KeyValue, Windowed, or
                 Session), since the mock context does <i>not</i> manage changelogs, state directories, etc.
             </p>
-            <pre>
-final KeyValueStore&lt;String, Integer&gt; store =
+            <pre class="line-numbers"><code class="language-text">final KeyValueStore&lt;String, Integer&gt; store =
     Stores.keyValueStoreBuilder(
             Stores.inMemoryKeyValueStore("myStore"),
             Serdes.String(),
@@ -372,21 +343,18 @@ final KeyValueStore&lt;String, Integer&gt; store =
         .withLoggingDisabled() // Changelog is not supported by MockProcessorContext.
         .build();
 store.init(context, store);
-context.register(store, /*deprecated parameter*/ false, /*parameter unused in mock*/ null);
-            </pre>
+context.register(store, /*deprecated parameter*/ false, /*parameter unused in mock*/ null);</code></pre>
             <b>Verifying punctuators</b>
             <p>
                 Processors can schedule punctuators to handle periodic tasks.
                 The mock context does <i>not</i> automatically execute punctuators, but it does capture them to
                 allow you to unit test them as well:
-            <pre>
-final MockProcessorContext.CapturedPunctuator capturedPunctuator = context.scheduledPunctuators().get(0);
+            <pre class="line-numbers"><code class="language-text">final MockProcessorContext.CapturedPunctuator capturedPunctuator = context.scheduledPunctuators().get(0);
 final long interval = capturedPunctuator.getIntervalMs();
 final PunctuationType type = capturedPunctuator.getType();
 final boolean cancelled = capturedPunctuator.cancelled();
 final Punctuator punctuator = capturedPunctuator.getPunctuator();
-punctuator.punctuate(/*timestamp*/ 0L);
-                </pre>
+punctuator.punctuate(/*timestamp*/ 0L);</code></pre>
             If you need to write tests involving automatic firing of scheduled punctuators, we recommend creating a
             simple topology with your processor and using the <a href="testing.html#testing-topologytestdriver"><code>TopologyTestDriver</code></a>.
             </p>
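
Building on that recommendation, a hedged sketch of exercising a wall-clock punctuator through TopologyTestDriver instead of the mock context; MyPunctuatingProcessor and the topic names are placeholders:

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.TestInputTopic;
    import org.apache.kafka.streams.TestOutputTopic;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.TopologyTestDriver;

    public class PunctuatorDriverTestSketch {
        public void shouldFireWallClockPunctuator() {
            final Topology topology = new Topology();
            topology.addSource("source", "input-topic");
            // MyPunctuatingProcessor is a placeholder for a processor that schedules a
            // WALL_CLOCK_TIME punctuator in init().
            topology.addProcessor("processor", MyPunctuatingProcessor::new, "source");
            topology.addSink("sink", "output-topic", "processor");

            final Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "punctuator-test");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

            try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
                final TestInputTopic<String, String> input =
                    driver.createInputTopic("input-topic", new StringSerializer(), new StringSerializer());
                final TestOutputTopic<String, String> output =
                    driver.createOutputTopic("output-topic", new StringDeserializer(), new StringDeserializer());

                input.pipeInput("key", "value");
                // Advancing mocked wall-clock time past the scheduled interval fires the punctuator.
                driver.advanceWallClockTime(Duration.ofSeconds(30));
                // Assert on output.readKeyValuesToList() according to your punctuator's logic.
            }
        }
    }
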
@@ -400,10 +368,10 @@ punctuator.punctuate(/*timestamp*/ 0L);
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
     <!--#include virtual="../../../includes/_nav.htm" -->
     <div class="right">
-        <!--#include virtual="../../../includes/_docs_banner.htm" -->
+        <!--//#include virtual="../../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
             <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/developer-guide/write-streams.html b/docs/streams/developer-guide/write-streams.html
index ed74d72..ed0733a 100644
--- a/docs/streams/developer-guide/write-streams.html
+++ b/docs/streams/developer-guide/write-streams.html
@@ -90,8 +90,7 @@
               <p class="last">See the section <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data Types and Serialization</span></a> for more information about Serializers/Deserializers.</p>
           </div>
           <p>Example <code class="docutils literal"><span class="pre">pom.xml</span></code> snippet when using Maven:</p>
-          <pre class="brush: xml;">
-<dependency>
+          <pre class="line-numbers"><code class="language-xml"><dependency>
     <groupId>org.apache.kafka</groupId>
     <artifactId>kafka-streams</artifactId>
     <version>{{fullDotVersion}}</version>
@@ -106,8 +105,7 @@
     <groupId>org.apache.kafka</groupId>
     <artifactId>kafka-streams-scala_{{scalaVersion}}</artifactId>
     <version>{{fullDotVersion}}</version>
-</dependency>
-          </pre>
+</dependency></code></pre>
       </div>
     <div class="section" id="using-kafka-streams-within-your-application-code">
       <h2>Using Kafka Streams within your application code<a class="headerlink" href="#using-kafka-streams-within-your-application-code" title="Permalink to this headline"></a></h2>
@@ -143,14 +141,12 @@
 <span class="c1">// and so on.</span>
 <span class="n">Properties</span> <span class="n">props</span> <span class="o">=</span> <span class="o">...;</span>
 
-<span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">topology</span><span class="o">,</span> <span class="n">props</span><span class="o">);</span>
-</pre></div>
+<span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">topology</span><span class="o">,</span> <span class="n">props</span><span class="o">);</span></code></pre></div>
       </div>
       <p>At this point, internal structures are initialized, but the processing is not started yet.
         You have to explicitly start the Kafka Streams thread by calling the <code class="docutils literal"><span class="pre">KafkaStreams#start()</span></code> method:</p>
       <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Start the Kafka Streams threads</span>
-<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span>
-</pre></div>
+<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span></code></pre></div>
       </div>
       <p>If there are other instances of this stream processing application running elsewhere (e.g., on another machine), Kafka
         Streams transparently re-assigns tasks from the existing instances to the new instance that you just started.
@@ -168,13 +164,11 @@
   <span class="kd">public</span> <span class="kt">void</span> <span class="nf">uncaughtException</span><span class="o">(</span><span class="n">Thread</span> <span class="n">thread</span><span class="o">,</span> <span class="n">Throwable</span> <span class="n">throwable</span><span class="o">)</span> <span class="o">{</span>
     <span class="c1">// here you should examine the throwable/exception and perform an appropriate action!</span>
   <span class="o">}</span>
-<span class="o">});</span>
-</pre></div>
+<span class="o">});</span></code></pre></div>
       </div>
       <p>To stop the application instance, call the <code class="docutils literal"><span class="pre">KafkaStreams#close()</span></code> method:</p>
       <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Stop the Kafka Streams threads</span>
-<span class="n">streams</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
-</pre></div>
+<span class="n">streams</span><span class="o">.</span><span class="na">close</span><span class="o">();</span></code></pre></div>
       </div>
      <p>To allow your application to shut down gracefully in response to SIGTERM, it is recommended that you add a shutdown hook
         and call <code class="docutils literal"><span class="pre">KafkaStreams#close</span></code>.</p>
@@ -183,8 +177,7 @@
           <blockquote>
             <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Add shutdown hook to stop the Kafka Streams threads.</span>
 <span class="c1">// You can optionally provide a timeout to `close`.</span>
-<span class="n">Runtime</span><span class="o">.</span><span class="na">getRuntime</span><span class="o">().</span><span class="na">addShutdownHook</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">(</span><span class="n">streams</span><span class="o">::</span><span class="n">close</span><span class="o">));</span>
-</pre></div>
+<span class="n">Runtime</span><span class="o">.</span><span class="na">getRuntime</span><span class="o">().</span><span class="na">addShutdownHook</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">(</span><span class="n">streams</span><span class="o">::</span><span class="n">close</span><span class="o">));</span></code></pre></div>
             </div>
             </div></blockquote>
         </li>
@@ -197,8 +190,7 @@
   <span class="kd">public</span> <span class="kt">void</span> <span class="nf">run</span><span class="o">()</span> <span class="o">{</span>
       <span class="n">streams</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
   <span class="o">}</span>
-<span class="o">}));</span>
-</pre></div>
+<span class="o">}));</span></code></pre></div>
             </div>
             </div></blockquote>
         </li>
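
Tying the lifecycle snippets above together, a hedged sketch of a complete main() that builds a (placeholder) topology, registers the shutdown hook, starts the application, and blocks until shutdown; the application id and bootstrap servers are placeholders:

    import java.util.Properties;
    import java.util.concurrent.CountDownLatch;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.Topology;

    public class MyStreamsApp {                            // hypothetical application class
        public static void main(final String[] args) {
            final Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder

            final StreamsBuilder builder = new StreamsBuilder();
            // ... define your processing logic on `builder` here ...
            final Topology topology = builder.build();

            final KafkaStreams streams = new KafkaStreams(topology, props);
            final CountDownLatch latch = new CountDownLatch(1);

            // Close the Streams instance and release the latch on SIGTERM / Ctrl-C.
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                streams.close();
                latch.countDown();
            }));

            try {
                streams.start();
                latch.await();      // block the main thread until shutdown is requested
            } catch (final InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
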
@@ -224,10 +216,10 @@
 
 <!--#include virtual="../../../includes/_header.htm" -->
 <!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation ">
   <!--#include virtual="../../../includes/_nav.htm" -->
   <div class="right">
-    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <!--//#include virtual="../../../includes/_docs_banner.htm" -->
     <ul class="breadcrumbs">
       <li><a href="/documentation">Documentation</a></li>
       <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/index.html b/docs/streams/index.html
index 6872297..3d84bbf 100644
--- a/docs/streams/index.html
+++ b/docs/streams/index.html
@@ -48,16 +48,16 @@
                 <h3>TOUR OF THE STREAMS API</h3>
                 <div class="video__list">
                    <p class="video__item video_list_1 active" onclick="$('.video__item').removeClass('active'); $(this).addClass('active');$('.yt_series').hide();$('.video_1').show();">
-                       <span class="number">1</span><span class="video__text">Intro to Streams</span>
+                       <span class="video-number">1</span><span class="video__text">Intro to Streams</span>
                    </p>
                    <p class="video__item video_list_2" onclick="$('.video__item').removeClass('active'); $(this).addClass('active');$('.yt_series').hide();$('.video_2').show();">
-                       <span class="number">2</span><span class="video__text">Creating a Streams Application</span>
+                       <span class="video-number">2</span><span class="video__text">Creating a Streams Application</span>
                    </p>
                    <p class="video__item video_list_3" onclick="$('.video__item').removeClass('active'); $(this).addClass('active');$('.yt_series').hide();$('.video_3').show();">
-                       <span class="number">3</span><span class="video__text">Transforming Data Pt. 1</span>
+                       <span class="video-number">3</span><span class="video__text">Transforming Data Pt. 1</span>
                    </p>
                    <p class="video__item video_list_4" onclick="$('.video__item').removeClass('active'); $(this).addClass('active');$('.yt_series').hide();$('.video_4').show();">
-                      <span class="number">4</span><span class="video__text">Transforming Data Pt. 11</span>
+                      <span class="video-number">4</span><span class="video__text">Transforming Data Pt. 11</span>
                    </p>
                 </div>
             </div>
@@ -103,7 +103,7 @@
                <p class="grid__item__customer__description extra__space">As the leading online fashion retailer in Europe, Zalando uses Kafka as an ESB (Enterprise Service Bus), which helps us in transitioning from a monolithic to a micro services architecture. Using Kafka for processing
                  <a href="https://www.confluent.io/blog/ranking-websites-real-time-apache-kafkas-streams-api/" target='blank'> event streams</a> enables our technical team to do near-real time business intelligence.
                </p>
-           </div>
+            </div>
            </div>
            <div class="customer__grid">
              <div class="customer__item  streams_logo_grid streams__line__grid">
@@ -154,8 +154,7 @@
            </div>
 
            <div class="code-example__snippet b-java-8 selected">
-               <pre class="brush: java;">
-                   import org.apache.kafka.common.serialization.Serdes;
+               <pre class="line-numbers"><code class="language-java">                   import org.apache.kafka.common.serialization.Serdes;
                    import org.apache.kafka.common.utils.Bytes;
                    import org.apache.kafka.streams.KafkaStreams;
                    import org.apache.kafka.streams.StreamsBuilder;
@@ -190,13 +189,11 @@
                            streams.start();
                        }
 
-                   }
-               </pre>
+                   }</code></pre>
            </div>
 
            <div class="code-example__snippet b-java-7">
-               <pre class="brush: java;">
-                   import org.apache.kafka.common.serialization.Serdes;
+               <pre class="line-numbers"><code class="language-java">                   import org.apache.kafka.common.serialization.Serdes;
                    import org.apache.kafka.common.utils.Bytes;
                    import org.apache.kafka.streams.KafkaStreams;
                    import org.apache.kafka.streams.StreamsBuilder;
@@ -245,13 +242,11 @@
                            streams.start();
                        }
 
-                   }
-               </pre>
+                   }</code></pre>
            </div>
 
            <div class="code-example__snippet b-scala">
-               <pre class="brush: scala;">
-import java.util.Properties
+               <pre class="line-numbers"><code class="language-scala">import java.util.Properties
 import java.util.concurrent.TimeUnit
 
 import org.apache.kafka.streams.kstream.Materialized
@@ -284,8 +279,7 @@ object WordCountApplication extends App {
   sys.ShutdownHookThread {
      streams.close(10, TimeUnit.SECONDS)
   }
-}
-               </pre>
+}</code></pre>
            </div>
        </div>
 
@@ -297,10 +291,10 @@ object WordCountApplication extends App {
 </script>
 <!--#include virtual="../../includes/_header.htm" -->
 <!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation">
   <!--#include virtual="../../includes/_nav.htm" -->
   <div class="right">
-    <!--#include virtual="../../includes/_docs_banner.htm" -->
+    <!--//#include virtual="../../includes/_docs_banner.htm" -->
     <ul class="breadcrumbs">
       <li><a href="/documentation">Documentation</a>
       <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/quickstart.html b/docs/streams/quickstart.html
index a6781f6..2cc48ef 100644
--- a/docs/streams/quickstart.html
+++ b/docs/streams/quickstart.html
@@ -48,8 +48,7 @@
 This quickstart example will demonstrate how to run a streaming application coded in this library. Here is the gist
 of the <code><a href="https://github.com/apache/kafka/blob/{{dotVersion}}/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java">WordCountDemo</a></code> example code (converted to use Java 8 lambda expressions for easy reading).
 </p>
-<pre class="brush: java;">
-// Serializers/deserializers (serde) for String and Long types
+<pre class="line-numbers"><code class="language-java">// Serializers/deserializers (serde) for String and Long types
 final Serde&lt;String&gt; stringSerde = Serdes.String();
 final Serde&lt;Long&gt; longSerde = Serdes.Long();
 
@@ -72,8 +71,7 @@ KTable&lt;String, Long&gt; wordCounts = textLines
     .count();
 
 // Store the running counts as a changelog stream to the output topic.
-wordCounts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
-</pre>
+wordCounts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
 
 <p>
 It implements the WordCount
@@ -94,10 +92,8 @@ because it cannot know when it has processed "all" the input data.
 <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
 Note that there are multiple downloadable Scala versions and we choose to use the recommended version ({{scalaVersion}}) here:
 
-<pre class="brush: bash;">
-&gt; tar -xzf kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz
-&gt; cd kafka_{{scalaVersion}}-{{fullDotVersion}}
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; tar -xzf kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz
+&gt; cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
 
 <h4><a id="quickstart_streams_startserver" href="#quickstart_streams_startserver">Step 2: Start the Kafka server</a></h4>
 
@@ -105,79 +101,63 @@ Note that there are multiple downloadable Scala versions and we choose to use th
 Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a> so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
 </p>
 
-<pre class="brush: bash;">
-&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
+<pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
 [2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
-...
-</pre>
+...</code></pre>
 
 <p>Now start the Kafka server:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-server-start.sh config/server.properties
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-server-start.sh config/server.properties
 [2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
 [2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
-...
-</pre>
+...</code></pre>
 
 
 <h4><a id="quickstart_streams_prepare" href="#quickstart_streams_prepare">Step 3: Prepare input topic and start Kafka producer</a></h4>
 
 <!--
 
-<pre class="brush: bash;">
-&gt; echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt</code></pre>
 Or on Windows:
-<pre class="brush: bash;">
-&gt; echo all streams lead to kafka> file-input.txt
+<pre class="line-numbers"><code class="language-bash">&gt; echo all streams lead to kafka> file-input.txt
 &gt; echo hello kafka streams>> file-input.txt
-&gt; echo|set /p=join kafka summit>> file-input.txt
-</pre>
+&gt; echo|set /p=join kafka summit>> file-input.txt</code></pre>
 
 -->
 
 Next, we create the input topic named <b>streams-plaintext-input</b> and the output topic named <b>streams-wordcount-output</b>:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --create \
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --create \
     --bootstrap-server localhost:9092 \
     --replication-factor 1 \
     --partitions 1 \
     --topic streams-plaintext-input
-Created topic "streams-plaintext-input".
-</pre>
+Created topic "streams-plaintext-input".</code></pre>
 
 Note: we create the output topic with compaction enabled because the output stream is a changelog stream
 (cf. <a href="#anchor-changelog-output">explanation of application output</a> below).
 
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --create \
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --create \
     --bootstrap-server localhost:9092 \
     --replication-factor 1 \
     --partitions 1 \
     --topic streams-wordcount-output \
     --config cleanup.policy=compact
-Created topic "streams-wordcount-output".
-</pre>
+Created topic "streams-wordcount-output".</code></pre>
 
 The created topic can be described with the same <b>kafka-topics</b> tool:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe
 
 Topic:streams-wordcount-output	PartitionCount:1	ReplicationFactor:1	Configs:cleanup.policy=compact,segment.bytes=1073741824
 	Topic: streams-wordcount-output	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
 Topic:streams-plaintext-input	PartitionCount:1	ReplicationFactor:1	Configs:segment.bytes=1073741824
-	Topic: streams-plaintext-input	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
-</pre>
+	Topic: streams-plaintext-input	Partition: 0	Leader: 0	Replicas: 0	Isr: 0</code></pre>
 
 <h4><a id="quickstart_streams_start" href="#quickstart_streams_start">Step 4: Start the Wordcount Application</a></h4>
 
 The following command starts the WordCount demo application:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo</code></pre>
 
 <p>
 The demo application will read from the input topic <b>streams-plaintext-input</b>, perform the computations of the WordCount algorithm on each of the read messages,
@@ -187,22 +167,18 @@ Hence there won't be any STDOUT output except log entries as the results are wri
 
 Now we can start the console producer in a separate terminal to write some input data to this topic:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input</code></pre>
 
 and inspect the output of the WordCount demo application by reading from its output topic with the console consumer in a separate terminal:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
     --topic streams-wordcount-output \
     --from-beginning \
     --formatter kafka.tools.DefaultMessageFormatter \
     --property print.key=true \
     --property print.value=true \
     --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
-    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
-</pre>
+    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer</code></pre>
 
 
 <h4><a id="quickstart_streams_process" href="#quickstart_streams_process">Step 5: Process some data</a></h4>
@@ -211,17 +187,14 @@ Now let's write some message with the console producer into the input topic <b>s
 This will send a new message to the input topic, where the message key is null and the message value is the string encoded text line that you just entered
 (in practice, input data for applications will typically be streaming continuously into Kafka, rather than being manually entered as we do in this quickstart):
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
-all streams lead to kafka
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
+all streams lead to kafka</code></pre>
 
 <p>
 This message will be processed by the Wordcount application and the following output data will be written to the <b>streams-wordcount-output</b> topic and printed by the console consumer:
 </p>
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
     --topic streams-wordcount-output \
     --from-beginning \
     --formatter kafka.tools.DefaultMessageFormatter \
@@ -234,8 +207,7 @@ all	    1
 streams	1
 lead	1
 to	    1
-kafka	1
-</pre>
+kafka	1</code></pre>
 
 <p>
Here, the first column is the Kafka message key in <code>java.lang.String</code> format and represents a word that is being counted, and the second column is the message value in <code>java.lang.Long</code> format, representing the word's latest count.
@@ -245,16 +217,13 @@ Now let's continue writing one more message with the console producer into the i
 Enter the text line "hello kafka streams" and hit &lt;RETURN&gt;.
 Your terminal should look as follows:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
 all streams lead to kafka
-hello kafka streams
-</pre>
+hello kafka streams</code></pre>
 
 In your other terminal in which the console consumer is running, you will observe that the WordCount application wrote new output data:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
     --topic streams-wordcount-output \
     --from-beginning \
     --formatter kafka.tools.DefaultMessageFormatter \
@@ -270,26 +239,22 @@ to	    1
 kafka	1
 hello	1
 kafka	2
-streams	2
-</pre>
+streams	2</code></pre>
 
 Here the last printed lines <b>kafka 2</b> and <b>streams 2</b> indicate updates to the keys <b>kafka</b> and <b>streams</b> whose counts have been incremented from <b>1</b> to <b>2</b>.
 Whenever you write further input messages to the input topic, you will observe new messages being added to the <b>streams-wordcount-output</b> topic,
 representing the most recent word counts as computed by the WordCount application.
 Let's enter one final input text line "join kafka summit" and hit &lt;RETURN&gt; in the console producer to the input topic <b>streams-plaintext-input</b> before we wrap up this quickstart:
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
 all streams lead to kafka
 hello kafka streams
-join kafka summit
-</pre>
+join kafka summit</code></pre>
 
 <a name="anchor-changelog-output"></a>
 The <b>streams-wordcount-output</b> topic will subsequently show the corresponding updated word counts (see last three lines):
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
     --topic streams-wordcount-output \
     --from-beginning \
     --formatter kafka.tools.DefaultMessageFormatter \
@@ -308,8 +273,7 @@ kafka	2
 streams	2
 join	1
 kafka	3
-summit	1
-</pre>
+summit	1</code></pre>
 
As one can see, the output of the Wordcount application is actually a continuous stream of updates, where each output record (i.e. each line in the original output above) is
an updated count of a single word, identified by its record key, such as "kafka". For multiple records with the same key, each later record is an update of the previous one.
@@ -352,10 +316,10 @@ Looking beyond the scope of this concrete example, what Kafka Streams is doing h
 
 <!--#include virtual="../../includes/_header.htm" -->
 <!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation">
     <!--#include virtual="../../includes/_nav.htm" -->
     <div class="right">
-        <!--#include virtual="../../includes/_docs_banner.htm" -->
+        <!--//#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
             <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/tutorial.html b/docs/streams/tutorial.html
index eeb90d6..71915cf 100644
--- a/docs/streams/tutorial.html
+++ b/docs/streams/tutorial.html
@@ -36,30 +36,27 @@
         It is highly recommended to read the <a href="/{{version}}/documentation/streams/quickstart">quickstart</a> first on how to run a Streams application written in Kafka Streams if you have not done so.
     </p>
 
-    <h4><a id="tutorial_maven_setup" href="#tutorial_maven_setup">Setting up a Maven Project</a></h4>
+    <h4 class="anchor-heading"><a id="tutorial_maven_setup" class="anchor-link"></a><a href="#tutorial_maven_setup">Setting up a Maven Project</a></h4>
 
     <p>
         We are going to use a Kafka Streams Maven Archetype for creating a Streams project structure with the following commands:
     </p>
 
-    <pre class="brush: bash;">
-        mvn archetype:generate \
+    <pre class="line-numbers"><code class="language-bash">        mvn archetype:generate \
             -DarchetypeGroupId=org.apache.kafka \
             -DarchetypeArtifactId=streams-quickstart-java \
             -DarchetypeVersion={{fullDotVersion}} \
             -DgroupId=streams.examples \
             -DartifactId=streams.examples \
             -Dversion=0.1 \
-            -Dpackage=myapps
-    </pre>
+            -Dpackage=myapps</code></pre>
 
     <p>
        You can use different values for the <code>groupId</code>, <code>artifactId</code> and <code>package</code> parameters if you like.
         Assuming the above parameter values are used, this command will create a project structure that looks like this:
     </p>
 
-    <pre class="brush: bash;">
-        &gt; tree streams.examples
+    <pre class="line-numbers"><code class="language-bash">        &gt; tree streams.examples
         streams-quickstart
         |-- pom.xml
         |-- src
@@ -70,8 +67,7 @@
                 |       |-- Pipe.java
                 |       |-- WordCount.java
                 |-- resources
-                    |-- log4j.properties
-    </pre>
+                    |-- log4j.properties</code></pre>
 
     <p>
         The <code>pom.xml</code> file included in the project already has the Streams dependency defined.
@@ -83,26 +79,22 @@
         Since we are going to start writing such programs from scratch, we can now delete these examples:
     </p>
 
-    <pre class="brush: bash;">
-        &gt; cd streams-quickstart
-        &gt; rm src/main/java/myapps/*.java
-    </pre>
+    <pre class="line-numbers"><code class="language-bash">        &gt; cd streams-quickstart
+        &gt; rm src/main/java/myapps/*.java</code></pre>
 
     <h4><a id="tutorial_code_pipe" href="#tutorial_code_pipe">Writing a first Streams application: Pipe</a></h4>
 
    It's coding time now! Feel free to open your favorite IDE and import this Maven project, or simply open a text editor and create a Java file under <code>src/main/java/myapps</code>.
     Let's name it <code>Pipe.java</code>:
 
-    <pre class="brush: java;">
-        package myapps;
+    <pre class="line-numbers"><code class="language-java">        package myapps;
 
         public class Pipe {
 
             public static void main(String[] args) throws Exception {
 
             }
-        }
-    </pre>
+        }</code></pre>
 
     <p>
         We are going to fill in the <code>main</code> function to write this pipe program. Note that we will not list the import statements as we go since IDEs can usually add them automatically.
@@ -115,20 +107,16 @@
         and <code>StreamsConfig.APPLICATION_ID_CONFIG</code>, which gives the unique identifier of your Streams application to distinguish itself with other applications talking to the same Kafka cluster:
     </p>
 
-    <pre class="brush: java;">
-        Properties props = new Properties();
+    <pre class="line-numbers"><code class="language-java">        Properties props = new Properties();
         props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
-        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // assuming that the Kafka broker this application is talking to runs on local machine with port 9092
-    </pre>
+        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // assuming that the Kafka broker this application is talking to runs on local machine with port 9092</code></pre>
 
     <p>
         In addition, you can customize other configurations in the same map, for example, default serialization and deserialization libraries for the record key-value pairs:
     </p>
 
-    <pre class="brush: java;">
-        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
-        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());</code></pre>
 
     <p>
         For a full list of configurations of Kafka Streams please refer to this <a href="/{{version}}/documentation/#streamsconfigs">table</a>.
@@ -140,17 +128,13 @@
         We can use a topology builder to construct such a topology,
     </p>
 
-    <pre class="brush: java;">
-        final StreamsBuilder builder = new StreamsBuilder();
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        final StreamsBuilder builder = new StreamsBuilder();</code></pre>
 
     <p>
         And then create a source stream from a Kafka topic named <code>streams-plaintext-input</code> using this topology builder:
     </p>
 
-    <pre class="brush: java;">
-        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");</code></pre>
 
     <p>
         Now we get a <code>KStream</code> that is continuously generating records from its source Kafka topic <code>streams-plaintext-input</code>.
@@ -158,48 +142,38 @@
         The simplest thing we can do with this stream is to write it into another Kafka topic, say it's named <code>streams-pipe-output</code>:
     </p>
 
-    <pre class="brush: java;">
-        source.to("streams-pipe-output");
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        source.to("streams-pipe-output");</code></pre>
 
     <p>
         Note that we can also concatenate the above two lines into a single line as:
     </p>
 
-    <pre class="brush: java;">
-        builder.stream("streams-plaintext-input").to("streams-pipe-output");
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        builder.stream("streams-plaintext-input").to("streams-pipe-output");</code></pre>
 
     <p>
         We can inspect what kind of <code>topology</code> is created from this builder by doing the following:
     </p>
 
-    <pre class="brush: java;">
-        final Topology topology = builder.build();
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        final Topology topology = builder.build();</code></pre>
 
     <p>
         And print its description to standard output as:
     </p>
 
-    <pre class="brush: java;">
-        System.out.println(topology.describe());
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        System.out.println(topology.describe());</code></pre>
 
     <p>
         If we just stop here, compile and run the program, it will output the following information:
     </p>
 
-    <pre class="brush: bash;">
-        &gt; mvn clean package
+    <pre class="line-numbers"><code class="language-bash">        &gt; mvn clean package
         &gt; mvn exec:java -Dexec.mainClass=myapps.Pipe
         Sub-topologies:
           Sub-topology: 0
             Source: KSTREAM-SOURCE-0000000000(topics: streams-plaintext-input) --> KSTREAM-SINK-0000000001
             Sink: KSTREAM-SINK-0000000001(topic: streams-pipe-output) <-- KSTREAM-SOURCE-0000000000
         Global Stores:
-          none
-    </pre>
+          none</code></pre>
 
     <p>
         As shown above, it illustrates that the constructed topology has two processor nodes, a source node <code>KSTREAM-SOURCE-0000000000</code> and a sink node <code>KSTREAM-SINK-0000000001</code>.
@@ -215,9 +189,7 @@
        we can now construct the Streams client with the two components we have just created above: the configuration map specified in a <code>java.util.Properties</code> instance and the <code>Topology</code> object.
     </p>
 
-    <pre class="brush: java;">
-        final KafkaStreams streams = new KafkaStreams(topology, props);
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        final KafkaStreams streams = new KafkaStreams(topology, props);</code></pre>
 
     <p>
         By calling its <code>start()</code> function we can trigger the execution of this client.
@@ -225,8 +197,7 @@
         We can, for example, add a shutdown hook with a countdown latch to capture a user interrupt and close the client upon terminating this program:
     </p>
 
-    <pre class="brush: java;">
-        final CountDownLatch latch = new CountDownLatch(1);
+    <pre class="line-numbers"><code class="language-java">        final CountDownLatch latch = new CountDownLatch(1);
 
         // attach shutdown handler to catch control-c
         Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
@@ -243,15 +214,13 @@
         } catch (Throwable e) {
             System.exit(1);
         }
-        System.exit(0);
-    </pre>
+        System.exit(0);</code></pre>
 
     <p>
         The complete code so far looks like this:
     </p>
 
-    <pre class="brush: java;">
-        package myapps;
+    <pre class="line-numbers"><code class="language-java">        package myapps;
 
         import org.apache.kafka.common.serialization.Serdes;
         import org.apache.kafka.streams.KafkaStreams;
@@ -297,8 +266,7 @@
                 }
                 System.exit(0);
             }
-        }
-    </pre>
+        }</code></pre>
 
     <p>
         If you already have the Kafka broker up and running at <code>localhost:9092</code>,
@@ -306,10 +274,8 @@
         you can run this code in your IDE or on the command line, using Maven:
     </p>
 
-    <pre class="brush: brush;">
-        &gt; mvn clean package
-        &gt; mvn exec:java -Dexec.mainClass=myapps.Pipe
-    </pre>
+    <pre class="line-numbers"><code class="language-brush">        &gt; mvn clean package
+        &gt; mvn exec:java -Dexec.mainClass=myapps.Pipe</code></pre>
 
     <p>
         For detailed instructions on how to run a Streams application and observe its computing results,
@@ -325,39 +291,33 @@
        We can create another program by copying the existing <code>Pipe.java</code> class:
     </p>
 
-    <pre class="brush: brush;">
-        &gt; cp src/main/java/myapps/Pipe.java src/main/java/myapps/LineSplit.java
-    </pre>
+    <pre class="line-numbers"><code class="language-brush">        &gt; cp src/main/java/myapps/Pipe.java src/main/java/myapps/LineSplit.java</code></pre>
 
     <p>
        And change its class name as well as the application id config to distinguish it from the original program:
     </p>
 
-    <pre class="brush: java;">
-        public class LineSplit {
+    <pre class="line-numbers"><code class="language-java">        public class LineSplit {
 
             public static void main(String[] args) throws Exception {
                 Properties props = new Properties();
                 props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-linesplit");
                 // ...
             }
-        }
-    </pre>
+        }</code></pre>
 
     <p>
        Since each of the source stream's records is a <code>String</code> typed key-value pair,
        let's treat the value string as a text line and split it into words with a <code>flatMapValues</code> operator:
     </p>
 
-    <pre class="brush: java;">
-        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+    <pre class="line-numbers"><code class="language-java">        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
         KStream&lt;String, String&gt; words = source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
                     @Override
                     public Iterable&lt;String&gt; apply(String value) {
                         return Arrays.asList(value.split("\\W+"));
                     }
-                });
-    </pre>
+                });</code></pre>
 
     <p>
         The operator will take the <code>source</code> stream as its input, and generate a new stream named <code>words</code>
@@ -367,28 +327,23 @@
        Note that if you are using JDK 8 you can use a lambda expression and simplify the above code as:
     </p>
 
-    <pre class="brush: java;">
-        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
-        KStream&lt;String, String&gt; words = source.flatMapValues(value -> Arrays.asList(value.split("\\W+")));
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+        KStream&lt;String, String&gt; words = source.flatMapValues(value -> Arrays.asList(value.split("\\W+")));</code></pre>
 
     <p>
         And finally we can write the word stream back into another Kafka topic, say <code>streams-linesplit-output</code>.
         Again, these two steps can be concatenated as the following (assuming lambda expression is used):
     </p>
 
-    <pre class="brush: java;">
-        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+    <pre class="line-numbers"><code class="language-java">        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
         source.flatMapValues(value -> Arrays.asList(value.split("\\W+")))
-              .to("streams-linesplit-output");
-    </pre>
+              .to("streams-linesplit-output");</code></pre>
 
     <p>
         If we now describe this augmented topology as <code>System.out.println(topology.describe())</code>, we will get the following:
     </p>
 
-    <pre class="brush: bash;">
-        &gt; mvn clean package
+    <pre class="line-numbers"><code class="language-bash">        &gt; mvn clean package
         &gt; mvn exec:java -Dexec.mainClass=myapps.LineSplit
         Sub-topologies:
           Sub-topology: 0
@@ -396,8 +351,7 @@
             Processor: KSTREAM-FLATMAPVALUES-0000000001(stores: []) --> KSTREAM-SINK-0000000002 <-- KSTREAM-SOURCE-0000000000
             Sink: KSTREAM-SINK-0000000002(topic: streams-linesplit-output) <-- KSTREAM-FLATMAPVALUES-0000000001
           Global Stores:
-            none
-    </pre>
+            none</code></pre>
 
     <p>
         As we can see above, a new processor node <code>KSTREAM-FLATMAPVALUES-0000000001</code> is injected into the topology between the original source and sink nodes.
@@ -411,8 +365,7 @@
         The complete code looks like this (assuming lambda expression is used):
     </p>
 
-    <pre class="brush: java;">
-        package myapps;
+    <pre class="line-numbers"><code class="language-java">        package myapps;
 
         import org.apache.kafka.common.serialization.Serdes;
         import org.apache.kafka.streams.KafkaStreams;
@@ -446,8 +399,7 @@
 
                 // ... same as Pipe.java above
             }
-        }
-    </pre>
+        }</code></pre>
 
     <h4><a id="tutorial_code_wordcount" href="#tutorial_code_wordcount">Writing a third Streams application: Wordcount</a></h4>
 
@@ -456,37 +408,32 @@
         Following similar steps let's create another program based on the <code>LineSplit.java</code> class:
     </p>
 
-    <pre class="brush: java;">
-        public class WordCount {
+    <pre class="line-numbers"><code class="language-java">        public class WordCount {
 
             public static void main(String[] args) throws Exception {
                 Properties props = new Properties();
                 props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
                 // ...
             }
-        }
-    </pre>
+        }</code></pre>
 
     <p>
         In order to count the words we can first modify the <code>flatMapValues</code> operator to treat all of them as lower case (assuming lambda expression is used):
     </p>
 
-    <pre class="brush: java;">
-        source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
+    <pre class="line-numbers"><code class="language-java">        source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
                     @Override
                     public Iterable&lt;String&gt; apply(String value) {
                         return Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+"));
                     }
-                });
-    </pre>
+                });</code></pre>
 
     <p>
         In order to do the counting aggregation we have to first specify that we want to key the stream on the value string, i.e. the lower cased word, with a <code>groupBy</code> operator.
        This operator generates a new grouped stream, which can then be aggregated by a <code>count</code> operator, which generates a running count on each of the grouped keys:
     </p>
 
-    <pre class="brush: java;">
-        KTable&lt;String, Long&gt; counts =
+    <pre class="line-numbers"><code class="language-java">        KTable&lt;String, Long&gt; counts =
         source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
                     @Override
                     public Iterable&lt;String&gt; apply(String value) {
@@ -501,8 +448,7 @@
                 })
               // Materialize the result into a KeyValueStore named "counts-store".
               // The Materialized store is always of type &lt;Bytes, byte[]&gt; as this is the format of the inner most store.
-              .count(Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt; as("counts-store"));
-    </pre>
+              .count(Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt; as("counts-store"));</code></pre>
 
     <p>
         Note that the <code>count</code> operator has a <code>Materialized</code> parameter that specifies that the
@@ -517,9 +463,7 @@
         We need to provide overridden serialization methods for <code>Long</code> types, otherwise a runtime exception will be thrown:
     </p>
 
-    <pre class="brush: java;">
-        counts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
-    </pre>
+    <pre class="line-numbers"><code class="language-java">        counts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
 
     <p>
         Note that in order to read the changelog stream from topic <code>streams-wordcount-output</code>,
@@ -528,21 +472,18 @@
         Assuming lambda expression from JDK 8 can be used, the above code can be simplified as:
     </p>
 
-    <pre class="brush: java;">
-        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+    <pre class="line-numbers"><code class="language-java">        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
         source.flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
               .groupBy((key, value) -> value)
               .count(Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as("counts-store"))
               .toStream()
-              .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
-    </pre>
+              .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
 
     <p>
         If we again describe this augmented topology as <code>System.out.println(topology.describe())</code>, we will get the following:
     </p>
 
-    <pre class="brush: bash;">
-        &gt; mvn clean package
+    <pre class="line-numbers"><code class="language-bash">        &gt; mvn clean package
         &gt; mvn exec:java -Dexec.mainClass=myapps.WordCount
         Sub-topologies:
           Sub-topology: 0
@@ -557,8 +498,7 @@
             Processor: KTABLE-TOSTREAM-0000000007(stores: []) --> KSTREAM-SINK-0000000008 <-- KSTREAM-AGGREGATE-0000000003
             Sink: KSTREAM-SINK-0000000008(topic: streams-wordcount-output) <-- KTABLE-TOSTREAM-0000000007
         Global Stores:
-          none
-    </pre>
+          none</code></pre>
 
     <p>
         As we can see above, the topology now contains two disconnected sub-topologies.
@@ -577,8 +517,7 @@
         The complete code looks like this (assuming lambda expression is used):
     </p>
 
-    <pre class="brush: java;">
-        package myapps;
+    <pre class="line-numbers"><code class="language-java">        package myapps;
 
         import org.apache.kafka.common.serialization.Serdes;
         import org.apache.kafka.streams.KafkaStreams;
@@ -616,8 +555,7 @@
 
                 // ... same as Pipe.java above
             }
-        }
-    </pre>
+        }</code></pre>
 
     <div class="pagination">
         <a href="/{{version}}/documentation/streams/quickstart" class="pagination__btn pagination__btn__prev">Previous</a>
@@ -629,10 +567,10 @@
 
 <!--#include virtual="../../includes/_header.htm" -->
 <!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation">
     <!--#include virtual="../../includes/_nav.htm" -->
     <div class="right">
-        <!--#include virtual="../../includes/_docs_banner.htm" -->
+        <!--//#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
             <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/streams/upgrade-guide.html b/docs/streams/upgrade-guide.html
index 4694451..b308b4b 100644
--- a/docs/streams/upgrade-guide.html
+++ b/docs/streams/upgrade-guide.html
@@ -86,7 +86,7 @@
         More details about the new config <code>StreamsConfig#TOPOLOGY_OPTIMIZATION</code> can be found in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-295%3A+Add+Streams+Configuration+Allowing+for+Optional+Topology+Optimization">KIP-295</a>.
     </p>
 
-    <h3><a id="streams_api_changes_260" href="#streams_api_changes_260">Streams API changes in 2.6.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_260" class="anchor-link"></a><a href="#streams_api_changes_260">Streams API changes in 2.6.0</a></h3>
     <p>
         We added a new processing mode that improves application scalability using exactly-once guarantees
         (via <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+Producer+scalability+for+exactly+once+semantics">KIP-447</a>).
@@ -94,7 +94,6 @@
         new value <code>"exactly_once_beta"</code>.
         Note that you need brokers with version 2.5 or newer to use this feature.
     </p>
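    <p>
        As a rough illustration, a sketch of enabling the new mode (the application id and bootstrap servers below are placeholders):
    </p>
    <pre class="line-numbers"><code class="language-java">Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // placeholder application id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder bootstrap servers
// opt into the exactly-once mode from KIP-447; requires brokers on version 2.5 or newer
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, "exactly_once_beta");</code></pre>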
-
     <p>
         For more highly available stateful applications, we've modified the task assignment algorithm to delay the movement of stateful active tasks to instances
        that aren't yet caught up with that task's state. Instead, to migrate a task from one instance to another (e.g., when scaling out),
@@ -103,14 +102,12 @@
         tasks to their new owners in the background. Check out <a href="https://cwiki.apache.org/confluence/x/0i4lBg">KIP-441</a>
         for full details, including several new configs for control over this new feature.
     </p>
-
     <p>
         New end-to-end latency metrics have been added. These task-level metrics will be logged at the INFO level and report the min and max end-to-end latency of a record at the beginning/source node(s)
         and end/terminal node(s) of a task. See <a href="https://cwiki.apache.org/confluence/x/gBkRCQ">KIP-613</a> for more information.
     </p>
-
     <p>
-        As of 2.6.0 Kafka Streams deprecates <code>KStream.through()</code> in favor of the new <code>KStream.repartition()</code> operator
+        As of 2.6.0 Kafka Streams deprecates <code>KStream.through()</code> in favor of the new <code>KStream.repartition()</code> operator
         (as per <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+DSL+with+Connecting+Topic+Creation+and+Repartition+Hint">KIP-221</a>).
        <code>KStream.repartition()</code> is similar to <code>KStream.through()</code>; however, Kafka Streams will manage the topic for you.
        If you need to write into and read back from a topic that you manage, you can fall back to using <code>KStream.to()</code> in combination with <code>StreamsBuilder#stream()</code>.
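    <p>
        A minimal, hypothetical sketch of the migration (the topic and variable names are illustrative, and <code>builder</code> is assumed to be an existing <code>StreamsBuilder</code>):
    </p>
    <pre class="line-numbers"><code class="language-java">KStream&lt;String, String&gt; stream = builder.stream("input-topic");
// before 2.6 (now deprecated): repartition through an explicitly named topic that you create and manage
KStream&lt;String, String&gt; viaThrough = stream.through("my-repartition-topic");
// from 2.6 on: let Kafka Streams create and manage the repartition topic for you
KStream&lt;String, String&gt; viaRepartition = stream.repartition();</code></pre>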
@@ -128,7 +125,7 @@
         as per <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-571%3A+Add+option+to+force+remove+members+in+StreamsResetter">KIP-571</a>.
     </p>
 
-    <h3><a id="streams_api_changes_250" href="#streams_api_changes_250">Streams API changes in 2.5.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_250" class="anchor-link"></a><a href="#streams_api_changes_250">Streams API changes in 2.5.0</a></h3>
     <p>
        We added a new <code>cogroup()</code> operator (via <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup">KIP-150</a>)
        that allows you to aggregate multiple streams in a single operation.
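    <p>
        A hypothetical sketch (topic names, value types, and the summing aggregator are illustrative, and <code>builder</code> is assumed to be an existing <code>StreamsBuilder</code>):
    </p>
    <pre class="line-numbers"><code class="language-java">KGroupedStream&lt;String, Long&gt; clicks = builder.&lt;String, Long&gt;stream("clicks").groupByKey();
KGroupedStream&lt;String, Long&gt; views = builder.&lt;String, Long&gt;stream("views").groupByKey();
// the same aggregator shape is reused for both inputs: add each value to the running total
Aggregator&lt;String, Long, Long&gt; sum = (key, value, aggregate) -> aggregate + value;
KTable&lt;String, Long&gt; totals = clicks
        .cogroup(sum)
        .cogroup(views, sum)
        .aggregate(() -> 0L);   // initializer for the shared aggregate</code></pre>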
@@ -154,7 +151,7 @@
         <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-535%3A+Allow+state+stores+to+serve+stale+reads+during+rebalance">KIP-535</a> respectively.
     </p>
 
-    <h3><a id="streams_api_changes_240" href="#streams_api_changes_240">Streams API changes in 2.4.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_240" class="anchor-link"></a><a href="#streams_api_changes_240">Streams API changes in 2.4.0</a></h3>
     <p>     
          As of 2.4.0 Kafka Streams offers a KTable-KTable foreign-key join (as per <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable">KIP-213</a>). 
          This joiner allows for records to be joined between two KTables with different keys. 
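    <p>
        A hypothetical sketch of such a join (topic names are illustrative, the order value is assumed to hold the customer id, and default serdes are assumed):
    </p>
    <pre class="line-numbers"><code class="language-java">KTable&lt;String, String&gt; orders = builder.table("orders");        // key: order id, value: customer id
KTable&lt;String, String&gt; customers = builder.table("customers");  // key: customer id, value: customer name
// join on the foreign key extracted from the order value rather than on the primary key
KTable&lt;String, String&gt; enriched = orders.join(
        customers,
        customerId -> customerId,                     // foreign-key extractor applied to the order value
        (customerId, customerName) -> customerName);  // ValueJoiner producing the joined value</code></pre>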
@@ -245,7 +242,7 @@
         Hence, you will need to reset your application to upgrade it.
     
 
-    <h3><a id="streams_api_changes_230" href="#streams_api_changes_230">Streams API changes in 2.3.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_230" class="anchor-link"></a><a href="#streams_api_changes_230">Streams API changes in 2.3.0</a></h3>
 
     <p>Version 2.3.0 adds the Suppress operator to the <code>kafka-streams-scala</code> Ktable API.</p>
 
@@ -314,13 +311,13 @@
         For more details please read <a href="https://issues.apache.org/jira/browse/KAFKA-8215">KAFKA-8215</a>.
     </p>
 
-    <h3><a id="streams_notable_changes_221" href="#streams_api_changes_221">Notable changes in Kafka Streams 2.2.1</a></h3>
+    <h3 class="anchor-heading"><a id="streams_notable_changes_221" class="anchor-link"></a><a href="#streams_notable_changes_221">Notable changes in Kafka Streams 2.2.1</a></h3>
     <p>
         As of Kafka Streams 2.2.1 a message format 0.11 or higher is required;
         this implies that brokers must be on version 0.11.0 or higher.
     </p>
 
-    <h3><a id="streams_api_changes_220" href="#streams_api_changes_220">Streams API changes in 2.2.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_220" class="anchor-link"></a><a href="#streams_api_changes_220">Streams API changes in 2.2.0</a></h3>
     <p>
        We've simplified the <code>KafkaStreams#state</code> transition diagram for the startup phase a bit in 2.2.0: in older versions the state would transition from <code>CREATED</code> to <code>RUNNING</code>, and then to <code>REBALANCING</code> to get the first
        stream task assignment, and then back to <code>RUNNING</code>; starting in 2.2.0 it transitions from <code>CREATED</code> directly to <code>REBALANCING</code> and then to <code>RUNNING</code>.
@@ -337,7 +334,7 @@
         used in a try-with-resource statement. For a full list of public interfaces that get impacted please read <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-376%3A+Implement+AutoClosable+on+appropriate+classes+that+want+to+be+used+in+a+try-with-resource+statement">KIP-376</a>.
     </p>
 
-    <h3><a id="streams_api_changes_210" href="#streams_api_changes_210">Streams API changes in 2.1.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_210" class="anchor-link"></a><a href="#streams_api_changes_210">Streams API changes in 2.1.0</a></h3>
     <p>
        We updated the <code>TopologyDescription</code> API to allow for better runtime checking.
         Users are encouraged to use <code>#topicSet()</code> and <code>#topicPattern()</code> accordingly on <code>TopologyDescription.Source</code> nodes,
@@ -437,7 +434,7 @@
         different stream instances in one application.
     </p>
 
-    <h3><a id="streams_api_changes_200" href="#streams_api_changes_200">Streams API changes in 2.0.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_200" class="anchor-link"></a><a href="#streams_api_changes_200">Streams API changes in 2.0.0</a></h3>
     <p>
         In 2.0.0 we have added a few new APIs on the <code>ReadOnlyWindowStore</code> interface (for details please read <a href="#streams_api_changes_200">Streams API changes</a> below).
        If you have customized window store implementations that extend the <code>ReadOnlyWindowStore</code> interface, you need to make code changes.
@@ -573,7 +570,7 @@
         <li><code>StreamsConfig#ZOOKEEPER_CONNECT_CONFIG</code> are removed as we do not need ZooKeeper dependency in Streams any more (it is deprecated since 0.10.2.0). </li>
     </ul>
 
-    <h3><a id="streams_api_changes_110" href="#streams_api_changes_110">Streams API changes in 1.1.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_110" class="anchor-link"></a><a href="#streams_api_changes_110">Streams API changes in 1.1.0</a></h3>
     <p>
        We have added support for methods in <code>ReadOnlyWindowStore</code> which allow for querying <code>WindowStore</code>s without having to provide keys.
        Users who have customized window store implementations of the above interface will need to update their code to implement the newly added methods as well.
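    <p>
        A hypothetical sketch of key-less querying (the store name is illustrative and <code>streams</code> is assumed to be a running <code>KafkaStreams</code> instance):
    </p>
    <pre class="line-numbers"><code class="language-java">ReadOnlyWindowStore&lt;String, Long&gt; store =
        streams.store("windowed-counts-store", QueryableStoreTypes.windowStore());
// iterate over all windows of all keys; the iterator must be closed after use
try (KeyValueIterator&lt;Windowed&lt;String&gt;, Long&gt; iter = store.all()) {
    while (iter.hasNext()) {
        KeyValue&lt;Windowed&lt;String&gt;, Long&gt; entry = iter.next();
        System.out.println(entry.key + " -> " + entry.value);
    }
}</code></pre>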
@@ -631,7 +628,7 @@
         <li> added options to specify input topics offsets to reset according to <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+Application">KIP-171</a></li>
     </ul>
 
-    <h3><a id="streams_api_changes_100" href="#streams_api_changes_100">Streams API changes in 1.0.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_100" class="anchor-link"></a><a href="#streams_api_changes_100">Streams API changes in 1.0.0</a></h3>
 
     <p>
         With 1.0 a major API refactoring was accomplished and the new API is cleaner and easier to use.
@@ -765,7 +762,7 @@
         If you already use <code>StateStoreSupplier</code> or <code>Materialized</code> to provide configs for changelogs, then they will take precedence over those supplied in the config.
     </p>
 
-    <h3><a id="streams_api_changes_0110" href="#streams_api_changes_0110">Streams API changes in 0.11.0.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_0110" class="anchor-link"></a><a href="#streams_api_changes_0110">Streams API changes in 0.11.0.0</a></h3>
 
     <p> Updates in <code>StreamsConfig</code>: </p>
     <ul>
@@ -834,7 +831,7 @@
     </ul>
     <p> <code>[client.Id]</code> is either set via Streams configuration parameter <code>client.id</code> or defaults to <code>[application.id]-[processId]</code> (<code>[processId]</code> is a random UUID). </p>
 
-    <h3><a id="streams_api_changes_01021" href="#streams_api_changes_01021">Notable changes in 0.10.2.1</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_01021" class="anchor-link"></a><a href="#streams_api_changes_01021">Notable changes in 0.10.2.1</a></h3>
 
     <p>
         Parameter updates in <code>StreamsConfig</code>:
@@ -843,7 +840,7 @@
         <li> The default config values of embedded producer's <code>retries</code> and consumer's <code>max.poll.interval.ms</code> have been changed to improve the resiliency of a Kafka Streams application </li>
     </ul>
 
-    <h3><a id="streams_api_changes_0102" href="#streams_api_changes_0102">Streams API changes in 0.10.2.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_0102" class="anchor-link"></a><a href="#streams_api_changes_0102">Streams API changes in 0.10.2.0</a></h3>
 
     <p>
         New methods in <code>KafkaStreams</code>:
@@ -914,7 +911,7 @@
 
     <p> Relaxed type constraints of many DSL interfaces, classes, and methods (cf. <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-100+-+Relax+Type+constraints+in+Kafka+Streams+API">KIP-100</a>). </p>
 
-    <h3><a id="streams_api_changes_0101" href="#streams_api_changes_0101">Streams API changes in 0.10.1.0</a></h3>
+    <h3 class="anchor-heading"><a id="streams_api_changes_0101" class="anchor-link"></a><a href="#streams_api_changes_0101">Streams API changes in 0.10.1.0</a></h3>
 
     <p> Stream grouping and aggregation split into two methods: </p>
     <ul>
@@ -957,10 +954,10 @@
 
 <!--#include virtual="../../includes/_header.htm" -->
 <!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
+<div class="content documentation">
     <!--#include virtual="../../includes/_nav.htm" -->
     <div class="right">
-        <!--#include virtual="../../includes/_docs_banner.htm" -->
+        <!--//#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
             <li><a href="/documentation/streams">Kafka Streams</a></li>
diff --git a/docs/upgrade.html b/docs/upgrade.html
index abc80e5..da8ad99 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -19,7 +19,49 @@
 
 <script id="upgrade-template" type="text/x-handlebars-template">
 
-<h5><a id="upgrade_260_notable" href="#upgrade_260_notable">Notable changes in 2.6.0</a></h5>
+<h4><a id="upgrade_2_6_1" href="#upgrade_2_6_1">Upgrading to 2.6.1 from any version 0.8.x through 2.5.x</a></h4>
+
+<p><b>If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
+    Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.</b></p>
+
+<p><b>For a rolling upgrade:</b></p>
+
+<ol>
+    <li> Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
+        are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
+        overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
+        to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
+        <ul>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g., <code>2.5</code>, <code>2.4</code>, etc.)</li>
+            <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  (See <a href="#upgrade_10_performance_impact">potential performance impact
+                following the upgrade</a> for the details on what this configuration does.)</li>
+        </ul>
+        If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
+        the inter-broker protocol version.
+        <ul>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g., <code>2.5</code>, <code>2.4</code>, etc.)</li>
+        </ul>
+    </li>
+    <li> Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
+        brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
+        It is still possible to downgrade at this point if there are any problems.
+    </li>
+    <li> Once the cluster's behavior and performance has been verified, bump the protocol version by editing
+        <code>inter.broker.protocol.version</code> and setting it to <code>2.6</code>.
+    </li>
+    <li> Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
+        protocol version, it will no longer be possible to downgrade the cluster to an older version.
+    </li>
+    <li> If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
+        upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
+        change log.message.format.version to 2.6 on each broker and restart them one by one. Note that the older Scala clients,
+        which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
+        (or to take advantage of <a href="#upgrade_11_exactly_once_semantics">exactly once semantics</a>),
+        the newer Java clients must be used.
+    </li>
+</ol>
+
+<h5 class="anchor-heading"><a id="upgrade_260_notable" class="anchor-link"></a><a href="#upgrade_260_notable">Notable changes in 2.6.0</a></h5>
 <ul>
     <li>Kafka Streams adds a new processing mode (requires broker 2.5 or newer) that improves application
         scalability using exactly-once guarantees
@@ -49,7 +91,49 @@
     </li>
 </ul>
 
-<h5><a id="upgrade_250_notable" href="#upgrade_250_notable">Notable changes in 2.5.0</a></h5>
+<h4><a id="upgrade_2_5_0" href="#upgrade_2_5_0">Upgrading to 2.5.0 from any version 0.8.x through 2.4.x</a></h4>
+
+<p><b>If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
+    Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.</b></p>
+
+<p><b>For a rolling upgrade:</b></p>
+
+<ol>
+    <li> Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
+        are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
+        overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
+        to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
+        <ul>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g., <code>2.4</code>, <code>2.3</code>, etc.)</li>
+            <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  (See <a href="#upgrade_10_performance_impact">potential performance impact
+                following the upgrade</a> for the details on what this configuration does.)</li>
+        </ul>
+        If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
+        the inter-broker protocol version.
+        <ul>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g., <code>2.4</code>, <code>2.3</code>, etc.)</li>
+        </ul>
+    </li>
+    <li> Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
+        brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
+        It is still possible to downgrade at this point if there are any problems.
+    </li>
+    <li> Once the cluster's behavior and performance has been verified, bump the protocol version by editing
+        <code>inter.broker.protocol.version</code> and setting it to <code>2.5</code>.
+    </li>
+    <li> Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
+        protocol version, it will no longer be possible to downgrade the cluster to an older version.
+    </li>
+    <li> If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
+        upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
+        change log.message.format.version to 2.5 on each broker and restart them one by one. Note that the older Scala clients,
+        which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
+        (or to take advantage of <a href="#upgrade_11_exactly_once_semantics">exactly once semantics</a>),
+        the newer Java clients must be used.
+    </li>
+</ol>
+
+<h5 class="anchor-heading"><a id="upgrade_250_notable" class="anchor-link"></a><a href="#upgrade_250_notable">Notable changes in 2.5.0</a></h5>
 <ul>
     <li>When <code>RebalanceProtocol#COOPERATIVE</code> is used, <code>Consumer#poll</code> can still return data
         while it is in the middle of a rebalance for those partitions still owned by the consumer; in addition
@@ -97,7 +181,7 @@
     </li>
 </ul>
 
-<h4><a id="upgrade_2_4_0" href="#upgrade_2_4_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, 2.0.x or 2.1.x or 2.2.x or 2.3.x to 2.4.0</a></h4>
+<h4><a id="upgrade_2_4_0" href="#upgrade_2_4_0">Upgrading to 2.4.0 from any version 0.8.x through 2.3.x</a></h4>
 
 <p><b>If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
     Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.</b></p>
@@ -110,14 +194,14 @@
         overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
         to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
         <ul>
-            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.10.0, 0.11.0, 1.0, 2.0, 2.2).</li>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g., <code>2.3</code>, <code>2.2</code>, etc.)</li>
             <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  (See <a href="#upgrade_10_performance_impact">potential performance impact
                 following the upgrade</a> for the details on what this configuration does.)</li>
         </ul>
         If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
         the inter-broker protocol version.
         <ul>
-            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3).</li>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g., <code>2.3</code>, <code>2.2</code>, etc.)</li>
         </ul>
     </li>
     <li> Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
@@ -125,7 +209,7 @@
         It is still possible to downgrade at this point if there are any problems.
     </li>
     <li> Once the cluster's behavior and performance has been verified, bump the protocol version by editing
-        <code>inter.broker.protocol.version</code> and setting it to 2.4.
+        <code>inter.broker.protocol.version</code> and setting it to <code>2.4</code>.
     </li>
     <li> Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
         protocol version, it will no longer be possible to downgrade the cluster to an older version.
@@ -160,7 +244,7 @@
     </li>
 </ol>
 
-<h5><a id="upgrade_240_notable" href="#upgrade_240_notable">Notable changes in 2.4.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_240_notable" class="anchor-link"></a><a href="#upgrade_240_notable">Notable changes in 2.4.0</a></h5>
 <ul>
     <li>A new Admin API has been added for partition reassignments. Due to changing the way Kafka propagates reassignment information,
         it is possible to lose reassignment state in failure edge cases while upgrading to the new version. It is not recommended to start reassignments while upgrading.</li>
@@ -269,7 +353,7 @@
     </li>
 </ol>
 
-<h5><a id="upgrade_230_notable" href="#upgrade_230_notable">Notable changes in 2.3.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_230_notable" class="anchor-link"></a><a href="#upgrade_230_notable">Notable changes in 2.3.0</a></h5>
 <ul>
     <li> We are introducing a new rebalancing protocol for Kafka Connect based on
         <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-415%3A+Incremental+Cooperative+Rebalancing+in+Kafka+Connect">incremental cooperative rebalancing</a>.
@@ -329,12 +413,12 @@
     </li>
 </ol>
 
-<h5><a id="upgrade_221_notable" href="#upgrade_221_notable">Notable changes in 2.2.1</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_221_notable" class="anchor-link"></a><a href="#upgrade_221_notable">Notable changes in 2.2.1</a></h5>
 <ul>
    <li>Kafka Streams 2.2.1 requires 0.11 message format or higher and does not work with older message formats.</li>
 </ul>
 
-<h5><a id="upgrade_220_notable" href="#upgrade_220_notable">Notable changes in 2.2.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_220_notable" class="anchor-link"></a><a href="#upgrade_220_notable">Notable changes in 2.2.0</a></h5>
 <ul>
     <li>The default consumer group id has been changed from the empty string (<code>""</code>) to <code>null</code>. Consumers who use the new default group id will not be able to subscribe to topics,
         and fetch or commit offsets. The empty string as consumer group id is deprecated but will be supported until a future major release. Old clients that rely on the empty string group id will now
@@ -412,7 +496,7 @@
         in a Java 7 compatible fashion. Now we consolidated these interfaces as Java 7 support has been dropped since.</li>
 </ol>
 
-<h5><a id="upgrade_210_notable" href="#upgrade_210_notable">Notable changes in 2.1.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_210_notable" class="anchor-link"></a><a href="#upgrade_210_notable">Notable changes in 2.1.0</a></h5>
 <ul>
     <li>Jetty has been upgraded to 9.4.12, which excludes TLS_RSA_* ciphers by default because they do not support forward
        secrecy; see https://github.com/eclipse/jetty.project/issues/2807 for more information.</li>
@@ -476,7 +560,7 @@
     </li>
 </ol>
 
-<h5><a id="upgrade_200_notable" href="#upgrade_200_notable">Notable changes in 2.0.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_200_notable" class="anchor-link"></a><a href="#upgrade_200_notable">Notable changes in 2.0.0</a></h5>
 <ul>
     <li><a href="https://cwiki.apache.org/confluence/x/oYtjB">KIP-186</a> increases the default offset retention time from 1 day to 7 days. This makes it less likely to "lose" offsets in an application that commits infrequently. It also increases the active set of offsets and therefore can increase memory usage on the broker. Note that the console consumer currently enables offset commit by default and can be the source of a large number of offsets which this change will now preserve for [...]
    <li>Support for Java 7 has been dropped; Java 8 is now the minimum version required.</li>
@@ -546,7 +630,7 @@
     <code>zookeeper.connection.timeout.ms</code>.</li>
 </ul>
 
-<h5><a id="upgrade_200_new_protocols" href="#upgrade_200_new_protocols">New Protocol Versions</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_200_new_protocols" class="anchor-link"></a><a href="#upgrade_200_new_protocols">New Protocol Versions</a></h5>
 <ul>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-279%3A+Fix+log+divergence+between+leader+and+follower+after+fast+leader+fail+over">KIP-279</a>: OffsetsForLeaderEpochResponse v1 introduces a partition-level <code>leader_epoch</code> field. </li>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-219+-+Improve+quota+communication">KIP-219</a>: Bump up the protocol versions of non-cluster action requests and responses that are throttled on quota violation.</li>
@@ -554,7 +638,7 @@
 </ul>
 
 
-<h5><a id="upgrade_200_streams_from_11" href="#upgrade_200_streams_from_11">Upgrading a 1.1 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_200_streams_from_11" class="anchor-link"></a><a href="#upgrade_200_streams_from_11">Upgrading a 1.1 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 1.1 to 2.0 does not require a broker upgrade.
          A Kafka Streams 2.0 application can connect to 2.0, 1.1, 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
@@ -606,13 +690,13 @@
         Hot-swapping the jar-file only might not work.</li>
 </ol>
 
-<h5><a id="upgrade_111_notable" href="#upgrade_111_notable">Notable changes in 1.1.1</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_111_notable" class="anchor-link"></a><a href="#upgrade_111_notable">Notable changes in 1.1.1</a></h5>
 <ul>
     <li> New Kafka Streams configuration parameter <code>upgrade.from</code> added that allows rolling bounce upgrade from version 0.10.0.x </li>
     <li> See the <a href="/{{version}}/documentation/streams/upgrade-guide.html"><b>Kafka Streams upgrade guide</b></a> for details about this new config.
 </ul>
 
-<h5><a id="upgrade_110_notable" href="#upgrade_110_notable">Notable changes in 1.1.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_110_notable" class="anchor-link"></a><a href="#upgrade_110_notable">Notable changes in 1.1.0</a></h5>
 <ul>
     <li>The kafka artifact in Maven no longer depends on log4j or slf4j-log4j12. Similarly to the kafka-clients artifact, users
         can now choose the logging back-end by including the appropriate slf4j module (slf4j-log4j12, logback, etc.). The release
@@ -628,13 +712,13 @@
         explicitly or implicitly due to any of the other options like decoder.</li>
 </ul>
 
-<h5><a id="upgrade_110_new_protocols" href="#upgrade_110_new_protocols">New Protocol Versions</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_110_new_protocols" class="anchor-link"></a><a href="#upgrade_110_new_protocols">New Protocol Versions</a></h5>
 <ul>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration">KIP-226</a> introduced DescribeConfigs Request/Response v1.</li>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-227%3A+Introduce+Incremental+FetchRequests+to+Increase+Partition+Scalability">KIP-227</a> introduced Fetch Request/Response v7.</li>
 </ul>
 
-<h5><a id="upgrade_110_streams_from_10" href="#upgrade_110_streams_from_10">Upgrading a 1.0 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_110_streams_from_10" class="anchor-link"></a><a href="#upgrade_110_streams_from_10">Upgrading a 1.0 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 1.0 to 1.1 does not require a broker upgrade.
         A Kafka Streams 1.1 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
@@ -686,19 +770,19 @@
         Similarly for the message format version.</li>
 </ol>
 
-<h5><a id="upgrade_102_notable" href="#upgrade_102_notable">Notable changes in 1.0.2</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_102_notable" class="anchor-link"></a><a href="#upgrade_102_notable">Notable changes in 1.0.2</a></h5>
 <ul>
     <li> New Kafka Streams configuration parameter <code>upgrade.from</code> added that allows rolling bounce upgrade from version 0.10.0.x </li>
     <li> See the <a href="/{{version}}/documentation/streams/upgrade-guide.html"><b>Kafka Streams upgrade guide</b></a> for details about this new config.</li>
 </ul>
 
-<h5><a id="upgrade_101_notable" href="#upgrade_101_notable">Notable changes in 1.0.1</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_101_notable" class="anchor-link"></a><a href="#upgrade_101_notable">Notable changes in 1.0.1</a></h5>
 <ul>
     <li>Restored binary compatibility of AdminClient's Options classes (e.g. CreateTopicsOptions, DeleteTopicsOptions, etc.) with
         0.11.0.x. Binary (but not source) compatibility had been broken inadvertently in 1.0.0.</li>
 </ul>
 
-<h5><a id="upgrade_100_notable" href="#upgrade_100_notable">Notable changes in 1.0.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_100_notable" class="anchor-link"></a><a href="#upgrade_100_notable">Notable changes in 1.0.0</a></h5>
 <ul>
     <li>Topic deletion is now enabled by default, since the functionality is now stable. Users who wish to
         retain the previous behavior should set the broker config <code>delete.topic.enable</code> to <code>false</code>. Keep in mind that topic deletion removes data and the operation is not reversible (i.e. there is no "undelete" operation).</li>
@@ -745,7 +829,7 @@
     <li>The config/consumer.properties file was updated to use the new consumer config properties.</li>
 </ul>
 
-<h5><a id="upgrade_100_new_protocols" href="#upgrade_100_new_protocols">New Protocol Versions</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_100_new_protocols" class="anchor-link"></a><a href="#upgrade_100_new_protocols">New Protocol Versions</a></h5>
 <ul>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD">KIP-112</a>: LeaderAndIsrRequest v1 introduces a partition-level <code>is_new</code> field. </li>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD">KIP-112</a>: UpdateMetadataRequest v4 introduces a partition-level <code>offline_replicas</code> field. </li>
@@ -757,7 +841,7 @@
          be used if the SaslHandshake request version is greater than 0. </li>
 </ul>
 
-<h5><a id="upgrade_100_streams_from_0110" href="#upgrade_100_streams_from_0110">Upgrading a 0.11.0 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_100_streams_from_0110" class="anchor-link"></a><a href="#upgrade_100_streams_from_0110">Upgrading a 0.11.0 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.11.0 to 1.0 does not require a broker upgrade.
          A Kafka Streams 1.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
@@ -768,7 +852,7 @@
     <li> See <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_100">Streams API changes in 1.0.0</a> for more details. </li>
 </ul>
 
-<h5><a id="upgrade_100_streams_from_0102" href="#upgrade_100_streams_from_0102">Upgrading a 0.10.2 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_100_streams_from_0102" class="anchor-link"></a><a href="#upgrade_100_streams_from_0102">Upgrading a 0.10.2 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.2 to 1.0 does not require a broker upgrade.
          A Kafka Streams 1.0 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
@@ -779,7 +863,7 @@
     <li> See <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_0110">Streams API changes in 0.11.0</a> for more details. </li>
 </ul>
 
-<h5><a id="upgrade_100_streams_from_0101" href="#upgrade_1100_streams_from_0101">Upgrading a 0.10.1 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_100_streams_from_0101" class="anchor-link"></a><a href="#upgrade_100_streams_from_0101">Upgrading a 0.10.1 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.1 to 1.0 does not require a broker upgrade.
          A Kafka Streams 1.0 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
@@ -795,7 +879,7 @@
          <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_0102">Streams API changes in 0.10.2</a> for more details. </li>
 </ul>
 
-<h5><a id="upgrade_100_streams_from_0100" href="#upgrade_100_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_100_streams_from_0100" class="anchor-link"></a><a href="#upgrade_100_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.0 to 1.0 does require a <a href="#upgrade_10_1">broker upgrade</a> because a Kafka Streams 1.0 application can only connect to 1.0, 0.11.0, 0.10.2, or 0.10.1 brokers. </li>
     <li> There are couple of API changes, that are not backward compatible (cf. <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_100">Streams API changes in 1.0.0</a>,
@@ -868,7 +952,7 @@
     before you switch to 0.11.0.</li>
 </ol>
 
-<h5><a id="upgrade_1100_streams_from_0102" href="#upgrade_1100_streams_from_0102">Upgrading a 0.10.2 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1100_streams_from_0102" class="anchor-link"></a><a href="#upgrade_1100_streams_from_0102">Upgrading a 0.10.2 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.2 to 0.11.0 does not require a broker upgrade.
          A Kafka Streams 0.11.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
@@ -876,7 +960,7 @@
     <li> See <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_0110">Streams API changes in 0.11.0</a> for more details. </li>
 </ul>
 
-<h5><a id="upgrade_1100_streams_from_0101" href="#upgrade_1100_streams_from_0101">Upgrading a 0.10.1 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1100_streams_from_0101" class="anchor-link"></a><a href="#upgrade_1100_streams_from_0101">Upgrading a 0.10.1 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.1 to 0.11.0 does not require a broker upgrade.
          A Kafka Streams 0.11.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
@@ -888,7 +972,7 @@
          <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_0102">Streams API changes in 0.10.2</a> for more details. </li>
 </ul>
 
-<h5><a id="upgrade_1100_streams_from_0100" href="#upgrade_1100_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1100_streams_from_0100" class="anchor-link"></a><a href="#upgrade_1100_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.0 to 0.11.0 does require a <a href="#upgrade_10_1">broker upgrade</a> because a Kafka Streams 0.11.0 application can only connect to 0.11.0, 0.10.2, or 0.10.1 brokers. </li>
     <li> There are couple of API changes, that are not backward compatible (cf. <a href="/{{version}}/documentation/streams#streams_api_changes_0110">Streams API changes in 0.11.0</a>,
@@ -914,13 +998,13 @@
     </li>
 </ul>
 
-<h5><a id="upgrade_1103_notable" href="#upgrade_1103_notable">Notable changes in 0.11.0.3</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1103_notable" class="anchor-link"></a><a href="#upgrade_1103_notable">Notable changes in 0.11.0.3</a></h5>
 <ul>
 <li> New Kafka Streams configuration parameter <code>upgrade.from</code> added that allows rolling bounce upgrade from version 0.10.0.x </li>
 <li> See the <a href="/{{version}}/documentation/streams/upgrade-guide.html"><b>Kafka Streams upgrade guide</b></a> for details about this new config.</li>
 </ul>
 
-<h5><a id="upgrade_1100_notable" href="#upgrade_1100_notable">Notable changes in 0.11.0.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1100_notable" class="anchor-link"></a><a href="#upgrade_1100_notable">Notable changes in 0.11.0.0</a></h5>
 <ul>
     <li>Unclean leader election is now disabled by default. The new default favors durability over availability. Users who wish to
         retain the previous behavior should set the broker config <code>unclean.leader.election.enable</code> to <code>true</code>.</li>
@@ -965,7 +1049,7 @@
     </li>
 </ul>
 
-<h5><a id="upgrade_1100_new_protocols" href="#upgrade_1100_new_protocols">New Protocol Versions</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1100_new_protocols" class="anchor-link"></a><a href="#upgrade_1100_new_protocols">New Protocol Versions</a></h5>
 <ul>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+Add+purgeDataBefore()+API+in+AdminClient">KIP-107</a>: FetchRequest v5 introduces a partition-level <code>log_start_offset</code> field. </li>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+Add+purgeDataBefore()+API+in+AdminClient">KIP-107</a>: FetchResponse v5 introduces a partition-level <code>log_start_offset</code> field. </li>
@@ -973,7 +1057,7 @@
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers">KIP-82</a>: FetchResponse v5 introduces an array of <code>header</code> in the message protocol, containing <code>key</code> field and <code>value</code> field.</li>
 </ul>
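 <p>For illustration, a minimal, hypothetical producer snippet attaching a record header as introduced by KIP-82. The topic name, header key and serializers below are placeholder assumptions:</p>
 <pre><code class="language-java">
 import java.nio.charset.StandardCharsets;
 import java.util.Properties;
 import org.apache.kafka.clients.producer.KafkaProducer;
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.apache.kafka.clients.producer.ProducerRecord;
 import org.apache.kafka.common.serialization.StringSerializer;

 public class HeadersExample {
     public static void main(String[] args) {
         Properties props = new Properties();
         props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
         try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
             ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key", "value");
             // Headers travel with the record but are not part of the key or the value
             record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));
             producer.send(record);
         }
     }
 }
 </code></pre>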
 
-<h5><a id="upgrade_11_exactly_once_semantics" href="#upgrade_11_exactly_once_semantics">Notes on Exactly Once Semantics</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_11_exactly_once_semantics" class="anchor-link"></a><a href="#upgrade_11_exactly_once_semantics">Notes on Exactly Once Semantics</a></h5>
 <p>Kafka 0.11.0 includes support for idempotent and transactional capabilities in the producer. Idempotent delivery
   ensures that messages are delivered exactly once to a particular topic partition during the lifetime of a single producer.
   Transactional delivery allows producers to send data to multiple partitions such that either all messages are successfully
@@ -996,7 +1080,7 @@
     for the full details</li>
 </ol>
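 <p>As a rough sketch of the producer-side API for the capabilities described above (bootstrap servers, topics and the transactional id are placeholder assumptions), a transactional send looks roughly like this; idempotence alone can be enabled by setting <code>enable.idempotence=true</code>:</p>
 <pre><code class="language-java">
 import java.util.Properties;
 import org.apache.kafka.clients.producer.KafkaProducer;
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.apache.kafka.clients.producer.ProducerRecord;
 import org.apache.kafka.common.serialization.StringSerializer;

 public class TransactionalSend {
     public static void main(String[] args) {
         Properties props = new Properties();
         props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
         props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "example-tx-id");    // placeholder
         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
         try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
             producer.initTransactions();
             producer.beginTransaction();
             // Either both sends become visible to read_committed consumers, or neither does
             producer.send(new ProducerRecord<>("topic-a", "k", "v1"));
             producer.send(new ProducerRecord<>("topic-b", "k", "v2"));
             producer.commitTransaction();
         }
     }
 }
 </code></pre>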
 
-<h5><a id="upgrade_11_message_format" href="#upgrade_11_message_format">Notes on the new message format in 0.11.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_11_message_format" class="anchor-link"></a><a href="#upgrade_11_message_format">Notes on the new message format in 0.11.0</a></h5>
 <p>The 0.11.0 message format includes several major enhancements in order to support better delivery semantics for the producer
   (see <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging">KIP-98</a>)
   and improved replication fault tolerance
@@ -1060,7 +1144,7 @@ Kafka cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.
 
 <p><b>Note:</b> Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
 
-<h5><a id="upgrade_1020_streams_from_0101" href="#upgrade_1020_streams_from_0101">Upgrading a 0.10.1 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1020_streams_from_0101" class="anchor-link"></a><a href="#upgrade_1020_streams_from_0101">Upgrading a 0.10.1 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.1 to 0.10.2 does not require a broker upgrade.
          A Kafka Streams 0.10.2 application can connect to 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
@@ -1070,7 +1154,7 @@ Kafka cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.
     <li> See <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_0102">Streams API changes in 0.10.2</a> for more details. </li>
 </ul>
 
-<h5><a id="upgrade_1020_streams_from_0100" href="#upgrade_1020_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1020_streams_from_0100" class="anchor-link"></a><a href="#upgrade_1020_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.0 to 0.10.2 does require a <a href="#upgrade_10_1">broker upgrade</a> because a Kafka Streams 0.10.2 application can only connect to 0.10.2 or 0.10.1 brokers. </li>
     <li> There are couple of API changes, that are not backward compatible (cf. <a href="/{{version}}/documentation/streams#streams_api_changes_0102">Streams API changes in 0.10.2</a> for more details).
@@ -1094,18 +1178,18 @@ Kafka cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.
     </li>
 </ul>
 
-<h5><a id="upgrade_10202_notable" href="#upgrade_10202_notable">Notable changes in 0.10.2.2</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_10202_notable" class="anchor-link"></a><a href="#upgrade_10202_notable">Notable changes in 0.10.2.2</a></h5>
 <ul>
 <li> New configuration parameter <code>upgrade.from</code> added that allows rolling bounce upgrade from version 0.10.0.x </li>
 </ul>
 
-<h5><a id="upgrade_10201_notable" href="#upgrade_10201_notable">Notable changes in 0.10.2.1</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_10201_notable" class="anchor-link"></a><a href="#upgrade_10201_notable">Notable changes in 0.10.2.1</a></h5>
 <ul>
   <li> The default values for two configurations of the StreamsConfig class were changed to improve the resiliency of Kafka Streams applications. The internal Kafka Streams producer <code>retries</code> default value was changed from 0 to 10. The internal Kafka Streams consumer <code>max.poll.interval.ms</code> default value was changed from 300000 to <code>Integer.MAX_VALUE</code>. A sketch of how to override these internal client settings follows this list.
   </li>
 </ul>
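 <p>If the new defaults do not suit a particular deployment, the internal clients can still be tuned via the prefixed configuration helpers. A minimal sketch with illustrative values (application id and bootstrap servers are placeholders):</p>
 <pre><code class="language-java">
 import java.util.Properties;
 import org.apache.kafka.clients.consumer.ConsumerConfig;
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.apache.kafka.streams.StreamsConfig;

 public class InternalClientOverrides {
     public static Properties streamsProps() {
         Properties props = new Properties();
         props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // placeholder
         props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
         // Override the internal producer and consumer settings mentioned above (illustrative values)
         props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 10);
         props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), 300_000);
         return props;
     }
 }
 </code></pre>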
 
-<h5><a id="upgrade_1020_notable" href="#upgrade_1020_notable">Notable changes in 0.10.2.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1020_notable" class="anchor-link"></a><a href="#upgrade_1020_notable">Notable changes in 0.10.2.0</a></h5>
 <ul>
     <li>The Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.10.2 clients
         can talk to version 0.10.0 or newer brokers. Note that some features are not available or are limited when older brokers
@@ -1125,7 +1209,7 @@ Kafka cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.
         StreamsConfig class. Users should pay attention to the default values and set these if needed. For more details, please refer to <a href="/{{version}}/documentation/#streamsconfigs">3.5 Kafka Streams Configs</a>.</li>
 </ul>
 
-<h5><a id="upgrade_1020_new_protocols" href="#upgrade_1020_new_protocols">New Protocol Versions</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1020_new_protocols" class="anchor-link"></a><a href="#upgrade_1020_new_protocols">New Protocol Versions</a></h5>
 <ul>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-88%3A+OffsetFetch+Protocol+Update">KIP-88</a>: OffsetFetchRequest v2 supports retrieval of offsets for all topics if the <code>topics</code> array is set to <code>null</code>. </li>
     <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-88%3A+OffsetFetch+Protocol+Update">KIP-88</a>: OffsetFetchResponse v2 introduces a top-level <code>error_code</code> field. </li>
@@ -1164,13 +1248,13 @@ only support 0.10.1.x or later brokers while 0.10.1.x brokers also support older
 <p><b>Note:</b> Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
 
 <!-- TODO: add when 0.10.1.2 is released
-<h5><a id="upgrade_1012_notable" href="#upgrade_1012_notable">Notable changes in 0.10.1.2</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1012_notable" class="anchor-link"></a><a href="#upgrade_1012_notable">Notable changes in 0.10.1.2</a></h5>
 <ul>
     <li> New configuration parameter <code>upgrade.from</code> added that allows rolling bounce upgrade from version 0.10.0.x </li>
 </ul>
 -->
 
-<h5><a id="upgrade_10_1_breaking" href="#upgrade_10_1_breaking">Potential breaking changes in 0.10.1.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_10_1_breaking" class="anchor-link"></a><a href="#upgrade_10_1_breaking">Potential breaking changes in 0.10.1.0</a></h5>
 <ul>
     <li> The log retention time is no longer based on the last modified time of the log segments. Instead it is based on the largest timestamp of the messages in a log segment.</li>
     <li> The log rolling time no longer depends on the log segment create time. Instead it is now based on the timestamp in the messages. More specifically, if the timestamp of the first message in the segment is T, the log will be rolled when a new message has a timestamp greater than or equal to T + log.roll.ms. </li>
@@ -1179,7 +1263,7 @@ only support 0.10.1.x or later brokers while 0.10.1.x brokers also support older
     <li> Due to the increased number of index files, on brokers with a large number of log segments (e.g. >15K), the log loading process during broker startup could take longer. Based on our experiments, setting num.recovery.threads.per.data.dir to one may reduce the log loading time. </li>
 </ul>
 
-<h5><a id="upgrade_1010_streams_from_0100" href="#upgrade_1010_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1010_streams_from_0100" class="anchor-link"></a><a href="#upgrade_1010_streams_from_0100">Upgrading a 0.10.0 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 0.10.0 to 0.10.1 does require a <a href="#upgrade_10_1">broker upgrade</a> because a Kafka Streams 0.10.1 application can only connect to 0.10.1 brokers. </li>
     <li> There are couple of API changes, that are not backward compatible (cf. <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_0101">Streams API changes in 0.10.1</a> for more details).
@@ -1203,7 +1287,7 @@ only support 0.10.1.x or later brokers while 0.10.1.x brokers also support older
     </li>
 </ul>
 
-<h5><a id="upgrade_1010_notable" href="#upgrade_1010_notable">Notable changes in 0.10.1.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1010_notable" class="anchor-link"></a><a href="#upgrade_1010_notable">Notable changes in 0.10.1.0</a></h5>
 <ul>
     <li> The new Java consumer is no longer in beta and we recommend it for all new development. The old Scala consumers are still supported, but they will be deprecated in the next release
          and will be removed in a future major release. A minimal example of the Java consumer follows this list. </li>
@@ -1234,7 +1318,7 @@ only support 0.10.1.x or later brokers while 0.10.1.x brokers also support older
          the request is sent to avoid starvation issues. </li>
 </ul>
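 <p>For readers moving off the old Scala consumers, a minimal sketch of the Java consumer (bootstrap servers, group id and topic are placeholder assumptions):</p>
 <pre><code class="language-java">
 import java.time.Duration;
 import java.util.Collections;
 import java.util.Properties;
 import org.apache.kafka.clients.consumer.ConsumerConfig;
 import org.apache.kafka.clients.consumer.ConsumerRecords;
 import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.common.serialization.StringDeserializer;

 public class SimpleConsumer {
     public static void main(String[] args) {
         Properties props = new Properties();
         props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
         props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder
         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
             consumer.subscribe(Collections.singletonList("my-topic"));        // placeholder topic
             while (true) {
                 ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                 records.forEach(r -> System.out.printf("%s -> %s%n", r.key(), r.value()));
             }
         }
     }
 }
 </code></pre>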
 
-<h5><a id="upgrade_1010_new_protocols" href="#upgrade_1010_new_protocols">New Protocol Versions</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_1010_new_protocols" class="anchor-link"></a><a href="#upgrade_1010_new_protocols">New Protocol Versions</a></h5>
 <ul>
     <li> ListOffsetRequest v1 supports accurate offset search based on timestamps (a client-side lookup example follows this list). </li>
     <li> MetadataResponse v2 introduces a new field: "cluster_id". </li>
@@ -1243,7 +1327,7 @@ only support 0.10.1.x or later brokers while 0.10.1.x brokers also support older
     <li> JoinGroup v1 introduces a new field: "rebalance_timeout". </li>
 </ul>
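 <p>On the client side, timestamp-based offset search surfaces through the consumer's <code>offsetsForTimes</code> method. A rough sketch (topic, partition and the one-hour lookback are placeholder assumptions):</p>
 <pre><code class="language-java">
 import java.time.Duration;
 import java.util.Collections;
 import java.util.Map;
 import java.util.Properties;
 import org.apache.kafka.clients.consumer.ConsumerConfig;
 import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
 import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.serialization.ByteArrayDeserializer;

 public class OffsetsByTimestamp {
     public static void main(String[] args) {
         Properties props = new Properties();
         props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
         try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
             TopicPartition tp = new TopicPartition("my-topic", 0);            // placeholder
             long oneHourAgo = System.currentTimeMillis() - Duration.ofHours(1).toMillis();
             // Look up the earliest offset whose timestamp is at or after the given time
             Map<TopicPartition, OffsetAndTimestamp> offsets =
                 consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
             OffsetAndTimestamp result = offsets.get(tp);
             if (result != null) {
                 System.out.println("First offset at or after timestamp: " + result.offset());
             }
         }
     }
 }
 </code></pre>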
 
-<h4><a id="upgrade_10" href="#upgrade_10">Upgrading from 0.8.x or 0.9.x to 0.10.0.0</a></h4>
+<h4 class="anchor-heading"><a id="upgrade_10" class="anchor-link"></a><a href="#upgrade_10">Upgrading from 0.8.x or 0.9.x to 0.10.0.0</a></h4>
 <p>
 0.10.0.0 has <a href="#upgrade_10_breaking">potential breaking changes</a> (please review before upgrading) and possible <a href="#upgrade_10_performance_impact">  performance impact following the upgrade</a>. By following the recommended rolling upgrade plan below, you guarantee no downtime and no performance impact during and following the upgrade.
 <br>
@@ -1276,7 +1360,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
 
 <p><b>Note:</b> Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
 
-<h5><a id="upgrade_10_performance_impact" href="#upgrade_10_performance_impact">Potential performance impact following upgrade to 0.10.0.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_10_performance_impact" class="anchor-link"></a><a href="#upgrade_10_performance_impact">Potential performance impact following upgrade to 0.10.0.0</a></h5>
 <p>
     The message format in 0.10.0 includes a new timestamp field and uses relative offsets for compressed messages.
     The on-disk message format can be configured through log.message.format.version in the server.properties file.
@@ -1317,7 +1401,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
 
 </p>
 
-<h5><a id="upgrade_10_breaking" href="#upgrade_10_breaking">Potential breaking changes in 0.10.0.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_10_breaking" class="anchor-link"></a><a href="#upgrade_10_breaking">Potential breaking changes in 0.10.0.0</a></h5>
 <ul>
     <li> Starting from Kafka 0.10.0.0, the message format version in Kafka is represented as the Kafka version. For example, message format 0.9.0 refers to the highest message version supported by Kafka 0.9.0. </li>
     <li> Message format 0.10.0 has been introduced and it is used by default. It includes a timestamp field in the messages and relative offsets are used for compressed messages. </li>
@@ -1339,7 +1423,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
          should use interoperable LZ4f framing. A list of interoperable LZ4 libraries is available at http://www.lz4.org/
 </ul>
 
-<h5><a id="upgrade_10_notable" href="#upgrade_10_notable">Notable changes in 0.10.0.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_10_notable" class="anchor-link"></a><a href="#upgrade_10_notable">Notable changes in 0.10.0.0</a></h5>
 
 <ul>
     <li> Starting from Kafka 0.10.0.0, a new client library named <b>Kafka Streams</b> is available for stream processing on data stored in Kafka topics. This new client library only works with 0.10.x and upward versioned brokers due to message format changes mentioned above. For more information please read <a href="/{{version}}/documentation/streams">Streams documentation</a>.</li>
@@ -1366,7 +1450,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
 
 <p><b>Note:</b> Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
 
-<h5><a id="upgrade_9_breaking" href="#upgrade_9_breaking">Potential breaking changes in 0.9.0.0</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_9_breaking" class="anchor-link"></a><a href="#upgrade_9_breaking">Potential breaking changes in 0.9.0.0</a></h5>
 
 <ul>
     <li> Java 1.6 is no longer supported. </li>
@@ -1384,7 +1468,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
     <li> By default, all command line tools will print all logging messages to stderr instead of stdout. </li>
 </ul>
 
-<h5><a id="upgrade_901_notable" href="#upgrade_901_notable">Notable changes in 0.9.0.1</a></h5>
+<h5 class="anchor-heading"><a id="upgrade_901_notable" class="anchor-link"></a><a href="#upgrade_901_notable">Notable changes in 0.9.0.1</a></h5>
 
 <ul>
     <li> The new broker id generation feature can be disabled by setting broker.id.generation.enable to false. </li>
@@ -1401,15 +1485,15 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
     <li> The producer config block.on.buffer.full has been deprecated and will be removed in a future release. Its default value has been changed to false. The KafkaProducer will no longer throw BufferExhaustedException; instead it will use the max.block.ms value to block, after which it will throw a TimeoutException. If the block.on.buffer.full property is set to true explicitly, it will set max.block.ms to Long.MAX_VALUE and metadata.fetch.timeout.ms will not be honoured. A producer sketch using max.block.ms follows this list.</li>
 </ul>
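 <p>A minimal, illustrative producer configuration bounding blocking via <code>max.block.ms</code> rather than relying on the deprecated <code>block.on.buffer.full</code> (bootstrap servers, topic and the 10-second bound are placeholder assumptions):</p>
 <pre><code class="language-java">
 import java.util.Properties;
 import org.apache.kafka.clients.producer.KafkaProducer;
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.apache.kafka.clients.producer.ProducerRecord;
 import org.apache.kafka.common.serialization.StringSerializer;

 public class BoundedBlockingProducer {
     public static void main(String[] args) {
         Properties props = new Properties();
         props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
         // Bound how long send() may block when the buffer is full or metadata is unavailable
         props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10_000);
         try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
             producer.send(new ProducerRecord<>("my-topic", "key", "value"));   // placeholder topic
         }
     }
 }
 </code></pre>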
 
-<h4><a id="upgrade_82" href="#upgrade_82">Upgrading from 0.8.1 to 0.8.2</a></h4>
+<h4 class="anchor-heading"><a id="upgrade_82" class="anchor-link"></a><a href="#upgrade_82">Upgrading from 0.8.1 to 0.8.2</a></h4>
 
 0.8.2 is fully compatible with 0.8.1. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.
 
-<h4><a id="upgrade_81" href="#upgrade_81">Upgrading from 0.8.0 to 0.8.1</a></h4>
+<h4 class="anchor-heading"><a id="upgrade_81" class="anchor-link"></a><a href="#upgrade_81">Upgrading from 0.8.0 to 0.8.1</a></h4>
 
 0.8.1 is fully compatible with 0.8. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.
 
-<h4><a id="upgrade_7" href="#upgrade_7">Upgrading from 0.7</a></h4>
+<h4 class="anchor-heading"><a id="upgrade_7" class="anchor-link"></a><a href="#upgrade_7">Upgrading from 0.7</a></h4>
 
 Release 0.7 is incompatible with newer releases. Major changes were made to the API, ZooKeeper data structures, protocol, and configuration in order to add replication (which was missing in 0.7). The upgrade from 0.7 to later versions requires a <a href="https://cwiki.apache.org/confluence/display/KAFKA/Migrating+from+0.7+to+0.8">special tool</a> for migration. This migration can be done without downtime.
 
diff --git a/docs/uses.html b/docs/uses.html
index 09bc45f..51f16a8 100644
--- a/docs/uses.html
+++ b/docs/uses.html
@@ -18,7 +18,7 @@
 <p> Here is a description of a few of the popular use cases for Apache Kafka&reg;.
 For an overview of a number of these areas in action, see <a href="https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying/">this blog post</a>. </p>
 
-<h4><a id="uses_messaging" href="#uses_messaging">Messaging</a></h4>
+<h4 class="anchor-heading"><a id="uses_messaging" class="anchor-link"></a><a href="#uses_messaging">Messaging</a></h4>
 
 Kafka works well as a replacement for a more traditional message broker.
 Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc).
@@ -31,7 +31,7 @@ durability guarantees Kafka provides.
 In this domain Kafka is comparable to traditional messaging systems such as <a href="http://activemq.apache.org">ActiveMQ</a> or
 <a href="https://www.rabbitmq.com">RabbitMQ</a>.
 
-<h4><a id="uses_website" href="#uses_website">Website Activity Tracking</a></h4>
+<h4 class="anchor-heading"><a id="uses_website" class="anchor-link"></a><a href="#uses_website">Website Activity Tracking</a></h4>
 
 The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds.
 This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type.
@@ -40,12 +40,12 @@ offline data warehousing systems for offline processing and reporting.
 <p>
 Activity tracking is often very high volume as many activity messages are generated for each user page view.
 
-<h4><a id="uses_metrics" href="#uses_metrics">Metrics</a></h4>
+<h4 class="anchor-heading"><a id="uses_metrics" class="anchor-link"></a><a href="#uses_metrics">Metrics</a></h4>
 
 Kafka is often used for operational monitoring data.
 This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.
 
-<h4><a id="uses_logs" href="#uses_logs">Log Aggregation</a></h4>
+<h4 class="anchor-heading"><a id="uses_logs" class="anchor-link"></a><a href="#uses_logs">Log Aggregation</a></h4>
 
 Many people use Kafka as a replacement for a log aggregation solution.
 Log aggregation typically collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing.
@@ -55,7 +55,7 @@ This allows for lower-latency processing and easier support for multiple data so
 In comparison to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees due to replication,
 and much lower end-to-end latency.
 
-<h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processing</a></h4>
+<h4 class="anchor-heading"><a id="uses_streamprocessing" class="anchor-link"></a><a href="#uses_streamprocessing">Stream Processing</a></h4>
 
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
@@ -68,12 +68,12 @@ is available in Apache Kafka to perform such data processing as described above.
 Apart from Kafka Streams, alternative open source stream processing tools include <a href="https://storm.apache.org/">Apache Storm</a> and
 <a href="http://samza.apache.org/">Apache Samza</a>.
 
-<h4><a id="uses_eventsourcing" href="#uses_eventsourcing">Event Sourcing</a></h4>
+<h4 class="anchor-heading"><a id="uses_eventsourcing" class="anchor-link"></a><a href="#uses_eventsourcing">Event Sourcing</a></h4>
 
 <a href="http://martinfowler.com/eaaDev/EventSourcing.html">Event sourcing</a> is a style of application design where state changes are logged as a
 time-ordered sequence of records. Kafka's support for very large stored log data makes it an excellent backend for an application built in this style.
 
-<h4><a id="uses_commitlog" href="#uses_commitlog">Commit Log</a></h4>
+<h4 class="anchor-heading"><a id="uses_commitlog" class="anchor-link"></a><a href="#uses_commitlog">Commit Log</a></h4>
 
 Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing
 mechanism for failed nodes to restore their data.

