kafka-commits mailing list archives

From guozh...@apache.org
Subject [kafka] branch 1.0 updated: Fix Streams navigation, menu order, and missing maven content (#4403)
Date Tue, 09 Jan 2018 17:24:32 GMT
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
     new 90856c6  Fix Streams navigation, menu order, and missing maven content (#4403)
90856c6 is described below

commit 90856c6122cc30c91949548dbcf904d3d532fd71
Author: Joel Hamill <11722533+joel-hamill@users.noreply.github.com>
AuthorDate: Tue Jan 9 09:24:28 2018 -0800

    Fix Streams navigation, menu order, and missing maven content (#4403)
    
    * Fix Streams navigation, menu order, and missing maven content
    
    Reviewers: Guozhang Wang <wangguoz@gmail.com>
---
 docs/streams/architecture.html                     |   15 +-
 docs/streams/core-concepts.html                    |   22 +-
 docs/streams/developer-guide.html                  | 3026 ------------------
 docs/streams/developer-guide/app-reset-tool.html   |  173 ++
 docs/streams/developer-guide/datatypes.html        |  223 ++
 docs/streams/developer-guide/dsl-api.html          | 3208 ++++++++++++++++++++
 docs/streams/developer-guide/index.html            |  104 +
 .../developer-guide/interactive-queries.html       |  530 ++++
 docs/streams/developer-guide/memory-mgmt.html      |  241 ++
 docs/streams/developer-guide/processor-api.html    |  437 +++
 docs/streams/developer-guide/running-app.html      |  197 ++
 docs/streams/developer-guide/security.html         |  176 ++
 docs/streams/developer-guide/write-streams.html    |  248 ++
 docs/streams/index.html                            |   18 +-
 docs/streams/quickstart.html                       |   22 +-
 docs/streams/tutorial.html                         |   22 +-
 docs/streams/upgrade-guide.html                    |   15 +-
 17 files changed, 5611 insertions(+), 3066 deletions(-)

diff --git a/docs/streams/architecture.html b/docs/streams/architecture.html
index 0dbb1dc..69d1330 100644
--- a/docs/streams/architecture.html
+++ b/docs/streams/architecture.html
@@ -19,6 +19,19 @@
 
 <script id="content-template" type="text/x-handlebars-template">
     <h1>Architecture</h1>
+    <div class="sub-nav-sticky">
+        <div class="sticky-top">
+            <div style="height:35px">
+                <a href="/{{version}}/documentation/streams/">Introduction</a>
+                <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+                <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a class="active-menu-item" href="/{{version}}/documentation/streams/architecture">Architecture</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
+                <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
+            </div>
+        </div>
+    </div>
 
     Kafka Streams simplifies application development by building on the Kafka producer and consumer libraries and leveraging the native capabilities of
     Kafka to offer data parallelism, distributed coordination, fault tolerance, and operational simplicity. In this section, we describe how Kafka Streams works underneath the covers.
@@ -131,7 +144,7 @@
 
     <div class="pagination">
         <a href="/{{version}}/documentation/streams/core-concepts" class="pagination__btn pagination__btn__prev">Previous</a>
-        <a href="/{{version}}/documentation/streams/upgrade-guide" class="pagination__btn pagination__btn__next">Next</a>
+        <a href="/{{version}}/documentation/streams/developer-guide" class="pagination__btn pagination__btn__next">Next</a>
     </div>
 </script>
 
diff --git a/docs/streams/core-concepts.html b/docs/streams/core-concepts.html
index dbc3f21..d803b3a 100644
--- a/docs/streams/core-concepts.html
+++ b/docs/streams/core-concepts.html
@@ -20,16 +20,18 @@
 <script id="content-template" type="text/x-handlebars-template">
     <h1>Core Concepts</h1>
     <div class="sub-nav-sticky">
-      <div class="sticky-top">
-        <div style="height:35px">
-          <a href="/{{version}}/documentation/streams/">Introduction</a>
-          <a href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-          <a class="active-menu-item" href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-          <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-          <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+        <div class="sticky-top">
+            <div style="height:35px">
+                <a href="/{{version}}/documentation/streams/">Introduction</a>
+                <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+                <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+                <a class="active-menu-item" href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
+                <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
+            </div>
         </div>
-      </div>
-  </div> 
+    </div>
     <p>
         Kafka Streams is a client library for processing and analyzing data stored in Kafka.
         It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management and real-time querying of application state.
@@ -167,7 +169,7 @@
     </p>
 
     <div class="pagination">
-        <a href="/{{version}}/documentation/streams/developer-guide" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/tutorial" class="pagination__btn pagination__btn__prev">Previous</a>
         <a href="/{{version}}/documentation/streams/architecture" class="pagination__btn pagination__btn__next">Next</a>
     </div>
 </script>
diff --git a/docs/streams/developer-guide.html b/docs/streams/developer-guide.html
deleted file mode 100644
index 5730f53..0000000
--- a/docs/streams/developer-guide.html
+++ /dev/null
@@ -1,3026 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="../js/templateData.js" --></script>
-
-<script id="content-template" type="text/x-handlebars-template">
-    <h1>Developer Guide for Kafka Streams API</h1>
-    <div class="sub-nav-sticky">
-      <div class="sticky-top">
-        <div style="height:35px">
-          <a href="/{{version}}/documentation/streams/">Introduction</a>
-          <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-          <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-          <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-          <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
-        </div>
-      </div>
-  </div> 
-    <p>
-        This developer guide describes how to write, configure, and execute a Kafka Streams application. There is a <a href="/{{version}}/documentation/#quickstart_kafkastreams">quickstart</a> example that shows how to run a stream processing program coded in the Kafka Streams library.
-    </p>
-
-    <p>
-        The computational logic of a Kafka Streams application is defined as a <a href="/{{version}}/documentation/streams/core-concepts#streams_topology">processor topology</a>. Kafka Streams provides two sets of APIs to define the processor topology: the Low-Level Processor API and the High-Level Streams DSL.
-    </p>
-
-    <ul class="toc">
-        <li><a href="#streams_processor">1. Low-level Processor API</a>
-            <ul>
-                <li><a href="#streams_processor_process">1.1 Processor</a>
-                <li><a href="#streams_processor_topology">1.2 Processor Topology</a>
-                <li><a href="#streams_processor_statestore">1.3 State Stores</a>
-                <li><a href="#restoration_progress">1.4 Monitoring the Restoration Progress of Fault-tolerant State Store</a>
-                <li><a href="#disable-changelogs">1.5 Enable / Disable Fault Tolerance of State Stores (Store Changelogs)</a>
-                <li><a href="#implementing-custom-state-stores">1.6 Implementing Custom State Stores</a>
-                <li><a href="#connecting-processors-and-state-stores">1.7 Connecting Processors and State Stores</a>
-                <li><a href="#streams_processor_describe">1.5 Describe a Topology</a>
-            </ul>
-        </li>
-        <li><a href="#streams_dsl">2. High-Level Streams DSL</a>
-            <ul>
-                <li><a href="#streams_duality">2.1 Duality of Streams and Tables</a>
-                <li><a href="#streams_dsl_source">2.2 Creating Source Streams from Kafka</a>
-                <li><a href="#streams_dsl_transform">2.3 Transform a stream</a>
-                <li><a href="#streams_dsl_sink">2.4 Write streams back to Kafka</a>
-                <li><a href="#streams_dsl_build">2.5 Generate the processor topology</a>
-            </ul>
-        </li>
-        <li><a href="#streams_interactive_queries">3. Interactive Queries</a>
-            <ul>
-                <li><a href="#streams_developer-guide_interactive-queries_your_app">3.1 Your application and interactive queries</a>
-                <li><a href="#streams_developer-guide_interactive-queries_local-stores">3.2 Querying local state stores (for an application instance)</a>
-                <li><a href="#streams_developer-guide_interactive-queries_local-key-value-stores">3.3 Querying local key-value stores</a>
-                <li><a href="#streams_developer-guide_interactive-queries_local-window-stores">3.4 Querying local window stores</a>
-                <li><a href="#streams_developer-guide_interactive-queries_custom-stores">3.5 Querying local custom state stores</a>
-                <li><a href="#streams_developer-guide_interactive-queries_discovery">3.6 Querying remote state stores (for the entire application)</a>
-                <li><a href="#streams_developer-guide_interactive-queries_rpc-layer">3.7 Adding an RPC layer to your application</a>
-                <li><a href="#streams_developer-guide_interactive-queries_expose-rpc">3.8 Exposing the RPC endpoints of your application</a>
-                <li><a href="#streams_developer-guide_interactive-queries_discover-app-instances-and-stores">3.9 Discovering and accessing application instances and their respective local state stores</a>
-            </ul>
-        </li>
-        <li><a href="#streams_developer-guide_memory-management">4. Memory Management</a>
-            <ul>
-                <li><a href="#streams_developer-guide_memory-management_record-cache">4.1 Record caches in the DSL</a>
-                <li><a href="#streams_developer-guide_memory-management_state-store-cache">4.2 State store caches in the Processor API</a>
-                <li><a href="#streams_developer-guide_memory-management_other_memory_usage">4.3 Other memory usage</a>
-            </ul>
-        </li>
-        <li><a href="#streams_configure_execute">5. Application Configuration and Execution</a>
-            <ul>
-                <li><a href="#streams_client_config">5.1 Producer and Consumer Configuration</a>
-                <li><a href="#streams_broker_config">5.2 Broker Configuration</a>
-                <li><a href="#streams_topic_config">5.3 Internal Topic Configuration</a>
-                <li><a href="#streams_execute">5.4 Executing Your Kafka Streams Application</a>
-            </ul>
-        </li>
-    </ul>
-    <p>
-        There is a <a href="/{{version}}/documentation/#quickstart_kafkastreams">quickstart</a> example that shows how to run a stream processing program coded in the Kafka Streams library.
-        This section focuses on how to write, configure, and execute a Kafka Streams application.
-    </p>
-
-    <p>
-        As we have mentioned above, the computational logic of a Kafka Streams application is defined as a <a href="/{{version}}/documentation/streams/core-concepts#streams_topology">processor topology</a>.
-        Currently Kafka Streams provides two sets of APIs to define the processor topology, which will be described in the subsequent sections.
-    </p>
-
-    <h3><a id="streams_processor" href="#streams_processor">Low-Level Processor API</a></h3>
-
-    <h4><a id="streams_processor_process" href="#streams_processor_process">Processor</a></h4>
-
-    <p>
-        A <a href="/{{version}}/documentation/streams/core-concepts"><b>stream processor</b></a> is a node in the processor topology that represents a single processing step.
-        With the <code>Processor</code> API, you can define arbitrary stream processors that process one received record at a time, and connect these processors with
-        their associated state stores to compose the processor topology that represents their customized processing logic.
-    </p>
-
-    <p>
-        The <code>Processor</code> interface provides the <code>process</code> method, which is invoked on each received record.
-        The processor can keep the <code>ProcessorContext</code> instance, received in the <code>init</code> method, as an instance variable
-        and use the context to schedule a periodically called punctuation function (<code>context().schedule</code>),
-        to forward new or modified key-value pairs to downstream processors (<code>context().forward</code>),
-        to commit the current processing progress (<code>context().commit</code>), and so on.
-    </p>
-
-    <p>
-        The following example <code>Processor</code> implementation defines a simple word-count algorithm:
-    </p>
-
-    <pre class="brush: java;">
-    public class MyProcessor implements Processor&lt;String, String&gt; {
-        private ProcessorContext context;
-        private KeyValueStore&lt;String, Long&gt; kvStore;
-
-        @Override
-        @SuppressWarnings("unchecked")
-        public void init(ProcessorContext context) {
-            // keep the processor context locally because we need it in punctuate() and commit()
-            this.context = context;
-
-            // schedule a punctuation method every 1000 milliseconds.
-            this.context.schedule(1000, PunctuationType.WALL_CLOCK_TIME, new Punctuator() {
-                @Override
-                public void punctuate(long timestamp) {
-                    // note: refer to the outer kvStore field; inside this anonymous class
-                    // `this.kvStore` would not compile
-                    KeyValueIterator&lt;String, Long&gt; iter = kvStore.all();
-
-                    while (iter.hasNext()) {
-                        KeyValue&lt;String, Long&gt; entry = iter.next();
-                        context.forward(entry.key, entry.value.toString());
-                    }
-
-                    // it is the caller's responsibility to close the iterator on the state store;
-                    // otherwise it may lead to memory and file handle leaks depending on the
-                    // underlying state store implementation.
-                    iter.close();
-
-                    // commit the current processing progress
-                    context.commit();
-                }
-            });
-
-            // retrieve the key-value store named "Counts"
-            this.kvStore = (KeyValueStore&lt;String, Long&gt;) context.getStateStore("Counts");
-        }
-
-        @Override
-        public void process(String dummy, String line) {
-            String[] words = line.toLowerCase().split(" ");
-
-            for (String word : words) {
-                Long oldValue = this.kvStore.get(word);
-
-                if (oldValue == null) {
-                    this.kvStore.put(word, 1L);
-                } else {
-                    this.kvStore.put(word, oldValue + 1L);
-                }
-            }
-        }
-
-        @Override
-        public void punctuate(long timestamp) {
-            // deprecated interface method, superseded by the Punctuator scheduled in init()
-            KeyValueIterator&lt;String, Long&gt; iter = this.kvStore.all();
-
-            while (iter.hasNext()) {
-                KeyValue&lt;String, Long&gt; entry = iter.next();
-                context.forward(entry.key, entry.value.toString());
-            }
-
-            iter.close(); // avoid resource leaks
-            // commit the current processing progress
-            context.commit();
-        }
-
-        @Override
-        public void close() {
-            // close any resources managed by this processor.
-            // Note: do not close any StateStores, as these are managed by the library
-        }
-    }
-    </pre>
-
-    <p>
-        In the previous example, the following actions are performed:
-    </p>
-
-    <ul>
-        <li>In the <code>init</code> method, schedule the punctuation every 1 second and retrieve the local state store by its name "Counts".</li>
-        <li>In the <code>process</code> method, upon each received record, split the value string into words, and update their counts into the state store (we will talk about this feature later in the section).</li>
-        <li>In the scheduled <code>punctuate</code> method, iterate the local state store and send the aggregated counts to the downstream processor, and commit the current stream state.</li>
-        <li>When done with the <code>KeyValueIterator&lt;String, Long&gt;</code> you <em>must</em> close the iterator, as shown above, or use a try-with-resources statement (see the note under State Stores below).</li>
-    </ul>
-
-
-    <h4><a id="streams_processor_topology" href="#streams_processor_topology">Processor Topology</a></h4>
-
-    <p>
-        With the customized processors defined in the Processor API, you can use <code>Topology</code> to build a processor topology
-        by connecting these processors together:
-    </p>
-
-    <pre class="brush: java;">
-    Topology topology = new Topology();
-
-    topology.addSource("SOURCE", "src-topic")
-        // add "PROCESS1" node which takes the source processor "SOURCE" as its upstream processor
-        .addProcessor("PROCESS1", () -> new MyProcessor1(), "SOURCE")
-
-        // add "PROCESS2" node which takes "PROCESS1" as its upstream processor
-        .addProcessor("PROCESS2", () -> new MyProcessor2(), "PROCESS1")
-
-        // add "PROCESS3" node which takes "PROCESS1" as its upstream processor
-        .addProcessor("PROCESS3", () -> new MyProcessor3(), "PROCESS1")
-
-        // add the sink processor node "SINK1" that takes Kafka topic "sink-topic1"
-        // as output and the "PROCESS1" node as its upstream processor
-        .addSink("SINK1", "sink-topic1", "PROCESS1")
-
-        // add the sink processor node "SINK2" that takes Kafka topic "sink-topic2"
-        // as output and the "PROCESS2" node as its upstream processor
-        .addSink("SINK2", "sink-topic2", "PROCESS2")
-
-        // add the sink processor node "SINK3" that takes Kafka topic "sink-topic3"
-        // as output and the "PROCESS3" node as its upstream processor
-        .addSink("SINK3", "sink-topic3", "PROCESS3");
-    </pre>
-
-    Here is a quick walk-through of the previous code to build the topology:
-
-    <ul>
-        <li>A source node (<code>"SOURCE"</code>) is added to the topology using the <code>addSource</code> method, with one Kafka topic (<code>"src-topic"</code>) fed to it.</li>
-        <li>Three processor nodes are then added using the <code>addProcessor</code> method; here the first processor is a child of the source node, but is the parent of the other two processors.</li>
-        <li>Three sink nodes are added to complete the topology using the <code>addSink</code> method, each piping from a different parent processor node and writing to a separate topic.</li>
-    </ul>
-
-<h4><a id="streams_processor_statestore" href="#streams_processor_statestore">State Stores</a></h4>
-
-<p>
-To make state stores fault-tolerant (e.g., to recover from machine crashes) as well as to allow for state store migration without data loss (e.g., to migrate a stateful stream task from one machine to another when elastically adding or removing capacity from your application), a state store can be <strong>continuously backed up</strong> to a Kafka topic behind the scenes. 
-We sometimes refer to this topic as the state store's associated <em>changelog topic</em> or simply its <em>changelog</em>. 
-In the case of a machine failure, for example, the state store and thus the application's state can be fully restored from its changelog. 
-You can enable or disable this backup feature for a state store, and thus its fault tolerance.
-</p>
-
-<p>
-By default, persistent <strong>key-value stores</strong> are fault-tolerant. 
-They are backed by a <a href="https://kafka.apache.org/documentation.html#compaction">compacted</a> changelog topic. 
-The purpose of compacting this topic is to prevent the topic from growing indefinitely, to reduce the storage consumed in the associated Kafka cluster, and to minimize recovery time if a state store needs to be restored from its changelog topic.
-</p>
-
-<p>
-Similarly, persistent <strong>window stores</strong> are fault-tolerant. 
-They are backed by a topic that uses both <em>compaction</em> and <em>deletion</em>. 
-Using deletion in addition to compaction is required for the changelog topics of window stores because of the structure of the message keys that are being sent to the changelog topics: for window stores, the message keys are composite keys that include not only the &quot;normal&quot; key but also window timestamps. 
-For such composite keys it would not be sufficient to enable just compaction in order to prevent a changelog topic from growing out of bounds. 
-With deletion enabled, old windows that have expired will be cleaned up by Kafka's log cleaner as the log segments expire. 
-The default retention setting is <code>Windows#maintainMs()</code> + 1 day. This setting can be overridden by specifying <code>StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG</code> in the <code>StreamsConfig</code>.
-</p>
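-
-<p>
-For illustration, here is a minimal sketch of overriding this retention via <code>StreamsConfig</code>; it assumes the remaining required properties (such as <code>application.id</code> and <code>bootstrap.servers</code>) are set elsewhere:
-</p>
-
-<pre class="brush: java;">
-  import java.util.Properties;
-  import org.apache.kafka.streams.StreamsConfig;
-
-  Properties props = new Properties();
-  // ... application.id, bootstrap.servers, and other required settings ...
-
-  // keep window changelog data for an extra day beyond the window retention
-  props.put(StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG, 24 * 60 * 60 * 1000L);
-</pre>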
-
-<p>
-One additional note regarding the use of state stores: any time you open an <code>Iterator</code> from a state store, you <em>must</em> call <code>close()</code> on the iterator
-when you are done working with it in order to reclaim resources, or use the iterator from within a try-with-resources statement.
-Failing to close an iterator may lead to an OOM error.
-</p>
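-
-<p>
-For example, a minimal try-with-resources sketch, assuming the <code>kvStore</code> and <code>context</code> fields from the word-count processor shown earlier:
-</p>
-
-<pre class="brush: java;">
-  // KeyValueIterator implements Closeable, so try-with-resources
-  // closes it even if the loop body throws
-  try (KeyValueIterator&lt;String, Long&gt; iter = kvStore.all()) {
-      while (iter.hasNext()) {
-          KeyValue&lt;String, Long&gt; entry = iter.next();
-          context.forward(entry.key, entry.value.toString());
-      }
-  }
-</pre>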
-
-
-<h4><a id="restoration_progress" href="#restoration_progress">Monitoring the Restoration Progress of Fault-tolerant State Stores</a></h4>
-
-<p>
-When starting up your application, fault-tolerant state stores usually don't need a restoration process, as the persisted state can be read from local disk. 
-But there could be situations when a full restore from the backing changelog topic is required (e.g., a failure wiped out the local state, or your application runs in a stateless environment and persisted data is lost on restarts).
-</p>
-
-<p>
-If you have a significant amount of data in the changelog topic, the restoration process could take a non-negligible amount of time. 
-Given that processing of new data won't start until the restoration process is completed, having a window into the progress of restoration is useful.
-</p>
-
-<p>
-To observe the restoration of all state stores, you provide your application with an instance of the <code>org.apache.kafka.streams.processor.StateRestoreListener</code> interface. 
-You set the <code>org.apache.kafka.streams.processor.StateRestoreListener</code> by calling the <code>KafkaStreams#setGlobalStateRestoreListener</code> method.
-</p>
-
-<p>
- A basic implementation example that prints restoration status to the console:
-</p>
-
-<pre class="brush: java;">
-  import org.apache.kafka.common.TopicPartition;
-  import org.apache.kafka.streams.processor.StateRestoreListener;
-
-   public class ConsoleGlobalRestoreListener implements StateRestoreListener {
-
-      @Override
-      public void onRestoreStart(final TopicPartition topicPartition,
-                                 final String storeName,
-                                 final long startingOffset,
-                                 final long endingOffset) {
-
-          System.out.print("Started restoration of " + storeName + " partition " + topicPartition.partition());
-          System.out.println(" total records to be restored " + (endingOffset - startingOffset));
-      }
-
-      @Override
-      public void onBatchRestored(final TopicPartition topicPartition,
-                                  final String storeName,
-                                  final long batchEndOffset,
-                                  final long numRestored) {
-
-          System.out.println("Restored batch " + numRestored + " for " + storeName + " partition " + topicPartition.partition());
-
-      }
-
-      @Override
-      public void onRestoreEnd(final TopicPartition topicPartition,
-                               final String storeName,
-                               final long totalRestored) {
-
-          System.out.println("Restoration complete for " + storeName + " partition " + topicPartition.partition());
-      }
-  }
-</pre>
-
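-<p>
-To register the listener, a sketch along the following lines can be used, assuming <code>topology</code> and <code>props</code> are the application's topology and configuration properties:
-</p>
-
-<pre class="brush: java;">
-  import org.apache.kafka.streams.KafkaStreams;
-
-  KafkaStreams streams = new KafkaStreams(topology, props);
-  // register the restore listener before starting the application
-  streams.setGlobalStateRestoreListener(new ConsoleGlobalRestoreListener());
-  streams.start();
-</pre>
-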
-<blockquote>
-<p>
-  The <code>StateRestoreListener</code> instance is shared across all <code>org.apache.kafka.streams.processor.internals.StreamThread</code> instances and it is assumed all methods are stateless. 
-  If any stateful operations are desired, then the user will need to provide synchronization internally.
-</p>
-</blockquote>
-
-<h4> <a id="disable-changelogs" href="#disable-changelogs">Enable / Disable Fault Tolerance of State Stores (Store Changelogs)</a></h4>
-
-<p>
-    You can enable or disable fault tolerance for a state store by enabling or disabling, respectively, the changelogging of the store through <code>StoreBuilder#withLoggingEnabled(Map&lt;String, String&gt;)</code>
-    and <code>StoreBuilder#withLoggingDisabled()</code>.
-    You can also fine-tune the associated topic’s configuration if needed.
-</p>
-
-<p>Example for disabling fault-tolerance:</p>
-
-<pre class="brush: java;">
-
-  import org.apache.kafka.common.serialization.Serdes;
-  import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
-  import org.apache.kafka.streams.state.KeyValueStore;
-  import org.apache.kafka.streams.state.StoreBuilder;
-  import org.apache.kafka.streams.state.Stores;
-
-  KeyValueBytesStoreSupplier countStoreSupplier = Stores.inMemoryKeyValueStore("Counts");
-  StoreBuilder&lt;KeyValueStore&lt;String, Long&gt;&gt; builder =
-      Stores.keyValueStoreBuilder(countStoreSupplier, Serdes.String(), Serdes.Long())
-            .withLoggingDisabled(); // disable backing up the store to a changelog topic
-
-</pre>
-
-<blockquote>
-<p>If the changelog is disabled, then the attached state store is no longer fault tolerant and it can't have any standby replicas.</p>
-</blockquote>
-
-<p>
-   Example for enabling fault tolerance, with additional changelog-topic configuration: you can add any log config
-   from kafka.log.LogConfig (see core/src/main/scala/kafka/log/LogConfig.scala). Unrecognized configs will be ignored.
-</p>
-
-<pre class="brush: java;">
-
-  import java.util.HashMap;
-  import java.util.Map;
-  import org.apache.kafka.common.serialization.Serdes;
-  import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
-  import org.apache.kafka.streams.state.KeyValueStore;
-  import org.apache.kafka.streams.state.StoreBuilder;
-  import org.apache.kafka.streams.state.Stores;
-
-  Map&lt;String, String&gt; changelogConfig = new HashMap&lt;&gt;();
-  // override min.insync.replicas
-  changelogConfig.put("min.insync.replicas", "1");
-
-  KeyValueBytesStoreSupplier countStoreSupplier = Stores.inMemoryKeyValueStore("Counts");
-  StoreBuilder&lt;KeyValueStore&lt;String, Long&gt;&gt; builder =
-      Stores.keyValueStoreBuilder(countStoreSupplier, Serdes.String(), Serdes.Long())
-            .withLoggingEnabled(changelogConfig); // enable changelogging, with custom changelog settings
-
-
-</pre>
-
-<h4><a id="implementing-custom-state-stores" href="#implementing-custom-state-stores">Implementing custom State Stores</a></h4>
-
-<p>
- Apart from using the built-in state store types, you can also implement your own. 
- The primary interface to implement for the store is <code>org.apache.kafka.streams.processor.StateStore</code>. 
- Beyond that, Kafka Streams also has a few extended interfaces such as <code>KeyValueStore</code>.
-</p>
-
-<p>
-  In addition to the actual store, you also need to provide a &quot;factory&quot; for the store by implementing the <code>org.apache.kafka.streams.state.StoreSupplier</code> interface, which Kafka Streams uses to create instances of your store.
-</p>
-
-<p>
-  You also have the option of providing an <code>org.apache.kafka.streams.processor.StateRestoreCallback</code> instance used to restore the state store from its backing changelog topic. 
-  This is done via the <code>org.apache.kafka.streams.processor.ProcessorContext#register</code> call inside the <code>StateStore#init</code> call.
-</p>
-
-<pre class="brush: java;">
-  public void init(ProcessorContext context, StateStore store) {
-      context.register(store, false, stateRestoreCallbackInstance);
-  }
-</pre>
-
-<p>
-  There is an additional interface <code>org.apache.kafka.streams.processor.BatchingStateRestoreCallback</code> that provides bulk restoration semantics vs. the single record-at-a-time restoration semantics offered by the <code>StateRestoreCallback</code> interface.
-</p>
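-
-<p>
-  As a sketch, a direct implementation of the batching interface could look as follows; the store-specific bulk write is only indicated by a comment:
-</p>
-
-<pre class="brush: java;">
-  import java.util.Collection;
-  import java.util.Collections;
-  import org.apache.kafka.streams.KeyValue;
-  import org.apache.kafka.streams.processor.BatchingStateRestoreCallback;
-
-  public class MyBatchingRestoreCallback implements BatchingStateRestoreCallback {
-
-      @Override
-      public void restoreAll(final Collection&lt;KeyValue&lt;byte[], byte[]&gt;&gt; records) {
-          // bulk-write the raw changelog records into the underlying store
-      }
-
-      @Override
-      public void restore(final byte[] key, final byte[] value) {
-          // single-record restore; delegate to the bulk path
-          restoreAll(Collections.singletonList(KeyValue.pair(key, value)));
-      }
-  }
-</pre>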
-
-<p>
-  Additionally, there are two abstract classes that implement <code>StateRestoreCallback</code> or <code>BatchingStateRestoreCallback</code> in conjunction with the <code>org.apache.kafka.streams.processor.StateRestoreListener</code> interface (<code>org.apache.kafka.streams.processor.AbstractNotifyingRestoreCallback</code> and <code>org.apache.kafka.streams.processor.AbstractNotifyingBatchingRestoreCallback</code> respectively) that provide the ability for the state store to receive notifi [...]
-  The <code>StateRestoreListener</code> in this case is per state store instance and is used for internal purposes such as updating config settings based on the status of the restoration process.
-</p>
-
-<h4><a id="connecting-processors-and-state-stores" href="#connecting-processors-and-state-stores">Connecting Processors and State Stores</a></h4>
-
-<p>
-Now that we have defined a processor (WordCountProcessor) and the state stores, we can construct the processor topology by connecting these processors and state stores together using the <code>Topology</code> instance. 
-In addition, users can add <em>source processors</em> with the specified Kafka topics to generate input data streams into the topology, and <em>sink processors</em> with the specified Kafka topics to generate output data streams out of the topology.
-</p>
-
-<pre class="brush: java;">
-      Topology topology = new Topology();
-
-      // add the source processor node that takes Kafka topic "source-topic" as input
-      topology.addSource("Source", "source-topic")
-
-          // add the WordCountProcessor node which takes the source processor as its upstream processor
-          .addProcessor("Process", () -> new WordCountProcessor(), "Source")
-
-          // add the count store associated with the WordCountProcessor processor
-          .addStateStore(countStoreSupplier, "Process")
-
-          // add the sink processor node that takes Kafka topic "sink-topic" as output
-          // and the WordCountProcessor node as its upstream processor
-          .addSink("Sink", "sink-topic", "Process");
-</pre>
-
-<p>There are several steps in the above implementation to build the topology, and here is a quick walk-through:</p>
-<ul>
-   <li>A source processor node named &quot;Source&quot; is added to the topology using the <code>addSource</code> method, with one Kafka topic &quot;source-topic&quot; fed to it.</li>
-   <li>A processor node named &quot;Process&quot; with the pre-defined <code>WordCountProcessor</code> logic is then added as the downstream processor of the &quot;Source&quot; node using the <code>addProcessor</code> method.</li>
-   <li>A predefined persistent key-value state store is created and associated with the &quot;Process&quot; node, using <code>countStoreSupplier</code>.</li>
-   <li>A sink processor node is then added to complete the topology using the <code>addSink</code> method, taking the &quot;Process&quot; node as its upstream processor and writing to a separate &quot;sink-topic&quot; Kafka topic.</li>
-</ul>
-
-<p>
-In this topology, the &quot;Process&quot; stream processor node is considered a downstream processor of the &quot;Source&quot; node, and an upstream processor of the &quot;Sink&quot; node. 
-As a result, whenever the &quot;Source&quot; node forwards a newly fetched record from Kafka to its downstream &quot;Process&quot; node, the <code>WordCountProcessor#process()</code> method is triggered to process the record and update the associated state store; and whenever <code>context#forward()</code> is called in the <code>Punctuator#punctuate()</code> method, the aggregate key-value pair will be sent via the &quot;Sink&quot; processor node to the Kafka topic &quot;sink-topic&quot;.
-Note that in the <code>WordCountProcessor</code> implementation, users need to refer to the same store name &quot;Counts&quot; when accessing the key-value store; otherwise an exception will be thrown at runtime, indicating that the state store cannot be found; also, if the state store itself is not associated with the processor in the <code>Topology</code> code, accessing it in the processor's <code>init()</code> method will also throw an exception at runtime, indicating the state store [...]
-</p>
-
-
-    <h4><a id="streams_processor_describe" href="#streams_processor_describe">Describe a <code>Topology</code></a></h4>
-
-    <p>
-        After a <code>Topology</code> is specified, it is possible to retrieve a description of the corresponding DAG via <code>Topology#describe()</code>, which returns a <code>TopologyDescription</code>.
-        A <code>TopologyDescription</code> contains all added source, processor, and sink nodes as well as all attached stores.
-        You can access the specified input and output topic names and patterns for source and sink nodes.
-        For processor nodes, the attached stores are added to the description.
-        Additionally, all nodes have a list of all their connected successor and predecessor nodes.
-        Thus, <code>TopologyDescription</code> allows you to retrieve the DAG structure of the specified topology.
-        <br />
-        Note that global stores are listed explicitly because they are accessible by all nodes without the need to explicitly connect them.
-        Furthermore, nodes are grouped by <code>Sub-topologies</code>, where each sub-topology is a group of processor nodes that are directly connected to each other (i.e., either by a direct connection&mdash;but not a topic&mdash;or by sharing a store).
-        During execution, each <code>Sub-topology</code> will be processed by <a href="/{{version}}/documentation/streams/architecture#streams_architecture_tasks">one or multiple tasks</a>.
-        Thus, each <code>Sub-topology</code> describes an independent unit of work that can be executed by different threads in parallel.
-        <br />
-        Describing a <code>Topology</code> before starting your streams application with the specified topology is helpful to reason about tasks, and thus about the maximum parallelism (we will talk about how to execute your written application later in this section).
-        It is also helpful to get insight into a <code>Topology</code> if it is not specified directly as described above but via the Kafka Streams DSL (we will describe the DSL in the next section).
-    </p>
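-
-    <p>
-        For example, a short sketch of retrieving and printing the description of a topology built as shown above:
-    </p>
-
-    <pre class="brush: java;">
-    Topology topology = new Topology();
-    // ... add sources, processors, state stores, and sinks as shown above ...
-
-    // retrieve the DAG description and print it
-    TopologyDescription description = topology.describe();
-    System.out.println(description);
-    </pre>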
-
-    In the next section we present another way to build the processor topology: the Kafka Streams DSL.
-    <br>
-
-    <h3><a id="streams_dsl" href="#streams_dsl">High-Level Streams DSL</a></h3>
-
-    To build a <code>Topology</code> using the Streams DSL, developers can use the <code>StreamsBuilder</code> class.
-    A simple example is included with the source code for Kafka in the <code>streams/examples</code> package. The rest of this section will walk
-    through some code to demonstrate the key steps in creating a topology using the Streams DSL, but we recommend developers read the full example
-    source code for details.
-
-    <h4><a id="streams_duality" href="#streams_duality">Duality of Streams and Tables</a></h4>
-
-    <p>
-        Before we discuss concepts such as aggregations in Kafka Streams we must first introduce tables, and most importantly the relationship between tables and streams:
-        the so-called <a href="https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying/">stream-table duality</a>.
-        Essentially, this duality means that a stream can be viewed as a table, and vice versa. Kafka's log compaction feature, for example, exploits this duality.
-    </p>
-
-    <p>
-        A simple form of a table is a collection of key-value pairs, also called a map or associative array. Such a table may look as follows:
-    </p>
-    <img class="centered" src="/{{version}}/images/streams-table-duality-01.png">
-
-    The <b>stream-table duality</b> describes the close relationship between streams and tables.
-    <ul>
-        <li><b>Stream as Table</b>: A stream can be considered a changelog of a table, where each data record in the stream captures a state change of the table. A stream is thus a table in disguise, and it can be easily turned into a "real" table by replaying the changelog from beginning to end to reconstruct the table. Similarly, in a more general analogy, aggregating data records in a stream - such as computing the total number of pageviews by user from a stream of pageview events - w [...]
-        <li><b>Table as Stream</b>: A table can be considered a snapshot, at a point in time, of the latest value for each key in a stream (a stream's data records are key-value pairs). A table is thus a stream in disguise, and it can be easily turned into a "real" stream by iterating over each key-value entry in the table.</li>
-    </ul>
-
-    <p>
-        Let's illustrate this with an example. Imagine a table that tracks the total number of pageviews by user (first column of diagram below). Over time, whenever a new pageview event is processed, the state of the table is updated accordingly. Here, the state changes between different points in time - and different revisions of the table - can be represented as a changelog stream (second column).
-    </p>
-    <img class="centered" src="/{{version}}/images/streams-table-duality-02.png" style="width:300px">
-
-    <p>
-        Interestingly, because of the stream-table duality, the same stream can be used to reconstruct the original table (third column):
-    </p>
-    <img class="centered" src="/{{version}}/images/streams-table-duality-03.png" style="width:600px">
-
-    <p>
-        The same mechanism is used, for example, to replicate databases via change data capture (CDC) and, within Kafka Streams, to replicate its so-called state stores across machines for fault-tolerance.
-        The stream-table duality is such an important concept that Kafka Streams models it explicitly via the <a href="#streams_kstream_ktable">KStream, KTable, and GlobalKTable</a> interfaces, which we describe in the next sections.
-    </p>
-
-    <h5><a id="streams_kstream_ktable" href="#streams_kstream_ktable">KStream, KTable, and GlobalKTable</a></h5>
-    The DSL uses three main abstractions. A <b>KStream</b> is an abstraction of a record stream, where each data record represents a self-contained datum in the unbounded data set.
-    A <b>KTable</b> is an abstraction of a changelog stream, where each data record represents an update. More precisely, the value in a data record is considered to be an update of the last value for the same record key,
-    if any (if a corresponding key doesn't exist yet, the update will be considered a create).
-    Like a <b>KTable</b>, a <b>GlobalKTable</b> is an abstraction of a changelog stream, where each data record represents an update.
-    However, a <b>GlobalKTable</b> is different from a <b>KTable</b> in that it is fully replicated on each KafkaStreams instance.
-    <b>GlobalKTable</b> also provides the ability to look up current values of data records by keys.
-    This table-lookup functionality is available through <a href="#streams_dsl_joins">join operations</a>.
-
-    To illustrate the difference between KStreams and KTables/GlobalKTables, let's imagine the following two data records are being sent to the stream:
-
-    <pre>
-    ("alice", 1) --> ("alice", 3)
-    </pre>
-
-    If the stream is defined as a KStream and the stream processing application were to sum the values it would return <code>4</code>. If the stream is defined as a KTable or GlobalKTable, the return would be <code>3</code>, since the last record would be considered as an update.
-
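-    <p>
-        As a sketch, the summing record-stream interpretation could be expressed as follows; the topic name and serdes here are assumed for illustration:
-    </p>
-
-    <pre class="brush: java;">
-    StreamsBuilder builder = new StreamsBuilder();
-
-    KStream&lt;String, Integer&gt; stream = builder.stream(
-        "user-updates-topic" /* assumed topic name */,
-        Consumed.with(Serdes.String(), Serdes.Integer()));
-
-    // record-stream interpretation: ("alice", 1) followed by ("alice", 3) sums to ("alice", 4);
-    // a table interpretation of the same records would instead retain only the latest value 3
-    KTable&lt;String, Integer&gt; sums = stream
-        .groupByKey()
-        .reduce((v1, v2) -> v1 + v2);
-    </pre>
-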
-    <h4><a id="streams_dsl_source" href="#streams_dsl_source">Creating Source Streams from Kafka</a></h4>
-
-    <p>
-    You can easily read data from Kafka topics into your application. We support the following operations.
-    </p>
-    <table class="data-table" border="1">
-        <tbody><tr>
-            <th>Reading from Kafka</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td><b>Stream</b>: input topic(s) &rarr; <code>KStream</code></td>
-            <td>Create a <code>KStream</code> from the specified Kafka input topic(s), interpreting the data as a record stream.
-                A <code>KStream</code> represents a partitioned record stream.
-                <p>
-                    Slightly simplified, in the case of a KStream, the local KStream instance of every application instance will be populated
-                    with data from only a <b>subset</b> of the partitions of the input topic. Collectively, i.e. across all application instances,
-                    all the partitions of the input topic will be read and processed.
-                </p>
-                <pre class="brush: java;">
-                    import org.apache.kafka.common.serialization.Serdes;
-                    import org.apache.kafka.streams.Consumed;
-                    import org.apache.kafka.streams.StreamsBuilder;
-                    import org.apache.kafka.streams.kstream.KStream;
-
-                    StreamsBuilder builder = new StreamsBuilder();
-
-                    KStream&lt;String, Long&gt; wordCounts = builder.stream(
-                        "word-counts-input-topic", /* input topic */
-                        Consumed.with(Serdes.String(), Serdes.Long())); /* define key and value serdes */
-                </pre>
-                When to provide serdes explicitly:
-                <ul>
-                    <li>If you do not specify serdes explicitly, the default serdes from the configuration are used.</li>
-                    <li>You must specify serdes explicitly if the key and/or value types of the records in the Kafka input topic(s) do not match
-                        the configured default serdes.</li>
-                </ul>
-                Several variants of <code>stream</code> exist, e.g., to specify a regex pattern for input topics to read from.</td>
-        </tr>
-        <tr>
-            <td><b>Table</b>: input topic(s) &rarr; <code>KTable</code></td>
-            <td>
-                Reads the specified Kafka input topic into a <code>KTable</code>. The topic is interpreted as a changelog stream,
-                where records with the same key are interpreted as UPSERT aka INSERT/UPDATE (when the record value is not <code>null</code>) or
-                as DELETE (when the value is null) for that key.
-                <p>
-                    Slightly simplified, in the case of a KTable, the local KTable instance of every application instance will be populated
-                    with data from only a subset of the partitions of the input topic. Collectively, i.e. across all application instances, all
-                    the partitions of the input topic will be read and processed.
-                </p>
-                <p>
-                You may provide an optional name for the table (more precisely, for the internal state store that backs the table).
-                When a name is provided, the table can be queried using <a href="#streams_interactive_queries">interactive queries</a>.
-                When a name is not provided, the table will not be queryable and an internal name will be provided for the state store.
-                </p>
-                <pre class="brush: java;">
-                    import org.apache.kafka.common.serialization.Serdes;
-                    import org.apache.kafka.streams.Consumed;
-                    import org.apache.kafka.streams.StreamsBuilder;
-                    import org.apache.kafka.streams.kstream.KTable;
-                    import org.apache.kafka.streams.kstream.Materialized;
-
-                    StreamsBuilder builder = new StreamsBuilder();
-
-                    KTable&lt;String, Long&gt; wordCounts = builder.table(
-                        "word-counts-input-topic", /* input topic */
-                        Consumed.with(Serdes.String(), Serdes.Long()), /* key and value serdes */
-                        Materialized.as("word-counts-partitioned-store")); /* table/store name */
-                </pre>
-
-                When to provide serdes explicitly:
-                <ul>
-                    <li>If you do not specify serdes explicitly, the default serdes from the configuration are used.</li>
-                    <li>You must specify serdes explicitly if the key and/or value types of the records in the Kafka input topic do not
-                        match the configured default serdes.</li>
-                </ul>
-
-                Several variants of <code>table</code> exist, e.g., to specify the <code>auto.offset.reset</code>
-                policy to be used when reading from the input topic.
-            </td>
-        </tr>
-        <tr>
-            <td><b>Global Table</b>: input topic &rarr; <code>GlobalKTable</code></td>
-            <td>
-                Reads the specified Kafka input topic into a <code>GlobalKTable</code>. The topic is interpreted as a changelog stream, where records
-                with the same key are interpreted as UPSERT aka INSERT/UPDATE (when the record value is not <code>null</code>) or as DELETE (when the
-                value is <code>null</code>) for that key.
-                <p>
-                    Slightly simplified, in the case of a GlobalKTable, the local GlobalKTable instance of every application instance will be
-                    populated with data from all the partitions of the input topic. In other words, when using a global table, every application
-                    instance will get its own, full copy of the topic's data.
-                </p>
-                <p>
-                You may provide an optional name for the table (more precisely, for the internal state store that backs the table).
-                When a name is provided, the table can be queried using <a href="#streams_interactive_queries">interactive queries</a>.
-                When a name is not provided, the table will not be queryable and an internal name will be provided for the state store.
-                </p>
-                <pre class="brush: java;">
-                    import org.apache.kafka.common.serialization.Serdes;
-                    import org.apache.kafka.streams.Consumed;
-                    import org.apache.kafka.streams.StreamsBuilder;
-                    import org.apache.kafka.streams.kstream.GlobalKTable;
-                    import org.apache.kafka.streams.kstream.Materialized;
-
-                    StreamsBuilder builder = new StreamsBuilder();
-
-                    GlobalKTable&lt;String, Long&gt; wordCounts = builder.globalTable(
-                        "word-counts-input-topic", /* input topic */
-                        Consumed.with(Serdes.String(), Serdes.Long()), /* key and value serdes */
-                        Materialized.as("word-counts-global-store")); /* table/store name */
-                </pre>
-
-                When to provide serdes explicitly:
-                <ul>
-                    <li>If you do not specify serdes explicitly, the default serdes from the configuration are used.</li>
-                    <li>You must specify serdes explicitly if the key and/or value types of the records in the Kafka input topic do not
-                        match the configured default serdes.</li>
-                </ul>
-                Several variants of <code>globalTable</code> exist, e.g., to specify explicit serdes.
-
-            </td>
-        </tr>
-        </tbody>
-    </table>
-
-    <h4><a id="streams_dsl_transform" href="#streams_dsl_transform">Transform a stream</a></h4>
-    <p>
-    <code>KStream</code> and <code>KTable</code> support a variety of transformation operations. Each of these operations
-    can be translated into one or more connected processors in the underlying processor topology. Since <code>KStream</code>
-    and <code>KTable</code> are strongly typed, all these transformation operations are defined as generic functions where
-    users can specify the input and output data types.
-    </p>
-    <p>
-    Some <code>KStream</code> transformations may generate one or more <code>KStream</code> objects (e.g., filter and
-    map on <code>KStream</code> generate another <code>KStream</code>, while branch on <code>KStream</code> can generate
-    multiple <code>KStream</code> instances) while some others may generate a <code>KTable</code> object (e.g., aggregation), interpreted
-    as the changelog stream of the resulting relation. This allows Kafka Streams to continuously update the computed value upon arrival
-    of late records after the result has already been produced to the downstream transformation operators. As for <code>KTable</code>,
-    all its transformation operations can only generate another <code>KTable</code> (though the Kafka Streams DSL does
-    provide a special function to convert a <code>KTable</code> representation into a <code>KStream</code>, which we will
-    describe later). Nevertheless, all these transformation methods can be chained together to compose a complex processor topology.
-    </p>
-    <p>
-    We describe these transformation operations in the following subsections, categorizing them into two categories:
-    stateless and stateful transformations.
-    </p>
-    <h5><a id="streams_dsl_transformations_stateless" href="#streams_dsl_transformations_stateless">Stateless transformations</a></h5>
-    <p>
-    Stateless transformations, by definition, do not depend on any state for processing, and hence implementation-wise they do not
-    require a state store associated with the stream processor.
-    </p>
-    <table class="data-table" border="1">
-        <tbody><tr>
-            <th>Transformation</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td><b>Branch</b>: <code>KStream &rarr; KStream</code></td>
-            <td>
-                <p>
-                Branch (or split) a <code>KStream</code> based on the supplied predicates into one or more <code>KStream</code> instances.
-                </p>
-                <p>
-                Predicates are evaluated in order. A record is placed in one and only one output stream on the first match:
-                if the n-th predicate evaluates to true, the record is placed in the n-th stream. If no predicate matches,
-                the record is dropped.
-                </p>
-                <p>
-                Branching is useful, for example, to route records to different downstream topics.
-                </p>
-                <pre class="brush: java;">
-                    KStream&lt;String, Long&gt; stream = ...;
-                    KStream&lt;String, Long&gt;[] branches = stream.branch(
-                            (key, value) -> key.startsWith("A"), /* first predicate  */
-                            (key, value) -> key.startsWith("B"), /* second predicate */
-                            (key, value) -> true                 /* third predicate  */
-                    );
-                    // KStream branches[0] contains all records whose keys start with "A"
-                    // KStream branches[1] contains all records whose keys start with "B"
-                    // KStream branches[2] contains all other records
-                    // Java 7 example: cf. `filter` for how to create `Predicate` instances
-            </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Filter</b>: <code>KStream &rarr; KStream or KTable &rarr; KTable</code></td>
-            <td>
-                <p>
-                Evaluates a boolean function for each element and retains those for which the function returns true.
-                </p>
-                <pre class="brush: java;">
-                     KStream&lt;String, Long&gt; stream = ...;
-                     KTable&lt;String, Long&gt; table = ...;
-                     // A filter that selects (keeps) only positive numbers
-                     // Java 8+ example, using lambda expressions
-                     KStream&lt;String, Long&gt; onlyPositives = stream.filter((key, value) -> value > 0);
-
-                     // Java 7 example
-                     KStream&lt;String, Long&gt; onlyPositives = stream.filter(
-                       new Predicate&lt;String, Long&gt;() {
-                         @Override
-                         public boolean test(String key, Long value) {
-                           return value > 0;
-                         }
-                       });
-
-                    // A filter on a KTable that materializes the result into a StateStore
-                    table.filter((key, value) -> value != 0, Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as("filtered"));
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Inverse Filter</b>: <code>KStream &rarr; KStream or KTable &rarr; KTable</code></td>
-            <td>
-                <p>
-                Evaluates a boolean function for each element and drops those for which the function returns true.
-                </p>
-                <pre class="brush: java;">
-                     KStream&lt;String, Long&gt; stream = ...;
-
-                     // An inverse filter that discards any negative numbers or zero
-                     // Java 8+ example, using lambda expressions
-                     KStream&lt;String, Long&gt; onlyPositives = stream.filterNot((key, value) -> value <= 0);
-
-                     // Java 7 example
-                     KStream&lt;String, Long&gt; onlyPositives = stream.filterNot(
-                      new Predicate&lt;String, Long&gt;() {
-                        @Override
-                        public boolean test(String key, Long value) {
-                            return value <= 0;
-                        }
-                     });
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>FlatMap</b>: <code>KStream &rarr; KStream </code></td>
-            <td>
-                <p>
-                Takes one record and produces zero, one, or more records. You can modify the record keys and values,
-                including their types.
-                </p>
-
-                <p>
-                Marks the stream for data re-partitioning: Applying a grouping or a join after <code>flatMap</code> will result in
-                re-partitioning of the records. If possible use <code>flatMapValues</code> instead, which will not cause data re-partitioning.
-                </p>
-                <pre class="brush: java;">
-                     KStream&lt;Long, String&gt; stream = ...;
-                     KStream&lt;String, Integer&gt; transformed = stream.flatMap(
-                         // Here, we generate two output records for each input record.
-                         // We also change the key and value types.
-                         // Example: (345L, "Hello") -> ("HELLO", 1000), ("hello", 9000)
-                         (key, value) -> {
-                             List&lt;KeyValue&lt;String, Integer&gt;&gt; result = new LinkedList&lt;&gt;();
-                             result.add(KeyValue.pair(value.toUpperCase(), 1000));
-                             result.add(KeyValue.pair(value.toLowerCase(), 9000));
-                             return result;
-                         }
-                     );
-                     // Java 7 example: cf. `map` for how to create `KeyValueMapper` instances
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>FlatMap (values only)</b>: <code>KStream &rarr; KStream </code></td>
-            <td>
-                <p>
-                Takes one record and produces zero, one, or more records, while retaining the key of the original record.
-                You can modify the record values and the value type.
-                </p>
-                <p>
-                <code>flatMapValues</code> is preferable to <code>flatMap</code> because it will not cause data re-partitioning. However,
-                it does not allow you to modify the key or key type like <code>flatMap</code> does.
-                </p>
-                <pre class="brush: java;">
-                   // Split a sentence into words.
-                   KStream&lt;byte[], String&gt; sentences = ...;
-                   KStream&lt;byte[], String&gt; words = sentences.flatMapValues(value -> Arrays.asList(value.split("\\s+")));
-
-                   // Java 7 example: cf. `mapValues` for how to create `ValueMapper` instances
-               </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Foreach</b>: <code>KStream &rarr; void </code></td>
-            <td>
-                <p>
-                Terminal operation. Performs a stateless action on each record.
-                </p>
-                <p>
-                Note on processing guarantees: Any side effects of an action (such as writing to external systems)
-                are not trackable by Kafka, which means they will typically not benefit from Kafka's processing guarantees.
-                </p>
-                <pre class="brush: java;">
-                       KStream&lt;String, Long&gt; stream = ...;
-
-                       // Print the contents of the KStream to the local console.
-                       // Java 8+ example, using lambda expressions
-                       stream.foreach((key, value) -> System.out.println(key + " => " + value));
-
-                       // Java 7 example
-                       stream.foreach(
-                           new ForeachAction&lt;String, Long&gt;() {
-                               @Override
-                               public void apply(String key, Long value) {
-                                 System.out.println(key + " => " + value);
-                               }
-                       });
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>GroupByKey</b>: <code>KStream &rarr; KGroupedStream </code></td>
-            <td>
-                <p>
-                Groups the records by the existing key.
-                </p>
-                <p>
-                Grouping is a prerequisite for aggregating a stream or a table and ensures that data is properly
-                partitioned ("keyed") for subsequent operations.
-                </p>
-                <p>
-                <b>When to set explicit serdes</b>: Variants of <code>groupByKey</code> exist to override the configured default serdes of
-                your application, which you must do if the key and/or value types of the resulting <code>KGroupedStream</code> do
-                not match the configured default serdes.
-                </p>
-                <p>
-                <b>Note:</b>
-                Grouping vs. Windowing: A related operation is windowing, which lets you control how to "sub-group" the
-                grouped records of the same key into so-called windows for stateful operations such as windowed aggregations
-                or windowed joins.
-                </p>
-                <p>
-                Causes data re-partitioning if and only if the stream was marked for re-partitioning. <code>groupByKey</code> is
-                preferable to <code>groupBy</code> because it re-partitions data only if the stream was already marked for re-partitioning.
-                However, <code>groupByKey</code> does not allow you to modify the key or key type like <code>groupBy</code> does.
-                </p>
-                <pre class="brush: java;">
-                       KStream&lt;byte[], String&gt; stream = ...;
-
-                       // Group by the existing key, using the application's configured
-                       // default serdes for keys and values.
-                       KGroupedStream&lt;byte[], String&gt; groupedStream = stream.groupByKey();
-
-                       // When the key and/or value types do not match the configured
-                       // default serdes, we must explicitly specify serdes.
-                       KGroupedStream&lt;byte[], String&gt; groupedStream = stream.groupByKey(
-                           Serialized.with(
-                                Serdes.ByteArray(), /* key */
-                                Serdes.String())     /* value */
-                       );
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>GroupBy</b>: <code>KStream &rarr; KGroupedStream or KTable &rarr; KGroupedTable</code></td>
-            <td>
-                <p>
-                Groups the records by a new key, which may be of a different key type. When grouping a table,
-                you may also specify a new value and value type. <code>groupBy</code> is a shorthand for <code>selectKey(...).groupByKey()</code>.
-                </p>
-                <p>
-                Grouping is a prerequisite for aggregating a stream or a table and ensures that data is properly
-                partitioned ("keyed") for subsequent operations.
-                </p>
-                <p>
-                <b>When to set explicit serdes</b>: Variants of <code>groupBy</code> exist to override the configured default serdes of your
-                application, which you must do if the key and/or value types of the resulting <code>KGroupedStream</code> or
-                <code>KGroupedTable</code> do not match the configured default serdes.
-                </p>
-                <p>
-                <b>Note:</b>
-                Grouping vs. Windowing: A related operation is windowing, which lets you control how to "sub-group" the
-                grouped records of the same key into so-called windows for stateful operations such as windowed aggregations
-                or windowed joins.
-                </p>
-                <p>
-                <b>Always causes data re-partitioning:</b> <code>groupBy</code> always causes data re-partitioning. If possible use <code>groupByKey</code>
-                instead, which will re-partition data only if required.
-                </p>
-                <pre class="brush: java;">
-                       KStream&lt;byte[], String&gt; stream = ...;
-                       KTable&lt;byte[], String&gt; table = ...;
-
-                       // Java 8+ examples, using lambda expressions
-
-                       // Group the stream by a new key and key type
-                       KGroupedStream&lt;String, String&gt; groupedStream = stream.groupBy(
-                           (key, value) -> value,
-                           Serialized.with(
-                                Serdes.String(), /* key (note: type was modified) */
-                                Serdes.String())  /* value */
-                       );
-
-                       // Group the table by a new key and key type, and also modify the value and value type.
-                       KGroupedTable&lt;String, Integer&gt; groupedTable = table.groupBy(
-                           (key, value) -> KeyValue.pair(value, value.length()),
-                           Serialized.with(
-                               Serdes.String(), /* key (note: type was modified) */
-                               Serdes.Integer()) /* value (note: type was modified) */
-                       );
-
-
-                       // Java 7 examples
-
-                       // Group the stream by a new key and key type
-                       KGroupedStream&lt;String, String&gt; groupedStream = stream.groupBy(
-                           new KeyValueMapper&lt;byte[], String, String&gt;() {
-                               @Override
-                               public String apply(byte[] key, String value) {
-                                  return value;
-                               }
-                           },
-                           Serialized.with(
-                                Serdes.String(), /* key (note: type was modified) */
-                                Serdes.String())  /* value */
-                       );
-
-                       // Group the table by a new key and key type, and also modify the value and value type.
-                       KGroupedTable&lt;String, Integer&gt; groupedTable = table.groupBy(
-                            new KeyValueMapper&lt;byte[], String, KeyValue&lt;String, Integer&gt;&gt;() {
-                            @Override
-                                public KeyValue&lt;String, Integer&gt; apply(byte[] key, String value) {
-                                   return KeyValue.pair(value, value.length());
-                                }
-                            },
-                            Serialized.with(
-                                Serdes.String(), /* key (note: type was modified) */
-                                Serdes.Integer()) /* value (note: type was modified) */
-                       );
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Map</b>: <code>KStream &rarr; KStream</code></td>
-            <td>
-                <p>
-                Takes one record and produces one record. You can modify the record key and value, including their types.
-                </p>
-
-                <p>
-                <b>Marks the stream for data re-partitioning:</b> Applying a grouping or a join after <code>map</code> will result in
-                re-partitioning of the records. If possible use <code>mapValues</code> instead, which will not cause data re-partitioning.
-                </p>
-
-                <pre class="brush: java;">
-                       KStream&lt;byte[], String&gt; stream = ...;
-
-                       // Java 8+ example, using lambda expressions
-                       // Note how we change the key and the key type (similar to `selectKey`)
-                       // as well as the value and the value type.
-                       KStream&lt;String, Integer&gt; transformed = stream.map(
-                           (key, value) -> KeyValue.pair(value.toLowerCase(), value.length()));
-
-                       // Java 7 example
-                       KStream&lt;String, Integer&gt; transformed = stream.map(
-                           new KeyValueMapper&lt;byte[], String, KeyValue&lt;String, Integer&gt;&gt;() {
-                           @Override
-                           public KeyValue&lt;String, Integer&gt; apply(byte[] key, String value) {
-                               return new KeyValue&lt;&gt;(value.toLowerCase(), value.length());
-                           }
-                       });
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Map (values only)</b>: <code>KStream &rarr; KStream or KTable &rarr; KTable</code></td>
-            <td>
-                <p>
-                Takes one record and produces one record, while retaining the key of the original record. You can modify
-                the record value and the value type.
-                </p>
-                <p>
-                <code>mapValues</code> is preferable to <code>map</code> because it will not cause data re-partitioning. However, it does not
-                allow you to modify the key or key type like <code>map</code> does.
-                </p>
-
-                <pre class="brush: java;">
-                       KStream&lt;byte[], String&gt; stream = ...;
-                       KTable&lt;String, String&gt; table = ...;
-
-                       // Java 8+ example, using lambda expressions
-                       KStream&lt;byte[], String&gt; uppercased = stream.mapValues(value -> value.toUpperCase());
-
-                       // Java 7 example
-                       KStream&lt;byte[], String&gt; uppercased = stream.mapValues(
-                          new ValueMapper&lt;String, String&gt;() {
-                          @Override
-                          public String apply(String s) {
-                             return s.toUpperCase();
-                          }
-                       });
-
-                       // mapValues on a KTable and also materialize the results into a state store
-                       table.mapValues(value -> value.toUpperCase(), Materialized.&lt;String, String, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as("uppercased"));
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Print</b>: <code>KStream &rarr; void </code></td>
-            <td>
-                <p>
-                Terminal operation. Prints the records to <code>System.out</code>. See Javadocs for serde and <code>toString()</code> caveats.
-                </p>
-                <pre class="brush: java;">
-                       KStream&lt;byte[], String&gt; stream = ...;
-                       stream.print();
-                    
-                       // You can also override how and where the data is printed, e.g., to a file:
-                       stream.print(Printed.toFile("stream.out"));
-
-                       // with a custom KeyValueMapper and label
-                       stream.print(Printed.toSysOut()
-                                .withLabel("my-stream")
-                                .withKeyValueMapper((key, value) -> key + " -> " + value));
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>SelectKey</b>: <code>KStream &rarr; KStream</code></td>
-            <td>
-                <p>
-                Assigns a new key, possibly of a new key type, to each record.
-                </p>
-                <p>
-                Marks the stream for data re-partitioning: Applying a grouping or a join after <code>selectKey</code> will result in
-                re-partitioning of the records.
-                </p>
-
-                <pre class="brush: java;">
-                       KStream&lt;byte[], String&gt; stream = ...;
-
-                       // Derive a new record key from the record's value.  Note how the key type changes, too.
-                       // Java 8+ example, using lambda expressions
-                       KStream&lt;String, String&gt; rekeyed = stream.selectKey((key, value) -> value.split(" ")[0])
-
-                       // Java 7 example
-                       KStream&lt;String, String&gt; rekeyed = stream.selectKey(
-                           new KeyValueMapper&lt;byte[], String, String&gt;() {
-                           @Override
-                           public String apply(byte[] key, String value) {
-                              return value.split(" ")[0];
-                           }
-                         });
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Table to Stream</b>: <code>KTable &rarr; KStream</code></td>
-            <td>
-                <p>
-                Converts this table into a stream.
-                </p>
-                <pre class="brush: java;">
-                       KTable&lt;byte[], String> table = ...;
-
-                       // Also, a variant of `toStream` exists that allows you
-                       // to select a new key for the resulting stream.
-                       KStream&lt;byte[], String> stream = table.toStream();
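-
-                       // For example, a minimal sketch of that variant: derive the new
-                       // stream key from the record value (the key type changes to String).
-                       KStream&lt;String, String> rekeyed = table.toStream((key, value) -> value);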
-                </pre>
-            </td>
-        </tr>
-        <tr>
-            <td><b>WriteAsText</b>: <code>KStream &rarr; void </code></td>
-            <td>
-                <p>
-                Terminal operation. Writes the records to a file. See Javadocs for serde and <code>toString()</code> caveats.
-                </p>
-                <pre class="brush: java;">
-                       KStream&lt;byte[], String&gt; stream = ...;
-                       stream.writeAsText("/path/to/local/output.txt");
-
-                       // Several variants of `writeAsText` exist to e.g. override the
-                       // default serdes for record keys and record values.
-                       stream.writeAsText("/path/to/local/output.txt", Serdes.ByteArray(), Serdes.String());
-                </pre>
-            </td>
-        </tr>
-        </tbody>
-    </table>
-
-
-    <h5><a id="streams_dsl_transformations_stateful" href="#streams_dsl_transformations_stateful">Stateful transformations</a></h5>
-    <h6><a id="streams_dsl_transformations_stateful_overview" href="#streams_dsl_transformations_stateful_overview">Overview</a></h6>
-    <p>
-        Stateful transformations, by definition, depend on state for processing inputs and producing outputs, and
-        hence implementation-wise they require a state store associated with the stream processor. For example,
-        in aggregating operations, a windowing state store is used to store the latest aggregation results per window;
-        in join operations, a windowing state store is used to store all the records received so far within the
-        defined window boundary.
-    </p>
-    <p>
-        Note that state stores are fault-tolerant. In case of failure, Kafka Streams guarantees to fully restore
-        all state stores prior to resuming the processing.
-    </p>
-    <p>
-        Available stateful transformations in the DSL include:
-    <ul>
-        <li><a href="#streams_dsl_aggregations">Aggregating</a></li>
-        <li><a href="#streams_dsl_joins">Joining</a></li>
-        <li><a href="#streams_dsl_windowing">Windowing (as part of aggregations and joins)</a></li>
-        <li>Applying custom processors and transformers, which may be stateful, for Processor API integration</li>
-    </ul>
-    </p>
-    <p>
-        The following diagram shows their relationships:
-    </p>
-    <figure>
-        <img class="centered" src="/{{version}}/images/streams-stateful_operations.png" style="width:500pt;">
-        <figcaption style="text-align: center;"><i>Stateful transformations in the DSL</i></figcaption>
-    </figure>
-
-    <p>
-        We will discuss the various stateful transformations in detail in the subsequent sections. However, let's start
-        with a first example of a stateful application: the canonical WordCount algorithm.
-    </p>
-    <p>
-        WordCount example in Java 8+, using lambda expressions:
-    </p>
-    <pre class="brush: java;">
-        // We assume record values represent lines of text.  For the sake of this example, we ignore
-        // whatever may be stored in the record keys.
-        KStream&lt;String, String&gt; textLines = ...;
-
-        KStream&lt;String, Long&gt; wordCounts = textLines
-            // Split each text line, by whitespace, into words.  The text lines are the record
-            // values, i.e. we can ignore whatever data is in the record keys and thus invoke
-            // `flatMapValues` instead of the more generic `flatMap`.
-            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
-            // Group the stream by word to ensure the key of the record is the word.
-            .groupBy((key, word) -> word)
-            // Count the occurrences of each word (record key).
-            //
-            // This will change the stream type from `KGroupedStream&lt;String, String&gt;` to
-            // `KTable&lt;String, Long&gt;` (word -> count).  We must provide a name for
-            // the resulting KTable, which will be used to name e.g. its associated
-            // state store and changelog topic.
-            .count("Counts")
-            // Convert the `KTable&lt;String, Long&gt;` into a `KStream&lt;String, Long&gt;`.
-            .toStream();
-    </pre>
-    <p>
-        WordCount example in Java 7:
-    </p>
-    <pre class="brush: java;">
-        // Code below is equivalent to the previous Java 8+ example above.
-        KStream&lt;String, String&gt; textLines = ...;
-
-        KStream&lt;String, Long&gt; wordCounts = textLines
-            .flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
-                @Override
-                public Iterable&lt;String&gt; apply(String value) {
-                    return Arrays.asList(value.toLowerCase().split("\\W+"));
-                }
-            })
-            .groupBy(new KeyValueMapper&lt;String, String, String&gt;() {
-                @Override
-                public String apply(String key, String word) {
-                    return word;
-                }
-            })
-            .count("Counts")
-            .toStream();
-    </pre>
-
-    <h6><a id="streams_dsl_aggregations" href="#streams_dsl_aggregations">Aggregate a stream</a></h6>
-    <p>
-        Once records are grouped by key via <code>groupByKey</code> or <code>groupBy</code> -- and
-        thus represented as either a <code>KGroupedStream</code> or a
-        <code>KGroupedTable</code> -- they can be aggregated via an operation such as
-        <code>reduce</code>.
-        For windowed aggregations use <code>windowedBy(Windows).reduce(Reducer)</code>.
-        Aggregations are <i>key-based</i> operations, i.e. they always operate over records (notably record values) <i>of the same key</i>.
-        You may choose to perform aggregations on
-        <a href="#streams_dsl_windowing">windowed</a> or non-windowed data.
-    </p>
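-    <p>
-        As a minimal sketch of the windowed form (assuming a grouped stream of <code>Long</code> values; the
-        transformations in the table below describe all variants in detail):
-    </p>
-    <pre class="brush: java;">
-        import java.util.concurrent.TimeUnit;
-        KGroupedStream&lt;String, Long&gt; groupedStream = ...;
-
-        // Sum the values per key over 5-minute tumbling windows.
-        KTable&lt;Windowed&lt;String&gt;, Long&gt; windowedSums = groupedStream
-            .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5)))
-            .reduce((aggValue, newValue) -> aggValue + newValue);
-    </pre>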
-    <table class="data-table" border="1">
-        <tbody>
-        <tr>
-            <th>Transformation</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td><b>Aggregate</b>: <code>KGroupedStream &rarr; KTable</code> or <code>KGroupedTable
-                &rarr; KTable</code></td>
-            <td>
-                <p>
-                    <b>Rolling aggregation</b>. Aggregates the values of (non-windowed) records by
-                    the grouped key. Aggregating is a generalization of <code>reduce</code> and allows, for example, the
-                    aggregate value to have a different type than the input values.
-                </p>
-                <p>
-                    When aggregating a grouped stream, you must provide an initializer (think:
-                    <code>aggValue = 0</code>) and an "adder"
-                    aggregator (think: <code>aggValue + curValue</code>). When aggregating a <i>grouped</i>
-                    table, you must additionally provide a "subtractor" aggregator (think: <code>aggValue - oldValue</code>).
-                </p>
-                <p>
-                    Several variants of <code>aggregate</code> exist, see Javadocs for details.
-                </p>
-                <pre class="brush: java;">
-                    KGroupedStream&lt;Bytes, String&gt; groupedStream = ...;
-                    KGroupedTable&lt;Bytes, String&gt; groupedTable = ...;
-
-                    // Java 8+ examples, using lambda expressions
-
-                    // Aggregating a KGroupedStream (note how the value type changes from String to Long)
-                    KTable&lt;Bytes, Long&gt; aggregatedStream = groupedStream.aggregate(
-                        () -> 0L, /* initializer */
-                        (aggKey, newValue, aggValue) -> aggValue + newValue.length(), /* adder */
-                        Serdes.Long(), /* serde for aggregate value */
-                        "aggregated-stream-store" /* state store name */);
-
-                    // Aggregating a KGroupedTable (note how the value type changes from String to Long)
-                    KTable&lt;Bytes, Long&gt; aggregatedTable = groupedTable.aggregate(
-                        () -> 0L, /* initializer */
-                        (aggKey, newValue, aggValue) -> aggValue + newValue.length(), /* adder */
-                        (aggKey, oldValue, aggValue) -> aggValue - oldValue.length(), /* subtractor */
-                        Serdes.Long(), /* serde for aggregate value */
-                        "aggregated-table-store" /* state store name */);
-
-
-                    // Windowed aggregation (note how the value type changes from String to Long)
-                    KTable&lt;Windowed&lt;Bytes&gt;, Long&gt; windowedAggregate = groupedStream
-                        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
-                        .aggregate(
-                            () -> 0L, /* initializer */
-                            (aggKey, newValue, aggValue) -> aggValue + newValue.length(), /* adder */
-                            Materialized.&lt;Bytes, Long, WindowStore&lt;Bytes, byte[]&gt;&gt;as("windowed-aggregated-stream-store") /* state store name */
-                                .withValueSerde(Serdes.Long())); /* serde for aggregate value */
-
-
-                    // Java 7 examples
-
-                    // Aggregating a KGroupedStream (note how the value type changes from String to Long)
-                    KTable&lt;Bytes, Long&gt; aggregatedStream = groupedStream.aggregate(
-                        new Initializer&lt;Long&gt;() { /* initializer */
-                          @Override
-                          public Long apply() {
-                            return 0L;
-                          }
-                        },
-                        new Aggregator&lt;Bytes, String, Long&gt;() { /* adder */
-                          @Override
-                          public Long apply(Bytes aggKey, String newValue, Long aggValue) {
-                            return aggValue + newValue.length();
-                          }
-                        },
-                        Serdes.Long(),
-                        "aggregated-stream-store");
-
-                    // Aggregating a KGroupedTable (note how the value type changes from String to Long)
-                    KTable&lt;Bytes, Long&gt; aggregatedTable = groupedTable.aggregate(
-                        new Initializer&lt;Long&gt;() { /* initializer */
-                          @Override
-                          public Long apply() {
-                            return 0L;
-                          }
-                        },
-                        new Aggregator&lt;Bytes, String, Long&gt;() { /* adder */
-                          @Override
-                          public Long apply(Bytes aggKey, String newValue, Long aggValue) {
-                            return aggValue + newValue.length();
-                          }
-                        },
-                        new Aggregator&lt;Bytes, String, Long&gt;() { /* subtractor */
-                          @Override
-                          public Long apply(Bytes aggKey, String oldValue, Long aggValue) {
-                            return aggValue - oldValue.length();
-                          }
-                        },
-                        Serdes.Long(),
-                        "aggregated-table-store");
-
-                    // Windowed aggregation
-                    KTable&lt;Windowed&lt;Bytes&gt;, Long&gt; windowedAggregate = groupedStream
-                        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
-                        .aggregate(
-                            new Initializer&lt;Long&gt;() { /* initializer */
-                              @Override
-                              public Long apply() {
-                                return 0L;
-                              }
-                            },
-                            new Aggregator&lt;Bytes, String, Long&gt;() { /* adder */
-                              @Override
-                              public Long apply(Bytes aggKey, String newValue, Long aggValue) {
-                                return aggValue + newValue.length();
-                              }
-                            },
-                            Materialized.&lt;Bytes, Long, WindowStore&lt;Bytes, byte[]&gt;&gt;as("windowed-aggregated-stream-store")
-                                .withValueSerde(Serdes.Long())); /* serde for aggregate value */
-                </pre>
-                <p>
-                    Detailed behavior of <code>KGroupedStream</code>:
-                </p>
-                <ul>
-                    <li>Input records with <code>null</code> keys are ignored in general.</li>
-                    <li>When a record key is received for the first time, the initializer is called
-                        (and called before the adder).</li>
-                    <li>Whenever a record with a non-null value is received, the adder is called.</li>
-                </ul>
-                <p>
-                    Detailed behavior of <code>KGroupedTable</code>:
-                </p>
-                <ul>
-                    <li>Input records with <code>null</code> keys are ignored in general.</li>
-                    <li>When a record key is received for the first time, the initializer is called
-                        (and called before the adder and subtractor). Note that, in contrast to <code>KGroupedStream</code>, over
-                        time the initializer may be called more
-                        than once for a key as a result of having received input tombstone records
-                        for that key (see below).</li>
-                    <li>When the first non-<code>null</code> value is received for a key (think:
-                        INSERT), then only the adder is called.</li>
-                    <li>When subsequent non-<code>null</code> values are received for a key (think:
-                        UPDATE), then (1) the subtractor is called
-                        with the old value as stored in the table and (2) the adder is called with
-                        the new value of the input record
-                        that was just received. The order of execution for the subtractor and adder
-                        is not defined.</li>
-                    <li>When a tombstone record -- i.e. a record with a <code>null</code> value -- is
-                        received for a key (think: DELETE), then
-                        only the subtractor is called. Note that, whenever the subtractor returns a
-                    <code>null</code> value itself, then the
-                    corresponding key is removed from the resulting KTable. If that happens, any
-                    next input record for that key will trigger the initializer again.</li>
-                </ul>
-                <p>
-                    See the example at the bottom of this section for a visualization of the
-                    aggregation semantics.
-                </p>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Aggregate (windowed)</b>: <code>KGroupedStream &rarr; KTable</code></td>
-            <td>
-                <p>
-                    <b>Windowed aggregation</b>. Aggregates the values of records, per window, by
-                    the grouped key. Aggregating is a generalization of
-                    <code>reduce</code> and allows, for example, the aggregate value to have a
-                    different type than the input values.
-                </p>
-                <p>
-                    You must provide an initializer (think: <code>aggValue = 0</code>), "adder"
-                    aggregator (think: <code>aggValue + curValue</code>),
-                    and a window. When windowing based on sessions, you must additionally provide a
-                    "session merger" aggregator (think:
-                    <code>mergedAggValue = leftAggValue + rightAggValue</code>).
-                </p>
-                <p>
-                    The windowed <code>aggregate</code> turns a <code>KGroupedStream&lt;K, V&gt;</code> into a
-                    windowed <code>KTable&lt;Windowed&lt;K&gt;, VR&gt;</code>.
-                </p>
-                <p>
-                    Several variants of <code>aggregate</code> exist, see Javadocs for details.
-                </p>
-
-                <pre class="brush: java;">
-                    import java.util.concurrent.TimeUnit;
-                    KGroupedStream&lt;String, Long&gt; groupedStream = ...;
-
-                    // Java 8+ examples, using lambda expressions
-
-                    // Aggregating with time-based windowing (here: with 5-minute tumbling windows)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; timeWindowedAggregatedStream = groupedStream
-                        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
-                        .aggregate(
-                            () -> 0L, /* initializer */
-                            (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
-                            Materialized.&lt;String, Long, WindowStore&lt;Bytes, byte[]&gt;&gt;as("time-windowed-aggregated-stream-store") /* state store name */
-                                .withValueSerde(Serdes.Long())); /* serde for aggregate value */
-
-
-                    // Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; sessionizedAggregatedStream = groupedStream
-                        .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5))) /* session window */
-                        .aggregate(
-                            () -> 0L, /* initializer */
-                            (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
-                            (aggKey, leftAggValue, rightAggValue) -> leftAggValue + rightAggValue, /* session merger */
-                            Materialized.&lt;String, Long, SessionStore&lt;Bytes, byte[]&gt;&gt;as("sessionized-aggregated-stream-store") /* state store name */
-                                .withValueSerde(Serdes.Long())); /* serde for aggregate value */
-
-                    // Java 7 examples
-
-                    // Aggregating with time-based windowing (here: with 5-minute tumbling windows)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; timeWindowedAggregatedStream = groupedStream
-                        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
-                        .aggregate(
-                            new Initializer&lt;Long&gt;() { /* initializer */
-                              @Override
-                              public Long apply() {
-                                return 0L;
-                              }
-                            },
-                            new Aggregator&lt;String, Long, Long&gt;() { /* adder */
-                              @Override
-                              public Long apply(String aggKey, Long newValue, Long aggValue) {
-                                return aggValue + newValue;
-                              }
-                            },
-                            Materialized.&lt;String, Long, WindowStore&lt;Bytes, byte[]&gt;&gt;as("time-windowed-aggregated-stream-store") /* state store name */
-                                    .withValueSerde(Serdes.Long()) /* serde for aggregate value */
-                    );
-
-                    // Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; sessionizedAggregatedStream = groupedStream
-                        .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5))) /* session window */
-                        .aggregate(
-                            new Initializer&lt;Long&gt;() { /* initializer */
-                              @Override
-                              public Long apply() {
-                                return 0L;
-                              }
-                            },
-                            new Aggregator&lt;String, Long, Long&gt;() { /* adder */
-                              @Override
-                              public Long apply(String aggKey, Long newValue, Long aggValue) {
-                                return aggValue + newValue;
-                              }
-                            },
-                            new Merger&lt;String, Long&gt;() { /* session merger */
-                              @Override
-                              public Long apply(String aggKey, Long leftAggValue, Long rightAggValue) {
-                                return rightAggValue + leftAggValue;
-                              }
-                            },
-                            Materialized.&lt;String, Long, SessionStore&lt;Bytes, byte[]&gt;&gt;as("sessionized-aggregated-stream-store") /* state store name */
-                                .withValueSerde(Serdes.Long()) /* serde for aggregate value */
-                    );
-                </pre>
-
-                <p>
-                    Detailed behavior:
-                </p>
-                <ul>
-                    <li>The windowed aggregate behaves similarly to the rolling aggregate described
-                        above. The additional twist is that the behavior applies per window.</li>
-                    <li>Input records with <code>null</code> keys are ignored in general.</li>
-                    <li>When a record key is received for the first time for a given window, the
-                        initializer is called (and called before the adder).</li>
-                    <li>Whenever a record with a non-<code>null</code> value is received for a given window, the
-                        adder is called.
-                        (Note: As a result of a known bug in Kafka 0.11.0.0, the adder is currently
-                        also called for <code>null</code> values. You can work around this, for example, by
-                        manually filtering out <code>null</code> values prior to grouping the stream; see the sketch after this list.)</li>
-                    <li>When using session windows: the session merger is called whenever two
-                        sessions are being merged.</li>
-                </ul>
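-                <p>
-                    A minimal sketch of that workaround, assuming a stream whose values may be <code>null</code>:
-                </p>
-                <pre class="brush: java;">
-                    KStream&lt;String, Long&gt; stream = ...;
-
-                    // Drop records with null values before grouping, so that the
-                    // adder is never invoked for them.
-                    KGroupedStream&lt;String, Long&gt; groupedStream =
-                        stream.filter((key, value) -> value != null).groupByKey();
-                </pre>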
-                <p>
-                See the example at the bottom of this section for a visualization of the aggregation semantics.
-                </p>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Count</b>: <code>KGroupedStream &rarr; KTable or KGroupedTable &rarr; KTable</code></td>
-            <td>
-                <p>
-                    <b>Rolling aggregation</b>. Counts the number of records by the grouped key.
-                    Several variants of <code>count</code> exist, see Javadocs for details.
-                </p>
-                <pre class="brush: java;">
-                    KGroupedStream&lt;String, Long&gt; groupedStream = ...;
-                    KGroupedTable&lt;String, Long&gt; groupedTable = ...;
-
-                    // Counting a KGroupedStream
-                    KTable&lt;String, Long&gt; aggregatedStream = groupedStream.count();
-
-                    // Counting a KGroupedTable
-                    KTable&lt;String, Long&gt; aggregatedTable = groupedTable.count();
-                </pre>
-                <p>
-                    Detailed behavior for <code>KGroupedStream</code>:
-                </p>
-                <ul>
-                    <li>Input records with null keys or values are ignored.</li>
-                </ul>
-                <p>
-                    Detailed behavior for <code>KGroupedTable</code>:
-                </p>
-                <ul>
-                    <li>Input records with <code>null</code> keys are ignored. Records with <code>null</code>
-                        values are not ignored but interpreted as "tombstones" for the corresponding key, which
-                        indicate the deletion of the key from the table.</li>
-                </ul>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Count (Windowed)</b>: <code>KGroupedStream &rarr; KTable</code></td>
-            <td>
-                <p>
-                    Windowed aggregation. Counts the number of records, per window, by the grouped key.
-                </p>
-                <p>
-                    The windowed <code>count</code> turns a <code>KGroupedStream&lt;K, V&gt;</code> into a windowed <code>KTable&lt;Windowed&lt;K&gt;, Long&gt;</code>.
-                </p>
-                <p>
-                    Several variants of count exist, see Javadocs for details.
-                </p>
-                <pre class="brush: java;">
-                    import java.util.concurrent.TimeUnit;
-                    KGroupedStream&lt;String, Long&gt; groupedStream = ...;
-
-                    // Counting a KGroupedStream with time-based windowing (here: with 5-minute tumbling windows)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; aggregatedStream = groupedStream
-                        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
-                        .count();
-
-                    // Counting a KGroupedStream with session-based windowing (here: with 5-minute inactivity gaps)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; aggregatedStream = groupedStream
-                        .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5))) /* session window */
-                        .count();
-                </pre>
-                <p>
-                    Detailed behavior:
-                </p>
-                <ul>
-                    <li>Input records with <code>null</code> keys or values are ignored. (Note: As a result of a known bug in Kafka 0.11.0.0,
-                        records with <code>null</code> values are not ignored yet. You can work around this, for example, by manually
-                        filtering out <code>null</code> values prior to grouping the stream.)</li>
-                </ul>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Reduce</b>: <code>KGroupedStream &rarr; KTable or KGroupedTable &rarr; KTable</code></td>
-            <td>
-                <p>
-                <b>Rolling aggregation</b>. Combines the values of (non-windowed) records by the grouped key. The current record value is
-                combined with the last reduced value, and a new reduced value is returned. The result value type cannot be changed,
-                unlike <code>aggregate</code>.
-                </p>
-
-                <p>
-                When reducing a grouped stream, you must provide an "adder" reducer (think: <code>aggValue + curValue</code>).
-                When reducing a grouped table, you must additionally provide a "subtractor" reducer (think: <code>aggValue - oldValue</code>).
-                </p>
-                <p>
-                Several variants of <code>reduce</code> exist, see Javadocs for details.
-                </p>
-                <pre class="brush: java;">
-                    KGroupedStream&lt;String, Long&gt; groupedStream = ...;
-                    KGroupedTable&lt;String, Long&gt; groupedTable = ...;
-
-                    // Java 8+ examples, using lambda expressions
-
-                    // Reducing a KGroupedStream
-                    KTable&lt;String, Long&gt; aggregatedStream = groupedStream.reduce(
-                        (aggValue, newValue) -> aggValue + newValue /* adder */
-                    );
-
-                    // Reducing a KGroupedTable
-                    KTable&lt;String, Long&gt; aggregatedTable = groupedTable.reduce(
-                        (aggValue, newValue) -> aggValue + newValue, /* adder */
-                        (aggValue, oldValue) -> aggValue - oldValue /* subtractor */
-                    );
-
-
-                    // Java 7 examples
-
-                    // Reducing a KGroupedStream
-                    KTable&lt;String, Long&gt; aggregatedStream = groupedStream.reduce(
-                        new Reducer&lt;Long&gt;() { /* adder */
-                          @Override
-                          public Long apply(Long aggValue, Long newValue) {
-                            return aggValue + newValue;
-                          }
-                        }
-                    );
-
-                    // Reducing a KGroupedTable
-                    KTable&lt;String, Long&gt; aggregatedTable = groupedTable.reduce(
-                        new Reducer&lt;Long&gt;() { /* adder */
-                          @Override
-                          public Long apply(Long aggValue, Long newValue) {
-                            return aggValue + newValue;
-                          }
-                        },
-                        new Reducer&lt;Long&gt;() { /* subtractor */
-                          @Override
-                          public Long apply(Long aggValue, Long oldValue) {
-                            return aggValue - oldValue;
-                          }
-                        }
-                    );
-                </pre>
-                <p>
-                    Detailed behavior for <code>KGroupedStream</code>:
-                </p>
-                <ul>
-                    <li>Input records with <code>null</code> keys are ignored in general.</li>
-                    <li>When a record key is received for the first time, then the value of that
-                        record is used as the initial aggregate value.</li>
-                    <li>Whenever a record with a non-<code>null</code> value is received, the adder is called.</li>
-                </ul>
-                <p>
-                Detailed behavior for <code>KGroupedTable</code>:
-                </p>
-                <ul>
-                    <li>Input records with <code>null</code> keys are ignored in general.</li>
-                    <li>When a record key is received for the first time, then the value of that
-                        record is used as the initial aggregate value.
-                        Note that, in contrast to <code>KGroupedStream</code>, over time this initialization step
-                        may happen more than once for a key as a
-                        result of having received input tombstone records for that key (see below).</li>
-                    <li>When the first non-<code>null</code> value is received for a key (think: INSERT), then
-                        only the adder is called.</li>
-                    <li>When subsequent non-<code>null</code> values are received for a key (think: UPDATE), then
-                        (1) the subtractor is called with the
-                        old value as stored in the table and (2) the adder is called with the new
-                        value of the input record that was just received.
-                        The order of execution for the subtractor and adder is not defined.</li>
-                    <li>When a tombstone record -- i.e. a record with a <code>null</code> value -- is received
-                        for a key (think: DELETE), then only the
-                        subtractor is called. Note that, whenever the subtractor returns a <code>null</code>
-                        value itself, then the corresponding key
-                        is removed from the resulting KTable. If that happens, any next input
-                        record for that key will re-initialize its aggregate value.</li>
-                </ul>
-                <p>
-                See the example at the bottom of this section for a visualization of the
-                aggregation semantics.
-                </p>
-            </td>
-        </tr>
-        <tr>
-            <td><b>Reduce (windowed)</b>: <code>KGroupedStream &rarr; KTable</code></td>
-            <td>
-                <p>
-                Windowed aggregation. Combines the values of records, per window, by the grouped key. The current record value
-                is combined with the last reduced value, and a new reduced value is returned. Records with <code>null</code> key or value are
-                ignored. The result value type cannot be changed, unlike <code>aggregate</code>.
-                </p>
-                <p>
-                The windowed reduce turns a <code>KGroupedStream&lt;K, V&gt;</code> into a windowed <code>KTable&lt;Windowed&lt;K&gt;, V&gt;</code>.
-                </p>
-                <p>
-                Several variants of reduce exist, see Javadocs for details.
-                </p>
-                <pre class="brush: java;">
-                    import java.util.concurrent.TimeUnit;
-                    KGroupedStream&lt;String, Long&gt; groupedStream = ...;
-
-                    // Java 8+ examples, using lambda expressions
-
-                    // Aggregating with time-based windowing (here: with 5-minute tumbling windows)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; timeWindowedAggregatedStream = groupedStream
-                        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
-                        .reduce((aggValue, newValue) -> aggValue + newValue /* adder */);
-
-                    // Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; sessionizedAggregatedStream = groupedStream
-                        .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5))) /* session window */
-                        .reduce((aggValue, newValue) -> aggValue + newValue); /* adder */
-
-
-                    // Java 7 examples
-
-                    // Aggregating with time-based windowing (here: with 5-minute tumbling windows)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; timeWindowedAggregatedStream = groupedStream
-                        .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))) /* time-based window */
-                        .reduce(
-                            new Reducer&lt;Long&gt;() { /* adder */
-                              @Override
-                              public Long apply(Long aggValue, Long newValue) {
-                                return aggValue + newValue;
-                              }
-                            });
-
-                    // Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)
-                    KTable&lt;Windowed&lt;String&gt;, Long&gt; sessionizedAggregatedStream = groupedStream
-                        .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5))) /* session window */
-                        .reduce(
-                            new Reducer&lt;Long&gt;() { /* adder */
-                              @Override
-                              public Long apply(Long aggValue, Long newValue) {
-                                return aggValue + newValue;
-                              }
-                        });
-                </pre>
-
-                <p>
-                Detailed behavior:
-                </p>
-                <ul>
-                    <li>The windowed reduce behaves similarly to the rolling reduce described above. The additional twist is that
-                        the behavior applies per window.</li>
-                    <li>Input records with <code>null</code> keys are ignored in general.</li>
-                    <li>When a record key is received for the first time for a given window, then the value of that record is
-                    used as the initial aggregate value.</li>
-                    <li>Whenever a record with a non-<code>null</code> value is received for a given window, the adder is called. (Note: As
-                    a result of a known bug in Kafka 0.11.0.0, the adder is currently also called for <code>null</code> values. You can work
-                    around this, for example, by manually filtering out <code>null</code> values prior to grouping the stream.)</li>
-                    <li>See the example at the bottom of this section for a visualization of the aggregation semantics.</li>
-                </ul>
-            </td>
-        </tr>
-        </tbody>
-    </table>
-
-    <p>
-        <b>Example of semantics for stream aggregations</b>: A <code>KGroupedStream &rarr; KTable</code> example is shown below. The streams and the table are
-        initially empty. We use bold font in the column for "KTable <code>aggregated</code>" to highlight changed state. An entry such as <code>(hello, 1)</code>
-        denotes a record with key <code>hello</code> and value <code>1</code>. To improve the readability of the semantics table we assume that all records are
-        processed in timestamp order.
-    </p>
-    <pre class="brush: java;">
-        // Key: word, value: count
-        Properties streamsProperties = ...;
-
-        // Specify the default serdes so we don't need to specify them elsewhere.
-        streamsProperties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
-        streamsProperties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Integer().getClass());
-        StreamsConfig config = new StreamsConfig(streamsProperties);
-
-        KStream&lt;String, Integer&gt; wordCounts = ...;
-
-        KGroupedStream&lt;String, Integer&gt; groupedStream = wordCounts
-            .groupByKey();
-
-        KTable&lt;String, Integer&gt; aggregated = groupedStream.aggregate(
-            () -> 0, /* initializer */
-            (aggKey, newValue, aggValue) -> aggValue + newValue /* adder */
-        );
-    </pre>
-
-    <p>
-        <b>Impact of <a href="#streams_developer-guide_memory-management_record-cache">record caches</a></b>: For illustration purposes,
-        the column "KTable <code>aggregated</code>" below shows the table's state changes over
-        time in a very granular way. In practice, you would observe state changes in such a granular way only when record caches are
-        disabled (default: enabled). When record caches are enabled, what might happen for example is that the output results of the
-        rows with timestamps 4 and 5 would be compacted, and there would only be a single state update for the key <code>kafka</code> in the KTable
-        (here: from <code>(kafka, 1)</code> directly to <code>(kafka, 3)</code>). Typically, you should only disable record caches for testing or debugging purposes
-        -- under normal circumstances it is better to leave record caches enabled.
-    </p>
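-    <p>
-        For example, a minimal sketch of disabling record caches for such testing, using the
-        <code>cache.max.bytes.buffering</code> setting (the surrounding configuration is assumed to be set up as usual):
-    </p>
-    <pre class="brush: java;">
-        Properties streamsProperties = ...;
-
-        // A cache size of zero disables record caching, so every state update
-        // is forwarded downstream instead of being compacted in the cache.
-        streamsProperties.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
-    </pre>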
-    <table class="data-table" border="1">
-        <thead>
-        <col>
-        <colgroup span="2"></colgroup>
-        <colgroup span="2"></colgroup>
-        <col>
-        <tr>
-            <th scope="col"></th>
-            <th colspan="2">KStream wordCounts</th>
-            <th colspan="2">KGroupedStream groupedStream</th>
-            <th scope="col">KTable aggregated</th>
-        </tr>
-        </thead>
-        <tbody>
-        <tr>
-            <th scope="col">Timestamp</th>
-            <th scope="col">Input record</th>
-            <th scope="col">Grouping</th>
-            <th scope="col">Initializer</th>
-            <th scope="col">Adder</th>
-            <th scope="col">State</th>
-        </tr>
-        <tr>
-            <td>1</td>
-            <td>(hello, 1)</td>
-            <td>(hello, 1)</td>
-            <td>0 (for hello)</td>
-            <td>(hello, 0 + 1)</td>
-            <td><b>(hello, 1)</b></td>
-        </tr>
-        <tr>
-            <td>2</td>
-            <td>(kafka, 1)</td>
-            <td>(kafka, 1)</td>
-            <td>0 (for kafka)</td>
-            <td>(kafka, 0 + 1)</td>
-            <td>(hello, 1), <b>(kafka, 1)</b></td>
-        </tr>
-        <tr>
-            <td>3</td>
-            <td>(streams, 1)</td>
-            <td>(streams, 1)</td>
-            <td>0 (for streams)</td>
-            <td>(streams, 0 + 1)</td>
-            <td>(hello, 1), (kafka, 1), <b>(streams, 1)</b></td>
-        </tr>
-        <tr>
-            <td>4</td>
-            <td>(kafka, 1)</td>
-            <td>(kafka, 1)</td>
-            <td></td>
-            <td>(kafka, 1 + 1)</td>
-            <td>(hello, 1), <b>(kafka, 2)</b>, (streams, 1)</td>
-        </tr>
-        <tr>
-            <td>5</td>
-            <td>(kafka, 1)</td>
-            <td>(kafka, 1)</td>
-            <td></td>
-            <td>(kafka, 2 + 1)</td>
-            <td>(hello, 1), <b>(kafka, 3)</b>, (streams, 1)</td>
-        </tr>
-        <tr>
-            <td>6</td>
-            <td>(streams, 1)</td>
-            <td>(streams, 1)</td>
-            <td></td>
-            <td>(streams, 1 + 1)</td>
-            <td>(hello, 1), (kafka, 3), (streams, 2)</td>
-        </tr>
-        </tbody>
-    </table>
-    <p>
-    Example of semantics for table aggregations: A <code>KGroupedTable &rarr; KTable</code> example is shown below. The tables are initially empty.
-    An entry such as <code>(hello, 1)</code> denotes a record with key <code>hello</code> and value <code>1</code>. To improve the readability of the semantics
-    table, we assume that all records are processed in timestamp order.
-    </p>
-    <pre class="brush: java;">
-        // Key: username, value: user region (abbreviated to "E" for "Europe", "A" for "Asia")
-        KTable&lt;String, String&gt; userProfiles = ...;
-
-        // Re-group `userProfiles`.  Don't read too much into what the grouping does:
-        // its prime purpose in this example is to show the *effects* of the grouping
-        // in the subsequent aggregation.
-        KGroupedTable&lt;String, Integer&gt; groupedTable = userProfiles
-            .groupBy((user, region) -> KeyValue.pair(region, user.length()), Serialized.with(Serdes.String(), Serdes.Integer()));
-
-        KTable&lt;String, Integer&gt; aggregated = groupedTable.aggregate(
-            () -> 0, /* initializer */
-            (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
-            (aggKey, oldValue, aggValue) -> aggValue - oldValue, /* subtractor */
-            Materialized.&lt;String, Integer, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as("aggregated-table-store")
-                .withKeySerde(Serdes.String() /* serde for aggregate key */)
-                .withValueSerde(Serdes.Integer() /* serde for aggregate value */)
-        );
-    </pre>
-    <p>
-        <b>Impact of <a href="#streams_developer-guide_memory-management_record-cache">record caches</a></b>:
-        For illustration purposes, the column "KTable <code>aggregated</code>" below shows
-        the table's state changes over time in a very granular way. In practice, you would observe state changes
-        in such a granular way only when record caches are disabled (default: enabled). When record caches are enabled,
-        what might happen for example is that the output results of the rows with timestamps 4 and 5 would be
-        compacted, and there would only be a single state update for the key <code>A</code> in the KTable
-        (here: from <code>(A, 10)</code> directly to <code>(A, 8)</code>). Typically, you should only disable
-        record caches for testing or debugging purposes -- under normal circumstances it is better to leave record caches enabled.
-    </p>
-    <table class="data-table" border="1">
-        <thead>
-        <col>
-        <colgroup span="2"></colgroup>
-        <colgroup span="2"></colgroup>
-        <col>
-        <tr>
-            <th scope="col"></th>
-            <th colspan="3">KTable userProfiles</th>
-            <th colspan="3">KGroupedTable groupedTable</th>
-            <th scope="col">KTable aggregated</th>
-        </tr>
-        </thead>
-        <tbody>
-        <tr>
-            <th scope="col">Timestamp</th>
-            <th scope="col">Input record</th>
-            <th scope="col">Interpreted as</th>
-            <th scope="col">Grouping</th>
-            <th scope="col">Initializer</th>
-            <th scope="col">Adder</th>
-            <th scope="col">Subtractor</th>
-            <th scope="col">State</th>
-        </tr>
-        <tr>
-            <td>1</td>
-            <td>(alice, E)</td>
-            <td>INSERT alice</td>
-            <td>(E, 5)</td>
-            <td>0 (for E)</td>
-            <td>(E, 0 + 5)</td>
-            <td></td>
-            <td>(E, 5)</td>
-        </tr>
-        <tr>
-            <td>2</td>
-            <td>(bob, A)</td>
-            <td>INSERT bob</td>
-            <td>(A, 3)</td>
-            <td>0 (for A)</td>
-            <td>(A, 0 + 3)</td>
-            <td></td>
-            <td>(A, 3), (E, 5)</td>
-        </tr>
-        <tr>
-            <td>3</td>
-            <td>(charlie, A)</td>
-            <td>INSERT charlie</td>
-            <td>(A, 7)</td>
-            <td></td>
-            <td>(A, 3 + 7)</td>
-            <td></td>
-            <td>(A, 10), (E, 5)</td>
-        </tr>
-        <tr>
-            <td>4</td>
-            <td>(alice, A)</td>
-            <td>UPDATE alice</td>
-            <td>(A, 5)</td>
-            <td></td>
-            <td>(A, 10 + 5)</td>
-            <td>(E, 5 - 5)</td>
-            <td>(A, 15), (E, 0)</td>
-        </tr>
-        <tr>
-            <td>5</td>
-            <td>(charlie, null)</td>
-            <td>DELETE charlie</td>
-            <td>(null, 7)</td>
-            <td></td>
-            <td></td>
-            <td>(A, 15 - 7)</td>
-            <td>(A, 8), (E, 0)</td>
-        </tr>
-        <tr>
-            <td>6</td>
-            <td>(null, E)</td>
-            <td>ignored</td>
-            <td></td>
-            <td></td>
-            <td></td>
-            <td></td>
-            <td>(A, 8), (E, 0)</td>
-        </tr>
-        <tr>
-            <td>7</td>
-            <td>(bob, E)</td>
-            <td>UPDATE bob</td>
-            <td>(E, 3)</td>
-            <td></td>
-            <td>(E, 0 + 3)</td>
-            <td>(A, 8 - 3)</td>
-            <td>(A, 5), (E, 3)</td>
-        </tr>
-        </tbody>
-    </table>
-
-    <h6><a id="streams_dsl_windowing" href="#streams_dsl_windowing">Windowing a stream</a></h6>
-    A stream processor may need to divide data records into time buckets, i.e. to <b>window</b> the stream by time. This is usually needed for join and aggregation operations. Kafka Streams currently defines the following types of windows (a short code sketch follows the list):
-    <ul>
-        <li><b>Hopping time windows</b> are windows based on time intervals. They model fixed-sized, (possibly) overlapping windows. A hopping window is defined by two properties: the window's size and its advance interval (aka "hop"). The advance interval specifies by how much a window moves forward relative to the previous one. For example, you can configure a hopping window with a size of 5 minutes and an advance interval of 1 minute. Since hopping windows can overlap, a data record may belong to more than one such window.</li>
-        <li><b>Tumbling time windows</b> are a special case of hopping time windows and, like the latter, are windows based on time intervals. They model fixed-size, non-overlapping, gap-less windows. A tumbling window is defined by a single property: the window's size. A tumbling window is a hopping window whose window size is equal to its advance interval. Since tumbling windows never overlap, a data record will belong to one and only one window.</li>
-        <li><b>Sliding windows</b> model a fixed-size window that slides continuously over the time axis; here, two data records are said to be included in the same window if the difference of their timestamps is within the window size. Thus, sliding windows are not aligned to the epoch, but on the data record timestamps. In Kafka Streams, sliding windows are used only for join operations, and can be specified through the <code>JoinWindows</code> class.</li>
-        <li><b>Session windows</b> are used to aggregate key-based events into sessions.
-            Sessions represent a period of activity separated by a defined gap of inactivity.
-            Any events processed that fall within the inactivity gap of any existing sessions are merged into the existing sessions.
-            If the event falls outside of the session gap, then a new session will be created.
-            Session windows are tracked independently across keys (e.g. windows of different keys typically have different start and end times) and their sizes vary (even windows for the same key typically have different sizes);
-            as such session windows can't be pre-computed and are instead derived from analyzing the timestamps of the data records.
-        </li>
-    </ul>
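-    <p>
-        For illustration, here is a minimal sketch of how these window types can be specified in the DSL; the sizes and intervals below are arbitrary example values, and <code>TimeUnit</code> refers to <code>java.util.concurrent.TimeUnit</code>:
-    </p>
-
-    <pre class="brush: java;">
-        // Hopping time window: size 5 minutes, advance interval ("hop") 1 minute
-        TimeWindows.of(TimeUnit.MINUTES.toMillis(5)).advanceBy(TimeUnit.MINUTES.toMillis(1));
-
-        // Tumbling time window: a hopping window whose size equals its advance interval
-        TimeWindows.of(TimeUnit.MINUTES.toMillis(5));
-
-        // Sliding window: used for KStream-KStream joins, specified via JoinWindows
-        JoinWindows.of(TimeUnit.MINUTES.toMillis(5));
-
-        // Session window: sessions close after 5 minutes of inactivity
-        SessionWindows.with(TimeUnit.MINUTES.toMillis(5));
-    </pre>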
-
-    <p>
-        In the Kafka Streams DSL users can specify a <b>retention period</b> for the window. This allows Kafka Streams to retain old window buckets for a period of time in order to wait for the late arrival of records whose timestamps fall within the window interval.
-        If a record arrives after the retention period has passed, the record cannot be processed and is dropped.
-    </p>
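-    <p>
-        For example, the retention period can be specified via <code>until()</code>; a minimal sketch with arbitrary values:
-    </p>
-
-    <pre class="brush: java;">
-        // Keep window buckets around for 7 days so that late-arriving records whose
-        // timestamps fall into old 5-minute windows can still be processed
-        TimeWindows.of(TimeUnit.MINUTES.toMillis(5)).until(TimeUnit.DAYS.toMillis(7));
-    </pre>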
-
-    <p>
-        Late-arriving records are always possible in real-time data streams. However, how late records are handled depends on the effective <a href="/{{version}}/documentation/streams/core-concepts#streams_time">time semantics</a>. With processing-time, the semantics are "when the data is being processed",
-        which means that the notion of late records is not applicable as, by definition, no record can be late. Hence, late-arriving records can only really be considered as such (i.e. as arriving "late") for event-time or ingestion-time semantics. In both cases,
-        Kafka Streams is able to properly handle late-arriving records.
-    </p>
-
-    <h6><a id="streams_dsl_joins" href="#streams_dsl_joins">Join multiple streams</a></h6>
-    A <b>join</b> operation merges two streams based on the keys of their data records, and yields a new stream. A join over record streams usually needs to be performed on a windowing basis because otherwise the number of records that must be maintained for performing the join may grow indefinitely. In Kafka Streams, you may perform the following join operations:
-    <ul>
-        <li><b>KStream-to-KStream Joins</b> are always windowed joins, since otherwise the memory and state required to compute the join would grow infinitely in size. Here, a newly received record from one of the streams is joined with the other stream's records within the specified window interval to produce one result for each matching pair based on a user-provided <code>ValueJoiner</code>. A new <code>KStream</code> instance representing the result stream of the join is returned from this operator.</li>
-
-        <li><b>KTable-to-KTable Joins</b> are join operations designed to be consistent with the ones in relational databases. Here, both changelog streams are materialized into local state stores first. When a new record is received from one of the streams, it is joined with the other stream's materialized state stores to produce one result for each matching pair based on a user-provided <code>ValueJoiner</code>. A new <code>KTable</code> instance representing the result stream of the join, which is also a changelog stream of the represented table, is returned from this operator.</li>
-        <li><b>KStream-to-KTable Joins</b> allow you to perform table lookups against a changelog stream (<code>KTable</code>) upon receiving a new record from another record stream (<code>KStream</code>). An example use case would be to enrich a stream of user activities (<code>KStream</code>) with the latest user profile information (<code>KTable</code>). Only records received from the record stream will trigger the join and produce results via <code>ValueJoiner</code>, not vice versa (i.e., records received from the changelog stream will be used only to update the materialized state store). A new <code>KStream</code> instance representing the result stream of the join is returned from this operator.</li>
-        <li><b>KStream-to-GlobalKTable Joins</b> allow you to perform table lookups against a fully replicated changelog stream (<code>GlobalKTable</code>) upon receiving a new record from another record stream (<code>KStream</code>).
-            Joins with a <code>GlobalKTable</code> don't require repartitioning of the input <code>KStream</code> as all partitions of the <code>GlobalKTable</code> are available on every KafkaStreams instance.
-            The <code>KeyValueMapper</code> provided with the join operation is applied to each <code>KStream</code> record to extract the join key used for the lookup into the <code>GlobalKTable</code>, which means that joins on attributes other than the record key are possible.
-            An example use case would be to enrich a stream of user activities (<code>KStream</code>) with the latest user profile information (<code>GlobalKTable</code>).
-            Only records received from the record stream will trigger the join and produce results via <code>ValueJoiner</code>, not vice versa (i.e., records received from the changelog stream will be used only to update the materialized state store).
-            A new <code>KStream</code> instance representing the result stream of the join is returned from this operator.</li>
-    </ul>
-
-    Depending on the operands, the following join operations are supported: <b>inner joins</b>, <b>outer joins</b> and <b>left joins</b>.
-    Their <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Join+Semantics">semantics</a> are similar to the corresponding operators in relational databases.
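-    <p>
-        For illustration, a windowed inner KStream-KStream join might look as follows (a minimal sketch; the stream names, serdes, and window size are example assumptions, not part of a fixed recipe):
-    </p>
-
-    <pre class="brush: java;">
-        KStream&lt;String, Long&gt; left = ...;
-        KStream&lt;String, Double&gt; right = ...;
-
-        // Inner join, matching records whose timestamps are at most 5 minutes apart
-        KStream&lt;String, String&gt; joinedStream = left.join(right,
-            (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
-            JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
-            Joined.with(Serdes.String(), Serdes.Long(), Serdes.Double()));
-    </pre>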
-
-
-
-    <h4><a id="streams_dsl_sink" href="#streams_dsl_sink">Write streams back to Kafka</a></h4>
-
-    <p>
-        At the end of the processing, users can choose to (continuously) write the resulting streams back to a Kafka topic through
-        <code>KStream.to</code> and <code>KTable.to</code>.
-    </p>
-
-    <pre class="brush: java;">
-        joined.to("topic4");
-        // or using custom Serdes and a StreamPartitioner
-        joined.to("topic5", Produced.with(keySerde, valueSerde, myStreamPartitioner));
-    </pre>
-
-    If your application needs to continue reading and processing the records after they have been materialized
-    to a topic via <code>to</code> above, one option is to construct a new stream that reads from the output topic;
-    Kafka Streams provides a convenience method called <code>through</code>:
-
-    <pre class="brush: java;">
-        // equivalent to
-        //
-        // joined.to("topic4");
-        // materialized = builder.stream("topic4");
-        KStream&lt;String, String&gt; materialized = joined.through("topic4");
-        // if you need to provide serdes or a custom StreamPartitioner you can use
-        // the overloaded version
-        KStream&lt;String, String&gt; materialized = joined.through("topic5",
-                Produced.with(keySerde, valueSerde, myStreamPartitioner));
-    </pre>
-    <br>
-
-    <h4><a id="streams_dsl_build" href="#streams_dsl_build">Generate the processor topology</a></h4>
-
-    <p>
-        Within the Streams DSL, while users are specifying the operations to create / transform various streams as described above, a <code>Topology</code> is constructed implicitly within the <code>StreamsBuilder</code>.
-        Users can generate the constructed topology at any given point in time by calling <code>build</code>:
-    </p>
-
-    <pre class="brush: java;">
-    Topology topology = builder.build();
-    </pre>
-
-    <p>
-        Users can investigate the generated <code>Topology</code> via its <code>describe</code> API, and continue building or modifying the topology until they are satisfied with it.
-        The topology can then be used to execute the application (we will talk about this later in this section).
-    </p>
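-    <p>
-        For example, a minimal sketch that prints a textual summary of the topology's processor nodes and state stores:
-    </p>
-
-    <pre class="brush: java;">
-    Topology topology = builder.build();
-    // Print the processor nodes, their connections, and associated state stores
-    System.out.println(topology.describe());
-    </pre>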
-
-    <h3><a id="streams_interactive_queries" href="#streams_interactive_queries">Interactive Queries</a></h3>
-    <p>
-        Interactive queries let you get more from streaming than just the processing of data. This feature allows you to treat the stream processing layer as a lightweight embedded database and, more concretely, <i>to directly query the latest state</i> of your stream processing application, without needing to materialize that state to external databases or external storage first.
-        As a result, interactive queries simplify the architecture of many use cases and lead to more application-centric architectures.  For example, you often no longer need to operate and interface with a separate database cluster -- or a separate infrastructure team in your company that runs that cluster -- to share data between a Kafka Streams application (say, an event-driven microservice) and downstream applications, regardless of whether these applications use Kafka Streams or not.
-        The following diagrams juxtapose two architectures: the first does not use interactive queries whereas the second does.  Which of these architectures is the better fit depends on the concrete use case -- the important takeaway is that Kafka Streams and interactive queries give you the flexibility to pick and compose the right one, rather than limiting you to a single approach.
-    </p>
-
-
-    <figure>
-        <img class="centered" src="/{{version}}/images/streams-interactive-queries-01.png" style="width:600pt;">
-        <figcaption style="text-align: center;"><i>Without interactive queries: increased complexity and heavier footprint of architecture</i></figcaption>
-    </figure>
-
-
-    <figure>
-        <img class="centered" src="/{{version}}/images/streams-interactive-queries-02.png" style="width:500pt;">
-        <figcaption style="text-align: center;"><i>With interactive queries: simplified, more application-centric architecture</i></figcaption>
-    </figure>
-
-    <p>
-        Here are some use case examples for applications that benefit from interactive queries:
-    </p>
-    <ul>
-        <li>Real-time monitoring:  A front-end dashboard that provides threat intelligence (e.g., web servers currently
-            under attack by cyber criminals) can directly query a Kafka Streams application that continuously generates the
-            relevant information by processing network telemetry data in real-time.
-        </li>
-        <li>Video gaming:  A Kafka Streams application continuously tracks location updates from players in the gaming universe.
-            A mobile companion app can then directly query the Kafka Streams application to show the current location of a player
-            to friends and family, and invite them to come along.  Similarly, the game vendor can use the data to identify unusual
-            hotspots of players, which may indicate a bug or an operational issue.
-        </li>
-        <li>Risk and fraud:  A Kafka Streams application continuously analyzes user transactions for anomalies and suspicious
-            behavior.  An online banking application can directly query the Kafka Streams application when a user logs in to deny
-            access to those users that have been flagged as suspicious.
-        </li>
-        <li>Trend detection:  A Kafka Streams application continuously computes the latest top charts across music genres based on
-            user listening behavior that is collected in real-time.  Mobile or desktop applications of a music store can then
-            interactively query for the latest charts while users are browsing the store.
-        </li>
-    </ul>
-
-    <h4><a id="streams_developer-guide_interactive-queries_your_app" href="#streams_developer-guide_interactive-queries_your_app">Your application and interactive queries</a></h4>
-    <p>
-        Interactive queries allow you to tap into the <i>state</i> of your application, and notably to do that from outside your application.
-        However, an application is not interactively queryable out of the box: you make it queryable by leveraging the API of Kafka Streams.
-    </p>
-
-    <p>
-        It is important to understand that the state of your application -- to be extra clear, we might call it "the full state of the entire application" -- is typically split across many distributed instances of your application, and thus across many state stores that are managed locally by these application instances.
-    </p>
-
-    <img class="centered" src="/{{version}}/images/streams-interactive-queries-03.png" style="width:400pt; height:400pt;">
-
-    <p>
-        Accordingly, the API to let you interactively query your application's state has two parts, a <i>local</i> and a <i>remote</i> one:
-    </p>
-
-    <ol>
-        <li><a href="#streams_developer-guide_interactive-queries_local-stores">Querying local state stores (for an application instance)</a>:  You can query that (part of the full) state that is managed locally by an instance of your application.  Here, an application instance can directly query its own local state stores.  You can thus use the corresponding (local) data in other parts of your application code that are not related to calling the Kafka Streams API.  Querying state stores [...]
-        </li>
-        <li><a href="#streams_developer-guide_interactive-queries_discovery">Querying remote state stores (for the entire application)</a>:  To query the full state of your entire application we must be able to piece together the various local fragments of the state.  In addition to being able to (a) query local state stores as described in the previous bullet point, we also need to (b) discover all the running instances of your application in the network, including their respective stat [...]
-        </li>
-    </ol>
-
-    <table class="data-table">
-        <tbody>
-        <tr>
-            <th>Which of the following is required to access the state of ...</th>
-            <th>... an app instance (local state)</th>
-            <th>... the entire application (full state)</th>
-        </tr>
-        <tr>
-            <td>Query local state stores of an app instance</td><td>Required (but already built-in)</td><td>Required (but already built-in)</td>
-        </tr>
-        <tr>
-            <td>Make an app instance discoverable to others</td><td>Not needed</td><td>Required (but already built-in)</td>
-        </tr>
-        <tr>
-            <td>Discover all running app instances and their state stores</td><td>Not needed</td><td>Required (but already built-in)</td>
-        </tr>
-        <tr>
-            <td>Communicate with app instances over the network (RPC)</td><td>Not needed</td><td>Required (<b>user must provide</b>)</td>
-        </tr>
-        </tbody>
-    </table>
-
-    <p>
-        Kafka Streams provides all the required functionality for interactively querying your application's state out of the box, with but one exception:  if you want to expose your application's full state via interactive queries, then --
-        for reasons we explain further down below -- it is your responsibility to add an appropriate RPC layer (such as a REST
-        API) to your application that allows application instances to communicate over the network.  If, however, you only need
-        to let your application instances access their own local state, then you do not need to add such an RPC layer at all.
-    </p>
-
-    <h4><a id="streams_developer-guide_interactive-queries_local-stores" href="#streams_developer-guide_interactive-queries_local-stores">Querying local state stores (for an application instance)</a></h4>
-    <p>
-        A Kafka Streams application is typically running on many instances.
-        The state that is locally available on any given instance is only a subset of the application's entire state.
-        Querying the local stores on an instance will, by definition, <i>only return data locally available on that particular instance</i>.
-        We explain how to access data in state stores that are not locally available in section <a href="#streams_developer-guide_interactive-queries_discovery"><b>Querying remote state stores</b></a> (for the entire application).
-    </p>
-
-    <p>
-        The method <code>KafkaStreams#store(...)</code> finds an application instance's local state stores <i>by name</i> and <i>by type</i>.
-    </p>
-
-    <figure>
-        <img class="centered" src="/{{version}}/images/streams-interactive-queries-api-01.png" style="width:500pt;">
-        <figcaption style="text-align: center;"><i>Every application instance can directly query any of its local state stores</i></figcaption>
-    </figure>
-
-    <p>
-        The <i>name</i> of a state store is defined when you are creating the store, either when creating the store explicitly (e.g. when using the Processor API) or when creating the store implicitly (e.g. when using stateful operations in the DSL).
-        We show examples of how to name a state store further down below.
-    </p>
-
-    <p>
-        The <i>type</i> of a state store is defined by <code>QueryableStoreType</code>, and you can access the built-in types via the class <code>QueryableStoreTypes</code>.
-        Kafka Streams currently has two built-in types:
-    </p>
-    <ul>
-        <li>A key-value store <code>QueryableStoreTypes#keyValueStore()</code>, see <a href="#streams_developer-guide_interactive-queries_local-key-value-stores">Querying local key-value stores</a>.</li>
-        <li>A window store <code>QueryableStoreTypes#windowStore()</code>, see <a href="#streams_developer-guide_interactive-queries_local-window-stores">Querying local window stores</a>.</li>
-    </ul>
-
-    <p>
-        Both store types return <i>read-only</i> versions of the underlying state stores.
-        This read-only constraint is important to guarantee that the underlying state stores will never be mutated (e.g. new entries added) out-of-band, i.e. only the corresponding processing topology of Kafka Streams is allowed to mutate and update the state stores in order to ensure data consistency.
-    </p>
-    <p>
-        You can also implement your own <code>QueryableStoreType</code> as described in section <a href="#streams_developer-guide_interactive-queries_custom-stores"><b>Querying local custom stores</b></a>.
-    </p>
-
-    <p>
-        Kafka Streams materializes one state store per stream partition, which means your application will potentially manage many underlying state stores.
-        The API to query local state stores enables you to query all of the underlying stores without having to know which partition the data is in.
-        The objects returned from <code>KafkaStreams#store(...)</code> are therefore wrapping potentially many underlying state stores.
-        Note that it is the caller's responsibility to close the iterator on a state store;
-        otherwise it may lead to out-of-memory errors and leaked file handles, depending on the state store implementation.
-    </p>
-
-    <h4><a id="streams_developer-guide_interactive-queries_local-key-value-stores" href="#streams_developer-guide_interactive-queries_local-key-value-stores">Querying local key-value stores</a></h4>
-    <p>
-        To query a local key-value store, you must first create a topology with a key-value store:
-    </p>
-
-    <pre class="brush: java;">
-          StreamsConfig config = ...;
-          StreamsBuilder builder = ...;
-          KStream&lt;String, String&gt; textLines = ...;
-          Serde&lt;String&gt; stringSerde = Serdes.String();
-
-          // Define the processing topology (here: WordCount)
-          KGroupedStream&lt;String, String&gt; groupedByWord = textLines
-            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
-            .groupBy((key, word) -> word, Serialized.with(stringSerde, stringSerde));
-
-          // Create a key-value store named "CountsKeyValueStore" for the all-time word counts
-          groupedByWord.count("CountsKeyValueStore");
-
-          // Start an instance of the topology
-          KafkaStreams streams = new KafkaStreams(builder.build(), config);
-          streams.start();
-        </pre>
-
-    <p>
-        Above we created a key-value store named "CountsKeyValueStore".
-        This store will hold the latest count for any word found in the input stream <code>textLines</code>.
-        Once the application has started we can get access to "CountsKeyValueStore" and then query it via the <code>ReadOnlyKeyValueStore</code> API:
-    </p>
-
-    <pre class="brush: java;">
-          // Get the key-value store CountsKeyValueStore
-          ReadOnlyKeyValueStore&lt;String, Long&gt; keyValueStore =
-              streams.store("CountsKeyValueStore", QueryableStoreTypes.keyValueStore());
-
-          // Get value by key
-          System.out.println("count for hello:" + keyValueStore.get("hello"));
-
-          // Get the values for a range of keys available in this application instance
-          KeyValueIterator&lt;String, Long&gt; range = keyValueStore.range("all", "streams");
-          while (range.hasNext()) {
-            KeyValue&lt;String, Long&gt; next = range.next();
-            System.out.println("count for " + next.key + ": " + next.value);
-          }
-          range.close(); // close the iterator to avoid memory leaks
-
-          // Get the values for all of the keys available in this application instance
-          KeyValueIterator&lt;String, Long&gt; all = keyValueStore.all();
-          while (all.hasNext()) {
-            KeyValue&lt;String, Long&gt; next = all.next();
-            System.out.println("count for " + next.key + ": " + next.value);
-          }
-          all.close(); // close the iterator to avoid memory leaks
-        </pre>
-
-    <h4><a id="streams_developer-guide_interactive-queries_local-window-stores" href="#streams_developer-guide_interactive-queries_local-window-stores">Querying local window stores</a></h4>
-    <p>
-        A window store differs from a key-value store in that you will potentially have many results for any given key because the key can be present in multiple windows.
-        However, there will only ever be at most one result per window for a given key.
-    </p>
-    <p>
-        To query a local window store, you must first create a topology with a window store:
-    </p>
-
-    <pre class="brush: java;">
-          StreamsConfig config = ...;
-          StreamsBuilder builder = ...;
-          KStream&lt;String, String&gt; textLines = ...;
-          Serde&lt;String&gt; stringSerde = Serdes.String();
-
-          // Define the processing topology (here: WordCount)
-          KGroupedStream&lt;String, String&gt; groupedByWord = textLines
-            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
-            .groupBy((key, word) -> word, Serialized.with(stringSerde, stringSerde));
-
-          // Create a window state store named "CountsWindowStore" that contains the word counts for every minute
-          groupedByWord.windowedBy(TimeWindows.of(60000))
-            .count(Materialized.&lt;String, Long, WindowStore&lt;Bytes, byte[]&gt;&gt;as("CountsWindowStore")
-                .withKeySerde(Serdes.String())); // count() sets the value serde to Serdes.Long() automatically
-        </pre>
-
-    <p>
-        Above we created a window store named "CountsWindowStore" that contains the counts for words in 1-minute windows.
-        Once the application has started we can get access to "CountsWindowStore" and then query it via the <code>ReadOnlyWindowStore</code> API:
-    </p>
-
-    <pre class="brush: java;">
-          // Get the window store named "CountsWindowStore"
-          ReadOnlyWindowStore&lt;String, Long&gt; windowStore =
-              streams.store("CountsWindowStore", QueryableStoreTypes.windowStore());
-
-          // Fetch values for the key "world" for all of the windows available in this application instance.
-          // To get *all* available windows we fetch windows from the beginning of time until now.
-          long timeFrom = 0; // beginning of time = oldest available
-          long timeTo = System.currentTimeMillis(); // now (in processing-time)
-          WindowStoreIterator&lt;Long&gt; iterator = windowStore.fetch("world", timeFrom, timeTo);
-          while (iterator.hasNext()) {
-            KeyValue&lt;Long, Long&gt; next = iterator.next();
-            long windowTimestamp = next.key;
-            System.out.println("Count of 'world' @ time " + windowTimestamp + " is " + next.value);
-          }
-          iterator.close();
-        </pre>
-
-    <h4><a id="streams_developer-guide_interactive-queries_custom-stores" href="#streams_developer-guide_interactive-queries_custom-stores">Querying local custom state stores</a></h4>
-    <p>
-        Any custom state stores you use in your Kafka Streams applications can also be queried.
-        However, there are some interfaces that need to be implemented first:
-    </p>
-
-    <ol>
-        <li>Your custom state store must implement <code>StateStore</code>.</li>
-        <li>You should have an interface to represent the operations available on the store.</li>
-        <li>It is recommended that you also provide an interface that restricts access to read-only operations so users of this API can't mutate the state of your running Kafka Streams application out-of-band.</li>
-        <li>You also need to provide an implementation of <code>StoreSupplier</code> for creating instances of your store.</li>
-    </ol>
-
-    <p>
-        The class/interface hierarchy for your custom store might look something like:
-    </p>
-
-    <pre class="brush: java;">
-          public class MyCustomStore&lt;K,V&gt; implements StateStore, MyWriteableCustomStore&lt;K,V&gt; {
-            // implementation of the actual store
-          }
-
-          // Read-write interface for MyCustomStore
-          public interface MyWriteableCustomStore&lt;K,V&gt; extends MyReadableCustomStore&lt;K,V&gt; {
-            void write(K key, V value);
-          }
-
-          // Read-only interface for MyCustomStore
-          public interface MyReadableCustomStore&lt;K,V&gt; {
-            V read(K key);
-          }
-
-          public class MyCustomStoreSupplier implements StoreSupplier {
-            // implementation of the supplier for MyCustomStore
-          }
-        </pre>
-
-    <p>
-        To make this store queryable you need to:
-    </p>
-    <ul>
-        <li>Provide an implementation of <code>QueryableStoreType</code>.</li>
-        <li>Provide a wrapper class that will have access to all of the underlying instances of the store and will be used for querying.</li>
-    </ul>
-
-    <p>
-        Implementing <code>QueryableStoreType</code> is straightforward:
-    </p>
-
-    <pre class="brush: java;">
-
-          public class MyCustomStoreType&lt;K,V&gt; implements QueryableStoreType&lt;MyReadableCustomStore&lt;K,V&gt;&gt; {
-
-            // Only accept StateStores that are of type MyCustomStore
-            public boolean accepts(final StateStore stateStore) {
-              return stateStore instanceof MyCustomStore;
-            }
-
-            public MyReadableCustomStore&lt;K,V&gt; create(final StateStoreProvider storeProvider, final String storeName) {
-                return new MyCustomStoreTypeWrapper&lt;&gt;(storeProvider, storeName, this);
-            }
-
-          }
-        </pre>
-
-    <p>
-        A wrapper class is required because even a single instance of a Kafka Streams application may run multiple stream tasks and, by doing so, manage multiple local instances of a particular state store.
-        The wrapper class hides this complexity and lets you query a "logical" state store with a particular name without having to know about all of the underlying local instances of that state store.
-    </p>
-
-    <p>
-        When implementing your wrapper class you will need to make use of the <code>StateStoreProvider</code>
-        interface to get access to the underlying instances of your store.
-        <code>StateStoreProvider#stores(String storeName, QueryableStoreType&lt;T&gt; queryableStoreType)</code> returns a <code>List</code> of state stores with the given <code>storeName</code> and of the type as defined by <code>queryableStoreType</code>.
-    </p>
-    <p>
-        An example implementation of the wrapper follows (Java 8+):
-    </p>
-
-    <pre class="brush: java;">
-          // We strongly recommended implementing a read-only interface
-          // to restrict usage of the store to safe read operations!
-          public class MyCustomStoreTypeWrapper&lt;K,V&gt; implements MyReadableCustomStore&lt;K,V&gt; {
-
-            private final QueryableStoreType&lt;MyReadableCustomStore&lt;K, V&gt;&gt; customStoreType;
-            private final String storeName;
-            private final StateStoreProvider provider;
-
-            public MyCustomStoreTypeWrapper(final StateStoreProvider provider,
-                                            final String storeName,
-                                            final QueryableStoreType&lt;MyReadableCustomStore&lt;K, V&gt;&gt; customStoreType) {
-
-              // ... assign fields ...
-            }
-
-            // Implement a safe read method
-            @Override
-            public V read(final K key) {
-              // Get all the stores with storeName and of customStoreType
-              final List&lt;MyReadableCustomStore&lt;K, V&gt;&gt; stores = provider.stores(storeName, customStoreType);
-              // Find the first non-null value for the given key
-              final Optional&lt;V&gt; value = stores.stream()
-                  .map(store -> store.read(key))
-                  .filter(v -> v != null)
-                  .findFirst();
-              // Return the value if it exists
-              return value.orElse(null);
-            }
-          }
-        </pre>
-
-    <p>
-        Putting it all together you can now find and query your custom store:
-    </p>
-
-    <pre class="brush: java;">
-          StreamsConfig config = ...;
-          Topology topology = ...;
-          ProcessorSupplier processorSupplier = ...;
-
-          // Create a MyCustomStoreBuilder for the store named "the-custom-store"
-          MyCustomStoreBuilder customStoreBuilder = new MyCustomStoreBuilder("the-custom-store");
-          // Add the source topic
-          topology.addSource("input", "inputTopic");
-          // Add a custom processor that reads from the source topic
-          topology.addProcessor("the-processor", processorSupplier, "input");
-          // Connect your custom state store to the custom processor above
-          topology.addStateStore(customStoreBuilder, "the-processor");
-
-          KafkaStreams streams = new KafkaStreams(topology, config);
-          streams.start();
-
-          // Get access to the custom store
-          MyReadableCustomStore&lt;String,String&gt; store = streams.store("the-custom-store", new MyCustomStoreType&lt;String,String&gt;());
-          // Query the store
-          String value = store.read("key");
-        </pre>
-
-    <h4><a id="streams_developer-guide_interactive-queries_discovery" href="#streams_developer-guide_interactive-queries_discovery">Querying remote state stores (for the entire application)</a></h4>
-
-    <p>
-        Typically, the ultimate goal for interactive queries is not to just query locally available state stores from within an instance of a Kafka Streams application as described in the previous section.
-        Rather, you want to expose the application's full state (i.e. the state across all its instances) to other applications that might be running on different machines.
-        For example, you might have a Kafka Streams application that processes the user events in a multi-player video game, and you want to retrieve the latest status of each user directly from this application so that you can display it in a mobile companion app.
-    </p>
-    <p>
-        Three steps are needed to make the full state of your application queryable:
-    </p>
-
-    <ol>
-        <li>You must <a href="#streams_developer-guide_interactive-queries_rpc-layer">add an RPC layer to your application</a> so that the instances of your application may be interacted with via the network -- notably to respond to interactive queries.
-            By design Kafka Streams does not provide any such RPC functionality out of the box so that you can freely pick your favorite approach: a REST API, Thrift, a custom protocol, and so on.</li>
-        <li>You need to <a href="#streams_developer-guide_interactive-queries_expose-rpc">expose the respective RPC endpoints</a> of your application's instances via the <code>application.server</code> configuration setting of Kafka Streams.
-            Because RPC endpoints must be unique within a network, each instance will have its own value for this configuration setting.
-            This makes an application instance discoverable by other instances.</li>
-        <li>In the RPC layer, you can then <a href="#streams_developer-guide_interactive-queries_discover-app-instances-and-stores">discover remote application instances</a> and their respective state stores (e.g. for forwarding queries to other app instances if an instance lacks the local data to respond to a query) as well as <a href="#streams_developer-guide_interactive-queries_local-stores">query locally available state stores</a> (in order to directly respond to queries), in order to make the full state of your application queryable.</li>
-    </ol>
-
-    <figure>
-        <img class="centered" src="/{{version}}/images/streams-interactive-queries-api-02.png" style="width:500pt;">
-        <figcaption style="text-align: center;"><i>Discover any running instances of the same application as well as the respective RPC endpoints they expose for interactive queries</i></figcaption>
-    </figure>
-
-    <h4><a id="streams_developer-guide_interactive-queries_rpc-layer" href="#streams_developer-guide_interactive-queries_rpc-layer">Adding an RPC layer to your application</a></h4>
-    <p>
-        As Kafka Streams doesn't provide an RPC layer, you are free to choose your favorite approach.
-        There are many ways of doing this, and it will depend on the technologies you have chosen to use.
-        The only requirements are that the RPC layer is embedded within the Kafka Streams application and that it exposes an endpoint that other application instances and applications can connect to.
-    </p>
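-    <p>
-        For illustration only, a bare-bones HTTP endpoint could be embedded using the JDK's built-in <code>com.sun.net.httpserver.HttpServer</code> (with <code>java.net.InetSocketAddress</code> and <code>java.nio.charset.StandardCharsets</code>). The sketch below is an assumption-laden example, not a recommended production setup: it assumes a running <code>KafkaStreams</code> instance named <code>streams</code> and the key-value store "word-count" from the examples in this section.
-    </p>
-
-    <pre class="brush: java;">
-          // Hypothetical sketch: expose GET /word-count/&lt;word&gt; on port 4460
-          HttpServer server = HttpServer.create(new InetSocketAddress(4460), 0);
-          server.createContext("/word-count/", exchange -> {
-              // Extract the word (the key) from the request path
-              String word = exchange.getRequestURI().getPath().substring("/word-count/".length());
-              // Query the local state store for the word's count
-              ReadOnlyKeyValueStore&lt;String, Long&gt; store =
-                  streams.store("word-count", QueryableStoreTypes.keyValueStore());
-              byte[] body = String.valueOf(store.get(word)).getBytes(StandardCharsets.UTF_8);
-              exchange.sendResponseHeaders(200, body.length);
-              exchange.getResponseBody().write(body);
-              exchange.close();
-          });
-          server.start();
-        </pre>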
-
-    <h4><a id="streams_developer-guide_interactive-queries_expose-rpc" href="#streams_developer-guide_interactive-queries_expose-rpc">Exposing the RPC endpoints of your application</a></h4>
-    <p>
-        To enable the remote discovery of state stores running within a (typically distributed) Kafka Streams application you need to set the <code>application.server</code> configuration property in <code>StreamsConfig</code>.
-        The <code>application.server</code> property defines a unique <code>host:port</code> pair that points to the RPC endpoint of the respective instance of a Kafka Streams application.
-        It's important to understand that the value of this configuration property varies across the instances of your application.
-        When this property is set, then, for every instance of an application, Kafka Streams will keep track of the instance's RPC endpoint information, its state stores, and assigned stream partitions through instances of <code>StreamsMetadata</code>.
-    </p>
-    <p>
-        Below is an example of configuring and running a Kafka Streams application that supports the discovery of its state stores.
-    </p>
-
-    <pre class="brush: java;">
-
-          Properties props = new Properties();
-          // Set the unique RPC endpoint of this application instance through which it
-          // can be interactively queried.  In a real application, the value would most
-          // probably not be hardcoded but derived dynamically.
-          String rpcEndpoint = "host1:4460";
-          props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, rpcEndpoint);
-          // ... further settings may follow here ...
-
-          StreamsConfig config = new StreamsConfig(props);
-          StreamsBuilder builder = new StreamsBuilder();
-          Serde&lt;String&gt; stringSerde = Serdes.String();
-
-          KStream&lt;String, String&gt; textLines = builder.stream("word-count-input", Consumed.with(stringSerde, stringSerde));
-
-          KGroupedStream&lt;String, String&gt; groupedByWord = textLines
-              .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
-              .groupBy((key, word) -> word, Serialized.with(stringSerde, stringSerde));
-
-          // This call to `count()` creates a state store named "word-count".
-          // The state store is discoverable and can be queried interactively.
-          groupedByWord.count(Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as("word-count"));
-
-          // Start an instance of the topology
-          KafkaStreams streams = new KafkaStreams(builder.build(), config);
-          streams.start();
-
-          // Then, create and start the actual RPC service for remote access to this
-          // application instance's local state stores.
-          //
-          // This service should be started on the same host and port as defined above by
-          // the property `StreamsConfig.APPLICATION_SERVER_CONFIG`.  The example below is
-          // fictitious, but we provide end-to-end demo applications (such as KafkaMusicExample)
-          // that showcase how to implement such a service to get you started.
-          MyRPCService rpcService = ...;
-          rpcService.listenAt(rpcEndpoint);
-        </pre>
-
-    <h4><a id="streams_developer-guide_interactive-queries_discover-app-instances-and-stores" href="#streams_developer-guide_interactive-queries_discover-app-instances-and-stores">Discovering and accessing application instances and their respective local state stores</a></h4>
-    <p>
-        With the <code>application.server</code> property set, we can now find the locations of remote app instances and their state stores.
-        The following methods return <code>StreamsMetadata</code> objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.
-    </p>
-    <ul>
-        <li><code>KafkaStreams#allMetadata()</code>: find all instances of this application</li>
-        <li><code>KafkaStreams#allMetadataForStore(String storeName)</code>: find those application instances that manage local instances of the state store "storeName"</li>
-        <li><code>KafkaStreams#metadataForKey(String storeName, K key, Serializer&lt;K&gt; keySerializer)</code>: using the default stream partitioning strategy, find the one application instance that holds the data for the given key in the given state store</li>
-        <li><code>KafkaStreams#metadataForKey(String storeName, K key, StreamPartitioner&lt;K, ?&gt; partitioner)</code>: using <code>partitioner</code>, find the one application instance that holds the data for the given key in the given state store</li>
-    </ul>
-
-    <p>
-        If <code>application.server</code> is not configured for an application instance, then the above methods will not find any <code>StreamsMetadata</code> for it.
-    </p>
-
-    <p>
-        For example, we can now find the <code>StreamsMetadata</code> for the state store named "word-count" that we defined in the code example shown in the previous section:
-    </p>
-
-    <pre class="brush: java;">
-
-          KafkaStreams streams = ...;
-          // Find all the locations of local instances of the state store named "word-count"
-          Collection&lt;StreamsMetadata&gt; wordCountHosts = streams.allMetadataForStore("word-count");
-
-          // For illustrative purposes, we assume using an HTTP client to talk to remote app instances.
-          HttpClient http = ...;
-
-          // Get the word count for word (aka key) 'alice': Approach 1
-          //
-          // We first find the one app instance that manages the count for 'alice' in its local state stores.
-          StreamsMetadata metadata = streams.metadataForKey("word-count", "alice", Serdes.String().serializer());
-          // Then, we query only that single app instance for the latest count of 'alice'.
-          // Note: The RPC URL shown below is fictitious and only serves to illustrate the idea.  Ultimately,
-          // the URL (or, in general, the method of communication) will depend on the RPC layer you opted to
-          // implement.  Again, we provide end-to-end demo applications (such as KafkaMusicExample) that showcase
-          // how to implement such an RPC layer.
-          Long result = http.getLong("http://" + metadata.host() + ":" + metadata.port() + "/word-count/alice");
-
-          // Get the word count for word (aka key) 'alice': Approach 2
-          //
-          // Alternatively, we could also choose (say) a brute-force approach where we query every app instance
-          // until we find the one that happens to know about 'alice'.
-          Optional&lt;Long&gt; result2 = streams.allMetadataForStore("word-count")
-              .stream()
-              .map(streamsMetadata -> {
-                  // Construct the (fictitious) full endpoint URL to query the current remote application instance
-                  String url = "http://" + streamsMetadata.host() + ":" + streamsMetadata.port() + "/word-count/alice";
-                  // Read and return the count for 'alice', if any.
-                  return http.getLong(url);
-              })
-              .filter(s -> s != null)
-              .findFirst();
-        </pre>
-
-    <p>
-        At this point the full state of the application is interactively queryable:
-    </p>
-    <ul>
-        <li>We can discover the running instances of the application as well as the state stores they manage locally.</li>
-        <li>Through the RPC layer that was added to the application, we can communicate with these application instances over the network and query them for locally available state.</li>
-        <li>The application instances are able to serve such queries because they can directly query their own local state stores and respond via the RPC layer.</li>
-        <li>Collectively, this allows us to query the full state of the entire application.</li>
-    </ul>
-
-    <h3><a id="streams_developer-guide_memory-management" href="#streams_developer-guide_memory-management">Memory Management</a></h3>
-
-
-    <h4><a id="streams_developer-guide_memory-management_record-cache" href="#streams_developer-guide_memory-management_record-cache">Record caches in the DSL</a></h4>
-    <p>
-    Developers of an application using the DSL have the option to specify, for an instance of a processing topology, the
-    total memory (RAM) size of a record cache that is leveraged by the following <code>KTable</code> instances:
-    </p>
-
-    <ol>
-        <li>Source <code>KTable</code>, i.e. <code>KTable</code> instances that are created via <code>StreamsBuilder#table()</code> or <code>StreamsBuilder#globalTable()</code>.</li>
-        <li>Aggregation <code>KTable</code>, i.e. instances of <code>KTable</code> that are created as a result of aggregations.</li>
-    </ol>
-    <p>
-        For such <code>KTable</code> instances, the record cache is used for:
-    </p>
-    <ol>
-        <li>Internal caching and compacting of output records before they are written by the underlying stateful processor node to its internal state store.</li>
-        <li>Internal caching and compacting of output records before they are forwarded from the underlying stateful processor node to any of its downstream processor nodes.</li>
-    </ol>
-    <p>
-        Here is a motivating example:
-    </p>
-
-    <ul>
-        <li>Imagine the input is a <code>KStream&lt;String, Integer&gt;</code> with the records <code>&lt;A, 1&gt;, &lt;D, 5&gt;, &lt;A, 20&gt;, &lt;A, 300&gt;</code>.
-            Note that the focus in this example is on the records with key == <code>A</code>.
-        </li>
-        <li>
-            An aggregation computes the sum of record values, grouped by key, for the input above and returns a <code>KTable&lt;String, Integer&gt;</code>.
-            <ul>
-                <li><b>Without caching</b>, what is emitted for key <code>A</code> is a sequence of output records that represent changes in the
-                    resulting aggregation table (here, the parentheses denote changes, where the left and right numbers denote the new
-                    aggregate value and the previous aggregate value, respectively):
-                    <code>&lt;A, (1, null)&gt;, &lt;A, (21, 1)&gt;, &lt;A, (321, 21)&gt;</code>.</li>
-                <li>
-                    <b>With caching</b>, the aforementioned three output records for key <code>A</code> would likely be compacted in the cache,
-                    leading to a single output record <code>&lt;A, (321, null)&gt;</code> that is written to the aggregation's internal state store
-                    and forwarded to any downstream operations.
-                </li>
-            </ul>
-        </li>
-    </ul>
-
-    <p>
-        The cache size is specified through the <code>cache.max.bytes.buffering</code> parameter, which is a global setting per processing topology:
-    </p>
-
-    <pre class="brush: java;">
-        // Enable record cache of size 10 MB.
-        Properties streamsConfiguration = new Properties();
-        streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
-    </pre>
-
-    <p>
-        This parameter controls the number of bytes allocated for caching.
-        Specifically, for a processor topology instance with <code>T</code> threads and <code>C</code> bytes allocated for caching,
-        each thread will have an even split of <code>C/T</code> bytes to construct its own cache and use as it sees fit among its tasks.
-        That is, there are as many caches as there are threads, but caches are not shared across threads.
-        The basic API for the cache is made of <code>put()</code> and <code>get()</code> calls.
-        Records are evicted using a simple LRU scheme once the cache size is reached.
-        The first time a keyed record <code>R1 = &lt;K1, V1&gt;</code> finishes processing at a node, it is marked as dirty in the cache.
-        Any other keyed record <code>R2 = &lt;K1, V2&gt;</code> with the same key <code>K1</code> that is processed on that node during that time will overwrite <code>&lt;K1, V1&gt;</code>, which we also refer to as "being compacted".
-        Note that this has the same effect as <a href="https://kafka.apache.org/documentation.html#compaction">Kafka's log compaction</a>, but happens (a) earlier, while the
-        records are still in memory, and (b) within your client-side application rather than on the server-side aka the Kafka broker.
-        Upon flushing, <code>R2</code> is (1) forwarded to the next processing node and (2) written to the local state store.
-    </p>
-
-    <p>
-        The semantics of caching is that data is flushed to the state store and forwarded to the next downstream processor node
-        whenever the commit interval (<code>commit.interval.ms</code>) elapses or the cache fills up (<code>cache.max.bytes.buffering</code>, i.e. cache pressure), whichever happens first.
-        Both <code>commit.interval.ms</code> and <code>cache.max.bytes.buffering</code> are <b>global</b> parameters:  they apply to all processor nodes in
-        the topology, i.e., it is not possible to specify different parameters for each node.
-        Below we provide some example settings for both parameters based on desired scenarios.
-    </p>
-
-    <p>To turn off caching, the cache size can be set to zero:</p>
-    <pre class="brush: java;">
-        // Disable record cache
-        Properties streamsConfiguration = new Properties();
-        streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
-    </pre>
-
-    <p>
-        Turning off caching might result in high write traffic for the underlying RocksDB store.
-        With default settings, caching is enabled within Kafka Streams but RocksDB caching is disabled.
-        Thus, to avoid high write traffic it is recommended to enable RocksDB caching if Kafka Streams caching is turned off.
-    </p>
-
-    <p>
-        For example, the RocksDB block cache could be set to 100 MB and the write buffer size to 32 MB, as sketched below.
-    </p>
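-    <p>
-        One way to do this is via a custom <code>RocksDBConfigSetter</code>; the sketch below uses a hypothetical class name, <code>CustomRocksDBConfig</code>, and the sizes from the example above:
-    </p>
-
-    <pre class="brush: java;">
-        public static class CustomRocksDBConfig implements RocksDBConfigSetter {
-            @Override
-            public void setConfig(final String storeName, final Options options, final Map&lt;String, Object&gt; configs) {
-                // Set the RocksDB block cache to 100 MB
-                BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
-                tableConfig.setBlockCacheSize(100 * 1024 * 1024L);
-                options.setTableFormatConfig(tableConfig);
-                // Set the RocksDB write buffer to 32 MB
-                options.setWriteBufferSize(32 * 1024 * 1024L);
-            }
-        }
-
-        // Register the config setter with Kafka Streams
-        streamsConfiguration.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
-    </pre>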
-    <p>
-        To enable caching but still have an upper bound on how long records will be cached, the commit interval can be set
-        appropriately (in this example, it is set to 1000 milliseconds):
-    </p>
-    <pre class="brush: java;">
-        Properties streamsConfiguration = new Properties();
-        // Enable record cache of size 10 MB.
-        streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
-        // Set commit interval to 1 second.
-        streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
-    </pre>
-
-    <p>
-        The illustration below shows the effect of these two configurations visually.
-        For simplicity we have records with 4 keys: blue, red, yellow and green. Without loss of generality, let's assume the cache has space for only 3 keys.
-        When the cache is disabled, we observe that all the input records will be output. With the cache enabled, we make the following observations.
-        First, most records are output at the end of a commit interval (e.g., at <code>t1</code> one blue record is output, which is the final overwrite of the blue key up to that time).
-        Second, some records are output because of cache pressure, i.e. before the end of a commit interval (cf. the red record right before <code>t2</code>).
-        With smaller cache sizes we expect cache pressure to be the primary factor that dictates when records are output. With larger cache sizes, the commit interval will be the primary factor.
-        Third, the number of records output has been reduced (here: from 15 to 8).
-    </p>
-
-    <img class="centered" src="/{{version}}/images/streams-cache-and-commit-interval.png" style="width:500pt;height:400pt;">
-    <h4><a id="streams_developer-guide_memory-management_state-store-cache" href="#streams_developer-guide_memory-management_state-store-cache">State store caches in the Processor API</a></h4>
-
-    <p>
-        Developers of a Kafka Streams application using the Processor API have the option to specify, for an instance of a
-        processing topology, the total memory (RAM) size of the <i>state store cache</i> that is used for:
-    </p>
-
-    <ul><li>Internal <i>caching and compacting</i> of output records before they are written from a <b>stateful</b> processor node to its state stores.</li></ul>
-
-    <p>
-        Note that, unlike <a href="#streams_developer-guide_memory-management_record-cache">record caches</a> in the DSL, the state
-        store cache in the Processor API <i>will not cache or compact</i> any output records that are being forwarded downstream.
-        In other words, downstream processor nodes see all records, whereas the state stores see a reduced number of records.
-        It is important to note that this does not impact correctness of the system but is merely a performance optimization
-        for the state stores.
-    </p>
-    <p>
-        A note on terminology: we use the narrower term <i>state store caches</i> when we refer to the Processor API and the
-        broader term <i>record caches</i> when we are writing about the DSL.
-        We made a conscious choice to not expose the more general record caches to the Processor API so that we keep it simple and flexible.
-        For example, developers of the Processor API might choose to store a record in a state store while forwarding a different value downstream, i.e., they
-        might not want to use the unified record cache for both state store and forwarding downstream.
-    </p>
-    <p>
-        Following from the example first shown in section <a href="#streams_processor_statestore">State Stores</a>, to enable caching,
-        you first create a <code>StoreBuilder</code> and then call <code>withCachingEnabled</code> (note that caches
-        are disabled by default and there is no explicit <code>withCachingDisabled</code> call):
-    </p>
-    <pre class="brush: java;">
-        KeyValueBytesStoreSupplier countSupplier = Stores.persistentKeyValueStore("Counts");
-        StoreBuilder&lt;KeyValueStore&lt;String, Long&gt;&gt; builder = Stores.keyValueStoreBuilder(countSupplier, Serdes.String(), Serdes.Long());
-        builder.withCachingEnabled();
-    </pre>
-
-    <h4><a id="streams_developer-guide_memory-management_other_memory_usage" href="#streams_developer-guide_memory-management_other_memory_usage">Other memory usage</a></h4>
-    <p>
-    There are other modules inside Apache Kafka that allocate memory during runtime. They include the following:
-    </p>
-    <ul>
-        <li>Producer buffering, managed by the producer config <code>buffer.memory</code></li>
-
-        <li>Consumer buffering, currently not strictly managed, but can be indirectly controlled by fetch size, i.e.,
-            <code>fetch.max.bytes</code> and <code>fetch.max.wait.ms</code>.</li>
-
-        <li>Both producer and consumer also have separate TCP send / receive buffers that are not counted as the buffering memory.
-            These are controlled by the <code>send.buffer.bytes</code> / <code>receive.buffer.bytes</code> configs.</li>
-
-        <li>Deserialized objects buffering: after <code>consumer.poll()</code> returns records, they will be deserialized to extract
-            the timestamp and buffered within Kafka Streams.
-            Currently this is only indirectly controlled by <code>buffered.records.per.partition</code>.</li>
-
-        <li>RocksDB's own memory usage, both on-heap and off-heap; critical configs (for RocksDB version 4.1.0) include
-            <code>block_cache_size</code>, <code>write_buffer_size</code> and <code>max_write_buffer_number</code>.
-            These can be specified through the <code>rocksdb.config.setter</code> configuration.</li>
-    </ul>
-
-    <h3><a id="streams_configure_execute" href="#streams_configure_execute">Application Configuration and Execution</a></h3>
-
-    <p>
-        Besides defining the topology, developers also need to configure their application
-        in <code>StreamsConfig</code> before running it. A complete list of
-        Kafka Streams configs can be found <a href="/{{version}}/documentation/#streamsconfigs"><b>here</b></a>.
-        Note that different parameters have different "levels of importance", with the following interpretation:
-    </p>
-    <ul>
-        <li> HIGH: you would most likely change the default value if you go to production </li>
-        <li> MEDIUM: default value might be ok, but you should double-check it </li>
-        <li> LOW: default value is most likely ok; only consider changing it if you hit issues when running in production </li>
-    </ul>
-
-    <p>
-        Specifying the configuration in Kafka Streams is similar to the Kafka Producer and Consumer clients. Typically, you create a <code>java.util.Properties</code> instance,
-        set the necessary parameters, and construct a <code>StreamsConfig</code> instance from the <code>Properties</code> instance.
-    </p>
-
-    <pre class="brush: java;">
-    import java.util.Properties;
-    import org.apache.kafka.streams.StreamsConfig;
-
-    Properties settings = new Properties();
-    // Set a few key parameters
-    settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-first-streams-application");
-    settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092");
-
-    // Set a few user customized parameters
-    settings.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
-    settings.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, MyTimestampExtractor.class);
-
-    // Any further settings
-    settings.put(... , ...);
-
-    // Create an instance of StreamsConfig from the Properties instance
-    StreamsConfig config = new StreamsConfig(settings);
-    </pre>
-
-    <h4><a id="streams_client_config" href="#streams_client_config">Producer and Consumer Configuration</a></h4>
-    <p>
-        Apart from Kafka Streams' own configuration parameters you can also specify parameters for the Kafka consumers and producers that are used internally,
-        depending on the needs of your application. Similar to the Streams settings you define any such consumer and/or producer settings via <code>StreamsConfig</code>.
-        Note that some consumer and producer configuration parameters use the same parameter name. For example, <code>send.buffer.bytes</code> and <code>receive.buffer.bytes</code>
-        configure TCP buffers, while <code>request.timeout.ms</code> and <code>retry.backoff.ms</code> control retries for client requests (among others).
-        If you want to set different values for consumer and producer for such a parameter, you can prefix the parameter name with <code>consumer.</code> or <code>producer.</code>:
-    </p>
-
-    <pre class="brush: java;">
-    Properties settings = new Properties();
-    // Example of a "normal" setting for Kafka Streams
-    settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");
-
-    // Customize the Kafka consumer settings
-    settings.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
-
-    // Customize a common client setting for both consumer and producer
-    settings.put(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG, 100L);
-
-    // Customize different values for consumer and producer
-    settings.put("consumer." + ConsumerConfig.RECEIVE_BUFFER_CONFIG, 1024 * 1024);
-    settings.put("producer." + ProducerConfig.RECEIVE_BUFFER_CONFIG, 64 * 1024);
-    // Alternatively, you can use
-    settings.put(StreamsConfig.consumerPrefix(ConsumerConfig.RECEIVE_BUFFER_CONFIG), 1024 * 1024);
-    settings.put(StreamsConfig.producerPrefix(ProducerConfig.RECEIVE_BUFFER_CONFIG), 64 * 1024);
-    </pre>
-
-    <h4><a id="streams_broker_config" href="#streams_broker_config">Broker Configuration</a></h4>
-    <p>
-        Introduced in 0.11.0 is a new broker config that is particularly relevant to Kafka Streams applications, <code>group.initial.rebalance.delay.ms</code>.
-        This config specifies the time, in milliseconds, that the <code>GroupCoordinator</code> will delay the initial consumer rebalance.
-        The rebalance will be further delayed by the value of <code>group.initial.rebalance.delay.ms</code> as each new member joins the consumer group, up to a maximum of the value set by <code>max.poll.interval.ms</code>.
-        The net benefit is that this should reduce the overall startup time for Kafka Streams applications with more than one thread.
-        The default value for <code>group.initial.rebalance.delay.ms</code> is 3 seconds.
-    </p>
-    <p>
-        In practice this means that if you are starting up your Kafka Streams app from a cold start, then when the first member joins the group there will be at least a 3 second delay before it is assigned any tasks.
-        If any other members join the group within the initial 3 seconds, then there will be a further 3 second delay.
-        Once no new members have joined the group within the 3 second delay, or <code>max.poll.interval.ms</code> is reached, then the group rebalance can complete and all current members will be assigned tasks.
-        The benefit of this approach, particularly for Kafka Streams applications, is that we can now delay the assignment and re-assignment of potentially expensive tasks as new members join.
-        So we can avoid the situation where one instance is assigned all tasks, begins restoring/processing, only to shortly after be rebalanced, and then have to start again with half of the tasks and so on.
-    </p>
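-    <p>
-        For illustration, this delay is configured on the broker side in <code>server.properties</code>; the value shown
-        below is the default:
-    </p>
-    <pre class="brush: text;">
-    # Delay (in milliseconds) before the initial consumer rebalance
-    group.initial.rebalance.delay.ms=3000
-    </pre>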
-
-    <h4><a id="streams_topic_config" href="#streams_topic_config">Internal Topic Configuration</a></h4>
-    <p>
-        Kafka Streams automatically creates internal repartitioning and changelog topics.
-        You can override the default configs used when creating these topics by adding any configs from <code>TopicConfig</code> to your <code>StreamsConfig</code> with the prefix <code>StreamsConfig.TOPIC_PREFIX</code>:
-    </p>
-
-    <pre class="brush: java;">
-    Properties settings = new Properties();
-    // Example of a "normal" setting for Kafka Streams
-    settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");
-
-    // Add a topic config by prefixing with topic.
-    settings.put(StreamsConfig.TOPIC_PREFIX + TopicConfig.SEGMENT_BYTES_CONFIG, 1024 * 1024);
-
-    // Alternatively, you can use
-    settings.put(StreamsConfig.topicPrefix(TopicConfig.SEGMENT_BYTES_CONFIG), 1024 * 1024);
-    </pre>
-
-    <p>
-        For changelog topics you can also override the default configs on a per store basis.
-        This can be done by using any method overload that has a <code>Materialized</code> as a parameter:
-    </p>
-
-    <pre class="brush: java;">
-        // a map to add topic config
-        Map&lt;String, String&gt; topicConfig = new HashMap&lt;&gt;();
-        topicConfig.put(TopicConfig.SEGMENT_MS_CONFIG, "10000");
-
-        final Materialized&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt; materialized = Materialized.as("store")
-            .withKeySerde(Serdes.String())
-            .withValueSerde(Serdes.Long())
-            .withLoggingEnabled(topicConfig); // pass in the config overrides
-
-        groupedStream.count(materialized);
-    </pre>
-
-    <h4><a id="streams_execute" href="#streams_execute">Executing Your Kafka Streams Application</a></h4>
-    <p>
-        You can call Kafka Streams from anywhere in your application code.
-        Most commonly, though, you would do so within the <code>main()</code> method of your application, or some variant thereof.
-    </p>
-
-    <p>
-        First, you must create an instance of <code>KafkaStreams</code>.
-        The first argument of the <code>KafkaStreams</code> constructor takes an instance of <code>Topology</code>.
-        This topology can be either created directly following the <code>Processor</code> API or implicitly via the <code>StreamsBuilder</code> in the higher-level Streams DSL.
-        The second argument is an instance of <code>StreamsConfig</code> mentioned above.
-    </p>
-
-    <pre class="brush: java;">
-    import org.apache.kafka.streams.KafkaStreams;
-    import org.apache.kafka.streams.StreamsBuilder;
-    import org.apache.kafka.streams.StreamsConfig;
-    import org.apache.kafka.streams.Topology;
-
-    // Use the builders to define the actual processing topology, e.g. to specify
-    // from which input topics to read, which stream operations (filter, map, etc.)
-    // should be called, and so on.
-
-    Topology topology = ...; // when using the Processor API
-    //
-    // OR
-    //
-    StreamsBuilder builder = ...;  // when using the Kafka Streams DSL
-    Topology topology = builder.build();
-
-    // Use the configuration to tell your application where the Kafka cluster is,
-    // which serializers/deserializers to use by default, to specify security settings,
-    // and so on.
-    StreamsConfig config = ...;
-
-    KafkaStreams streams = new KafkaStreams(topology, config);
-    </pre>
-
-    <p>
-        At this point, internal structures have been initialized, but the processing is not started yet. You have to explicitly start the Kafka Streams thread by calling the <code>start()</code> method:
-    </p>
-
-    <pre class="brush: java;">
-    // Start the Kafka Streams instance
-    streams.start();
-    </pre>
-
-    <p>
-        To catch any unexpected exceptions, you may set a <code>java.lang.Thread.UncaughtExceptionHandler</code> before you start the application. This handler is called whenever a stream thread is terminated by an unexpected exception:
-    </p>
-
-    <pre class="brush: java;">
-    streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
-        public void uncaughtException(Thread t, Throwable e) {
-            // here you should examine the exception and perform an appropriate action!
-        }
-    });
-    </pre>
-
-    <p>
-        To retrieve information about the local running threads, you can use the <code>localThreadsMetadata()</code> method after you start the application.
-    </p>
-
-    <pre class="brush: java;">
-    // For instance, use this method to print/monitor the partitions assigned to each local task.
-    Set&lt;ThreadMetadata&gt; threads = streams.localThreadsMetadata();
-    ...
-    </pre>
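-    <p>
-        For example, here is a minimal sketch that prints each local thread and its active tasks; the method names
-        follow the <code>ThreadMetadata</code> and <code>TaskMetadata</code> Javadocs:
-    </p>
-    <pre class="brush: java;">
-    for (final ThreadMetadata thread : streams.localThreadsMetadata()) {
-        System.out.println("thread: " + thread.threadName() + " state: " + thread.threadState());
-        for (final TaskMetadata task : thread.activeTasks()) {
-            // Print each task id and the topic partitions assigned to it.
-            System.out.println("  task: " + task.taskId() + " partitions: " + task.topicPartitions());
-        }
-    }
-    </pre>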
-
-    <p>
-        To stop the application instance call the <code>close()</code> method:
-    </p>
-
-    <pre class="brush: java;">
-    // Stop the Kafka Streams instance
-    streams.close();
-    </pre>
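-    <p>
-        A common pattern, shown here as a sketch (it is not required by Kafka Streams), is to register a JVM shutdown
-        hook so that <code>close()</code> is called when the process is terminated:
-    </p>
-    <pre class="brush: java;">
-    // Close the Kafka Streams instance cleanly on JVM shutdown (e.g., on SIGTERM or Ctrl-C).
-    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
-    </pre>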
-
-    Now it's time to execute your application that uses the Kafka Streams library, which can be run just like any other Java application: there is no special magic or requirement on the side of Kafka Streams.
-    For example, you can package your Java application as a fat jar file and then start the application via:
-
-    <pre class="brush: bash;">
-    # Start the application in class `com.example.MyStreamsApp`
-    # from the fat jar named `path-to-app-fatjar.jar`.
-    $ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
-    </pre>
-
-    <p>
-        When the application instance starts running, the defined processor topology will be initialized as one or more stream tasks that can be executed in parallel by the stream threads within the instance.
-        If the processor topology defines any state stores, these state stores will also be (re-)constructed, if possible, during the initialization
-        period of their associated stream tasks.
-        It is important to understand that, when starting your application as described above, you are actually launching what Kafka Streams considers to be one instance of your application.
-        More than one instance of your application may be running at a time, and in fact the common scenario is that there are indeed multiple instances of your application running in parallel (e.g., on another JVM or another machine).
-        In such cases, Kafka Streams transparently re-assigns tasks from the existing instances to the new instance that you just started.
-        See <a href="/{{version}}/documentation/streams/architecture#streams_architecture_tasks"><b>Stream Partitions and Tasks</b></a> and <a href="/{{version}}/documentation/streams/architecture#streams_architecture_threads"><b>Threading Model</b></a> for details.
-    </p>
-
-    <div class="pagination">
-        <a href="/{{version}}/documentation/streams/quickstart" class="pagination__btn pagination__btn__prev">Previous</a>
-        <a href="/{{version}}/documentation/streams/core-concepts" class="pagination__btn pagination__btn__next">Next</a>
-    </div>
-</script>
-
-<!--#include virtual="../../includes/_header.htm" -->
-<!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
-    <!--#include virtual="../../includes/_nav.htm" -->
-    <div class="right">
-        <!--#include virtual="../../includes/_docs_banner.htm" -->
-        <ul class="breadcrumbs">
-            <li><a href="/documentation">Documentation</a></li>
-            <li><a href="/documentation/streams">Kafka Streams API</a></li>
-        </ul>
-        <div class="p-content"></div>
-    </div>
-</div>
-<!--#include virtual="../../includes/_footer.htm" -->
-<script>
-$(function() {
-  // Show selected style on nav item
-  $('.b-nav__streams').addClass('selected');
-
-     //sticky secondary nav
-          var $navbar = $(".sub-nav-sticky"),
-               y_pos = $navbar.offset().top,
-               height = $navbar.height();
-       
-           $(window).scroll(function() {
-               var scrollTop = $(window).scrollTop();
-           
-               if (scrollTop > y_pos - height) {
-                   $navbar.addClass("navbar-fixed")
-               } else if (scrollTop <= y_pos) {
-                   $navbar.removeClass("navbar-fixed")
-               }
-           });
-
-  // Display docs subnav items
-  $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
-});
-</script>
\ No newline at end of file
diff --git a/docs/streams/developer-guide/app-reset-tool.html b/docs/streams/developer-guide/app-reset-tool.html
new file mode 100644
index 0000000..dfee64f
--- /dev/null
+++ b/docs/streams/developer-guide/app-reset-tool.html
@@ -0,0 +1,173 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+
+    <div class="section" id="application-reset-tool">
+        <span id="streams-developer-guide-app-reset"></span><h1>Application Reset Tool<a class="headerlink" href="#application-reset-tool" title="Permalink to this headline"></a></h1>
+        <p>You can reset an application and force it to reprocess its data from scratch by using the application reset tool.
+            This can be useful for development and testing, or when fixing bugs.</p>
+        <p>The application reset tool handles the Kafka Streams <a class="reference internal" href="manage-topics.html#streams-developer-guide-topics-user"><span class="std std-ref">user topics</span></a> (input,
+            output, and intermediate topics) and <a class="reference internal" href="manage-topics.html#streams-developer-guide-topics-internal"><span class="std std-ref">internal topics</span></a> differently
+            when resetting the application.</p>
+        <p>Here&#8217;s what the application reset tool does for each topic type:</p>
+        <ul class="simple">
+            <li>Input topics: Reset to the beginning of the topic. This means that it sets the application&#8217;s committed consumer offsets for all partitions to each partition&#8217;s <code class="docutils literal"><span class="pre">earliest</span></code> offset (for consumer group <code class="docutils literal"><span class="pre">application.id</span></code>).</li>
+            <li>Intermediate topics: Skip to the end of the topic, i.e., set the application&#8217;s committed consumer offsets for all partitions to each partition&#8217;s <code class="docutils literal"><span class="pre">logSize</span></code> (for consumer group <code class="docutils literal"><span class="pre">application.id</span></code>).</li>
+            <li>Internal topics: Delete the internal topic (this automatically deletes any committed offsets).</li>
+        </ul>
+        <p>The application reset tool does not:</p>
+        <ul class="simple">
+            <li>Reset output topics of an application. If any output (or intermediate) topics are consumed by downstream
+                applications, it is your responsibility to adjust those downstream applications as appropriate when you reset the
+                upstream application.</li>
+            <li>Reset the local environment of your application instances.  It is your responsibility to delete the local
+                state on any machine on which an application instance was run.  See the instructions in section
+                <a class="reference internal" href="#streams-developer-guide-reset-local-environment"><span class="std std-ref">Step 2: Reset the local environments of your application instances</span></a> on how to do this.</li>
+        </ul>
+        <dl class="docutils">
+            <dt>Prerequisites</dt>
+            <dd><ul class="first last">
+                <li><p class="first">All instances of your application must be stopped. Otherwise, the application may enter an invalid state, crash, or produce incorrect results. You can verify whether the consumer group with ID <code class="docutils literal"><span class="pre">application.id</span></code> is still active by using <code class="docutils literal"><span class="pre">bin/kafka-consumer-groups</span></code>.</p>
+                </li>
+                <li><p class="first">Use this tool with care and double-check its parameters: If you provide wrong parameter values (e.g., typos in <code class="docutils literal"><span class="pre">application.id</span></code>) or specify parameters inconsistently (e.g., specify the wrong input topics for the application), this tool might invalidate the application&#8217;s state or even impact other applications, consumer groups, or your Kafka topics.</p>
+                </li>
+                <li><p class="first">You should manually delete and re-create any intermediate topics before running the application reset tool. This will free up disk space in Kafka brokers.</p>
+                </li>
+                <li><p class="first">You should delete and recreate intermediate topics before running the application reset tool, unless the following applies:</p>
+                    <blockquote>
+                        <div><ul class="simple">
+                            <li>You have external downstream consumers for the application&#8217;s intermediate topics.</li>
+                            <li>You are in a development environment where manually deleting and re-creating intermediate topics is unnecessary.</li>
+                        </ul>
+                        </div></blockquote>
+                </li>
+            </ul>
+            </dd>
+        </dl>
+        <div class="section" id="step-1-run-the-application-reset-tool">
+            <h2>Step 1: Run the application reset tool<a class="headerlink" href="#step-1-run-the-application-reset-tool" title="Permalink to this headline"></a></h2>
+            <p>Invoke the application reset tool from the command line:</p>
+            <div class="highlight-bash"><div class="highlight"><pre><span></span>&lt;path-to-kafka&gt;/bin/kafka-streams-application-reset.sh
+</pre></div>
+            </div>
+            <p>The tool accepts the following parameters:</p>
+            <div class="highlight-bash"><div class="highlight"><pre><span></span>Option <span class="o">(</span>* <span class="o">=</span> required<span class="o">)</span>                 Description
+---------------------                 -----------
+* --application-id &lt;String: id&gt;       The Kafka Streams application ID
+                                        <span class="o">(</span>application.id<span class="o">)</span>.
+--bootstrap-servers &lt;String: urls&gt;    Comma-separated list of broker urls with
+                                        format: HOST1:PORT1,HOST2:PORT2
+                                        <span class="o">(</span>default: localhost:9092<span class="o">)</span>
+--config-file &lt;String: file name&gt;     Property file containing configs to be
+                                        passed to admin clients and embedded
+                                        consumer.
+--dry-run                             Display the actions that would be
+                                        performed without executing the reset
+                                        commands.
+--input-topics &lt;String: list&gt;         Comma-separated list of user input
+                                        topics. For these topics, the tool will
+                                        reset the offset to the earliest
+                                        available offset.
+--intermediate-topics &lt;String: list&gt;  Comma-separated list of intermediate user
+                                        topics <span class="o">(</span>topics used in the through<span class="o">()</span>
+                                        method<span class="o">)</span>. For these topics, the tool
+                                        will skip to the end.
+--zookeeper                           Zookeeper option is deprecated by
+                                        bootstrap.servers, as the reset tool
+                                        no longer accesses Zookeeper
+                                        directly.
+</pre></div>
+            </div>
+            <p>Parameters can be combined as needed.  For example, if you want to restart an application from an
+                empty internal state, but not reprocess previous data, simply omit the parameters <code class="docutils literal"><span class="pre">--input-topics</span></code> and
+                <code class="docutils literal"><span class="pre">--intermediate-topics</span></code>.</p>
+        </div>
+        <div class="section" id="step-2-reset-the-local-environments-of-your-application-instances">
+            <span id="streams-developer-guide-reset-local-environment"></span><h2>Step 2: Reset the local environments of your application instances<a class="headerlink" href="#step-2-reset-the-local-environments-of-your-application-instances" title="Permalink to this headline"></a></h2>
+            <p>For a complete application reset, you must delete the application&#8217;s local state directory on any machines where the
+                application instance was run. You must do this before restarting an application instance on the same machine.  You can
+                use either of these methods:</p>
+            <ul class="simple">
+                <li>The API method <code class="docutils literal"><span class="pre">KafkaStreams#cleanUp()</span></code> in your application code (see the sketch below).</li>
+                <li>Manually delete the corresponding local state directory (default location: <code class="docutils literal"><span class="pre">/tmp/kafka-streams/&lt;application.id&gt;</span></code>). For more information, see the <a class="reference internal" href="../javadocs.html#streams-javadocs"><span class="std std-ref">state.dir</span></a> setting in the StreamsConfig class.</li>
+            </ul>
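+            <p>For example, here is a sketch of the first method; note that <code class="docutils literal"><span class="pre">cleanUp()</span></code>
+                may only be called while the instance is not running, i.e., before <code class="docutils literal"><span class="pre">start()</span></code>
+                or after <code class="docutils literal"><span class="pre">close()</span></code>:</p>
+            <div class="highlight-java"><div class="highlight"><pre>KafkaStreams streams = new KafkaStreams(topology, config);
+// Delete the application's local state; only do this when you intend to reset the application.
+streams.cleanUp();
+streams.start();
+</pre></div>
+            </div>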
+</div>
+</div>
+
+
+               </div>
+              </div>
+  <div class="pagination">
+    <a href="/{{version}}/documentation/streams/developer-guide/security" class="pagination__btn pagination__btn__prev">Previous</a>
+      <a href="/{{version}}/documentation/streams/upgrade-guide" class="pagination__btn pagination__btn__next">Next</a>
+  </div>
+</script>
+
+<!--#include virtual="../../../includes/_header.htm" -->
+<!--#include virtual="../../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+  <!--#include virtual="../../../includes/_nav.htm" -->
+  <div class="right">
+    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <ul class="breadcrumbs">
+      <li><a href="/documentation">Documentation</a></li>
+      <li><a href="/documentation/streams">Kafka Streams</a></li>
+      <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+    </ul>
+    <div class="p-content"></div>
+  </div>
+</div>
+<!--#include virtual="../../../includes/_footer.htm" -->
+<script>
+    $(function() {
+        // Show selected style on nav item
+        $('.b-nav__streams').addClass('selected');
+
+        //sticky secondary nav
+        var $navbar = $(".sub-nav-sticky"),
+            y_pos = $navbar.offset().top,
+            height = $navbar.height();
+
+        $(window).scroll(function() {
+            var scrollTop = $(window).scrollTop();
+
+            if (scrollTop > y_pos - height) {
+                $navbar.addClass("navbar-fixed")
+            } else if (scrollTop <= y_pos) {
+                $navbar.removeClass("navbar-fixed")
+            }
+        });
+
+        // Display docs subnav items
+        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+    });
+</script>
\ No newline at end of file
diff --git a/docs/streams/developer-guide/datatypes.html b/docs/streams/developer-guide/datatypes.html
new file mode 100644
index 0000000..155ee2c
--- /dev/null
+++ b/docs/streams/developer-guide/datatypes.html
@@ -0,0 +1,223 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+
+  <div class="section" id="data-types-and-serialization">
+    <span id="streams-developer-guide-serdes"></span><h1>Data Types and Serialization<a class="headerlink" href="#data-types-and-serialization" title="Permalink to this headline"></a></h1>
+    <p>Every Kafka Streams application must provide SerDes (Serializer/Deserializer) for the data types of record keys and record values (e.g. <code class="docutils literal"><span class="pre">java.lang.String</span></code>) to materialize the data when necessary.  Operations that require such SerDes information include: <code class="docutils literal"><span class="pre">stream()</span></code>, <code class="docutils literal"><span class="pre">table()</span></code>, <code class="docutils literal"><span class="pre">to()</span></code>, <code class="docutils literal"><span class="pre">through()</span></code>, <code class="docutils literal"><span class="pre">groupBy()</span></code>, and <code class="docutils literal"><span class="pre">groupByKey()</span></code>.</p>
+    <p>You can provide SerDes by using either of these methods:</p>
+    <ul class="simple">
+      <li>By setting default SerDes via a <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance.</li>
+      <li>By specifying explicit SerDes when calling the appropriate API methods, thus overriding the defaults.</li>
+    </ul>
+
+      <p class="topic-title first"><b>Table of Contents</b></p>
+      <ul class="simple">
+          <li><a class="reference internal" href="#configuring-serdes" id="id1">Configuring SerDes</a></li>
+          <li><a class="reference internal" href="#overriding-default-serdes" id="id2">Overriding default SerDes</a></li>
+          <li><a class="reference internal" href="#available-serdes" id="id3">Available SerDes</a><ul>
+              <li><a class="reference internal" href="#primitive-and-basic-types" id="id4">Primitive and basic types</a></li>
+              <li><a class="reference internal" href="#avro" id="id5">Avro</a></li>
+              <li><a class="reference internal" href="#json" id="id6">JSON</a></li>
+              <li><a class="reference internal" href="#further-serdes" id="id7">Further serdes</a></li>
+          </ul>
+    <div class="section" id="configuring-serdes">
+      <h2>Configuring SerDes<a class="headerlink" href="#configuring-serdes" title="Permalink to this headline"></a></h2>
+      <p>SerDes specified in the Streams configuration via <code class="docutils literal"><span class="pre">StreamsConfig</span></code> are used as the default in your Kafka Streams application.</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serdes</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.StreamsConfig</span><span class="o">;</span>
+
+<span class="n">Properties</span> <span class="n">settings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="c1">// Default serde for keys of data records (here: built-in serde for String type)</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">KEY_SERDE_CLASS_CONFIG</span><span class="o">,</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">().</span><span class="na">getClass</span><span class="o">().</span><span class="na">getName</span><span class="o">());</span>
+<span class="c1">// Default serde for values of data records (here: built-in serde for Long type)</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">VALUE_SERDE_CLASS_CONFIG</span><span class="o">,</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">().</span><span class="na">getClass</span><span class="o">().</span><span class="na">getName</span><span class="o">());</span>
+
+<span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsConfig</span><span class="o">(</span><span class="n">settings</span><span class="o">);</span>
+</pre></div>
+      </div>
+    </div>
+    <div class="section" id="overriding-default-serdes">
+      <h2>Overriding default SerDes<a class="headerlink" href="#overriding-default-serdes" title="Permalink to this headline"></a></h2>
+      <p>You can also specify SerDes explicitly by passing them to the appropriate API methods, which overrides the default serde settings:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serde</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serdes</span><span class="o">;</span>
+
+<span class="kd">final</span> <span class="n">Serde</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">stringSerde</span> <span class="o">=</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">();</span>
+<span class="kd">final</span> <span class="n">Serde</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">longSerde</span> <span class="o">=</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">();</span>
+
+<span class="c1">// The stream userCountByRegion has type `String` for record keys (for region)</span>
+<span class="c1">// and type `Long` for record values (for user counts).</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">userCountByRegion</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">userCountByRegion</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;RegionCountsTopic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">stringSerde</span><span class="o">,</span> <span class="n">longSerde</span><span class="o">));</span>
+</pre></div>
+      </div>
+      <p>If you want to override SerDes selectively, i.e., keep the default for either keys or values, then simply don&#8217;t specify the serde for the side where you want to use the default:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serde</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serdes</span><span class="o">;</span>
+
+<span class="c1">// Use the default serializer for record keys (here: region as String) by not specifying the key serde,</span>
+<span class="c1">// but override the default serializer for record values (here: userCount as Long).</span>
+<span class="kd">final</span> <span class="n">Serde</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">longSerde</span> <span class="o">=</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">();</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">userCountByRegion</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">userCountByRegion</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;RegionCountsTopic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">valueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span>
+</pre></div>
+      </div>
+    </div>
+    <div class="section" id="available-serdes">
+      <span id="streams-developer-guide-serdes-available"></span><h2>Available SerDes<a class="headerlink" href="#available-serdes" title="Permalink to this headline"></a></h2>
+      <div class="section" id="primitive-and-basic-types">
+        <h3>Primitive and basic types<a class="headerlink" href="#primitive-and-basic-types" title="Permalink to this headline"></a></h3>
+        <p>Apache Kafka includes several built-in serde implementations for Java primitives and basic types such as <code class="docutils literal"><span class="pre">byte[]</span></code> in
+          its <code class="docutils literal"><span class="pre">kafka-clients</span></code> Maven artifact:</p>
+        <div class="highlight-xml"><div class="highlight"><pre><span></span><span class="nt">&lt;dependency&gt;</span>
+    <span class="nt">&lt;groupId&gt;</span>org.apache.kafka<span class="nt">&lt;/groupId&gt;</span>
+    <span class="nt">&lt;artifactId&gt;</span>kafka-clients<span class="nt">&lt;/artifactId&gt;</span>
+    <span class="nt">&lt;version&gt;</span>1.0.0-cp1<span class="nt">&lt;/version&gt;</span>
+<span class="nt">&lt;/dependency&gt;</span>
+</pre></div>
+        </div>
+        <p>This artifact provides the following serde implementations under the package <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/common/serialization">org.apache.kafka.common.serialization</a>, which you can leverage when, for example, defining default serializers in your Streams configuration.</p>
+        <table border="1" class="docutils">
+          <colgroup>
+            <col width="17%" />
+            <col width="83%" />
+          </colgroup>
+          <thead valign="bottom">
+          <tr class="row-odd"><th class="head">Data type</th>
+            <th class="head">Serde</th>
+          </tr>
+          </thead>
+          <tbody valign="top">
+          <tr class="row-even"><td>byte[]</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.ByteArray()</span></code>, <code class="docutils literal"><span class="pre">Serdes.Bytes()</span></code> (see tip below)</td>
+          </tr>
+          <tr class="row-odd"><td>ByteBuffer</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.ByteBuffer()</span></code></td>
+          </tr>
+          <tr class="row-even"><td>Double</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.Double()</span></code></td>
+          </tr>
+          <tr class="row-odd"><td>Integer</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.Integer()</span></code></td>
+          </tr>
+          <tr class="row-even"><td>Long</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.Long()</span></code></td>
+          </tr>
+          <tr class="row-odd"><td>String</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.String()</span></code></td>
+          </tr>
+          </tbody>
+        </table>
+        <div class="admonition tip">
+          <p><b>Tip</b></p>
+          <p class="last"><a class="reference external" href="https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/common/utils/Bytes.java">Bytes</a> is a wrapper for Java&#8217;s <code class="docutils literal"><span class="pre">byte[]</span></code> (byte array) that supports proper equality and ordering semantics.  You may want to consider using <code class="docutils literal"><span class="pre">Bytes</span></code> instead of <code class="docutils literal"><span [...]
+        </div>
+      </div>
+      <div class="section" id="json">
+        <h3>JSON<a class="headerlink" href="#json" title="Permalink to this headline"></a></h3>
+        <p>The code examples of Kafka Streams also include a basic serde implementation for JSON:</p>
+        <ul class="simple">
+          <li><a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/JsonPOJOSerializer.java">JsonPOJOSerializer</a></li>
+          <li><a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/JsonPOJODeserializer.java">JsonPOJODeserializer</a></li>
+        </ul>
+        <p>You can construct a unified JSON serde from the <code class="docutils literal"><span class="pre">JsonPOJOSerializer</span></code> and <code class="docutils literal"><span class="pre">JsonPOJODeserializer</span></code> via
+          <code class="docutils literal"><span class="pre">Serdes.serdeFrom(&lt;serializerInstance&gt;,</span> <span class="pre">&lt;deserializerInstance&gt;)</span></code>.  The
+          <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java">PageViewTypedDemo</a>
+          example demonstrates how to use this JSON serde.</p>
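+        <p>For example, here is a sketch of constructing such a JSON serde for a hypothetical <code class="docutils literal"><span class="pre">PageView</span></code>
+          POJO, following the pattern used in the demo:</p>
+        <div class="highlight-java"><div class="highlight"><pre>// Tell the example serializer/deserializer which POJO class to bind to, then combine them.
+Map&lt;String, Object&gt; serdeProps = new HashMap&lt;&gt;();
+serdeProps.put("JsonPOJOClass", PageView.class);
+
+Serializer&lt;PageView&gt; serializer = new JsonPOJOSerializer&lt;&gt;();
+serializer.configure(serdeProps, false);
+
+Deserializer&lt;PageView&gt; deserializer = new JsonPOJODeserializer&lt;&gt;();
+deserializer.configure(serdeProps, false);
+
+Serde&lt;PageView&gt; pageViewSerde = Serdes.serdeFrom(serializer, deserializer);
+</pre></div>
+        </div>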
+      </div>
+    <div class="section" id="implementing-custom-serdes">
+      <span id="streams-developer-guide-serdes-custom"></span><h2>Implementing custom SerDes<a class="headerlink" href="#implementing-custom-serdes" title="Permalink to this headline"></a></h2>
+      <p>If you need to implement custom SerDes, your best starting point is to take a look at the source code references of
+        existing SerDes (see previous section).  Typically, your workflow will be similar to the following (a sketch of step 1 appears after the list):</p>
+      <ol class="arabic simple">
+        <li>Write a <em>serializer</em> for your data type <code class="docutils literal"><span class="pre">T</span></code> by implementing
+          <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/common/serialization/Serializer.java">org.apache.kafka.common.serialization.Serializer</a>.</li>
+        <li>Write a <em>deserializer</em> for <code class="docutils literal"><span class="pre">T</span></code> by implementing
+          <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/common/serialization/Deserializer.java">org.apache.kafka.common.serialization.Deserializer</a>.</li>
+        <li>Write a <em>serde</em> for <code class="docutils literal"><span class="pre">T</span></code> by implementing
+          <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/common/serialization/Serde.java">org.apache.kafka.common.serialization.Serde</a>,
+          which you either do manually (see existing SerDes in the previous section) or by leveraging helper functions in
+          <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/common/serialization/Serdes.java">Serdes</a>
+          such as <code class="docutils literal"><span class="pre">Serdes.serdeFrom(Serializer&lt;T&gt;,</span> <span class="pre">Deserializer&lt;T&gt;)</span></code>.</li>
+      </ol>
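+      <p>As an illustration of step 1, here is a minimal sketch of a serializer for <code class="docutils literal"><span class="pre">String</span></code>
+        (purely illustrative; for strings you would normally use the built-in <code class="docutils literal"><span class="pre">Serdes.String()</span></code>):</p>
+      <div class="highlight-java"><div class="highlight"><pre>import java.nio.charset.StandardCharsets;
+import java.util.Map;
+import org.apache.kafka.common.serialization.Serializer;
+
+public class MyStringSerializer implements Serializer&lt;String&gt; {
+
+    @Override
+    public void configure(final Map&lt;String, ?&gt; configs, final boolean isKey) {
+        // nothing to configure in this sketch
+    }
+
+    @Override
+    public byte[] serialize(final String topic, final String data) {
+        // Encode the value as UTF-8 bytes; null stays null.
+        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
+    }
+
+    @Override
+    public void close() {
+        // nothing to close
+    }
+}
+</pre></div>
+      </div>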
+</div>
+</div>
+
+
+               </div>
+              </div>
+  <div class="pagination">
+    <a href="/{{version}}/documentation/streams/developer-guide/processor-api" class="pagination__btn pagination__btn__prev">Previous</a>
+    <a href="/{{version}}/documentation/streams/developer-guide/interactive-queries" class="pagination__btn pagination__btn__next">Next</a>
+  </div>
+</script>
+
+<!--#include virtual="../../../includes/_header.htm" -->
+<!--#include virtual="../../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+  <!--#include virtual="../../../includes/_nav.htm" -->
+  <div class="right">
+    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <ul class="breadcrumbs">
+      <li><a href="/documentation">Documentation</a></li>
+      <li><a href="/documentation/streams">Kafka Streams</a></li>
+      <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+    </ul>
+    <div class="p-content"></div>
+  </div>
+</div>
+<!--#include virtual="../../../includes/_footer.htm" -->
+<script>
+    $(function() {
+        // Show selected style on nav item
+        $('.b-nav__streams').addClass('selected');
+
+        //sticky secondary nav
+        var $navbar = $(".sub-nav-sticky"),
+            y_pos = $navbar.offset().top,
+            height = $navbar.height();
+
+        $(window).scroll(function() {
+            var scrollTop = $(window).scrollTop();
+
+            if (scrollTop > y_pos - height) {
+                $navbar.addClass("navbar-fixed")
+            } else if (scrollTop <= y_pos) {
+                $navbar.removeClass("navbar-fixed")
+            }
+        });
+
+        // Display docs subnav items
+        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+    });
+</script>
diff --git a/docs/streams/developer-guide/dsl-api.html b/docs/streams/developer-guide/dsl-api.html
new file mode 100644
index 0000000..6b24cb5
--- /dev/null
+++ b/docs/streams/developer-guide/dsl-api.html
@@ -0,0 +1,3208 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <!-- div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div>
+    </div -->
+
+
+    <div class="section" id="streams-dsl">
+        <span id="streams-developer-guide-dsl"></span><h1>Streams DSL<a class="headerlink" href="#streams-dsl" title="Permalink to this headline"></a></h1>
+        <p>The Kafka Streams DSL (Domain Specific Language) is built on top of the Streams Processor API. It is recommended for
+            most users, especially beginners. Most data processing operations can be expressed in just a few lines of DSL code.</p>
+        <div class="contents local topic" id="table-of-contents">
+            <p class="topic-title first"><b>Table of Contents</b></p>
+            <ul class="simple">
+                <li><a class="reference internal" href="#overview" id="id7">Overview</a></li>
+                <li><a class="reference internal" href="#creating-source-streams-from-kafka" id="id8">Creating source streams from Kafka</a></li>
+                <li><a class="reference internal" href="#transform-a-stream" id="id9">Transform a stream</a><ul>
+                    <li><a class="reference internal" href="#stateless-transformations" id="id10">Stateless transformations</a></li>
+                    <li><a class="reference internal" href="#stateful-transformations" id="id11">Stateful transformations</a><ul>
+                        <li><a class="reference internal" href="#aggregating" id="id12">Aggregating</a></li>
+                        <li><a class="reference internal" href="#joining" id="id13">Joining</a><ul>
+                            <li><a class="reference internal" href="#join-co-partitioning-requirements" id="id14">Join co-partitioning requirements</a></li>
+                            <li><a class="reference internal" href="#kstream-kstream-join" id="id15">KStream-KStream Join</a></li>
+                            <li><a class="reference internal" href="#ktable-ktable-join" id="id16">KTable-KTable Join</a></li>
+                            <li><a class="reference internal" href="#kstream-ktable-join" id="id17">KStream-KTable Join</a></li>
+                            <li><a class="reference internal" href="#kstream-globalktable-join" id="id18">KStream-GlobalKTable Join</a></li>
+                        </ul>
+                        </li>
+                        <li><a class="reference internal" href="#windowing" id="id19">Windowing</a><ul>
+                            <li><a class="reference internal" href="#tumbling-time-windows" id="id20">Tumbling time windows</a></li>
+                            <li><a class="reference internal" href="#hopping-time-windows" id="id21">Hopping time windows</a></li>
+                            <li><a class="reference internal" href="#sliding-time-windows" id="id22">Sliding time windows</a></li>
+                            <li><a class="reference internal" href="#session-windows" id="id23">Session Windows</a></li>
+                        </ul>
+                        </li>
+                    </ul>
+                    </li>
+                    <li><a class="reference internal" href="#applying-processors-and-transformers-processor-api-integration" id="id24">Applying processors and transformers (Processor API integration)</a></li>
+                </ul>
+                </li>
+                <li><a class="reference internal" href="#writing-streams-back-to-kafka" id="id25">Writing streams back to Kafka</a></li>
+            </ul>
+        </div>
+        <div class="section" id="overview">
+            <h2><a class="toc-backref" href="#id7">Overview</a><a class="headerlink" href="#overview" title="Permalink to this headline"></a></h2>
+            <p>In comparison to the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a>, only the DSL supports:</p>
+            <ul class="simple">
+                <li>Built-in abstractions for <a class="reference internal" href="../concepts.html#streams-concepts-duality"><span class="std std-ref">streams and tables</span></a> in the form of
+                    <a class="reference internal" href="../concepts.html#streams-concepts-kstream"><span class="std std-ref">KStream</span></a>, <a class="reference internal" href="../concepts.html#streams-concepts-ktable"><span class="std std-ref">KTable</span></a>, and
+                    <a class="reference internal" href="../concepts.html#streams-concepts-globalktable"><span class="std std-ref">GlobalKTable</span></a>.  Having first-class support for streams and tables is crucial
+                    because, in practice, most use cases require not just either streams or databases/tables, but a combination of both.
+                    For example, if your use case is to create a customer 360-degree view that is updated in real-time, what your
+                    application will be doing is transforming many input <em>streams</em> of customer-related events into an output <em>table</em>
+                    that contains a continuously updated 360-degree view of your customers.</li>
+                <li>Declarative, functional programming style with
+                    <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateless"><span class="std std-ref">stateless transformations</span></a> (e.g. <code class="docutils literal"><span class="pre">map</span></code> and <code class="docutils literal"><span class="pre">filter</span></code>)
+                    as well as <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateful"><span class="std std-ref">stateful transformations</span></a>
+                    such as <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregations</span></a> (e.g. <code class="docutils literal"><span class="pre">count</span></code> and <code class="docutils literal"><span class="pre">reduce</span></code>),
+                    <a class="reference internal" href="#streams-developer-guide-dsl-joins"><span class="std std-ref">joins</span></a> (e.g. <code class="docutils literal"><span class="pre">leftJoin</span></code>), and
+                    <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">windowing</span></a> (e.g. <a class="reference internal" href="#windowing-session"><span class="std std-ref">session windows</span></a>).</li>
+            </ul>
+            <p>With the DSL, you can define <a class="reference internal" href="../concepts.html#streams-concepts-processor-topology"><span class="std std-ref">processor topologies</span></a> (i.e., the logical
+                processing plan) in your application. The steps to accomplish this are:</p>
+            <ol class="arabic simple">
+                <li>Specify <a class="reference internal" href="#streams-developer-guide-dsl-sources"><span class="std std-ref">one or more input streams that are read from Kafka topics</span></a>.</li>
+                <li>Compose <a class="reference internal" href="#streams-developer-guide-dsl-transformations"><span class="std std-ref">transformations</span></a> on these streams.</li>
+                <li>Write the <a class="reference internal" href="#streams-developer-guide-dsl-destinations"><span class="std std-ref">resulting output streams back to Kafka topics</span></a>, or expose the processing results of your application directly to other applications through <a class="reference internal" href="interactive-queries.html#streams-developer-guide-interactive-queries"><span class="std std-ref">interactive queries</span></a> (e.g., via a REST API).</li>
+            </ol>
+            <p>After the application is run, the defined processor topologies are continuously executed (i.e., the processing plan is put into
+                action). A step-by-step guide for writing a stream processing application using the DSL is provided below.</p>
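+            <p>To make these three steps concrete, here is a minimal sketch of a complete topology (the topic names
+                <code class="docutils literal"><span class="pre">input-topic</span></code> and <code class="docutils literal"><span class="pre">output-topic</span></code> are illustrative placeholders):</p>
+            <div class="highlight-java"><div class="highlight"><pre>import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.kstream.Consumed;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.Produced;
+
+StreamsBuilder builder = new StreamsBuilder();
+
+// Step 1: specify an input stream that is read from a Kafka topic.
+KStream&lt;String, String&gt; input = builder.stream(&quot;input-topic&quot;,
+    Consumed.with(Serdes.String(), Serdes.String()));
+
+// Step 2: compose transformations on the stream.
+KStream&lt;String, String&gt; upperCased = input.mapValues(value -&gt; value.toUpperCase());
+
+// Step 3: write the resulting output stream back to a Kafka topic.
+upperCased.to(&quot;output-topic&quot;, Produced.with(Serdes.String(), Serdes.String()));
+</pre></div>
+            </div>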
+            <p>For a complete list of available API functionality, see also the <a class="reference internal" href="../javadocs.html#streams-javadocs"><span class="std std-ref">Kafka Streams Javadocs</span></a>.</p>
+        </div>
+        <div class="section" id="creating-source-streams-from-kafka">
+            <span id="streams-developer-guide-dsl-sources"></span><h2><a class="toc-backref" href="#id8">Creating source streams from Kafka</a><a class="headerlink" href="#creating-source-streams-from-kafka" title="Permalink to this headline"></a></h2>
+            <p>You can easily read data from Kafka topics into your application.  The following operations are supported.</p>
+            <table border="1" class="non-scrolling-table width-100-percent docutils">
+                <colgroup>
+                    <col width="22%" />
+                    <col width="78%" />
+                </colgroup>
+                <thead valign="bottom">
+                <tr class="row-odd"><th class="head">Reading from Kafka</th>
+                    <th class="head">Description</th>
+                </tr>
+                </thead>
+                <tbody valign="top">
+                <tr class="row-even"><td><p class="first"><strong>Stream</strong></p>
+                    <ul class="last simple">
+                        <li><em>input topics</em> &rarr; KStream</li>
+                    </ul>
+                </td>
+                    <td><p class="first">Creates a <a class="reference internal" href="../concepts.html#streams-concepts-kstream"><span class="std std-ref">KStream</span></a> from the specified Kafka input topics and interprets the data
+                        as a <a class="reference internal" href="../concepts.html#streams-concepts-kstream"><span class="std std-ref">record stream</span></a>.
+                        A <code class="docutils literal"><span class="pre">KStream</span></code> represents a <em>partitioned</em> record stream.
+                        <a class="reference external" href="../javadocs/org/apache/kafka/streams/StreamsBuilder.html#stream-java.lang.String-">(details)</a></p>
+                        <p>In the case of a KStream, the local KStream instance of every application instance will
+                            be populated with data from only <strong>a subset</strong> of the partitions of the input topic.  Collectively, across
+                            all application instances, all input topic partitions are read and processed.</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serdes</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.StreamsBuilder</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.KStream</span><span class="o">;</span>
+
+<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsBuilder</span><span class="o">();</span>
+
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">wordCounts</span> <span class="o">=</span> <span class="n">builder</span><span class="o">.</span><span class="na">stream</span><span class="o">(</span>
+    <span class="s">&quot;word-counts-input-topic&quot;</span><span class="o">,</span> <span class="cm">/* input topic */</span>
+    <span class="n">Consumed</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key serde */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()</span>   <span class="cm">/* value serde */</span>
+    <span class="o">);</span>
+</pre></div>
+                        </div>
+                        <p>If you do not specify SerDes explicitly, the default SerDes from the
+                            <a class="reference internal" href="config-streams.html#streams-developer-guide-configuration"><span class="std std-ref">configuration</span></a> are used.</p>
+                        <p>You <strong>must specify SerDes explicitly</strong> if the key or value types of the records in the Kafka input
+                            topics do not match the configured default SerDes. For information about configuring default SerDes, available
+                            SerDes, and implementing your own custom SerDes see <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data Types and Serialization</span></a>.</p>
+                        <p class="last">Several variants of <code class="docutils literal"><span class="pre">stream</span></code> exist, for example to specify a regex pattern for the input topics to read from.</p>
+                    </td>
+                </tr>
+                <tr class="row-odd"><td><p class="first"><strong>Table</strong></p>
+                    <ul class="last simple">
+                        <li><em>input topic</em> &rarr; KTable</li>
+                    </ul>
+                </td>
+                    <td><p class="first">Reads the specified Kafka input topic into a <a class="reference internal" href="../concepts.html#streams-concepts-ktable"><span class="std std-ref">KTable</span></a>.  The topic is
+                        interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE
+                        (when the record value is not <code class="docutils literal"><span class="pre">null</span></code>) or as DELETE (when the value is <code class="docutils literal"><span class="pre">null</span></code>) for that key.
+                        <a class="reference external" href="../javadocs/org/apache/kafka/streams/StreamsBuilder.html#table-java.lang.String-">(details)</a></p>
+                        <p>In the case of a KTable, the local KTable instance of every application instance will
+                            be populated with data from only <strong>a subset</strong> of the partitions of the input topic.  Collectively, across
+                            all application instances, all input topic partitions are read and processed.</p>
+                        <p>You must provide a name for the table (more precisely, for the internal
+                            <a class="reference internal" href="../architecture.html#streams-architecture-state"><span class="std std-ref">state store</span></a> that backs the table).  This is required for
+                            supporting <a class="reference internal" href="interactive-queries.html#streams-developer-guide-interactive-queries"><span class="std std-ref">interactive queries</span></a> against the table. When a
+                            name is not provided, the table will not be queryable and an internal name will be assigned to the state store.</p>
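+                        <p>For illustration, a table can be read as follows. This is a minimal sketch that mirrors the
+                            GlobalKTable example below; the topic and store names are placeholders:</p>
+                        <div class="highlight-java"><div class="highlight"><pre>import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.utils.Bytes;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.Materialized;
+import org.apache.kafka.streams.state.KeyValueStore;
+
+StreamsBuilder builder = new StreamsBuilder();
+
+KTable&lt;String, Long&gt; wordCounts = builder.table(
+    &quot;word-counts-input-topic&quot;, /* input topic */
+    Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as(
+      &quot;word-counts-store&quot; /* table/store name */)
+      .withKeySerde(Serdes.String()) /* key serde */
+      .withValueSerde(Serdes.Long()) /* value serde */
+  );
+</pre></div>
+                        </div>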
+                        <p>If you do not specify SerDes explicitly, the default SerDes from the
+                            <a class="reference internal" href="config-streams.html#streams-developer-guide-configuration"><span class="std std-ref">configuration</span></a> are used.</p>
+                        <p>You <strong>must specify SerDes explicitly</strong> if the key or value types of the records in the Kafka input
+                            topics do not match the configured default SerDes. For information about configuring default SerDes, available
+                            SerDes, and implementing your own custom SerDes see <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data Types and Serialization</span></a>.</p>
+                        <p class="last">Several variants of <code class="docutils literal"><span class="pre">table</span></code> exist, for example to specify the <code class="docutils literal"><span class="pre">auto.offset.reset</span></code> policy to be used when
+                            reading from the input topic.</p>
+                    </td>
+                </tr>
+                <tr class="row-even"><td><p class="first"><strong>Global Table</strong></p>
+                    <ul class="last simple">
+                        <li><em>input topic</em> &rarr; GlobalKTable</li>
+                    </ul>
+                </td>
+                    <td><p class="first">Reads the specified Kafka input topic into a <a class="reference internal" href="../concepts.html#streams-concepts-globalktable"><span class="std std-ref">GlobalKTable</span></a>.  The topic is
+                        interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE
+                        (when the record value is not <code class="docutils literal"><span class="pre">null</span></code>) or as DELETE (when the value is <code class="docutils literal"><span class="pre">null</span></code>) for that key.
+                        <a class="reference external" href="../javadocs/org/apache/kafka/streams/StreamsBuilder.html#globalTable-java.lang.String-">(details)</a></p>
+                        <p>In the case of a GlobalKTable, the local GlobalKTable instance of every application instance will
+                            be populated with data from <strong>all</strong> partitions of the input topic.</p>
+                        <p>You must provide a name for the table (more precisely, for the internal
+                            <a class="reference internal" href="../architecture.html#streams-architecture-state"><span class="std std-ref">state store</span></a> that backs the table).  This is required for
+                            supporting <a class="reference internal" href="interactive-queries.html#streams-developer-guide-interactive-queries"><span class="std std-ref">interactive queries</span></a> against the table. When a
+                            name is not provided, the table will not be queryable and an internal name will be assigned to the state store.</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.common.serialization.Serdes</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.StreamsBuilder</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.GlobalKTable</span><span class="o">;</span>
+
+<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsBuilder</span><span class="o">();</span>
+
+<span class="n">GlobalKTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">wordCounts</span> <span class="o">=</span> <span class="n">builder</span><span class="o">.</span><span class="na">globalTable</span><span class="o">(</span>
+    <span class="s">&quot;word-counts-input-topic&quot;</span><span class="o">,</span>
+    <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;&gt;</span><span class="n">as</span><span class="o">(</span>
+      <span class="s">&quot;word-counts-global-store&quot;</span> <span class="cm">/* table/store name */</span><span class="o">)</span>
+      <span class="o">.</span><span class="na">withKeySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key serde */</span>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* value serde */</span>
+    <span class="o">);</span>
+</pre></div>
+                        </div>
+                        <p>You <strong>must specify SerDes explicitly</strong> if the key or value types of the records in the Kafka input
+                            topics do not match the configured default SerDes. For information about configuring default SerDes, available
+                            SerDes, and implementing your own custom SerDes see <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data Types and Serialization</span></a>.</p>
+                        <p class="last">Several variants of <code class="docutils literal"><span class="pre">globalTable</span></code> exist, for example to specify explicit SerDes.</p>
+                    </td>
+                </tr>
+                </tbody>
+            </table>
+        </div>
+        <div class="section" id="transform-a-stream">
+            <span id="streams-developer-guide-dsl-transformations"></span><h2><a class="toc-backref" href="#id9">Transform a stream</a><a class="headerlink" href="#transform-a-stream" title="Permalink to this headline"></a></h2>
+            <p>The KStream and KTable interfaces support a variety of transformation operations.
+                Each of these operations can be translated into one or more connected processors in the underlying processor topology.
+                Since KStream and KTable are strongly typed, all of these transformation operations are defined as
+                generic functions where users can specify the input and output data types.</p>
+            <p>Some KStream transformations may generate one or more KStream objects, for example:</p>
+            <ul class="simple">
+                <li><code class="docutils literal"><span class="pre">filter</span></code> and <code class="docutils literal"><span class="pre">map</span></code> on a KStream will generate another KStream</li>
+                <li><code class="docutils literal"><span class="pre">branch</span></code> on a KStream can generate multiple KStreams</li>
+            </ul>
+            <p>Some other transformations may generate a KTable object; for example, an aggregation of a KStream also yields a KTable. This allows Kafka Streams to continuously update the computed value upon the arrival of <a class="reference internal" href="../concepts.html#streams-concepts-aggregations"><span class="std std-ref">late records</span></a> after the value
+                has already been produced to downstream transformation operators.</p>
+            <p>All KTable transformation operations can only generate another KTable. However, the Kafka Streams DSL does provide a special function
+                that converts a KTable representation into a KStream. All of these transformation methods can be chained together to compose
+                a complex processor topology.</p>
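+            <p>The special function mentioned above is <code class="docutils literal"><span class="pre">KTable#toStream()</span></code>. A minimal sketch (the variables are placeholders):</p>
+            <div class="highlight-java"><div class="highlight"><pre>KTable&lt;String, Long&gt; table = ...;
+
+// Convert the table&#39;s changelog into a record stream.
+KStream&lt;String, Long&gt; stream = table.toStream();
+</pre></div>
+            </div>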
+            <p>These transformation operations are described in the following subsections:</p>
+            <ul class="simple">
+                <li><a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateless"><span class="std std-ref">Stateless transformations</span></a></li>
+                <li><a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateful"><span class="std std-ref">Stateful transformations</span></a></li>
+            </ul>
+            <div class="section" id="stateless-transformations">
+                <span id="streams-developer-guide-dsl-transformations-stateless"></span><h3><a class="toc-backref" href="#id10">Stateless transformations</a><a class="headerlink" href="#stateless-transformations" title="Permalink to this headline"></a></h3>
+                <p>Stateless transformations do not require state for processing and they do not require a state store associated with
+                    the stream processor. Kafka 0.11.0 and later allows you to materialize the result from a stateless <code class="docutils literal"><span class="pre">KTable</span></code> transformation. This allows the result to be queried through <a class="reference internal" href="interactive-queries.html#streams-developer-guide-interactive-queries"><span class="std std-ref">interactive queries</span></a>. To materialize a <code class="docutils literal"><span class="pre">KTable</span></code>, each of the below stateless operations can be augmented with an optional <code class="docutils literal"><span class="pre">queryableStoreName</span></code> argument.</p>
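+                <p>As a sketch of such a materialization, the following uses the <code class="docutils literal"><span class="pre">Materialized</span></code> overload of <code class="docutils literal"><span class="pre">KTable#filter</span></code> introduced in Kafka 1.0, which supplies the queryable store name; the store name and variables are placeholders:</p>
+                <div class="highlight-java"><div class="highlight"><pre>import org.apache.kafka.common.utils.Bytes;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.Materialized;
+import org.apache.kafka.streams.state.KeyValueStore;
+
+KTable&lt;String, Long&gt; table = ...;
+
+// Materialize the result of a stateless transformation under an explicit
+// store name so that it can be accessed via interactive queries.
+KTable&lt;String, Long&gt; onlyPositives = table.filter(
+    (key, value) -&gt; value &gt; 0,
+    Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as(&quot;filtered-store&quot;));
+</pre></div>
+                </div>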
+                <table border="1" class="non-scrolling-table width-100-percent docutils">
+                    <colgroup>
+                        <col width="22%" />
+                        <col width="78%" />
+                    </colgroup>
+                    <thead valign="bottom">
+                    <tr class="row-odd"><th class="head">Transformation</th>
+                        <th class="head">Description</th>
+                    </tr>
+                    </thead>
+                    <tbody valign="top">
+                    <tr class="row-even"><td><p class="first"><strong>Branch</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream[]</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Branch (or split) a <code class="docutils literal"><span class="pre">KStream</span></code> based on the supplied predicates into one or more <code class="docutils literal"><span class="pre">KStream</span></code> instances.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#branch-org.apache.kafka.streams.kstream.Predicate...-">details</a>)</p>
+                            <p>Predicates are evaluated in order.  A record is placed into one and only one output stream on the first match:
+                                if the n-th predicate evaluates to true, the record is placed into the n-th stream.  If no predicate matches,
+                                the record is dropped.</p>
+                            <p>Branching is useful, for example, to route records to different downstream topics.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;[]</span> <span class="n">branches</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">branch</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">key</span><span class="o">.</span><span class="na">startsWith</span><span class="o">(</span><span class="s">&quot;A&quot;</span><span class="o">),</span> <span class="cm">/* first predicate  */</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">key</span><span class="o">.</span><span class="na">startsWith</span><span class="o">(</span><span class="s">&quot;B&quot;</span><span class="o">),</span> <span class="cm">/* second predicate */</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="kc">true</span>                 <span class="cm">/* third predicate  */</span>
+  <span class="o">);</span>
+
+<span class="c1">// KStream branches[0] contains all records whose keys start with &quot;A&quot;</span>
+<span class="c1">// KStream branches[1] contains all records whose keys start with &quot;B&quot;</span>
+<span class="c1">// KStream branches[2] contains all other records</span>
+
+<span class="c1">// Java 7 example: cf. `filter` for how to create `Predicate` instances</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>Filter</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                            <li>KTable &rarr; KTable</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Evaluates a boolean function for each element and retains those for which the function returns true.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#filter-org.apache.kafka.streams.kstream.Predicate-">KStream details</a>,
+                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#filter-org.apache.kafka.streams.kstream.Predicate-">KTable details</a>)</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// A filter that selects (keeps) only positive numbers</span>
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">onlyPositives</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">filter</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">value</ [...]
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">onlyPositives</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">filter</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Predicate</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">test</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">value</span> <span class="o">&gt;</span> <span class="mi">0</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-even"><td><p class="first"><strong>Inverse Filter</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                            <li>KTable &rarr; KTable</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Evaluates a boolean function for each element and drops those for which the function returns true.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#filterNot-org.apache.kafka.streams.kstream.Predicate-">KStream details</a>,
+                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#filterNot-org.apache.kafka.streams.kstream.Predicate-">KTable details</a>)</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// An inverse filter that discards any negative numbers or zero</span>
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">onlyPositives</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">filterNot</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">valu [...]
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">onlyPositives</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">filterNot</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Predicate</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">test</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">value</span> <span class="o">&lt;=</span> <span class="mi">0</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>FlatMap</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Takes one record and produces zero, one, or more records.  You can modify the record keys and values, including
+                            their types.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#flatMap-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
+                            <p><strong>Marks the stream for data re-partitioning:</strong>
+                                Applying a grouping or a join after <code class="docutils literal"><span class="pre">flatMap</span></code> will result in re-partitioning of the records.
+                                If possible use <code class="docutils literal"><span class="pre">flatMapValues</span></code> instead, which will not cause data re-partitioning.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">transformed</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">flatMap</span><span class="o">(</span>
+     <span class="c1">// Here, we generate two output records for each input record.</span>
+     <span class="c1">// We also change the key and value types.</span>
+     <span class="c1">// Example: (345L, &quot;Hello&quot;) -&gt; (&quot;HELLO&quot;, 1000), (&quot;hello&quot;, 9000)</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">{</span>
+      <span class="n">List</span><span class="o">&lt;</span><span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;&gt;</span> <span class="n">result</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LinkedList</span><span class="o">&lt;&gt;();</span>
+      <span class="n">result</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toUpperCase</span><span class="o">(),</span> <span class="mi">1000</span><span class="o">));</span>
+      <span class="n">result</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">(),</span> <span class="mi">9000</span><span class="o">));</span>
+      <span class="k">return</span> <span class="n">result</span><span class="o">;</span>
+    <span class="o">}</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example: cf. `map` for how to create `KeyValueMapper` instances</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-even"><td><p class="first"><strong>FlatMap (values only)</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Takes one record and produces zero, one, or more records, while retaining the key of the original record.
+                            You can modify the record values and the value type.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#flatMapValues-org.apache.kafka.streams.kstream.ValueMapper-">details</a>)</p>
+                            <p><code class="docutils literal"><span class="pre">flatMapValues</span></code> is preferable to <code class="docutils literal"><span class="pre">flatMap</span></code> because it will not cause data re-partitioning.  However, you
+                                cannot modify the key or key type like <code class="docutils literal"><span class="pre">flatMap</span></code> does.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Split a sentence into words.</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">sentences</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">words</span> <span class="o">=</span> <span class="n">sentences</span><span class="o">.</span><span class="na">flatMapValues</span><span class="o">(</span><span class="n">value</span> <span class="o">-&gt;</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class [...]
+
+<span class="c1">// Java 7 example: cf. `mapValues` for how to create `ValueMapper` instances</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>Foreach</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; void</li>
+                            <li>KTable &rarr; void</li>
+                        </ul>
+                    </td>
+                        <td><p class="first"><strong>Terminal operation.</strong>  Performs a stateless action on each record.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#foreach-org.apache.kafka.streams.kstream.ForeachAction-">details</a>)</p>
+                            <p>You would use <code class="docutils literal"><span class="pre">foreach</span></code> to cause <em>side effects</em> based on the input data (similar to <code class="docutils literal"><span class="pre">peek</span></code>) and then <em>stop</em>
+                                <em>further processing</em> of the input data (unlike <code class="docutils literal"><span class="pre">peek</span></code>, which is not a terminal operation).</p>
+                            <p><strong>Note on processing guarantees:</strong> Any side effects of an action (such as writing to external systems) are not
+                                trackable by Kafka, which means they will typically not benefit from  Kafka&#8217;s processing guarantees.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Print the contents of the KStream to the local console.</span>
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">stream</span><span class="o">.</span><span class="na">foreach</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">key</span> <span class="o">+</span> <span class="s">&quot; =&gt; &quot;</sp [...]
+
+<span class="c1">// Java 7 example</span>
+<span class="n">stream</span><span class="o">.</span><span class="na">foreach</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">ForeachAction</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="kt">void</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">key</span> <span class="o">+</span> <span class="s">&quot; =&gt; &quot;</span> <span class="o">+</span> <span class="n">value</span><span class="o">);</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-even"><td><p class="first"><strong>GroupByKey</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KGroupedStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Groups the records by the existing key.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#groupByKey--">details</a>)</p>
+                            <p>Grouping is a prerequisite for <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregating a stream or a table</span></a>
+                                and ensures that data is properly partitioned (&#8220;keyed&#8221;) for subsequent operations.</p>
+                            <p><strong>When to set explicit SerDes:</strong>
+                                Variants of <code class="docutils literal"><span class="pre">groupByKey</span></code> exist to override the configured default SerDes of your application, which <strong>you</strong>
+                                <strong>must do</strong> if the key and/or value types of the resulting <code class="docutils literal"><span class="pre">KGroupedStream</span></code> do not match the configured default
+                                SerDes.</p>
+                            <div class="admonition note">
+                                <p class="first admonition-title">Note</p>
+                                <p class="last"><strong>Grouping vs. Windowing:</strong>
+                                    A related operation is <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">windowing</span></a>, which lets you control how to
+                                    &#8220;sub-group&#8221; the grouped records <em>of the same key</em> into so-called <em>windows</em> for stateful operations such as
+                                    windowed <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregations</span></a> or
+                                    windowed <a class="reference internal" href="#streams-developer-guide-dsl-joins"><span class="std std-ref">joins</span></a>.</p>
+                            </div>
+                            <p><strong>Causes data re-partitioning if and only if the stream was marked for re-partitioning.</strong>
+                                <code class="docutils literal"><span class="pre">groupByKey</span></code> is preferable to <code class="docutils literal"><span class="pre">groupBy</span></code> because it re-partitions data only if the stream was already marked
+                                for re-partitioning. However, <code class="docutils literal"><span class="pre">groupByKey</span></code> does not allow you to modify the key or key type like <code class="docutils literal"><span class="pre">groupBy</span></code>
+                                does.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Group by the existing key, using the application&#39;s configured</span>
+<span class="c1">// default serdes for keys and values.</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">groupByKey</span><span class="o">();</span>
+
+<span class="c1">// When the key and/or value types do not match the configured</span>
+<span class="c1">// default serdes, we must explicitly specify serdes.</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">groupByKey</span><span class="o">(</span>
+    <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">ByteArray</span><span class="o">(),</span> <span class="cm">/* key */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span>     <span class="cm">/* value */</span>
+  <span class="o">);</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>GroupBy</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KGroupedStream</li>
+                            <li>KTable &rarr; KGroupedTable</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Groups the records by a <em>new</em> key, which may be of a different key type.
+                            When grouping a table, you may also specify a new value and value type.
+                            <code class="docutils literal"><span class="pre">groupBy</span></code> is a shorthand for <code class="docutils literal"><span class="pre">selectKey(...).groupByKey()</span></code>.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#groupBy-org.apache.kafka.streams.kstream.KeyValueMapper-">KStream details</a>,
+                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#groupBy-org.apache.kafka.streams.kstream.KeyValueMapper-">KTable details</a>)</p>
+                            <p>Grouping is a prerequisite for <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregating a stream or a table</span></a>
+                                and ensures that data is properly partitioned (&#8220;keyed&#8221;) for subsequent operations.</p>
+                            <p><strong>When to set explicit SerDes:</strong>
+                                Variants of <code class="docutils literal"><span class="pre">groupBy</span></code> exist to override the configured default SerDes of your application, which <strong>you must</strong>
+                                <strong>do</strong> if the key and/or value types of the resulting <code class="docutils literal"><span class="pre">KGroupedStream</span></code> or <code class="docutils literal"><span class="pre">KGroupedTable</span></code> do not match the
+                                configured default SerDes.</p>
+                            <div class="admonition note">
+                                <p class="first admonition-title">Note</p>
+                                <p class="last"><strong>Grouping vs. Windowing:</strong>
+                                    A related operation is <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">windowing</span></a>, which lets you control how to
+                                    &#8220;sub-group&#8221; the grouped records <em>of the same key</em> into so-called <em>windows</em> for stateful operations such as
+                                    windowed <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregations</span></a> or
+                                    windowed <a class="reference internal" href="#streams-developer-guide-dsl-joins"><span class="std std-ref">joins</span></a>.</p>
+                            </div>
+                            <p><strong>Always causes data re-partitioning:</strong>  <code class="docutils literal"><span class="pre">groupBy</span></code> always causes data re-partitioning.
+                                If possible use <code class="docutils literal"><span class="pre">groupByKey</span></code> instead, which will re-partition data only if required.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ examples, using lambda expressions</span>
+
+<span class="c1">// Group the stream by a new key and key type</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">value</span><span class="o">,</span>
+    <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key (note: type was modified) */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span>  <span class="cm">/* value */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Group the table by a new key and key type, and also modify the value and value type.</span>
+<span class="n">KGroupedTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">groupedTable</span> <span class="o">=</span> <span class="n">table</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span><span class="n">value</span><span class="o">,</span> <span class="n">value</span><span class="o">.</span><span class="na">length</span><span class="o">()),</span>
+    <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key (note: type was modified) */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">())</span> <span class="cm">/* value (note: type was modified) */</span>
+  <span class="o">);</span>
+
+
+<span class="c1">// Java 7 examples</span>
+
+<span class="c1">// Group the stream by a new key and key type</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">KeyValueMapper</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">value</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key (note: type was modified) */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span>  <span class="cm">/* value */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Group the table by a new key and key type, and also modify the value and value type.</span>
+<span class="n">KGroupedTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">groupedTable</span> <span class="o">=</span> <span class="n">table</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">KeyValueMapper</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">,</span> <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span><span class="n">value</span><span class="o">,</span> <span class="n">value</span><span class="o">.</span><span class="na">length</span><span class="o">());</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key (note: type was modified) */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">())</span> <span class="cm">/* value (note: type was modified) */</span>
+  <span class="o">);</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-even"><td><p class="first"><strong>Map</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Takes one record and produces one record.  You can modify the record key and value, including their types.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#map-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
+                            <p><strong>Marks the stream for data re-partitioning:</strong>
+                                Applying a grouping or a join after <code class="docutils literal"><span class="pre">map</span></code> will result in re-partitioning of the records.
+                                If possible use <code class="docutils literal"><span class="pre">mapValues</span></code> instead, which will not cause data re-partitioning.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="c1">// Note how we change the key and the key type (similar to `selectKey`)</span>
+<span class="c1">// as well as the value and the value type.</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">transformed</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">(),</span> <span class="n">value</span><span class="o">.</span><span class="na">length</span><span class="o">()));</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">transformed</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">KeyValueMapper</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">,</span> <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="k">new</span> <span class="n">KeyValue</span><span class="o">&lt;&gt;(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">(),</span> <span class="n">value</span><span class="o">.</span><span class="na">length</span><span class="o">());</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>Map (values only)</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                            <li>KTable &rarr; KTable</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Takes one record and produces one record, while retaining the key of the original record.
+                            You can modify the record value and the value type.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#mapValues-org.apache.kafka.streams.kstream.ValueMapper-">KStream details</a>,
+                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#mapValues-org.apache.kafka.streams.kstream.ValueMapper-">KTable details</a>)</p>
+                            <p><code class="docutils literal"><span class="pre">mapValues</span></code> is preferable to <code class="docutils literal"><span class="pre">map</span></code> because it will not cause data re-partitioning.  However, it does not
+                                allow you to modify the key or key type like <code class="docutils literal"><span class="pre">map</span></code> does.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">uppercased</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">mapValues</span><span class="o">(</span><span class="n">value</span> <span class="o">-&gt;</span> <span class="n">value</span><span class="o">.</span><span class="na">toUpperCase</span><span cla [...]
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">uppercased</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">mapValues</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">ValueMapper</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">s</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">s</span><span class="o">.</span><span class="na">toUpperCase</span><span class="o">();</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-even"><td><p class="first"><strong>Peek</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Performs a stateless action on each record, and returns an unchanged stream.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#peek-org.apache.kafka.streams.kstream.ForeachAction-">details</a>)</p>
+                            <p>You would use <code class="docutils literal"><span class="pre">peek</span></code> to cause <em>side effects</em> based on the input data (similar to <code class="docutils literal"><span class="pre">foreach</span></code>) and <em>continue</em>
+                                <em>processing</em> the input data (unlike <code class="docutils literal"><span class="pre">foreach</span></code>, which is a terminal operation).  <code class="docutils literal"><span class="pre">peek</span></code> returns the input
+                                stream as-is;  if you need to modify the input stream, use <code class="docutils literal"><span class="pre">map</span></code> or <code class="docutils literal"><span class="pre">mapValues</span></code> instead.</p>
+                            <p><code class="docutils literal"><span class="pre">peek</span></code> is helpful for use cases such as logging or tracking metrics or for debugging and troubleshooting.</p>
+                            <p><strong>Note on processing guarantees:</strong> Any side effects of an action (such as writing to external systems) are not
+                                trackable by Kafka, which means they will typically not benefit from Kafka&#8217;s processing guarantees.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">unmodifiedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">peek</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;key=&quot;</span> <span class="o">+</span> <span class="n">key</span> <span class="o">+</span> <span class="s">&quot;, value=&quot;</span> <span class [...]
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">unmodifiedStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">peek</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">ForeachAction</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="kt">void</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;key=&quot;</span> <span class="o">+</span> <span class="n">key</span> <span class="o">+</span> <span class="s">&quot;, value=&quot;</span> <span class="o">+</span> <span class="n">value</span><span class="o">);</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>Print</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; void</li>
+                        </ul>
+                    </td>
+                        <td><p class="first"><strong>Terminal operation.</strong>  Prints the records to <code class="docutils literal"><span class="pre">System.out</span></code>.  See Javadocs for serde and <code class="docutils literal"><span class="pre">toString()</span></code>
+                            caveats.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#print--">details</a>)</p>
+                            <p>Calling <code class="docutils literal"><span class="pre">print()</span></code> is the same as calling <code class="docutils literal"><span class="pre">foreach((key,</span> <span class="pre">value)</span> <span class="pre">-&gt;</span> <span class="pre">System.out.println(key</span> <span class="pre">+</span> <span class="pre">&quot;,</span> <span class="pre">&quot;</span> <span class="pre">+</span> <span class="pre">value))</span></code></p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="c1">// print to sysout</span>
+<span class="n">stream</span><span class="o">.</span><span class="na">print</span><span class="o">();</span>
+
+<span class="c1">// print to file with a custom label</span>
+<span class="n">stream</span><span class="o">.</span><span class="na">print</span><span class="o">(</span><span class="n">Printed</span><span class="o">.</span><span class="na">toFile</span><span class="o">(</span><span class="s">&quot;streams.out&quot;</span><span class="o">).</span><span class="na">withLabel</span><span class="o">(</span><span class="s">&quot;streams&quot;</span><span class="o">));</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-even"><td><p class="first"><strong>SelectKey</strong></p>
+                        <ul class="last simple">
+                            <li>KStream &rarr; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Assigns a new key &#8211; possibly of a new key type &#8211; to each record.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#selectKey-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
+                            <p>Calling <code class="docutils literal"><span class="pre">selectKey(mapper)</span></code> is the same as calling <code class="docutils literal"><span class="pre">map((key,</span> <span class="pre">value)</span> <span class="pre">-&gt;</span> <span class="pre">mapper(key,</span> <span class="pre">value),</span> <span class="pre">value)</span></code>.</p>
+                            <p><strong>Marks the stream for data re-partitioning:</strong>
+                                Applying a grouping or a join after <code class="docutils literal"><span class="pre">selectKey</span></code> will result in re-partitioning of the records.</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Derive a new record key from the record&#39;s value.  Note how the key type changes, too.</span>
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">rekeyed</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">selectKey</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">value</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">value</s [...]
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">rekeyed</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">selectKey</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">KeyValueMapper</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">value</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">)[</span><span class="mi">0</span><span class="o">];</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>Table to Stream</strong></p>
+                        <ul class="last simple">
+                            <li>KTable &rarr; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Get the changelog stream of this table.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#toStream--">details</a>)</p>
+                            <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Also, a variant of `toStream` exists that allows you</span>
+<span class="c1">// to select a new key for the resulting stream.</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="n">table</span><span class="o">.</span><span class="na">toStream</span><span class="o">();</span>
+</pre></div>
+                            </div>
+                        </td>
+                    </tr>
+                    </tbody>
+                </table>
+            </div>
+            <div class="section" id="stateful-transformations">
+                <span id="streams-developer-guide-dsl-transformations-stateful"></span><h3><a class="toc-backref" href="#id11">Stateful transformations</a><a class="headerlink" href="#stateful-transformations" title="Permalink to this headline"></a></h3>
+                <p id="streams-developer-guide-dsl-transformations-stateful-overview">Stateful transformations depend on state for processing inputs and producing outputs and require a <a class="reference internal" href="../architecture.html#streams-architecture-state"><span class="std std-ref">state store</span></a> associated with the stream processor. For example, in aggregating operations, a windowing state store is used to collect the latest aggregation results per
+                    window. In join operations, a windowing state store is used to collect all of the records received so far within the
+                    defined window boundary.</p>
+                <p>Note that state stores are fault-tolerant.
+                    In case of failure, Kafka Streams guarantees that all state stores are fully restored before processing resumes.
+                    See <a class="reference internal" href="../architecture.html#streams-architecture-fault-tolerance"><span class="std std-ref">Fault Tolerance</span></a> for further information.</p>
+                <p>Available stateful transformations in the DSL include:</p>
+                <ul class="simple">
+                    <li><a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">Aggregating</span></a></li>
+                    <li><a class="reference internal" href="#streams-developer-guide-dsl-joins"><span class="std std-ref">Joining</span></a></li>
+                    <li><a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">Windowing</span></a> (as part of aggregations and joins)</li>
+                    <li><a class="reference internal" href="#streams-developer-guide-dsl-process"><span class="std std-ref">Applying custom processors and transformers</span></a>, which may be stateful, for
+                        Processor API integration</li>
+                </ul>
+                <p>The following diagram shows their relationships:</p>
+                <div class="figure align-center" id="id2">
+                    <a class="reference internal image-reference" href="../../../images/streams-stateful_operations.png"><img alt="../../../images/streams-stateful_operations.png" src="../../../images/streams-stateful_operations.png" style="width: 400pt;" /></a>
+                    <p class="caption"><span class="caption-text">Stateful transformations in the DSL.</span></p>
+                </div>
+                <p>Here is an example of a stateful application: the WordCount algorithm.</p>
+                <p>WordCount example in Java 8+, using lambda expressions:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Assume the record values represent lines of text.  For the sake of this example, you can ignore</span>
+<span class="c1">// whatever may be stored in the record keys.</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">textLines</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">wordCounts</span> <span class="o">=</span> <span class="n">textLines</span>
+    <span class="c1">// Split each text line, by whitespace, into words.  The text lines are the record</span>
+    <span class="c1">// values, i.e. you can ignore whatever data is in the record keys and thus invoke</span>
+    <span class="c1">// `flatMapValues` instead of the more generic `flatMap`.</span>
+    <span class="o">.</span><span class="na">flatMapValues</span><span class="o">(</span><span class="n">value</span> <span class="o">-&gt;</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">().</span><span class="na">split</span><span class="o">(</span><span class="s">&quot;\\W+&quot;</span><span class="o">)))</span>
+    <span class="c1">// Group the stream by word to ensure the key of the record is the word.</span>
+    <span class="o">.</span><span class="na">groupBy</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">word</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">word</span><span class="o">)</span>
+    <span class="c1">// Count the occurrences of each word (record key).</span>
+    <span class="c1">//</span>
+    <span class="c1">// This will change the stream type from `KGroupedStream&lt;String, String&gt;` to</span>
+    <span class="c1">// `KTable&lt;String, Long&gt;` (word -&gt; count).</span>
+    <span class="o">.</span><span class="na">count</span><span class="o">()</span>
+    <span class="c1">// Convert the `KTable&lt;String, Long&gt;` into a `KStream&lt;String, Long&gt;`.</span>
+    <span class="o">.</span><span class="na">toStream</span><span class="o">();</span>
+</pre></div>
+                </div>
+                <p>WordCount example in Java 7:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Code below is equivalent to the previous Java 8+ example above.</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">textLines</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">wordCounts</span> <span class="o">=</span> <span class="n">textLines</span>
+    <span class="o">.</span><span class="na">flatMapValues</span><span class="o">(</span><span class="k">new</span> <span class="n">ValueMapper</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Iterable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
+        <span class="nd">@Override</span>
+        <span class="kd">public</span> <span class="n">Iterable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+            <span class="k">return</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">().</span><span class="na">split</span><span class="o">(</span><span class="s">&quot;\\W+&quot;</span><span class="o">));</span>
+        <span class="o">}</span>
+    <span class="o">})</span>
+    <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="n">KeyValueMapper</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;&gt;()</span> <span class="o">{</span>
+        <span class="nd">@Override</span>
+        <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">String</span> <span class="n">word</span><span class="o">)</span> <span class="o">{</span>
+            <span class="k">return</span> <span class="n">word</span><span class="o">;</span>
+        <span class="o">}</span>
+    <span class="o">})</span>
+    <span class="o">.</span><span class="na">count</span><span class="o">()</span>
+    <span class="o">.</span><span class="na">toStream</span><span class="o">();</span>
+</pre></div>
+                </div>
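+                <p>To complete the pipeline, you would typically write the result back to Kafka.  As a minimal sketch, assuming an
+                    output topic named <code class="docutils literal"><span class="pre">word-counts-output</span></code> exists:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span>// Write the word counts to an output topic.  Explicit serdes are passed
+// because the key/value types may differ from the configured defaults.
+wordCounts.to("word-counts-output", Produced.with(Serdes.String(), Serdes.Long()));
+</pre></div>
+                </div>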
+                <div class="section" id="aggregating">
+                    <span id="streams-developer-guide-dsl-aggregating"></span><h4><a class="toc-backref" href="#id12">Aggregating</a><a class="headerlink" href="#aggregating" title="Permalink to this headline"></a></h4>
+                    <p>After records are <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateless"><span class="std std-ref">grouped</span></a> by key via <code class="docutils literal"><span class="pre">groupByKey</span></code> or
+                        <code class="docutils literal"><span class="pre">groupBy</span></code> &#8211; and thus represented as either a <code class="docutils literal"><span class="pre">KGroupedStream</span></code> or a <code class="docutils literal"><span class="pre">KGroupedTable</span></code>, they can be aggregated
+                        via an operation such as <code class="docutils literal"><span class="pre">reduce</span></code>.  Aggregations are key-based operations, which means that they always operate over records
+                        (notably record values) of the same key.
+                        You can perform aggregations on <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">windowed</span></a> or non-windowed data.</p>
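+                    <p>As a minimal sketch of the difference (assuming an input <code class="docutils literal"><span class="pre">KStream&lt;String,</span> <span class="pre">Long&gt;</span></code>),
+                        the same grouped stream can be aggregated with or without a window:</p>
+                    <div class="highlight-java"><div class="highlight"><pre><span></span>KStream&lt;String, Long&gt; stream = ...;
+
+// Non-windowed aggregation: one running count per key.
+KTable&lt;String, Long&gt; counts = stream.groupByKey().count();
+
+// Windowed aggregation: one count per key and per 5-minute tumbling window.
+KTable&lt;Windowed&lt;String&gt;, Long&gt; windowedCounts = stream
+    .groupByKey()
+    .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5)))
+    .count();
+</pre></div>
+                    </div>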
+                    <table border="1" class="non-scrolling-table width-100-percent docutils">
+                        <colgroup>
+                            <col width="22%" />
+                            <col width="78%" />
+                        </colgroup>
+                        <thead valign="bottom">
+                        <tr class="row-odd"><th class="head">Transformation</th>
+                            <th class="head">Description</th>
+                        </tr>
+                        </thead>
+                        <tbody valign="top">
+                        <tr class="row-even"><td><p class="first"><strong>Aggregate</strong></p>
+                            <ul class="last simple">
+                                <li>KGroupedStream &rarr; KTable</li>
+                                <li>KGroupedTable &rarr; KTable</li>
+                            </ul>
+                        </td>
+                            <td><p class="first"><strong>Rolling aggregation.</strong> Aggregates the values of (non-windowed) records by the grouped key.
+                                Aggregating is a generalization of <code class="docutils literal"><span class="pre">reduce</span></code> and allows, for example, the aggregate value to have a different
+                                type than the input values.
+                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
+                                <p>When aggregating a <em>grouped stream</em>, you must provide an initializer (e.g., <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">=</span> <span class="pre">0</span></code>) and an &#8220;adder&#8221;
+                                    aggregator (e.g., <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">+</span> <span class="pre">curValue</span></code>).  When aggregating a <em>grouped table</em>, you must additionally provide a
+                                    &#8220;subtractor&#8221; aggregator (think: <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">-</span> <span class="pre">oldValue</span></code>).</p>
+                                <p>Several variants of <code class="docutils literal"><span class="pre">aggregate</span></code> exist; see Javadocs for details.</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KGroupedTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedTable</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ examples, using lambda expressions</span>
+
+<span class="c1">// Aggregating a KGroupedStream (note how the value type changes from String to Long)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+    <span class="o">()</span> <span class="o">-&gt;</span> <span class="mi">0</span><span class="n">L</span><span class="o">,</span> <span class="cm">/* initializer */</span>
+    <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">.</span><span class="na">length</span><span class="o">(),</span> <span class="cm">/* adder */</span>
+    <span class="n">Materialized</span><span class="o">.</span><span class="na">as</span><span class="o">(</span><span class="s">&quot;aggregated-stream-store&quot;</span><span class="o">)</span> <span class="cm">/* state store name */</span>
+        <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">());</span> <span class="cm">/* serde for aggregate value */</span>
+
+<span class="c1">// Aggregating a KGroupedTable (note how the value type changes from String to Long)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedTable</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+    <span class="o">()</span> <span class="o">-&gt;</span> <span class="mi">0</span><span class="n">L</span><span class="o">,</span> <span class="cm">/* initializer */</span>
+    <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">.</span><span class="na">length</span><span class="o">(),</span> <span class="cm">/* adder */</span>
+    <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">oldValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">-</span> <span class="n">oldValue</span><span class="o">.</span><span class="na">length</span><span class="o">(),</span> <span class="cm">/* subtractor */</span>
+    <span class="n">Materialized</span><span class="o">.</span><span class="na">as</span><span class="o">(</span><span class="s">&quot;aggregated-table-store&quot;</span><span class="o">)</span> <span class="cm">/* state store name */</span>
+	<span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span> <span class="cm">/* serde for aggregate value */</span>
+
+
+<span class="c1">// Java 7 examples</span>
+
+<span class="c1">// Aggregating a KGroupedStream (note how the value type changes from String to Long)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Initializer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* initializer */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">()</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="mi">0</span><span class="n">L</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="k">new</span> <span class="n">Aggregator</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">aggKey</span><span class="o">,</span> <span class="n">String</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">.</span><span class="na">length</span><span class="o">();</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">Materialized</span><span class="o">.</span><span class="na">as</span><span class="o">(</span><span class="s">&quot;aggregated-stream-store&quot;</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">());</span>
+
+<span class="c1">// Aggregating a KGroupedTable (note how the value type changes from String to Long)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedTable</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Initializer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* initializer */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">()</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="mi">0</span><span class="n">L</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="k">new</span> <span class="n">Aggregator</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">aggKey</span><span class="o">,</span> <span class="n">String</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">.</span><span class="na">length</span><span class="o">();</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="k">new</span> <span class="n">Aggregator</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* subtractor */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">aggKey</span><span class="o">,</span> <span class="n">String</span> <span class="n">oldValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">-</span> <span class="n">oldValue</span><span class="o">.</span><span class="na">length</span><span class="o">();</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">Materialized</span><span class="o">.</span><span class="na">as</span><span class="o">(</span><span class="s">&quot;aggregated-stream-store&quot;</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">());</span>
+</pre></div>
+                                </div>
+                                <p>Detailed behavior of <code class="docutils literal"><span class="pre">KGroupedStream</span></code>:</p>
+                                <ul class="simple">
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys are ignored.</li>
+                                    <li>When a record key is received for the first time, the initializer is called (and called before the adder).</li>
+                                    <li>Whenever a record with a non-<code class="docutils literal"><span class="pre">null</span></code> value is received, the adder is called.</li>
+                                </ul>
+                                <p>Detailed behavior of <code class="docutils literal"><span class="pre">KGroupedTable</span></code>:</p>
+                                <ul class="simple">
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys are ignored.</li>
+                                    <li>When a record key is received for the first time, the initializer is called (and called before the adder
+                                        and subtractor).  Note that, in contrast to <code class="docutils literal"><span class="pre">KGroupedStream</span></code>, over time the initializer may be called
+                                        more than once for a key as a result of having received input tombstone records for that key (see below).</li>
+                                    <li>When the first non-<code class="docutils literal"><span class="pre">null</span></code> value is received for a key (e.g.,  INSERT), then only the adder is called.</li>
+                                    <li>When subsequent non-<code class="docutils literal"><span class="pre">null</span></code> values are received for a key (e.g.,  UPDATE), then (1) the subtractor is
+                                        called with the old value as stored in the table and (2) the adder is called with the new value of the
+                                        input record that was just received.  The order of execution for the subtractor and adder is not defined.</li>
+                                    <li>When a tombstone record &#8211; i.e. a record with a <code class="docutils literal"><span class="pre">null</span></code> value &#8211; is received for a key (e.g.,  DELETE),
+                                        then only the subtractor is called.  Note that whenever the subtractor itself returns a <code class="docutils literal"><span class="pre">null</span></code> value,
+                                        the corresponding key is removed from the resulting <code class="docutils literal"><span class="pre">KTable</span></code>.  If that happens, the next input
+                                        record for that key will trigger the initializer again.</li>
+                                </ul>
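+                                <p>To make these update semantics concrete, here is a minimal sketch, assuming a hypothetical
+                                    <code class="docutils literal"><span class="pre">KGroupedTable&lt;String,</span> <span class="pre">String&gt;</span></code>; the trailing comments trace which callbacks fire for an
+                                    INSERT, UPDATE, and DELETE of the same key:</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span>KGroupedTable&lt;String, String&gt; groupedTable = ...;
+
+// Sum of value lengths per key, maintained via the adder and subtractor.
+KTable&lt;String, Long&gt; charCounts = groupedTable.aggregate(
+    () -&gt; 0L,                                                     /* initializer */
+    (aggKey, newValue, aggValue) -&gt; aggValue + newValue.length(), /* adder */
+    (aggKey, oldValue, aggValue) -&gt; aggValue - oldValue.length(), /* subtractor */
+    Materialized.as("char-counts-store")                          /* hypothetical store name */
+        .withValueSerde(Serdes.Long()));
+
+// ("alice", "E")   -- INSERT: initializer, then adder           -&gt; aggregate 1
+// ("alice", "EUR") -- UPDATE: subtractor ("E"), adder ("EUR")   -&gt; aggregate 3
+// ("alice", null)  -- DELETE: subtractor only; the key is removed
+</pre></div>
+                                </div>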
+                                <p class="last">See the example at the bottom of this section for a visualization of the aggregation semantics.</p>
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td><p class="first"><strong>Aggregate (windowed)</strong></p>
+                            <ul class="last simple">
+                                <li>KGroupedStream &rarr; KTable</li>
+                            </ul>
+                        </td>
+                            <td><p class="first"><strong>Windowed aggregation.</strong>
+                                Aggregates the values of records, <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">per window</span></a>, by the grouped key.
+                                Aggregating is a generalization of <code class="docutils literal"><span class="pre">reduce</span></code> and allows, for example, the aggregate value to have a different
+                                type than the input values.
+                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
+                                <p>You must provide an initializer (e.g.,  <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">=</span> <span class="pre">0</span></code>), an &#8220;adder&#8221; aggregator (e.g.,  <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">+</span> <span class="pre">curValue</span></code>),
+                                    and a window.  When windowing based on sessions, you must additionally provide a &#8220;session merger&#8221; aggregator
+                                    (e.g.,  <code class="docutils literal"><span class="pre">mergedAggValue</span> <span class="pre">=</span> <span class="pre">leftAggValue</span> <span class="pre">+</span> <span class="pre">rightAggValue</span></code>).</p>
+                                <p>The windowed <code class="docutils literal"><span class="pre">aggregate</span></code> turns a <code class="docutils literal"><span class="pre">TimeWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code> or <code class="docutils literal"><span class="pre">SessionWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code>
+                                    into a windowed <code class="docutils literal"><span class="pre">KTable&lt;Windowed&lt;K&gt;,</span> <span class="pre">V&gt;</span></code>.</p>
+                                <p>Several variants of <code class="docutils literal"><span class="pre">aggregate</span></code> exist; see Javadocs for details.</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ examples, using lambda expressions</span>
+
+<span class="c1">// Aggregating with time-based windowing (here: with 5-minute tumbling windows)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">timeWindowedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na [...]
+    <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+      <span class="o">()</span> <span class="o">-&gt;</span> <span class="mi">0</span><span class="n">L</span><span class="o">,</span> <span class="cm">/* initializer */</span>
+    	<span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">,</span> <span class="cm">/* adder */</span>
+      <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">WindowStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;time-windowed-aggregated-stream-store&quot;</span><span class="o">)</sp [...]
+        <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span> <span class="cm">/* serde for aggregate value */</span>
+
+<span class="c1">// Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">sessionizedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span><span class="n">SessionWindows</span><span class="o">.</span><span clas [...]
+    <span class="n">aggregate</span><span class="o">(</span>
+    	<span class="o">()</span> <span class="o">-&gt;</span> <span class="mi">0</span><span class="n">L</span><span class="o">,</span> <span class="cm">/* initializer */</span>
+    	<span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">,</span> <span class="cm">/* adder */</span>
+    	<span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">leftAggValue</span><span class="o">,</span> <span class="n">rightAggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">leftAggValue</span> <span class="o">+</span> <span class="n">rightAggValue</span><span class="o">,</span> <span class="cm">/* session merger */</span>
+	    <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">SessionStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;sessionized-aggregated-stream-store&quot;</span><span class="o">)</span> <span class="cm">/* state store name */</span>
+        <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span> <span class="cm">/* serde for aggregate value */</span>
+
+<span class="c1">// Java 7 examples</span>
+
+<span class="c1">// Aggregating with time-based windowing (here: with 5-minute tumbling windows)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">timeWindowedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na [...]
+    <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+        <span class="k">new</span> <span class="n">Initializer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* initializer */</span>
+            <span class="nd">@Override</span>
+            <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">()</span> <span class="o">{</span>
+                <span class="k">return</span> <span class="mi">0</span><span class="n">L</span><span class="o">;</span>
+            <span class="o">}</span>
+        <span class="o">},</span>
+        <span class="k">new</span> <span class="n">Aggregator</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+            <span class="nd">@Override</span>
+            <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">aggKey</span><span class="o">,</span> <span class="n">Long</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">{</span>
+                <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">;</span>
+            <span class="o">}</span>
+        <span class="o">},</span>
+        <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">WindowStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;time-windowed-aggregated-stream-store&quot;</span><span class="o">)</span>
+          <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span>
+
+<span class="c1">// Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">sessionizedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span><span class="n">SessionWindows</span><span class="o">.</span><span clas [...]
+    <span class="n">aggregate</span><span class="o">(</span>
+        <span class="k">new</span> <span class="n">Initializer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* initializer */</span>
+            <span class="nd">@Override</span>
+            <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">()</span> <span class="o">{</span>
+                <span class="k">return</span> <span class="mi">0</span><span class="n">L</span><span class="o">;</span>
+            <span class="o">}</span>
+        <span class="o">},</span>
+        <span class="k">new</span> <span class="n">Aggregator</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+            <span class="nd">@Override</span>
+            <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">aggKey</span><span class="o">,</span> <span class="n">Long</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">{</span>
+                <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">;</span>
+            <span class="o">}</span>
+        <span class="o">},</span>
+        <span class="k">new</span> <span class="n">Merger</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* session merger */</span>
+            <span class="nd">@Override</span>
+            <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">aggKey</span><span class="o">,</span> <span class="n">Long</span> <span class="n">leftAggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">rightAggValue</span><span class="o">)</span> <span class="o">{</span>
+                <span class="k">return</span> <span class="n">rightAggValue</span> <span class="o">+</span> <span class="n">leftAggValue</span><span class="o">;</span>
+            <span class="o">}</span>
+        <span class="o">},</span>
+        <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">SessionStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;sessionized-aggregated-stream-store&quot;</span><span class="o">)</span>
+          <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">()));</span>
+</pre></div>
+                                </div>
+                                <p>Detailed behavior:</p>
+                                <ul class="simple">
+                                    <li>The windowed aggregate behaves similarly to the rolling aggregate described above.  The additional twist is that
+                                        the behavior applies <em>per window</em>.</li>
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys are ignored in general.</li>
+                                    <li>When a record key is received for the first time for a given window, the initializer is called (and called
+                                        before the adder).</li>
+                                    <li>Whenever a record with a non-<code class="docutils literal"><span class="pre">null</span></code> value is received for a given window, the adder is called.</li>
+                                    <li>When using session windows: the session merger is called whenever two sessions are being merged.</li>
+                                </ul>
+                                <p class="last">See the example at the bottom of this section for a visualization of the aggregation semantics.</p>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td><p class="first"><strong>Count</strong></p>
+                            <ul class="last simple">
+                                <li>KGroupedStream &rarr; KTable</li>
+                                <li>KGroupedTable &rarr; KTable</li>
+                            </ul>
+                        </td>
+                            <td><p class="first"><strong>Rolling aggregation.</strong> Counts the number of records by the grouped key.
+                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
+                                <p>Several variants of <code class="docutils literal"><span class="pre">count</span></code> exist; see the Javadocs for details (one variant is sketched after the example below).</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KGroupedTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedTable</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Counting a KGroupedStream</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">count</span><span class="o">();</span>
+
+<span class="c1">// Counting a KGroupedTable</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedTable</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">count</span><span class="o">();</span>
+</pre></div>
+                                </div>
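+                                <p>For example, one such variant (an illustrative sketch; the store name <code class="docutils literal"><span class="pre">counts-store</span></code> is an assumption) materializes the count into a named state store, which also makes the result available to interactive queries:</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span>import org.apache.kafka.common.utils.Bytes;
+import org.apache.kafka.streams.kstream.Materialized;
+import org.apache.kafka.streams.state.KeyValueStore;
+
+// Same count as above, but materialized into the named store &quot;counts-store&quot;.
+KTable&lt;String, Long&gt; counted = groupedStream.count(
+    Materialized.&lt;String, Long, KeyValueStore&lt;Bytes, byte[]&gt;&gt;as(&quot;counts-store&quot;));
+</pre></div>
+                                </div>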
+                                <p>Detailed behavior for <code class="docutils literal"><span class="pre">KGroupedStream</span></code>:</p>
+                                <ul class="simple">
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys or values are ignored.</li>
+                                </ul>
+                                <p>Detailed behavior for <code class="docutils literal"><span class="pre">KGroupedTable</span></code>:</p>
+                                <ul class="last simple">
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys are ignored.  Records with <code class="docutils literal"><span class="pre">null</span></code> values are not ignored but interpreted
+                                        as &#8220;tombstones&#8221; for the corresponding key, which indicate the deletion of the key from the table (see the sketch below).</li>
+                                </ul>
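+                                <p>For example, the tombstone behavior can be sketched as follows (the topic name, the value types, and reliance on default serdes are illustrative assumptions):</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span>import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.kstream.KTable;
+
+StreamsBuilder builder = new StreamsBuilder();
+
+// Table of user -&gt; region, continuously updated from a changelog topic.
+KTable&lt;String, String&gt; userRegions = builder.table(&quot;user-regions-topic&quot;);
+
+// Re-key by region and count the users per region.  A (user, null) tombstone
+// in the source topic removes that user, and the count of its former region
+// is decremented accordingly.
+KTable&lt;String, Long&gt; usersPerRegion = userRegions
+    .groupBy((user, region) -&gt; KeyValue.pair(region, region))
+    .count();
+</pre></div>
+                                </div>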
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td><p class="first"><strong>Count (windowed)</strong></p>
+                            <ul class="last simple">
+                                <li>KGroupedStream &rarr; KTable</li>
+                            </ul>
+                        </td>
+                            <td><p class="first"><strong>Windowed aggregation.</strong>
+                                Counts the number of records, <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">per window</span></a>, by the grouped key.
+                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
+                                <p>The windowed <code class="docutils literal"><span class="pre">count</span></code> turns a <code class="docutils literal"><span class="pre">TimeWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code> or <code class="docutils literal"><span class="pre">SessionWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code>
+                                    into a windowed <code class="docutils literal"><span class="pre">KTable&lt;Windowed&lt;K&gt;,</span> <span class="pre">Long&gt;</span></code>.</p>
+                                <p>Several variants of <code class="docutils literal"><span class="pre">count</span></code> exist; see the Javadocs for details.</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Counting a KGroupedStream with time-based windowing (here: with 5-minute tumbling windows)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span>
+    <span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)))</span> <span class="cm">/* time-based window */</span>
+    <span class="o">.</span><span class="na">count</span><span class="o">();</span>
+
+<span class="c1">// Counting a KGroupedStream with session-based windowing (here: with 5-minute inactivity gaps)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span>
+    <span class="n">SessionWindows</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)))</span> <span class="cm">/* session window */</span>
+    <span class="o">.</span><span class="na">count</span><span class="o">();</span>
+</pre></div>
+                                </div>
+                                <p>Detailed behavior:</p>
+                                <ul class="last simple">
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys or values are ignored.</li>
+                                </ul>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td><p class="first"><strong>Reduce</strong></p>
+                            <ul class="last simple">
+                                <li>KGroupedStream &rarr; KTable</li>
+                                <li>KGroupedTable &rarr; KTable</li>
+                            </ul>
+                        </td>
+                            <td><p class="first"><strong>Rolling aggregation.</strong> Combines the values of (non-windowed) records by the grouped key.
+                                The current record value is combined with the last reduced value, and a new reduced value is returned.
+                                The result value type cannot be changed, unlike <code class="docutils literal"><span class="pre">aggregate</span></code>.
+                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
+                                <p>When reducing a <em>grouped stream</em>, you must provide an &#8220;adder&#8221; reducer (e.g.,  <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">+</span> <span class="pre">newValue</span></code>).
+                                    When reducing a <em>grouped table</em>, you must additionally provide a &#8220;subtractor&#8221; reducer (e.g.,
+                                    <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">-</span> <span class="pre">oldValue</span></code>).</p>
+                                <p>Several variants of <code class="docutils literal"><span class="pre">reduce</span></code> exist; see the Javadocs for details.</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KGroupedTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedTable</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ examples, using lambda expressions</span>
+
+<span class="c1">// Reducing a KGroupedStream</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">aggValue</span><span class="o">,</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span> <span class="cm">/* adder */</span><span class="o">);</span>
+
+<span class="c1">// Reducing a KGroupedTable</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedTable</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">aggValue</span><span class="o">,</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">,</span> <span class="cm">/* adder */</span>
+    <span class="o">(</span><span class="n">aggValue</span><span class="o">,</span> <span class="n">oldValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">-</span> <span class="n">oldValue</span> <span class="cm">/* subtractor */</span><span class="o">);</span>
+
+
+<span class="c1">// Java 7 examples</span>
+
+<span class="c1">// Reducing a KGroupedStream</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Reducer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">aggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+
+<span class="c1">// Reducing a KGroupedTable</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">aggregatedTable</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Reducer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">aggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="k">new</span> <span class="n">Reducer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* subtractor */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">aggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">oldValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">-</span> <span class="n">oldValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                                </div>
+                                <p>Detailed behavior for <code class="docutils literal"><span class="pre">KGroupedStream</span></code>:</p>
+                                <ul class="simple">
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys are ignored in general.</li>
+                                    <li>When a record key is received for the first time, then the value of that record is used as the initial
+                                        aggregate value.</li>
+                                    <li>Whenever a record with a non-<code class="docutils literal"><span class="pre">null</span></code> value is received, the adder is called.</li>
+                                </ul>
+                                <p>Detailed behavior for <code class="docutils literal"><span class="pre">KGroupedTable</span></code>:</p>
+                                <ul class="simple">
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys are ignored in general.</li>
+                                    <li>When a record key is received for the first time, then the value of that record is used as the initial
+                                        aggregate value.
+                                        Note that, in contrast to <code class="docutils literal"><span class="pre">KGroupedStream</span></code>, over time this initialization step may happen more than once
+                                        for a key as a result of having received input tombstone records for that key (see below).</li>
+                                    <li>When the first non-<code class="docutils literal"><span class="pre">null</span></code> value is received for a key (e.g.,  INSERT), then only the adder is called.</li>
+                                    <li>When subsequent non-<code class="docutils literal"><span class="pre">null</span></code> values are received for a key (e.g.,  UPDATE), then (1) the subtractor is
+                                        called with the old value as stored in the table and (2) the adder is called with the new value of the
+                                        input record that was just received.  The order of execution for the subtractor and adder is not defined.</li>
+                                    <li>When a tombstone record &#8211; i.e. a record with a <code class="docutils literal"><span class="pre">null</span></code> value &#8211; is received for a key (e.g.,  DELETE),
+                                        then only the subtractor is called.  Note that whenever the subtractor itself returns a <code class="docutils literal"><span class="pre">null</span></code> value,
+                                        the corresponding key is removed from the resulting <code class="docutils literal"><span class="pre">KTable</span></code>.  If that happens, any next input
+                                        record for that key will re-initialize its aggregate value.</li>
+                                </ul>
+                                <p class="last">See the example at the bottom of this section for a visualization of the aggregation semantics.</p>
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td><p class="first"><strong>Reduce (windowed)</strong></p>
+                            <ul class="last simple">
+                                <li>KGroupedStream &rarr; KTable</li>
+                            </ul>
+                        </td>
+                            <td><p class="first"><strong>Windowed aggregation.</strong>
+                                Combines the values of records, <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">per window</span></a>, by the grouped key.
+                                The current record value is combined with the last reduced value, and a new reduced value is returned.
+                                Records with <code class="docutils literal"><span class="pre">null</span></code> key or value are ignored.
+                                The result value type cannot be changed, unlike <code class="docutils literal"><span class="pre">aggregate</span></code>.
+                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
+                                <p>The windowed <code class="docutils literal"><span class="pre">reduce</span></code> turns a <code class="docutils literal"><span class="pre">TimeWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code> or a <code class="docutils literal"><span class="pre">SessionWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code>
+                                    into a windowed <code class="docutils literal"><span class="pre">KTable&lt;Windowed&lt;K&gt;,</span> <span class="pre">V&gt;</span></code>.</p>
+                                <p>Several variants of <code class="docutils literal"><span class="pre">reduce</span></code> exist; see the Javadocs for details.</p>
+                                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ examples, using lambda expressions</span>
+
+<span class="c1">// Aggregating with time-based windowing (here: with 5-minute tumbling windows)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">timeWindowedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span>
+  <span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">))</span> <span class="cm">/* time-based window */</span><span class="o">)</span>
+  <span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">aggValue</span><span class="o">,</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span> <span class="cm">/* adder */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">sessionzedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span>
+  <span class="n">SessionWindows</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)))</span> <span class="cm">/* session window */</span>
+  <span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="o">(</span><span class="n">aggValue</span><span class="o">,</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span> <span class="cm">/* adder */</span>
+  <span class="o">);</span>
+
+
+<span class="c1">// Java 7 examples</span>
+
+<span class="c1">// Aggregating with time-based windowing (here: with 5-minute tumbling windows)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">timeWindowedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">..</span><span class="na">windowedBy</span><span class="o">(</span>
+  <span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">))</span> <span class="cm">/* time-based window */</span><span class="o">)</span>
+  <span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Reducer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">aggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+
+<span class="c1">// Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">Windowed</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">timeWindowedAggregatedStream</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span>
+  <span class="n">SessionWindows</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)))</span> <span class="cm">/* session window */</span>
+  <span class="o">.</span><span class="na">reduce</span><span class="o">(</span>
+    <span class="k">new</span> <span class="n">Reducer</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* adder */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Long</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">aggValue</span><span class="o">,</span> <span class="n">Long</span> <span class="n">newValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                                </div>
+                                <p>Detailed behavior:</p>
+                                <ul class="simple">
+                                    <li>The windowed reduce behaves similarly to the rolling reduce described above.  The additional twist is that the
+                                        behavior applies <em>per window</em>.</li>
+                                    <li>Input records with <code class="docutils literal"><span class="pre">null</span></code> keys are ignored in general.</li>
+                                    <li>When a record key is received for the first time for a given window, then the value of that record is used as
+                                        the initial aggregate value.</li>
+                                    <li>Whenever a record with a non-<code class="docutils literal"><span class="pre">null</span></code> value is received for a given window, the adder is called.</li>
+                                </ul>
+                                <p class="last">See the example at the bottom of this section for a visualization of the aggregation semantics.</p>
+                            </td>
+                        </tr>
+                        </tbody>
+                    </table>
+                    <p><strong>Example of semantics for stream aggregations:</strong>
+                        A <code class="docutils literal"><span class="pre">KGroupedStream</span></code> &rarr; <code class="docutils literal"><span class="pre">KTable</span></code> example is shown below.  The streams and the table are initially empty.  Bold
+                        font is used in the column for &#8220;KTable <code class="docutils literal"><span class="pre">aggregated</span></code>&#8221; to highlight changed state.  An entry such as <code class="docutils literal"><span class="pre">(hello,</span> <span class="pre">1)</span></code> denotes a
+                        record with key <code class="docutils literal"><span class="pre">hello</span></code> and value <code class="docutils literal"><span class="pre">1</span></code>.  To improve the readability of the semantics table, you can assume that all records
+                        are processed in timestamp order.</p>
+                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Key: word, value: count</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">wordCounts</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="n">wordCounts</span>
+    <span class="o">.</span><span class="na">groupByKey</span><span class="o">(</span><span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">()));</span>
+
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">aggregated</span> <span class="o">=</span> <span class="n">groupedStream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+    <span class="o">()</span> <span class="o">-&gt;</span> <span class="mi">0</span><span class="o">,</span> <span class="cm">/* initializer */</span>
+    <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">,</span> <span class="cm">/* adder */</span>
+    <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;aggregated-stream-store&quot;</span> <span class="cm">/* state store name * [...]
+      <span class="o">.</span><span class="na">withKeySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key serde */</span>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">());</span> <span class="cm">/* serde for aggregate value */</span>
+</pre></div>
+                    </div>
+                    <div class="admonition note">
+                        <p class="first admonition-title">Note</p>
+                        <p class="last"><strong>Impact of record caches</strong>:
+                            For illustration purposes, the column &#8220;KTable <code class="docutils literal"><span class="pre">aggregated</span></code>&#8221; below shows the table&#8217;s state changes over time in a
+                            very granular way.  In practice, you would observe state changes in such a granular way only when
+                            <a class="reference internal" href="memory-mgmt.html#streams-developer-guide-memory-management-record-cache"><span class="std std-ref">record caches</span></a> are disabled (default: enabled).
+                            When record caches are enabled, what might happen for example is that the output results of the rows with timestamps
+                            4 and 5 would be <a class="reference internal" href="memory-mgmt.html#streams-developer-guide-memory-management-record-cache"><span class="std std-ref">compacted</span></a>, and there would only be
+                            a single state update for the key <code class="docutils literal"><span class="pre">kafka</span></code> in the KTable (here: from <code class="docutils literal"><span class="pre">(kafka,</span> <span class="pre">1)</span></code> directly to <code class="docutils literal"><span class="pre">(kafka,</span> <span class="pre">3)</span></code>).
+                            Typically, you should only disable record caches for testing or debugging purposes &#8211; under normal circumstances it
+                            is better to leave record caches enabled.</p>
+                    </div>
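+                    <p>If you do want to observe each individual update, e.g. when testing or debugging an application, you can effectively disable record caches by setting their size to zero (a minimal configuration sketch):</p>
+                    <div class="highlight-java"><div class="highlight"><pre><span></span>import java.util.Properties;
+import org.apache.kafka.streams.StreamsConfig;
+
+Properties streamsConfiguration = new Properties();
+// A cache size of zero effectively disables record caching, so every single
+// update is forwarded downstream (for testing and debugging purposes only).
+streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
+</pre></div>
+                    </div>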
+                    <table border="1" class="docutils">
+                        <colgroup>
+                            <col width="11%" />
+                            <col width="17%" />
+                            <col width="15%" />
+                            <col width="17%" />
+                            <col width="18%" />
+                            <col width="22%" />
+                        </colgroup>
+                        <thead valign="bottom">
+                        <tr class="row-odd"><th class="head">&nbsp;</th>
+                            <th class="head" colspan="2">KStream <code class="docutils literal"><span class="pre">wordCounts</span></code></th>
+                            <th class="head" colspan="2">KGroupedStream <code class="docutils literal"><span class="pre">groupedStream</span></code></th>
+                            <th class="head">KTable <code class="docutils literal"><span class="pre">aggregated</span></code></th>
+                        </tr>
+                        <tr class="row-even"><th class="head">Timestamp</th>
+                            <th class="head">Input record</th>
+                            <th class="head">Grouping</th>
+                            <th class="head">Initializer</th>
+                            <th class="head">Adder</th>
+                            <th class="head">State</th>
+                        </tr>
+                        </thead>
+                        <tbody valign="top">
+                        <tr class="row-odd"><td>1</td>
+                            <td>(hello, 1)</td>
+                            <td>(hello, 1)</td>
+                            <td>0 (for hello)</td>
+                            <td>(hello, 0 + 1)</td>
+                            <td><div class="first last line-block">
+                                <div class="line"><strong>(hello, 1)</strong></div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td>2</td>
+                            <td>(kafka, 1)</td>
+                            <td>(kafka, 1)</td>
+                            <td>0 (for kafka)</td>
+                            <td>(kafka, 0 + 1)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(hello, 1)</div>
+                                <div class="line"><strong>(kafka, 1)</strong></div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td>3</td>
+                            <td>(streams, 1)</td>
+                            <td>(streams, 1)</td>
+                            <td>0 (for streams)</td>
+                            <td>(streams, 0 + 1)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(hello, 1)</div>
+                                <div class="line">(kafka, 1)</div>
+                                <div class="line"><strong>(streams, 1)</strong></div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td>4</td>
+                            <td>(kafka, 1)</td>
+                            <td>(kafka, 1)</td>
+                            <td>&nbsp;</td>
+                            <td>(kafka, 1 + 1)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(hello, 1)</div>
+                                <div class="line">(kafka, <strong>2</strong>)</div>
+                                <div class="line">(streams, 1)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td>5</td>
+                            <td>(kafka, 1)</td>
+                            <td>(kafka, 1)</td>
+                            <td>&nbsp;</td>
+                            <td>(kafka, 2 + 1)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(hello, 1)</div>
+                                <div class="line">(kafka, <strong>3</strong>)</div>
+                                <div class="line">(streams, 1)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td>6</td>
+                            <td>(streams, 1)</td>
+                            <td>(streams, 1)</td>
+                            <td>&nbsp;</td>
+                            <td>(streams, 1 + 1)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(hello, 1)</div>
+                                <div class="line">(kafka, 3)</div>
+                                <div class="line">(streams, <strong>2</strong>)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        </tbody>
+                    </table>
+                    <p><strong>Example of semantics for table aggregations:</strong>
+                        A <code class="docutils literal"><span class="pre">KGroupedTable</span></code> &rarr; <code class="docutils literal"><span class="pre">KTable</span></code> example is shown below.  The tables are initially empty. Bold font is used in the column
+                        for &#8220;KTable <code class="docutils literal"><span class="pre">aggregated</span></code>&#8221; to highlight changed state.  An entry such as <code class="docutils literal"><span class="pre">(hello,</span> <span class="pre">1)</span></code> denotes a record with key
+                        <code class="docutils literal"><span class="pre">hello</span></code> and value <code class="docutils literal"><span class="pre">1</span></code>.  To improve the readability of the semantics table you can assume that all records are processed
+                        in timestamp order.</p>
+                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Key: username, value: user region (abbreviated to &quot;E&quot; for &quot;Europe&quot;, &quot;A&quot; for &quot;Asia&quot;)</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">userProfiles</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Re-group `userProfiles`.  Don&#39;t read too much into what the grouping does:</span>
+<span class="c1">// its prime purpose in this example is to show the *effects* of the grouping</span>
+<span class="c1">// in the subsequent aggregation.</span>
+<span class="n">KGroupedTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">groupedTable</span> <span class="o">=</span> <span class="n">userProfiles</span>
+    <span class="o">.</span><span class="na">groupBy</span><span class="o">((</span><span class="n">user</span><span class="o">,</span> <span class="n">region</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span><span class="n">region</span><span class="o">,</span> <span class="n">user</span><span class="o">.</span><span class="na">length</span><span class="o">()),</span> <sp [...]
+
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">aggregated</span> <span class="o">=</span> <span class="n">groupedTable</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span>
+    <span class="o">()</span> <span class="o">-&gt;</span> <span class="mi">0</span><span class="o">,</span> <span class="cm">/* initializer */</span>
+    <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">newValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">+</span> <span class="n">newValue</span><span class="o">,</span> <span class="cm">/* adder */</span>
+    <span class="o">(</span><span class="n">aggKey</span><span class="o">,</span> <span class="n">oldValue</span><span class="o">,</span> <span class="n">aggValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">aggValue</span> <span class="o">-</span> <span class="n">oldValue</span><span class="o">,</span> <span class="cm">/* subtractor */</span>
+    <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;aggregated-table-store&quot;</span> <span class="cm">/* state store name */ [...]
+      <span class="o">.</span><span class="na">withKeySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key serde */</span>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Integer</span><span class="o">());</span> <span class="cm">/* serde for aggregate value */</span>
+</pre></div>
+                    </div>
+                    <div class="admonition note">
+                        <p class="first admonition-title">Note</p>
+                        <p class="last"><strong>Impact of record caches</strong>:
+                            For illustration purposes, the column &#8220;KTable <code class="docutils literal"><span class="pre">aggregated</span></code>&#8221; below shows the table&#8217;s state changes over time in a
+                            very granular way.  In practice, you would observe state changes in such a granular way only when
+                            <a class="reference internal" href="memory-mgmt.html#streams-developer-guide-memory-management-record-cache"><span class="std std-ref">record caches</span></a> are disabled (default: enabled).
+                            When record caches are enabled, what might happen for example is that the output results of the rows with timestamps
+                            4 and 5 would be <a class="reference internal" href="memory-mgmt.html#streams-developer-guide-memory-management-record-cache"><span class="std std-ref">compacted</span></a>, and there would only be
+                            a single state update for the key <code class="docutils literal"><span class="pre">kafka</span></code> in the KTable (here: from <code class="docutils literal"><span class="pre">(kafka,</span> <span class="pre">1)</span></code> directly to <code class="docutils literal"><span class="pre">(kafka,</span> <span class="pre">3)</span></code>).
+                            Typically, you should only disable record caches for testing or debugging purposes &#8211; under normal circumstances it
+                            is better to leave record caches enabled.</p>
+                    </div>
+                    <table border="1" class="docutils">
+                        <colgroup>
+                            <col width="9%" />
+                            <col width="14%" />
+                            <col width="15%" />
+                            <col width="11%" />
+                            <col width="11%" />
+                            <col width="11%" />
+                            <col width="11%" />
+                            <col width="19%" />
+                        </colgroup>
+                        <thead valign="bottom">
+                        <tr class="row-odd"><th class="head">&nbsp;</th>
+                            <th class="head" colspan="3">KTable <code class="docutils literal"><span class="pre">userProfiles</span></code></th>
+                            <th class="head" colspan="3">KGroupedTable <code class="docutils literal"><span class="pre">groupedTable</span></code></th>
+                            <th class="head">KTable <code class="docutils literal"><span class="pre">aggregated</span></code></th>
+                        </tr>
+                        <tr class="row-even"><th class="head">Timestamp</th>
+                            <th class="head">Input record</th>
+                            <th class="head">Interpreted as</th>
+                            <th class="head">Grouping</th>
+                            <th class="head">Initializer</th>
+                            <th class="head">Adder</th>
+                            <th class="head">Subtractor</th>
+                            <th class="head">State</th>
+                        </tr>
+                        </thead>
+                        <tbody valign="top">
+                        <tr class="row-odd"><td>1</td>
+                            <td>(alice, E)</td>
+                            <td>INSERT alice</td>
+                            <td>(E, 5)</td>
+                            <td>0 (for E)</td>
+                            <td>(E, 0 + 5)</td>
+                            <td>&nbsp;</td>
+                            <td><div class="first last line-block">
+                                <div class="line"><strong>(E, 5)</strong></div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td>2</td>
+                            <td>(bob, A)</td>
+                            <td>INSERT bob</td>
+                            <td>(A, 3)</td>
+                            <td>0 (for A)</td>
+                            <td>(A, 0 + 3)</td>
+                            <td>&nbsp;</td>
+                            <td><div class="first last line-block">
+                                <div class="line"><strong>(A, 3)</strong></div>
+                                <div class="line">(E, 5)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td>3</td>
+                            <td>(charlie, A)</td>
+                            <td>INSERT charlie</td>
+                            <td>(A, 7)</td>
+                            <td>&nbsp;</td>
+                            <td>(A, 3 + 7)</td>
+                            <td>&nbsp;</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(A, <strong>10</strong>)</div>
+                                <div class="line">(E, 5)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td>4</td>
+                            <td>(alice, A)</td>
+                            <td>UPDATE alice</td>
+                            <td>(A, 5)</td>
+                            <td>&nbsp;</td>
+                            <td>(A, 10 + 5)</td>
+                            <td>(E, 5 - 5)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(A, <strong>15</strong>)</div>
+                                <div class="line">(E, <strong>0</strong>)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td>5</td>
+                            <td>(charlie, null)</td>
+                            <td>DELETE charlie</td>
+                            <td>(null, 7)</td>
+                            <td>&nbsp;</td>
+                            <td>&nbsp;</td>
+                            <td>(A, 15 - 7)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(A, <strong>8</strong>)</div>
+                                <div class="line">(E, 0)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-even"><td>6</td>
+                            <td>(null, E)</td>
+                            <td><em>ignored</em></td>
+                            <td>&nbsp;</td>
+                            <td>&nbsp;</td>
+                            <td>&nbsp;</td>
+                            <td>&nbsp;</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(A, 8)</div>
+                                <div class="line">(E, 0)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        <tr class="row-odd"><td>7</td>
+                            <td>(bob, E)</td>
+                            <td>UPDATE bob</td>
+                            <td>(E, 3)</td>
+                            <td>&nbsp;</td>
+                            <td>(E, 0 + 3)</td>
+                            <td>(A, 8 - 3)</td>
+                            <td><div class="first last line-block">
+                                <div class="line">(A, <strong>5</strong>)</div>
+                                <div class="line">(E, <strong>3</strong>)</div>
+                            </div>
+                            </td>
+                        </tr>
+                        </tbody>
+                    </table>
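+                    <p>For reference, here is a minimal sketch of an aggregation with an adder and a subtractor that would produce the
+                        updates traced in the table above, assuming the input is a KTable of user &rarr; region records and the aggregate
+                        is the sum of the username lengths per region (alice = 5, bob = 3, charlie = 7); names and serdes are illustrative.</p>
+                    <div class="highlight-java"><div class="highlight"><pre><span></span>import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.Serialized;
+
+KTable&lt;String, String&gt; userProfiles = ...;  // key: user, value: region
+
+KTable&lt;String, Integer&gt; sumOfUsernameLengths = userProfiles
+    // re-group: the region becomes the key, the username length the value
+    .groupBy((user, region) -&gt; KeyValue.pair(region, user.length()),
+             Serialized.with(Serdes.String(), Serdes.Integer()))
+    .aggregate(
+        () -&gt; 0,                                 /* initializer */
+        (region, length, agg) -&gt; agg + length,   /* adder */
+        (region, length, agg) -&gt; agg - length);  /* subtractor */
+</pre></div>
+                    </div>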
+                </div>
+                <div class="section" id="joining">
+                    <span id="streams-developer-guide-dsl-joins"></span><h4><a class="toc-backref" href="#id13">Joining</a><a class="headerlink" href="#joining" title="Permalink to this headline"></a></h4>
+                    <p id="streams-developer-guide-dsl-joins-overview">Streams and tables can also be joined.  In practice, many stream processing applications are coded as streaming joins.
+                        For example, applications backing an online shop might need to access multiple, updating database tables (e.g. sales
+                        prices, inventory, customer information) in order to enrich a new data record (e.g. customer transaction) with context
+                        information.  These are scenarios where you need to perform table lookups at very large scale and with low processing
+                        latency.  Here, a popular pattern is to make the information in the databases available in Kafka through so-called
+                        <em>change data capture</em> in combination with <a class="reference internal" href="../../connect/index.html#kafka-connect"><span class="std std-ref">Kafka&#8217;s Connect API</span></a>, and then to implement
+                        applications that leverage the Streams API to perform very fast and efficient local joins
+                        of such tables and streams, rather than requiring the application to query a remote database over the network
+                        for each record.  In this example, the KTable concept in Kafka Streams would enable you to track the latest state
+                        (e.g., snapshot) of each table in a local state store, greatly reducing the processing latency as well as
+                        the load on the remote databases when doing such streaming joins.</p>
+                    <p>The following join operations are supported; see also the diagram in the
+                        <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateful-overview"><span class="std std-ref">overview section</span></a> of
+                        <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateful"><span class="std std-ref">Stateful Transformations</span></a>.
+                        Depending on the operands, joins are either <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">windowed</span></a> joins or
+                        non-windowed joins.</p>
+                    <table border="1" class="docutils">
+                        <colgroup>
+                            <col width="25%" />
+                            <col width="15%" />
+                            <col width="20%" />
+                            <col width="20%" />
+                            <col width="20%" />
+                        </colgroup>
+                        <thead valign="bottom">
+                        <tr class="row-odd"><th class="head">Join operands</th>
+                            <th class="head">Type</th>
+                            <th class="head">(INNER) JOIN</th>
+                            <th class="head">LEFT JOIN</th>
+                            <th class="head">OUTER JOIN</th>
+                        </tr>
+                        </thead>
+                        <tbody valign="top">
+                        <tr class="row-even"><td>KStream-to-KStream</td>
+                            <td>Windowed</td>
+                            <td>Supported</td>
+                            <td>Supported</td>
+                            <td>Supported</td>
+                        </tr>
+                        <tr class="row-odd"><td>KTable-to-KTable</td>
+                            <td>Non-windowed</td>
+                            <td>Supported</td>
+                            <td>Supported</td>
+                            <td>Supported</td>
+                        </tr>
+                        <tr class="row-even"><td>KStream-to-KTable</td>
+                            <td>Non-windowed</td>
+                            <td>Supported</td>
+                            <td>Supported</td>
+                            <td>Not Supported</td>
+                        </tr>
+                        <tr class="row-odd"><td>KStream-to-GlobalKTable</td>
+                            <td>Non-windowed</td>
+                            <td>Supported</td>
+                            <td>Supported</td>
+                            <td>Not Supported</td>
+                        </tr>
+                        <tr class="row-even"><td>KTable-to-GlobalKTable</td>
+                            <td>N/A</td>
+                            <td>Not Supported</td>
+                            <td>Not Supported</td>
+                            <td>Not Supported</td>
+                        </tr>
+                        </tbody>
+                    </table>
+                    <p>Each case is explained in more detail in the subsequent sections.</p>
+                    <div class="section" id="join-co-partitioning-requirements">
+                        <span id="streams-developer-guide-dsl-joins-co-partitioning"></span><h5><a class="toc-backref" href="#id14">Join co-partitioning requirements</a><a class="headerlink" href="#join-co-partitioning-requirements" title="Permalink to this headline"></a></h5>
+                        <p>Input data must be co-partitioned when joining. This ensures that input records with the same key, from both sides of the
+                            join, are delivered to the same stream task during processing.
+                            <strong>It is the responsibility of the user to ensure data co-partitioning when joining</strong>.</p>
+                        <div class="admonition tip">
+                            <p class="first admonition-title">Tip</p>
+                            <p class="last">If possible, consider using <a class="reference internal" href="../concepts.html#streams-concepts-globalktable"><span class="std std-ref">global tables</span></a> (<code class="docutils literal"><span class="pre">GlobalKTable</span></code>) for joining because they do not require data co-partitioning.</p>
+                        </div>
+                        <p>The requirements for data co-partitioning are:</p>
+                        <ul class="simple">
+                            <li>The input topics of the join (left side and right side) must have the <strong>same number of partitions</strong>.</li>
+                            <li>All applications that <em>write</em> to the input topics must have the <strong>same partitioning strategy</strong> so that records with
+                                the same key are delivered to the same partition number.  In other words, the keyspace of the input data must be
+                                distributed across partitions in the same manner.
+                                This means that, for example, applications that use Kafka&#8217;s <a class="reference internal" href="../../clients/index.html#kafka-clients"><span class="std std-ref">Java Producer API</span></a> must use the
+                                same partitioner (cf. the producer setting <code class="docutils literal"><span class="pre">&quot;partitioner.class&quot;</span></code> aka <code class="docutils literal"><span class="pre">ProducerConfig.PARTITIONER_CLASS_CONFIG</span></code>),
+                                and applications that use Kafka&#8217;s Streams API must use the same <code class="docutils literal"><span class="pre">StreamPartitioner</span></code> for operations such as
+                                <code class="docutils literal"><span class="pre">KStream#to()</span></code>.  The good news is that, if you happen to use the default partitioner-related settings across all
+                                applications, you do not need to worry about the partitioning strategy; a configuration sketch for the custom-partitioner case follows this list.</li>
+                        </ul>
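+                        <p>For illustration, if you use a custom partitioner, every application that writes to the join input topics
+                            must be configured with the same one.  A minimal, hypothetical sketch (the partitioner class name is made up):</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>import java.util.Properties;
+import org.apache.kafka.clients.producer.ProducerConfig;
+
+Properties producerProps = new Properties();
+// Every application writing to the join input topics must set the same class,
+// so that records with the same key are routed to the same partition number.
+producerProps.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
+    &quot;com.example.MyCustomPartitioner&quot;);  // hypothetical partitioner class
+</pre></div>
+                        </div>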
+                        <p>Why is data co-partitioning required?  Because
+                            <a class="reference internal" href="#streams-developer-guide-dsl-joins-kstream-kstream"><span class="std std-ref">KStream-KStream</span></a>,
+                            <a class="reference internal" href="#streams-developer-guide-dsl-joins-ktable-ktable"><span class="std std-ref">KTable-KTable</span></a>, and
+                            <a class="reference internal" href="#streams-developer-guide-dsl-joins-kstream-ktable"><span class="std std-ref">KStream-KTable</span></a> joins
+                            are performed based on the keys of records (e.g.,  <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>), the
+                            input streams/tables of a join must be co-partitioned by key.</p>
+                        <p>The only exceptions are
+                            <a class="reference internal" href="#streams-developer-guide-dsl-joins-kstream-globalktable"><span class="std std-ref">KStream-GlobalKTable joins</span></a>.  Here, co-partitioning is
+                            not required because <em>all</em> partitions of the <code class="docutils literal"><span class="pre">GlobalKTable</span></code>&#8216;s underlying changelog stream are made available to
+                            each <code class="docutils literal"><span class="pre">KafkaStreams</span></code> instance, i.e. each instance has a full copy of the changelog stream.  Further, a
+                            <code class="docutils literal"><span class="pre">KeyValueMapper</span></code> allows for non-key based joins from the <code class="docutils literal"><span class="pre">KStream</span></code> to the <code class="docutils literal"><span class="pre">GlobalKTable</span></code>.</p>
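+                        <p>For example, here is a minimal sketch of a KStream-GlobalKTable join in which the lookup key is derived from the
+                            stream record instead of being the stream&#8217;s own key (the types and the key derivation are illustrative):</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>KStream&lt;String, Long&gt; stream = ...;
+GlobalKTable&lt;String, String&gt; table = ...;
+
+KStream&lt;String, String&gt; joined = stream.join(table,
+    (key, value) -&gt; key + &quot;-&quot; + value,               /* KeyValueMapper: derive the GlobalKTable lookup key */
+    (value, tableValue) -&gt; value + &quot;/&quot; + tableValue  /* ValueJoiner */
+  );
+</pre></div>
+                        </div>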
+                        <div class="admonition note">
+                            <p class="first admonition-title">Note</p>
+                            <p class="last"><strong>Kafka Streams partly verifies the co-partitioning requirement:</strong>
+                                During the partition assignment step, i.e. at runtime, Kafka Streams verifies whether the number of partitions for
+                                both sides of a join are the same.  If they are not, a <code class="docutils literal"><span class="pre">TopologyBuilderException</span></code> (runtime exception) is
+                                thrown.  Note that Kafka Streams cannot verify whether the partitioning strategy matches between the input
+                                streams/tables of a join &#8211; it is up to the user to ensure that this is the case.</p>
+                        </div>
+                        <p><strong>Ensuring data co-partitioning:</strong> If the inputs of a join are not co-partitioned yet, you must ensure this manually.
+                            You can follow the procedure outlined below; a code sketch follows the procedure.</p>
+                        <ol class="arabic">
+                            <li><p class="first">Identify the input KStream/KTable in the join whose underlying Kafka topic has the smaller number of partitions.
+                                Let&#8217;s call this stream/table &#8220;SMALLER&#8221;, and the other side of the join &#8220;LARGER&#8221;.  To learn about the number of
+                                partitions of a Kafka topic you can use, for example, the CLI tool <code class="docutils literal"><span class="pre">bin/kafka-topics</span></code> with the <code class="docutils literal"><span class="pre">--describe</span></code>
+                                option.</p>
+                            </li>
+                            <li><p class="first">Pre-create a new Kafka topic for &#8220;SMALLER&#8221; that has the same number of partitions as &#8220;LARGER&#8221;.  Let&#8217;s call this
+                                new topic &#8220;repartitioned-topic-for-smaller&#8221;.  Typically, you&#8217;d use the CLI tool <code class="docutils literal"><span class="pre">bin/kafka-topics</span></code> with the
+                                <code class="docutils literal"><span class="pre">--create</span></code> option for this.</p>
+                            </li>
+                            <li><p class="first">Within your application, re-write the data of &#8220;SMALLER&#8221; into the new Kafka topic.  You must ensure that, when writing
+                                the data with <code class="docutils literal"><span class="pre">to</span></code> or <code class="docutils literal"><span class="pre">through</span></code>, the same partitioner is used as for &#8220;LARGER&#8221;.</p>
+                                <blockquote>
+                                    <div><ul class="simple">
+                                        <li>If &#8220;SMALLER&#8221; is a KStream: <code class="docutils literal"><span class="pre">KStream#to(&quot;repartitioned-topic-for-smaller&quot;)</span></code>.</li>
+                                        <li>If &#8220;SMALLER&#8221; is a KTable: <code class="docutils literal"><span class="pre">KTable#to(&quot;repartitioned-topic-for-smaller&quot;)</span></code>.</li>
+                                    </ul>
+                                    </div></blockquote>
+                            </li>
+                            <li><p class="first">Within your application, re-read the data in &#8220;repartitioned-topic-for-smaller&#8221; into
+                                a new KStream/KTable.</p>
+                                <blockquote>
+                                    <div><ul class="simple">
+                                        <li>If &#8220;SMALLER&#8221; is a KStream: <code class="docutils literal"><span class="pre">StreamsBuilder#stream(&quot;repartitioned-topic-for-smaller&quot;)</span></code>.</li>
+                                        <li>If &#8220;SMALLER&#8221; is a KTable: <code class="docutils literal"><span class="pre">StreamsBuilder#table(&quot;repartitioned-topic-for-smaller&quot;)</span></code>.</li>
+                                    </ul>
+                                    </div></blockquote>
+                            </li>
+                            <li><p class="first">Within your application, perform the join between &#8220;LARGER&#8221; and the new stream/table.</p>
+                            </li>
+                        </ol>
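+                        <p>A minimal sketch of steps 3&#8211;5, assuming &#8220;SMALLER&#8221; is a KStream and the default partitioner is used
+                            (topic and variable names are illustrative):</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.kstream.KStream;
+
+StreamsBuilder builder = new StreamsBuilder();
+
+KStream&lt;String, Long&gt; smaller = builder.stream(&quot;smaller-input-topic&quot;);
+
+// Step 3: re-write the data into the pre-created topic that has the same
+// number of partitions as &quot;LARGER&quot; (default partitioner assumed).
+smaller.to(&quot;repartitioned-topic-for-smaller&quot;);
+
+// Step 4: re-read the repartitioned data into a new KStream.
+KStream&lt;String, Long&gt; smallerRepartitioned =
+    builder.stream(&quot;repartitioned-topic-for-smaller&quot;);
+
+// Step 5: use smallerRepartitioned, not smaller, in the join with &quot;LARGER&quot;.
+</pre></div>
+                        </div>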
+                    </div>
+                    <div class="section" id="kstream-kstream-join">
+                        <span id="streams-developer-guide-dsl-joins-kstream-kstream"></span><h5><a class="toc-backref" href="#id15">KStream-KStream Join</a><a class="headerlink" href="#kstream-kstream-join" title="Permalink to this headline"></a></h5>
+                        <p>KStream-KStream joins are always <a class="reference internal" href="#windowing-sliding"><span class="std std-ref">windowed</span></a> joins, because otherwise the size of the
+                            internal state store used to perform the join &#8211; e.g.,  a <a class="reference internal" href="#windowing-sliding"><span class="std std-ref">sliding window</span></a> or &#8220;buffer&#8221; &#8211; would
+                            grow indefinitely.  For stream-stream joins it&#8217;s important to highlight that a new input record on one side will
+                            produce a join output <em>for each</em> matching record on the other side, and there can be <em>multiple</em> such matching records
+                            in a given join window (cf. the row with timestamp 15 in the join semantics table below, for example).</p>
+                        <p>Join output records are effectively created as follows, leveraging the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code>:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">LV</span><span class="o">&gt;</span> <span class="n">leftRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">RV</span><span class="o">&gt;</span> <span class="n">rightRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">LV</span><span class="o">,</span> <span class="n">RV</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joiner</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
+    <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
+    <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
+  <span class="o">);</span>
+</pre></div>
+                        </div>
+                        <table border="1" class="non-scrolling-table width-100-percent docutils">
+                            <colgroup>
+                                <col width="15%" />
+                                <col width="85%" />
+                            </colgroup>
+                            <thead valign="bottom">
+                            <tr class="row-odd"><th class="head">Transformation</th>
+                                <th class="head">Description</th>
+                            </tr>
+                            </thead>
+                            <tbody valign="top">
+                            <tr class="row-even"><td><p class="first"><strong>Inner Join (windowed)</strong></p>
+                                <ul class="last simple">
+                                    <li>(KStream, KStream)
+                                        &rarr; KStream</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs an INNER JOIN of this stream with another stream.
+                                    Even though this operation is windowed, the joined stream will be of type <code class="docutils literal"><span class="pre">KStream&lt;K,</span> <span class="pre">...&gt;</span></code> rather than <code class="docutils literal"><span class="pre">KStream&lt;Windowed&lt;K&gt;,</span> <span class="pre">...&gt;</span></code>.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <p><strong>Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).</strong></p>
+                                    <p>Several variants of <code class="docutils literal"><span class="pre">join</span></code> exist; see the Javadocs for details.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">,</span> <span class="cm">/* ValueJoiner */</span>
+    <span class="n">JoinWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)),</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">JoinWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)),</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
+  <span class="o">);</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>, and <em>window-based</em>, i.e. two input records are joined if and only if their
+                                            timestamps are &#8220;close&#8221; to each other as defined by the user-supplied <code class="docutils literal"><span class="pre">JoinWindows</span></code>, i.e. the window defines an additional join predicate over the record timestamps.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            <tr class="row-odd"><td><p class="first"><strong>Left Join (windowed)</strong></p>
+                                <ul class="last simple">
+                                    <li>(KStream, KStream)
+                                        &rarr; KStream</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs a LEFT JOIN of this stream with another stream.
+                                    Even though this operation is windowed, the joined stream will be of type <code class="docutils literal"><span class="pre">KStream&lt;K,</span> <span class="pre">...&gt;</span></code> rather than <code class="docutils literal"><span class="pre">KStream&lt;Windowed&lt;K&gt;,</span> <span class="pre">...&gt;</span></code>.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <p><strong>Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).</strong></p>
+                                    <p>Several variants of <code class="docutils literal"><span class="pre">leftJoin</span></code> exist; see the Javadocs for details.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">,</span> <span class="cm">/* ValueJoiner */</span>
+    <span class="n">JoinWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)),</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">JoinWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)),</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
+  <span class="o">);</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>, and <em>window-based</em>, i.e. two input records are joined if and only if their
+                                            timestamps are &#8220;close&#8221; to each other as defined by the user-supplied <code class="docutils literal"><span class="pre">JoinWindows</span></code>, i.e. the window defines an additional join predicate over the record timestamps.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                        <li><p class="first">For each input record on the left side that does not have any match on the right side, the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called with <code class="docutils literal"><span class="pre">ValueJoiner#apply(leftRecord.value,</span> <span class="pre">null)</span></code>;
+                                            this explains the row with timestamp=3 in the table below, which lists <code class="docutils literal"><span class="pre">[A,</span> <span class="pre">null]</span></code> in the LEFT JOIN column.</p>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            <tr class="row-even"><td><p class="first"><strong>Outer Join (windowed)</strong></p>
+                                <ul class="last simple">
+                                    <li>(KStream, KStream)
+                                        &rarr; KStream</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs an OUTER JOIN of this stream with another stream.
+                                    Even though this operation is windowed, the joined stream will be of type <code class="docutils literal"><span class="pre">KStream&lt;K,</span> <span class="pre">...&gt;</span></code> rather than <code class="docutils literal"><span class="pre">KStream&lt;Windowed&lt;K&gt;,</span> <span class="pre">...&gt;</span></code>.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#outerJoin-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <p><strong>Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).</strong></p>
+                                    <p>Several variants of <code class="docutils literal"><span class="pre">outerJoin</span></code> exist; see the Javadocs for details.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">outerJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">,</span> <span class="cm">/* ValueJoiner */</span>
+    <span class="n">JoinWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)),</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">outerJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">JoinWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">)),</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">with</span><span class="o">(</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="cm">/* key */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">(),</span>   <span class="cm">/* left value */</span>
+      <span class="n">Serdes</span><span class="o">.</span><span class="na">Double</span><span class="o">())</span>  <span class="cm">/* right value */</span>
+  <span class="o">);</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>, and <em>window-based</em>, i.e. two input records are joined if and only if their
+                                            timestamps are &#8220;close&#8221; to each other as defined by the user-supplied <code class="docutils literal"><span class="pre">JoinWindows</span></code>, i.e. the window defines an additional join predicate over the record timestamps.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                        <li><p class="first">For each input record on one side that does not have any match on the other side, the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called with <code class="docutils literal"><span class="pre">ValueJoiner#apply(leftRecord.value,</span> <span class="pre">null)</span></code> or
+                                            <code class="docutils literal"><span class="pre">ValueJoiner#apply(null,</span> <span class="pre">rightRecord.value)</span></code>, respectively; this explains the row with timestamp=3 in the table below, which lists <code class="docutils literal"><span class="pre">[A,</span> <span class="pre">null]</span></code> in the OUTER JOIN column
+                                            (unlike LEFT JOIN, <code class="docutils literal"><span class="pre">[null,</span> <span class="pre">x]</span></code> is possible, too, but no such example is shown in the table).</p>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            </tbody>
+                        </table>
+                        <p><strong>Semantics of stream-stream joins:</strong>
+                            The semantics of the various stream-stream join variants are explained below.
+                            To improve the readability of the table, assume that (1) all records have the same key (and thus the key in the table is omitted), (2) all records belong to a single join window, and (3) all records are processed in timestamp order.
+                            The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied
+                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code>, <code class="docutils literal"><span class="pre">leftJoin</span></code>, and
+                            <code class="docutils literal"><span class="pre">outerJoin</span></code> methods, respectively, whenever a new input record is received on either side of the join.  An empty table
+                            cell denotes that the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> is not called at all.</p>
+                        <table border="1" class="docutils">
+                            <colgroup>
+                                <col width="8%" />
+                                <col width="13%" />
+                                <col width="13%" />
+                                <col width="22%" />
+                                <col width="22%" />
+                                <col width="22%" />
+                            </colgroup>
+                            <thead valign="bottom">
+                            <tr class="row-odd"><th class="head">Timestamp</th>
+                                <th class="head">Left (KStream)</th>
+                                <th class="head">Right (KStream)</th>
+                                <th class="head">(INNER) JOIN</th>
+                                <th class="head">LEFT JOIN</th>
+                                <th class="head">OUTER JOIN</th>
+                            </tr>
+                            </thead>
+                            <tbody valign="top">
+                            <tr class="row-even"><td>1</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>2</td>
+                                <td>&nbsp;</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>3</td>
+                                <td>A</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>[A, null]</td>
+                                <td>[A, null]</td>
+                            </tr>
+                            <tr class="row-odd"><td>4</td>
+                                <td>&nbsp;</td>
+                                <td>a</td>
+                                <td>[A, a]</td>
+                                <td>[A, a]</td>
+                                <td>[A, a]</td>
+                            </tr>
+                            <tr class="row-even"><td>5</td>
+                                <td>B</td>
+                                <td>&nbsp;</td>
+                                <td>[B, a]</td>
+                                <td>[B, a]</td>
+                                <td>[B, a]</td>
+                            </tr>
+                            <tr class="row-odd"><td>6</td>
+                                <td>&nbsp;</td>
+                                <td>b</td>
+                                <td>[A, b], [B, b]</td>
+                                <td>[A, b], [B, b]</td>
+                                <td>[A, b], [B, b]</td>
+                            </tr>
+                            <tr class="row-even"><td>7</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>8</td>
+                                <td>&nbsp;</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>9</td>
+                                <td>C</td>
+                                <td>&nbsp;</td>
+                                <td>[C, a], [C, b]</td>
+                                <td>[C, a], [C, b]</td>
+                                <td>[C, a], [C, b]</td>
+                            </tr>
+                            <tr class="row-odd"><td>10</td>
+                                <td>&nbsp;</td>
+                                <td>c</td>
+                                <td>[A, c], [B, c], [C, c]</td>
+                                <td>[A, c], [B, c], [C, c]</td>
+                                <td>[A, c], [B, c], [C, c]</td>
+                            </tr>
+                            <tr class="row-even"><td>11</td>
+                                <td>&nbsp;</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>12</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>13</td>
+                                <td>&nbsp;</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>14</td>
+                                <td>&nbsp;</td>
+                                <td>d</td>
+                                <td>[A, d], [B, d], [C, d]</td>
+                                <td>[A, d], [B, d], [C, d]</td>
+                                <td>[A, d], [B, d], [C, d]</td>
+                            </tr>
+                            <tr class="row-even"><td>15</td>
+                                <td>D</td>
+                                <td>&nbsp;</td>
+                                <td>[D, a], [D, b], [D, c], [D, d]</td>
+                                <td>[D, a], [D, b], [D, c], [D, d]</td>
+                                <td>[D, a], [D, b], [D, c], [D, d]</td>
+                            </tr>
+                            </tbody>
+                        </table>
+                    </div>
+                    <div class="section" id="ktable-ktable-join">
+                        <span id="streams-developer-guide-dsl-joins-ktable-ktable"></span><h5><a class="toc-backref" href="#id16">KTable-KTable Join</a><a class="headerlink" href="#ktable-ktable-join" title="Permalink to this headline"></a></h5>
+                        <p>KTable-KTable joins are always <em>non-windowed</em> joins.  They are designed to be consistent with their counterparts in
+                            relational databases.  The changelog streams of both KTables are materialized into local state stores to represent the
+                            latest snapshot of their <a class="reference internal" href="../concepts.html#streams-concepts-ktable"><span class="std std-ref">table duals</span></a>.
+                            The join result is a new KTable that represents the changelog stream of the join operation.</p>
+                        <p>Join output records are effectively created as follows, leveraging the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code>:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">LV</span><span class="o">&gt;</span> <span class="n">leftRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">RV</span><span class="o">&gt;</span> <span class="n">rightRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">LV</span><span class="o">,</span> <span class="n">RV</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joiner</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
+    <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
+    <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
+  <span class="o">);</span>
+</pre></div>
+                        </div>
+                        <table border="1" class="non-scrolling-table width-100-percent docutils">
+                            <colgroup>
+                                <col width="15%" />
+                                <col width="85%" />
+                            </colgroup>
+                            <thead valign="bottom">
+                            <tr class="row-odd"><th class="head">Transformation</th>
+                                <th class="head">Description</th>
+                            </tr>
+                            </thead>
+                            <tbody valign="top">
+                            <tr class="row-even"><td><p class="first"><strong>Inner Join</strong></p>
+                                <ul class="last simple">
+                                    <li>(KTable, KTable)
+                                        &rarr; KTable</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs an INNER JOIN of this table with another table.
+                                    The result is an ever-updating KTable that represents the &#8220;current&#8221; result of the join.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key are ignored and do not trigger the join.</li>
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em> for the corresponding key, which indicate the deletion of the key from the table.  Tombstones do not
+                                                        trigger the join.  When an input tombstone is received, an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding
+                                                        key already exists in the join result KTable).</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            <tr class="row-odd"><td><p class="first"><strong>Left Join</strong></p>
+                                <ul class="last simple">
+                                    <li>(KTable, KTable)
+                                        &rarr; KTable</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs a LEFT JOIN of this table with another table.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key are ignored and do not trigger the join.</li>
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em> for the corresponding key, which indicate the deletion of the key from the table.  Tombstones do not
+                                                        trigger the join.  When an input tombstone is received, an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding
+                                                        key already exists in the join result KTable).</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                        <li><p class="first">For each input record on the left side that does not have any match on the right side, the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called with <code class="docutils literal"><span class="pre">ValueJoiner#apply(leftRecord.value,</span> <span class="pre">null)</span></code>;
+                                            this explains the row with timestamp=3 in the table below, which lists <code class="docutils literal"><span class="pre">[A,</span> <span class="pre">null]</span></code> in the LEFT JOIN column.</p>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            <tr class="row-even"><td><p class="first"><strong>Outer Join</strong></p>
+                                <ul class="last simple">
+                                    <li>(KTable, KTable)
+                                        &rarr; KTable</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs an OUTER JOIN of this table with another table.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#outerJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">outerJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">outerJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key are ignored and do not trigger the join.</li>
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em> for the corresponding key, which indicate the deletion of the key from the table.  Tombstones do not
+                                                        trigger the join.  When an input tombstone is received, an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding
+                                                        key already exists in the join result KTable).</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                        <li><p class="first">For each input record on one side that does not have any match on the other side, the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called with <code class="docutils literal"><span class="pre">ValueJoiner#apply(leftRecord.value,</span> <span class="pre">null)</span></code> or
+                                            <code class="docutils literal"><span class="pre">ValueJoiner#apply(null,</span> <span class="pre">rightRecord.value)</span></code>, respectively; this explains the rows with timestamp=3 and timestamp=7 in the table below, which list <code class="docutils literal"><span class="pre">[A,</span> <span class="pre">null]</span></code> and
+                                            <code class="docutils literal"><span class="pre">[null,</span> <span class="pre">b]</span></code>, respectively, in the OUTER JOIN column.</p>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            </tbody>
+                        </table>
+                        <p><strong>Semantics of table-table joins:</strong>
+                            The semantics of the various table-table join variants are explained below.
+                            To improve the readability of the table, you can assume that (1) all records have the same key (and thus the key in the table is omitted) and that (2) all records are processed in timestamp order.
+                            The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied
+                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code>, <code class="docutils literal"><span class="pre">leftJoin</span></code>, and
+                            <code class="docutils literal"><span class="pre">outerJoin</span></code> methods, respectively, whenever a new input record is received on either side of the join.  An empty table
+                            cell denotes that the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> is not called at all.</p>
+                        <table border="1" class="docutils">
+                            <colgroup>
+                                <col width="8%" />
+                                <col width="13%" />
+                                <col width="13%" />
+                                <col width="22%" />
+                                <col width="22%" />
+                                <col width="22%" />
+                            </colgroup>
+                            <thead valign="bottom">
+                            <tr class="row-odd"><th class="head">Timestamp</th>
+                                <th class="head">Left (KTable)</th>
+                                <th class="head">Right (KTable)</th>
+                                <th class="head">(INNER) JOIN</th>
+                                <th class="head">LEFT JOIN</th>
+                                <th class="head">OUTER JOIN</th>
+                            </tr>
+                            </thead>
+                            <tbody valign="top">
+                            <tr class="row-even"><td>1</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>2</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>3</td>
+                                <td>A</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>[A, null]</td>
+                                <td>[A, null]</td>
+                            </tr>
+                            <tr class="row-odd"><td>4</td>
+                                <td>&nbsp;</td>
+                                <td>a</td>
+                                <td>[A, a]</td>
+                                <td>[A, a]</td>
+                                <td>[A, a]</td>
+                            </tr>
+                            <tr class="row-even"><td>5</td>
+                                <td>B</td>
+                                <td>&nbsp;</td>
+                                <td>[B, a]</td>
+                                <td>[B, a]</td>
+                                <td>[B, a]</td>
+                            </tr>
+                            <tr class="row-odd"><td>6</td>
+                                <td>&nbsp;</td>
+                                <td>b</td>
+                                <td>[B, b]</td>
+                                <td>[B, b]</td>
+                                <td>[B, b]</td>
+                            </tr>
+                            <tr class="row-even"><td>7</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>null (tombstone)</td>
+                                <td>[null, b]</td>
+                            </tr>
+                            <tr class="row-odd"><td>8</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                            </tr>
+                            <tr class="row-even"><td>9</td>
+                                <td>C</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>[C, null]</td>
+                                <td>[C, null]</td>
+                            </tr>
+                            <tr class="row-odd"><td>10</td>
+                                <td>&nbsp;</td>
+                                <td>c</td>
+                                <td>[C, c]</td>
+                                <td>[C, c]</td>
+                                <td>[C, c]</td>
+                            </tr>
+                            <tr class="row-even"><td>11</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>null (tombstone)</td>
+                                <td>[C, null]</td>
+                                <td>[C, null]</td>
+                            </tr>
+                            <tr class="row-odd"><td>12</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>null (tombstone)</td>
+                            </tr>
+                            <tr class="row-even"><td>13</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>14</td>
+                                <td>&nbsp;</td>
+                                <td>d</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>[null, d]</td>
+                            </tr>
+                            <tr class="row-even"><td>15</td>
+                                <td>D</td>
+                                <td>&nbsp;</td>
+                                <td>[D, d]</td>
+                                <td>[D, d]</td>
+                                <td>[D, d]</td>
+                            </tr>
+                            </tbody>
+                        </table>
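+                        <p>As a non-authoritative illustration of the tombstone semantics above, the sketch below deletes a key from the
+                            left table by writing a record with a <code class="docutils literal"><span class="pre">null</span></code> value; per the timestamp=12 row, the deletion then
+                            propagates into the join result.  The topic names, types, and the key &#8220;alice&#8221; are assumptions made only for
+                            this example.</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>// A minimal sketch, assuming source topics &quot;left-topic&quot; and &quot;right-topic&quot;
+// whose keys/values deserialize to the types below.
+StreamsBuilder builder = new StreamsBuilder();
+KTable&lt;String, Long&gt; left = builder.table(&quot;left-topic&quot;);
+KTable&lt;String, Double&gt; right = builder.table(&quot;right-topic&quot;);
+
+KTable&lt;String, String&gt; joined = left.join(right,
+    (leftValue, rightValue) -&gt; &quot;left=&quot; + leftValue + &quot;, right=&quot; + rightValue /* ValueJoiner */
+  );
+
+// Upstream, a plain producer would delete key &quot;alice&quot; from the left table by
+// sending a tombstone (null value); if &quot;alice&quot; currently has a join result,
+// the join then emits a tombstone for it as well (see timestamp=12 above):
+// producer.send(new ProducerRecord&lt;&gt;(&quot;left-topic&quot;, &quot;alice&quot;, (Long) null));
+</pre></div>
+                        </div>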
+                    </div>
+                    <div class="section" id="kstream-ktable-join">
+                        <span id="streams-developer-guide-dsl-joins-kstream-ktable"></span><h5><a class="toc-backref" href="#id17">KStream-KTable Join</a><a class="headerlink" href="#kstream-ktable-join" title="Permalink to this headline"></a></h5>
+                        <p>KStream-KTable joins are always <em>non-windowed</em> joins.  They allow you to perform <em>table lookups</em> against a KTable
+                            (changelog stream) upon receiving a new record from the KStream (record stream).  An example use case would be to enrich
+                            a stream of user activities (KStream) with the latest user profile information (KTable).</p>
+                        <p>Join output records are effectively created as follows, leveraging the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code>:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">LV</span><span class="o">&gt;</span> <span class="n">leftRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">RV</span><span class="o">&gt;</span> <span class="n">rightRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">LV</span><span class="o">,</span> <span class="n">RV</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joiner</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
+    <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
+    <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
+  <span class="o">);</span>
+</pre></div>
+                        </div>
+                        <table border="1" class="non-scrolling-table width-100-percent docutils">
+                            <colgroup>
+                                <col width="15%" />
+                                <col width="85%" />
+                            </colgroup>
+                            <thead valign="bottom">
+                            <tr class="row-odd"><th class="head">Transformation</th>
+                                <th class="head">Description</th>
+                            </tr>
+                            </thead>
+                            <tbody valign="top">
+                            <tr class="row-even"><td><p class="first"><strong>Inner Join</strong></p>
+                                <ul class="last simple">
+                                    <li>(KStream, KTable)
+                                        &rarr; KStream</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs an INNER JOIN of this stream with the table, effectively doing a table lookup.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
+                                    <p>Several variants of <code class="docutils literal"><span class="pre">join</span></code> exist; see the Javadocs for details.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">,</span> <span class="cm">/* ValueJoiner */</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">keySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key */</span>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* left value */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">keySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key */</span>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* left value */</span>
+  <span class="o">);</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Only input records for the left side (stream) trigger the join.  Input records for the right side (table) update only the internal right-side join state.</li>
+                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records for the table with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em> for the corresponding key, which indicate the deletion of the key from the table.
+                                                        Tombstones do not trigger the join.</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            <tr class="row-odd"><td><p class="first"><strong>Left Join</strong></p>
+                                <ul class="last simple">
+                                    <li>(KStream, KTable)
+                                        &rarr; KStream</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs a LEFT JOIN of this stream with the table, effectively doing a table lookup.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
+                                    <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
+                                    <p>Several variants of <code class="docutils literal"><span class="pre">leftJoin</span></code> exist; see the Javadocs for details.</p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">,</span> <span class="cm">/* ValueJoiner */</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">keySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key */</span>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* left value */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="n">Joined</span><span class="o">.</span><span class="na">keySerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span> <span class="cm">/* key */</span>
+      <span class="o">.</span><span class="na">withValueSerde</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span> <span class="cm">/* left value */</span>
+  <span class="o">);</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul>
+                                        <li><p class="first">The join is <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">leftRecord.key</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Only input records for the left side (stream) trigger the join.  Input records for the right side (table) update only the internal right-side join state.</li>
+                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records for the table with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em> for the corresponding key, which indicate the deletion of the key from the table.
+                                                        Tombstones do not trigger the join.</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                        <li><p class="first">For each input record on the left side that does not have any match on the right side, the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called with <code class="docutils literal"><span class="pre">ValueJoiner#apply(leftRecord.value,</span> <span class="pre">null)</span></code>;
+                                            this explains the row with timestamp=3 in the table below, which lists <code class="docutils literal"><span class="pre">[A,</span> <span class="pre">null]</span></code> in the LEFT JOIN column.</p>
+                                        </li>
+                                    </ul>
+                                    <p class="last">See the semantics overview at the bottom of this section for a detailed description.</p>
+                                </td>
+                            </tr>
+                            </tbody>
+                        </table>
+                        <p><strong>Semantics of stream-table joins:</strong>
+                            The semantics of the various stream-table join variants are explained below.
+                            To improve the readability of the table, you can assume that (1) all records have the same key (and thus the key in
+                            the table is omitted) and that (2) all records are processed in timestamp order.
+                            The columns INNER JOIN and LEFT JOIN denote what is passed as arguments to the user-supplied
+                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code> and <code class="docutils literal"><span class="pre">leftJoin</span></code>
+                            methods, respectively, whenever a new input record is received on either side of the join.  An empty table
+                            cell denotes that the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> is not called at all.</p>
+                        <table border="1" class="docutils">
+                            <colgroup>
+                                <col width="10%" />
+                                <col width="16%" />
+                                <col width="16%" />
+                                <col width="29%" />
+                                <col width="29%" />
+                            </colgroup>
+                            <thead valign="bottom">
+                            <tr class="row-odd"><th class="head">Timestamp</th>
+                                <th class="head">Left (KStream)</th>
+                                <th class="head">Right (KTable)</th>
+                                <th class="head">(INNER) JOIN</th>
+                                <th class="head">LEFT JOIN</th>
+                            </tr>
+                            </thead>
+                            <tbody valign="top">
+                            <tr class="row-even"><td>1</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>2</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>3</td>
+                                <td>A</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>[A, null]</td>
+                            </tr>
+                            <tr class="row-odd"><td>4</td>
+                                <td>&nbsp;</td>
+                                <td>a</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>5</td>
+                                <td>B</td>
+                                <td>&nbsp;</td>
+                                <td>[B, a]</td>
+                                <td>[B, a]</td>
+                            </tr>
+                            <tr class="row-odd"><td>6</td>
+                                <td>&nbsp;</td>
+                                <td>b</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>7</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>8</td>
+                                <td>&nbsp;</td>
+                                <td>null (tombstone)</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>9</td>
+                                <td>C</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>[C, null]</td>
+                            </tr>
+                            <tr class="row-odd"><td>10</td>
+                                <td>&nbsp;</td>
+                                <td>c</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>11</td>
+                                <td>&nbsp;</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>12</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>13</td>
+                                <td>&nbsp;</td>
+                                <td>null</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-odd"><td>14</td>
+                                <td>&nbsp;</td>
+                                <td>d</td>
+                                <td>&nbsp;</td>
+                                <td>&nbsp;</td>
+                            </tr>
+                            <tr class="row-even"><td>15</td>
+                                <td>D</td>
+                                <td>&nbsp;</td>
+                                <td>[D, d]</td>
+                                <td>[D, d]</td>
+                            </tr>
+                            </tbody>
+                        </table>
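+                        <p>As a concrete, hedged sketch of the enrichment use case mentioned at the top of this subsection: the topic names
+                            and the <code class="docutils literal"><span class="pre">PageView</span></code>, <code class="docutils literal"><span class="pre">UserProfile</span></code>, and <code class="docutils literal"><span class="pre">EnrichedView</span></code> classes below are hypothetical and exist only for illustration.</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>// A minimal sketch, assuming a view stream keyed by userId and a profile table
+// keyed the same way (hypothetical PageView/UserProfile/EnrichedView classes).
+StreamsBuilder builder = new StreamsBuilder();
+KStream&lt;String, PageView&gt; views = builder.stream(&quot;page-views&quot;);
+KTable&lt;String, UserProfile&gt; profiles = builder.table(&quot;user-profiles&quot;);
+
+// Table lookup per incoming view; with leftJoin, a view without a matching
+// profile is passed to the ValueJoiner with profile == null.
+KStream&lt;String, EnrichedView&gt; enriched = views.leftJoin(profiles,
+    (view, profile) -&gt; new EnrichedView(view, profile) /* ValueJoiner */
+  );
+</pre></div>
+                        </div>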
+                    </div>
+                    <div class="section" id="kstream-globalktable-join">
+                        <span id="streams-developer-guide-dsl-joins-kstream-globalktable"></span><h5><a class="toc-backref" href="#id18">KStream-GlobalKTable Join</a><a class="headerlink" href="#kstream-globalktable-join" title="Permalink to this headline"></a></h5>
+                        <p>KStream-GlobalKTable joins are always <em>non-windowed</em> joins.  They allow you to perform <em>table lookups</em> against a
+                            <a class="reference internal" href="../concepts.html#streams-concepts-globalktable"><span class="std std-ref">GlobalKTable</span></a> (entire changelog stream) upon receiving a new record from the
+                            KStream (record stream).  An example use case would be &#8220;star queries&#8221; or &#8220;star joins&#8221;, where you would enrich a stream
+                            of user activities (KStream) with the latest user profile information (GlobalKTable) and further context information
+                            (additional GlobalKTables).</p>
+                        <p>At a high level, KStream-GlobalKTable joins are very similar to
+                            <a class="reference internal" href="#streams-developer-guide-dsl-joins-kstream-ktable"><span class="std std-ref">KStream-KTable joins</span></a>.  However, global tables provide you
+                            with much more flexibility at <a class="reference internal" href="../concepts.html#streams-concepts-globalktable"><span class="std std-ref">some expense</span></a> when compared to partitioned
+                            tables:</p>
+                        <ul class="simple">
+                            <li>They do not require <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">data co-partitioning</span></a>.</li>
+                            <li>They allow for efficient &#8220;star joins&#8221;; i.e., joining a large-scale &#8220;facts&#8221; stream against &#8220;dimension&#8221; tables.</li>
+                            <li>They allow for joining against foreign keys; i.e., you can look up data in the table not just by the keys of records in the
+                                stream, but also by data in the record values.</li>
+                            <li>They make many use cases feasible where you must work on heavily skewed data and would otherwise suffer from hot partitions.</li>
+                            <li>They are often more efficient than their partitioned KTable counterpart when you need to perform multiple joins in
+                                succession.</li>
+                        </ul>
+                        <p>Join output records are effectively created as follows, leveraging the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code>:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">LV</span><span class="o">&gt;</span> <span class="n">leftRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">RV</span><span class="o">&gt;</span> <span class="n">rightRecord</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">LV</span><span class="o">,</span> <span class="n">RV</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joiner</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">JV</span><span class="o">&gt;</span> <span class="n">joinOutputRecord</span> <span class="o">=</span> <span class="n">KeyValue</span><span class="o">.</span><span class="na">pair</span><span class="o">(</span>
+    <span class="n">leftRecord</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="cm">/* by definition, leftRecord.key == rightRecord.key */</span>
+    <span class="n">joiner</span><span class="o">.</span><span class="na">apply</span><span class="o">(</span><span class="n">leftRecord</span><span class="o">.</span><span class="na">value</span><span class="o">,</span> <span class="n">rightRecord</span><span class="o">.</span><span class="na">value</span><span class="o">)</span>
+  <span class="o">);</span>
+</pre></div>
+                        </div>
+                        <table border="1" class="non-scrolling-table width-100-percent docutils">
+                            <colgroup>
+                                <col width="15%" />
+                                <col width="85%" />
+                            </colgroup>
+                            <thead valign="bottom">
+                            <tr class="row-odd"><th class="head">Transformation</th>
+                                <th class="head">Description</th>
+                            </tr>
+                            </thead>
+                            <tbody valign="top">
+                            <tr class="row-even"><td><p class="first"><strong>Inner Join</strong></p>
+                                <ul class="last simple">
+                                    <li>(KStream, GlobalKTable)
+                                        &rarr; KStream</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs an INNER JOIN of this stream with the global table, effectively doing a table lookup.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.GlobalKTable-org.apache.kafka.streams.kstream.KeyValueMapper-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <p>The <code class="docutils literal"><span class="pre">GlobalKTable</span></code> is fully bootstrapped upon (re)start of a <code class="docutils literal"><span class="pre">KafkaStreams</span></code> instance, which means the table is fully populated with all the data in the underlying topic that is
+                                        available at the time of startup.  The actual data processing begins only once the bootstrapping has completed.</p>
+                                    <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">GlobalKTable</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftKey</span><span class="o">,</span> <span class="n">leftValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">leftKey</span><span class="o">.</span><span class="na">length</span><span class="o">(),</span> <span class="cm">/* derive a (potentially) new key by which to lookup against the table */</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">KeyValueMapper</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* derive a (potentially) new key by which to lookup against the table */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Integer</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">key</span><span class="o">.</span><span class="na">length</span><span class="o">();</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul class="last">
+                                        <li><p class="first">The join is indirectly <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">KeyValueMapper#apply(leftRecord.key,</span> <span class="pre">leftRecord.value)</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Only input records for the left side (stream) trigger the join.  Input records for the right side (table) update only the internal right-side join state.</li>
+                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records for the table with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em>, which indicate the deletion of a record key from the table.  Tombstones do not trigger the
+                                                        join.</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                    </ul>
+                                </td>
+                            </tr>
+                            <tr class="row-odd"><td><p class="first"><strong>Left Join</strong></p>
+                                <ul class="last simple">
+                                    <li>(KStream, GlobalKTable)
+                                        &rarr; KStream</li>
+                                </ul>
+                            </td>
+                                <td><p class="first">Performs a LEFT JOIN of this stream with the global table, effectively doing a table lookup.
+                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.GlobalKTable-org.apache.kafka.streams.kstream.KeyValueMapper-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <p>The <code class="docutils literal"><span class="pre">GlobalKTable</span></code> is fully bootstrapped upon (re)start of a <code class="docutils literal"><span class="pre">KafkaStreams</span></code> instance, which means the table is fully populated with all the data in the underlying topic that is
+                                        available at the time of startup.  The actual data processing begins only once the bootstrapping has completed.</p>
+                                    <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
+                                    <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">GlobalKTable</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Java 8+ example, using lambda expressions</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="o">(</span><span class="n">leftKey</span><span class="o">,</span> <span class="n">leftValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">leftKey</span><span class="o">.</span><span class="na">length</span><span class="o">(),</span> <span class="cm">/* derive a (potentially) new key by which to lookup against the table */</span>
+    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
+  <span class="o">);</span>
+
+<span class="c1">// Java 7 example</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">leftJoin</span><span class="o">(</span><span class="n">right</span><span class="o">,</span>
+    <span class="k">new</span> <span class="n">KeyValueMapper</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;()</span> <span class="o">{</span> <span class="cm">/* derive a (potentially) new key by which to lookup against the table */</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">Integer</span> <span class="nf">apply</span><span class="o">(</span><span class="n">String</span> <span class="n">key</span><span class="o">,</span> <span class="n">Long</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">key</span><span class="o">.</span><span class="na">length</span><span class="o">();</span>
+      <span class="o">}</span>
+    <span class="o">},</span>
+    <span class="k">new</span> <span class="n">ValueJoiner</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
+      <span class="nd">@Override</span>
+      <span class="kd">public</span> <span class="n">String</span> <span class="nf">apply</span><span class="o">(</span><span class="n">Long</span> <span class="n">leftValue</span><span class="o">,</span> <span class="n">Double</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="s">&quot;left=&quot;</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">&quot;, right=&quot;</span> <span class="o">+</span> <span class="n">rightValue</span><span class="o">;</span>
+      <span class="o">}</span>
+    <span class="o">});</span>
+</pre></div>
+                                    </div>
+                                    <p>Detailed behavior:</p>
+                                    <ul class="last">
+                                        <li><p class="first">The join is indirectly <em>key-based</em>, i.e. with the join predicate <code class="docutils literal"><span class="pre">KeyValueMapper#apply(leftRecord.key,</span> <span class="pre">leftRecord.value)</span> <span class="pre">==</span> <span class="pre">rightRecord.key</span></code>.</p>
+                                        </li>
+                                        <li><p class="first">The join will be triggered under the conditions listed below whenever new input is received.  When it is triggered, the user-supplied <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called to produce
+                                            join output records.</p>
+                                            <blockquote>
+                                                <div><ul class="simple">
+                                                    <li>Only input records for the left side (stream) trigger the join.  Input records for the right side (table) update only the internal right-side join state.</li>
+                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records for the table with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em>, which indicate the deletion of a record key from the table.  Tombstones do not trigger the
+                                                        join.</li>
+                                                </ul>
+                                                </div></blockquote>
+                                        </li>
+                                        <li><p class="first">For each input record on the left side that does not have any match on the right side, the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> will be called with <code class="docutils literal"><span class="pre">ValueJoiner#apply(leftRecord.value,</span> <span class="pre">null)</span></code>.</p>
+                                        </li>
+                                    </ul>
+                                </td>
+                            </tr>
+                            </tbody>
+                        </table>
+                        <p><strong>Semantics of stream-table joins:</strong>
+                            The join semantics are identical to <a class="reference internal" href="#streams-developer-guide-dsl-joins-kstream-ktable"><span class="std std-ref">KStream-KTable joins</span></a>.
+                            The only difference is that, for KStream-GlobalKTable joins, the left input record is first &#8220;mapped&#8221; with
+                            a user-supplied <code class="docutils literal"><span class="pre">KeyValueMapper</span></code> into the table&#8217;s keyspace prior to the table lookup.</p>
+                    </div>
+                </div>
+                <div class="section" id="windowing">
+                    <span id="streams-developer-guide-dsl-windowing"></span><h4><a class="toc-backref" href="#id19">Windowing</a><a class="headerlink" href="#windowing" title="Permalink to this headline"></a></h4>
+                    <p>Windowing lets you control how to group records that have the same key for stateful operations such as
+                        <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregations</span></a> or <a class="reference internal" href="#streams-developer-guide-dsl-joins"><span class="std std-ref">joins</span></a> into
+                        so-called windows.  Windows are tracked per record key.</p>
+                    <div class="admonition note">
+                        <p class="first admonition-title">Note</p>
+                        <p class="last">A related operation is <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateless"><span class="std std-ref">grouping</span></a>, which groups all
+                            records that have the same key to ensure that data is properly partitioned (&#8220;keyed&#8221;) for subsequent operations.
+                            Once grouped, windowing allows you to further sub-group the records of a key.</p>
+                    </div>
+                    <p>For example, in join operations, a windowing state store is used to store all the records received so far within the
+                        defined window boundary.  In aggregating operations, a windowing state store is used to store the latest aggregation
+                        results per window.
+                        Old records in the state store are purged after the specified
+                        <a class="reference internal" href="../concepts.html#streams-concepts-windowing"><span class="std std-ref">window retention period</span></a>.
+                        Kafka Streams guarantees to keep a window for at least this specified time; the default value is one day and can be
+                        changed via <code class="docutils literal"><span class="pre">Windows#until()</span></code> and <code class="docutils literal"><span class="pre">SessionWindows#until()</span></code>.</p>
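+                    <p>For example, the following minimal sketch raises the retention of a 5-minute window from the
+                        1-day default to two days, so that late-arriving records can still be handled:</p>
+                    <div class="highlight-java"><div class="highlight"><pre><span></span>import java.util.concurrent.TimeUnit;
+import org.apache.kafka.streams.kstream.TimeWindows;
+
+// Keep the data of each 5-minute window around for at least 2 days (instead of the 1-day default).
+TimeWindows.of(TimeUnit.MINUTES.toMillis(5))
+           .until(TimeUnit.DAYS.toMillis(2));
+</pre></div>
+                    </div>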
+                    <p>The DSL supports the following types of windows:</p>
+                    <table border="1" class="docutils">
+                        <colgroup>
+                            <col width="34%" />
+                            <col width="10%" />
+                            <col width="56%" />
+                        </colgroup>
+                        <thead valign="bottom">
+                        <tr class="row-odd"><th class="head">Window name</th>
+                            <th class="head">Behavior</th>
+                            <th class="head">Short description</th>
+                        </tr>
+                        </thead>
+                        <tbody valign="top">
+                        <tr class="row-even"><td><a class="reference internal" href="#windowing-tumbling"><span class="std std-ref">Tumbling time window</span></a></td>
+                            <td>Time-based</td>
+                            <td>Fixed-size, non-overlapping, gap-less windows</td>
+                        </tr>
+                        <tr class="row-odd"><td><a class="reference internal" href="#windowing-hopping"><span class="std std-ref">Hopping time window</span></a></td>
+                            <td>Time-based</td>
+                            <td>Fixed-size, overlapping windows</td>
+                        </tr>
+                        <tr class="row-even"><td><a class="reference internal" href="#windowing-sliding"><span class="std std-ref">Sliding time window</span></a></td>
+                            <td>Time-based</td>
+                            <td>Fixed-size, overlapping windows that work on differences between record timestamps</td>
+                        </tr>
+                        <tr class="row-odd"><td><a class="reference internal" href="#windowing-session"><span class="std std-ref">Session window</span></a></td>
+                            <td>Session-based</td>
+                            <td>Dynamically-sized, non-overlapping, data-driven windows</td>
+                        </tr>
+                        </tbody>
+                    </table>
+                    <div class="section" id="tumbling-time-windows">
+                        <span id="windowing-tumbling"></span><h5><a class="toc-backref" href="#id20">Tumbling time windows</a><a class="headerlink" href="#tumbling-time-windows" title="Permalink to this headline"></a></h5>
+                        <p>Tumbling time windows are a special case of hopping time windows and, like the latter, are windows based on time
+                            intervals.  They model fixed-size, non-overlapping, gap-less windows.
+                            A tumbling window is defined by a single property: the window&#8217;s <em>size</em>.
+                            A tumbling window is a hopping window whose window size is equal to its advance interval.
+                            Since tumbling windows never overlap, a data record will belong to one and only one window.</p>
+                        <div class="figure align-center" id="id3">
+                            <a class="reference internal image-reference" href="../../../images/streams-time-windows-tumbling.png"><img alt="../../../images/streams-time-windows-tumbling.png" src="../../../images/streams-time-windows-tumbling.png" style="width: 400pt;" /></a>
+                            <p class="caption"><span class="caption-text">This diagram shows windowing a stream of data records with tumbling windows.  Windows do not overlap because, by
+definition, the advance interval is identical to the window size.  In this diagram the time numbers represent minutes;
+e.g. t=5 means &#8220;at the five-minute mark&#8221;.  In reality, the unit of time in Kafka Streams is milliseconds, which means
+the time numbers would need to be multiplied by 60 * 1,000 to convert from minutes to milliseconds (e.g. t=5 would
+become t=300,000).</span></p>
+                        </div>
+                        <p>Tumbling time windows are <em>aligned to the epoch</em>, with the lower interval bound being inclusive and the upper bound
+                            being exclusive.  &#8220;Aligned to the epoch&#8221; means that the first window starts at timestamp zero.  For example, tumbling
+                            windows with a size of 5000ms have predictable window boundaries <code class="docutils literal"><span class="pre">[0;5000),[5000;10000),...</span></code> &#8212; and <strong>not</strong>
+                            <code class="docutils literal"><span class="pre">[1000;6000),[6000;11000),...</span></code> or even something &#8220;random&#8221; like <code class="docutils literal"><span class="pre">[1452;6452),[6452;11452),...</span></code>.</p>
+                        <p>The following code defines a tumbling window with a size of 5 minutes:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.TimeWindows</span><span class="o">;</span>
+
+<span class="c1">// A tumbling time window with a size of 5 minutes (and, by definition, an implicit</span>
+<span class="c1">// advance interval of 5 minutes).</span>
+<span class="kt">long</span> <span class="n">windowSizeMs</span> <span class="o">=</span> <span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">);</span> <span class="c1">// 5 * 60 * 1000L</span>
+<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">);</span>
+
+<span class="c1">// The above is equivalent to the following code:</span>
+<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">);</span>
+</pre></div>
+                        </div>
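+                        <p>Such a window is then applied in a windowed aggregation via <code class="docutils literal"><span class="pre">windowedBy()</span></code>.
+                            The following is a minimal sketch, assuming a hypothetical input stream named <code class="docutils literal"><span class="pre">stream</span></code>,
+                            that counts records per key in 5-minute tumbling windows:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>import java.util.concurrent.TimeUnit;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.TimeWindows;
+import org.apache.kafka.streams.kstream.Windowed;
+
+KStream&lt;String, Long&gt; stream = ...;
+
+// Count records per key, sub-grouped into 5-minute tumbling windows.
+KTable&lt;Windowed&lt;String&gt;, Long&gt; counts = stream
+    .groupByKey()
+    .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5)))
+    .count();
+</pre></div>
+                        </div>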
+                    </div>
+                    <div class="section" id="hopping-time-windows">
+                        <span id="windowing-hopping"></span><h5><a class="toc-backref" href="#id21">Hopping time windows</a><a class="headerlink" href="#hopping-time-windows" title="Permalink to this headline"></a></h5>
+                        <p>Hopping time windows are windows based on time intervals.  They model fixed-size, (possibly) overlapping windows.
+                            A hopping window is defined by two properties: the window&#8217;s <em>size</em> and its <em>advance interval</em> (aka &#8220;hop&#8221;).  The advance
+                            interval specifies by how much a window moves forward relative to the previous one.  For example, you can configure a
+                            hopping window with a size of 5 minutes and an advance interval of 1 minute.  Since hopping windows can overlap &#8211; and in
+                            general they do &#8211; a data record may belong to more than one such window.</p>
+                        <div class="admonition note">
+                            <p class="first admonition-title">Note</p>
+                            <p class="last"><strong>Hopping windows vs. sliding windows:</strong>
+                                Hopping windows are sometimes called &#8220;sliding windows&#8221; in other stream processing tools.  Kafka Streams follows the
+                                terminology in academic literature, where the semantics of sliding windows are different to those of hopping windows.</p>
+                        </div>
+                        <p>The following code defines a hopping window with a size of 5 minutes and an advance interval of 1 minute:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.TimeWindows</span><span class="o">;</span>
+
+<span class="c1">// A hopping time window with a size of 5 minutes and an advance interval of 1 minute.</span>
+<span class="c1">// The window&#39;s name -- the string parameter -- is used to e.g. name the backing state store.</span>
+<span class="kt">long</span> <span class="n">windowSizeMs</span> <span class="o">=</span> <span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">);</span> <span class="c1">// 5 * 60 * 1000L</span>
+<span class="kt">long</span> <span class="n">advanceMs</span> <span class="o">=</span>    <span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span> <span class="c1">// 1 * 60 * 1000L</span>
+<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">advanceMs</span><span class="o">);</span>
+</pre></div>
+                        </div>
+                        <div class="figure align-center" id="id4">
+                            <a class="reference internal image-reference" href="../../../images/streams-time-windows-hopping.png"><img alt="../../../images/streams-time-windows-hopping.png" src="../../../images/streams-time-windows-hopping.png" style="width: 400pt;" /></a>
+                            <p class="caption"><span class="caption-text">This diagram shows windowing a stream of data records with hopping windows.  In this diagram the time numbers
+represent minutes; e.g. t=5 means &#8220;at the five-minute mark&#8221;.  In reality, the unit of time in Kafka Streams is
+milliseconds, which means the time numbers would need to be multiplied by 60 * 1,000 to convert from minutes to
+milliseconds (e.g. t=5 would become t=300,000).</span></p>
+                        </div>
+                        <p>Hopping time windows are <em>aligned to the epoch</em>, with the lower interval bound being inclusive and the upper bound
+                            being exclusive.  &#8220;Aligned to the epoch&#8221; means that the first window starts at timestamp zero.  For example, hopping
+                            windows with a size of 5000ms and an advance interval (&#8220;hop&#8221;) of 3000ms have predictable window boundaries
+                            <code class="docutils literal"><span class="pre">[0;5000),[3000;8000),...</span></code> &#8212; and <strong>not</strong> <code class="docutils literal"><span class="pre">[1000;6000),[4000;9000),...</span></code> or even something &#8220;random&#8221; like
+                            <code class="docutils literal"><span class="pre">[1452;6452),[4452;9452),...</span></code>.</p>
+                        <p>Unlike the non-windowed aggregates that we have seen previously, windowed aggregates return a <em>windowed KTable</em> whose key
+                            type is <code class="docutils literal"><span class="pre">Windowed&lt;K&gt;</span></code>.  This is to differentiate aggregate values with the same key from different windows.  The
+                            corresponding window instance and the embedded key can be retrieved as <code class="docutils literal"><span class="pre">Windowed#window()</span></code> and <code class="docutils literal"><span class="pre">Windowed#key()</span></code>,
+                            respectively.</p>
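+                        <p>As a minimal sketch, assuming a windowed count like the tumbling-window example above, the
+                            window instance and the embedded key can be unpacked as follows:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>KTable&lt;Windowed&lt;String&gt;, Long&gt; counts = ...;
+
+counts.toStream().foreach((windowedKey, count) -&gt; {
+    String key = windowedKey.key();                    // the original record key
+    long windowStartMs = windowedKey.window().start(); // window start, in epoch milliseconds
+    long windowEndMs = windowedKey.window().end();     // window end, in epoch milliseconds
+});
+</pre></div>
+                        </div>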
+                    </div>
+                    <div class="section" id="sliding-time-windows">
+                        <span id="windowing-sliding"></span><h5><a class="toc-backref" href="#id22">Sliding time windows</a><a class="headerlink" href="#sliding-time-windows" title="Permalink to this headline"></a></h5>
+                        <p>Sliding windows are actually quite different from hopping and tumbling windows.  In Kafka Streams, sliding windows
+                            are used only for <a class="reference internal" href="#streams-developer-guide-dsl-joins"><span class="std std-ref">join operations</span></a>, and can be specified through the
+                            <code class="docutils literal"><span class="pre">JoinWindows</span></code> class.</p>
+                        <p>A sliding window models a fixed-size window that slides continuously over the time axis; here, two data records are
+                            said to be included in the same window if (in the case of symmetric windows) the difference of their timestamps is
+                            within the window size.  Thus, sliding windows are not aligned to the epoch, but to the data record timestamps.  In
+                            contrast to hopping and tumbling windows, the lower and upper window time interval bounds of sliding windows are
+                            <em>both inclusive</em>.</p>
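+                        <p>For example, the following minimal sketch defines symmetric join windows under which two
+                            records join if their timestamps differ by at most one minute:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>import java.util.concurrent.TimeUnit;
+import org.apache.kafka.streams.kstream.JoinWindows;
+
+// Records of the two streams are joined if their timestamps are at most 1 minute apart.
+JoinWindows.of(TimeUnit.MINUTES.toMillis(1));
+</pre></div>
+                        </div>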
+                    </div>
+                    <div class="section" id="session-windows">
+                        <span id="windowing-session"></span><h5><a class="toc-backref" href="#id23">Session Windows</a><a class="headerlink" href="#session-windows" title="Permalink to this headline"></a></h5>
+                        <p>Session windows are used to aggregate key-based events into so-called <em>sessions</em>, the process of which is referred to
+                            as <em>sessionization</em>.  Sessions represent a <strong>period of activity</strong> separated by a defined <strong>gap of inactivity</strong> (or
+                            &#8220;idleness&#8221;).  Any events processed that fall within the inactivity gap of any existing sessions are merged into the
+                            existing sessions.  If an event falls outside of the session gap, then a new session will be created.</p>
+                        <p>Session windows are different from the other window types in that:</p>
+                        <ul class="simple">
+                            <li>all windows are tracked independently across keys &#8211; e.g. windows of different keys typically have different start
+                                and end times</li>
+                            <li>their window sizes vary &#8211; even windows for the same key typically have different sizes</li>
+                        </ul>
+                        <p>The prime area of application for session windows is <strong>user behavior analysis</strong>.  Session-based analyses can range from
+                            simple metrics (e.g. count of user visits on a news website or social platform) to more complex metrics (e.g. customer
+                            conversion funnel and event flows).</p>
+                        <p>The following code defines a session window with an inactivity gap of 5 minutes:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.concurrent.TimeUnit</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.SessionWindows</span><span class="o">;</span>
+
+<span class="c1">// A session window with an inactivity gap of 5 minutes.</span>
+<span class="n">SessionWindows</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">TimeUnit</span><span class="o">.</span><span class="na">MINUTES</span><span class="o">.</span><span class="na">toMillis</span><span class="o">(</span><span class="mi">5</span><span class="o">));</span>
+</pre></div>
+                        </div>
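+                        <p>Such a session window can then be applied in an aggregation via <code class="docutils literal"><span class="pre">windowedBy()</span></code>.
+                            The following is a minimal sketch, assuming a hypothetical stream <code class="docutils literal"><span class="pre">userEvents</span></code> keyed by user ID,
+                            that counts events per user and per session:</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span>import java.util.concurrent.TimeUnit;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.SessionWindows;
+import org.apache.kafka.streams.kstream.Windowed;
+
+KStream&lt;String, String&gt; userEvents = ...;  // hypothetical stream of user activity events, keyed by user ID
+
+// Count events per user and per session, where sessions are separated by 5 minutes of inactivity.
+KTable&lt;Windowed&lt;String&gt;, Long&gt; sessionCounts = userEvents
+    .groupByKey()
+    .windowedBy(SessionWindows.with(TimeUnit.MINUTES.toMillis(5)))
+    .count();
+</pre></div>
+                        </div>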
+                        <p>Given the previous session window example, here&#8217;s what would happen on an input stream of six records.
+                            When the first three records arrive (upper part of the diagram below), we&#8217;d have three sessions (see lower part)
+                            after having processed those records: two for the green record key, with one session starting and ending at the
+                            0-minute mark (it only looks as if the session extends from 0 to 1 because of the illustration), and another starting and
+                            ending at the 6-minute mark; and one session for the blue record key, starting and ending at the 2-minute mark.</p>
+                        <div class="figure align-center" id="id5">
+                            <a class="reference internal image-reference" href="../../../images/streams-session-windows-01.png"><img alt="../../../images/streams-session-windows-01.png" src="../../../images/streams-session-windows-01.png" style="width: 400pt;" /></a>
+                            <p class="caption"><span class="caption-text">Detected sessions after having received three input records: two records for the green record key at t=0 and t=6, and
+one record for the blue record key at t=2.
+In this diagram the time numbers represent minutes; e.g.  t=5 means &#8220;at the five-minute mark&#8221;.  In reality, the unit
+of time in Kafka Streams is milliseconds, which means the time numbers would need to be multiplied by 60 * 1,000 to
+convert from minutes to milliseconds (e.g. t=5 would become t=300,000).</span></p>
+                        </div>
+                        <p>If we then receive three additional records (including two late-arriving records), the two existing sessions for the
+                            green record key will be merged into a single session starting at time 0 and ending at time 6,
+                            consisting of a total of three records.  The existing session for the blue record key will be extended to end at time 5,
+                            consisting of a total of two records.  And, finally, there will be a new session for the blue key starting and ending at
+                            time 11.</p>
+                        <div class="figure align-center" id="id6">
+                            <a class="reference internal image-reference" href="../../../images/streams-session-windows-02.png"><img alt="../../../images/streams-session-windows-02.png" src="../../../images/streams-session-windows-02.png" style="width: 400pt;" /></a>
+                            <p class="caption"><span class="caption-text">Detected sessions after having received six input records.  Note the two late-arriving data records at t=4 (green) and
+t=5 (blue), which lead to a merge of sessions and an extension of a session, respectively.</span></p>
+                        </div>
+                    </div>
+                </div>
+            </div>
+            <div class="section" id="applying-processors-and-transformers-processor-api-integration">
+                <span id="streams-developer-guide-dsl-process"></span><h3><a class="toc-backref" href="#id24">Applying processors and transformers (Processor API integration)</a><a class="headerlink" href="#applying-processors-and-transformers-processor-api-integration" title="Permalink to this headline"></a></h3>
+                <p>Beyond the aforementioned <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateless"><span class="std std-ref">stateless</span></a> and
+                    <a class="reference internal" href="#streams-developer-guide-dsl-transformations-stateless"><span class="std std-ref">stateful</span></a> transformations, you may also
+                    leverage the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> from the DSL.
+                    There are a number of scenarios where this may be helpful:</p>
+                <ul class="simple">
+                    <li><strong>Customization:</strong> You need to implement special, customized logic that is not, or not yet, available in the DSL.</li>
+                    <li><strong>Combining ease-of-use with full flexibility where it&#8217;s needed:</strong> Even though you generally prefer to use
+                        the expressiveness of the DSL, there are certain steps in your processing that require more flexibility and
+                        tinkering than the DSL provides.  For example, only the Processor API provides access to a
+                        <a class="reference internal" href="../faq.html#streams-faq-processing-record-metadata"><span class="std std-ref">record&#8217;s metadata</span></a> such as its topic, partition, and offset information.
+                        However, you don&#8217;t want to switch completely to the Processor API just because of that.</li>
+                    <li><strong>Migrating from other tools:</strong> You are migrating from other stream processing technologies that provide an
+                        imperative API, and migrating some of your legacy code to the Processor API is faster and/or easier than migrating
+                        completely to the DSL right away.</li>
+                </ul>
+                <table border="1" class="non-scrolling-table width-100-percent docutils">
+                    <colgroup>
+                        <col width="19%" />
+                        <col width="81%" />
+                    </colgroup>
+                    <thead valign="bottom">
+                    <tr class="row-odd"><th class="head">Transformation</th>
+                        <th class="head">Description</th>
+                    </tr>
+                    </thead>
+                    <tbody valign="top">
+                    <tr class="row-even"><td><p class="first"><strong>Process</strong></p>
+                        <ul class="last simple">
+                            <li>KStream -&gt; void</li>
+                        </ul>
+                    </td>
+                        <td><p class="first"><strong>Terminal operation.</strong>  Applies a <code class="docutils literal"><span class="pre">Processor</span></code> to each record.
+                            <code class="docutils literal"><span class="pre">process()</span></code> allows you to leverage the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> from the DSL.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#process-org.apache.kafka.streams.processor.ProcessorSupplier-java.lang.String...-">details</a>)</p>
+                            <p>This is essentially equivalent to adding the <code class="docutils literal"><span class="pre">Processor</span></code> via <code class="docutils literal"><span class="pre">Topology#addProcessor()</span></code> to your
+                                <a class="reference internal" href="../concepts.html#streams-concepts-processor-topology"><span class="std std-ref">processor topology</span></a>.</p>
+                            <p class="last">An example is available in the
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#process-org.apache.kafka.streams.processor.ProcessorSupplier-java.lang.String...-">javadocs</a>.</p>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td><p class="first"><strong>Transform</strong></p>
+                        <ul class="last simple">
+                            <li>KStream -&gt; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Applies a <code class="docutils literal"><span class="pre">Transformer</span></code> to each record.
+                            <code class="docutils literal"><span class="pre">transform()</span></code> allows you to leverage the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> from the DSL.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transform-org.apache.kafka.streams.kstream.TransformerSupplier-java.lang.String...-">details</a>)</p>
+                            <p>Each input record is transformed into zero, one, or more output records (similar to the stateless <code class="docutils literal"><span class="pre">flatMap</span></code>).
+                                The <code class="docutils literal"><span class="pre">Transformer</span></code> must return <code class="docutils literal"><span class="pre">null</span></code> for zero output.
+                                You can modify the record&#8217;s key and value, including their types.</p>
+                            <p><strong>Marks the stream for data re-partitioning:</strong>
+                                Applying a grouping or a join after <code class="docutils literal"><span class="pre">transform</span></code> will result in re-partitioning of the records.
+                                If possible use <code class="docutils literal"><span class="pre">transformValues</span></code> instead, which will not cause data re-partitioning.</p>
+                            <p><code class="docutils literal"><span class="pre">transform</span></code> is essentially equivalent to adding the <code class="docutils literal"><span class="pre">Transformer</span></code> via <code class="docutils literal"><span class="pre">Topology#addProcessor()</span></code> to your
+                                <a class="reference internal" href="../concepts.html#streams-concepts-processor-topology"><span class="std std-ref">processor topology</span></a>.</p>
+                            <p class="last">An example is available in the
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transform-org.apache.kafka.streams.kstream.TransformerSupplier-java.lang.String...-">javadocs</a>.
+                               </p>
+                        </td>
+                    </tr>
+                    <tr class="row-even"><td><p class="first"><strong>Transform (values only)</strong></p>
+                        <ul class="last simple">
+                            <li>KStream -&gt; KStream</li>
+                        </ul>
+                    </td>
+                        <td><p class="first">Applies a <code class="docutils literal"><span class="pre">ValueTransformer</span></code> to each record, while retaining the key of the original record.
+                            <code class="docutils literal"><span class="pre">transformValues()</span></code> allows you to leverage the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> from the DSL.
+                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transformValues-org.apache.kafka.streams.kstream.ValueTransformerSupplier-java.lang.String...-">details</a>)</p>
+                            <p>Each input record is transformed into exactly one output record (zero output records or multiple output records are not possible).
+                                The <code class="docutils literal"><span class="pre">ValueTransformer</span></code> may return <code class="docutils literal"><span class="pre">null</span></code> as the new value for a record.</p>
+                            <p><code class="docutils literal"><span class="pre">transformValues</span></code> is preferable to <code class="docutils literal"><span class="pre">transform</span></code> because it will not cause data re-partitioning.</p>
+                            <p><code class="docutils literal"><span class="pre">transformValues</span></code> is essentially equivalent to adding the <code class="docutils literal"><span class="pre">ValueTransformer</span></code> via <code class="docutils literal"><span class="pre">Topology#addProcessor()</span></code> to your
+                                <a class="reference internal" href="../concepts.html#streams-concepts-processor-topology"><span class="std std-ref">processor topology</span></a>.</p>
+                            <p class="last">An example is available in the
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transformValues-org.apache.kafka.streams.kstream.ValueTransformerSupplier-java.lang.String...-">javadocs</a>.</p>
+                        </td>
+                    </tr>
+                    </tbody>
+                </table>
+                <p>The following example shows how to leverage, via the <code class="docutils literal"><span class="pre">KStream#process()</span></code> method, a custom <code class="docutils literal"><span class="pre">Processor</span></code> that sends an
+                    email notification whenever a page view count reaches a predefined threshold.</p>
+                <p>First, we need to implement a custom stream processor, <code class="docutils literal"><span class="pre">PopularPageEmailAlert</span></code>, that implements the <code class="docutils literal"><span class="pre">Processor</span></code>
+                    interface:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// A processor that sends an alert message about a popular page to a configurable email address</span>
+<span class="kd">public</span> <span class="kd">class</span> <span class="nc">PopularPageEmailAlert</span> <span class="kd">implements</span> <span class="n">Processor</span><span class="o">&lt;</span><span class="n">PageId</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="o">{</span>
+
+  <span class="kd">private</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">emailAddress</span><span class="o">;</span>
+  <span class="kd">private</span> <span class="n">ProcessorContext</span> <span class="n">context</span><span class="o">;</span>
+
+  <span class="kd">public</span> <span class="nf">PopularPageEmailAlert</span><span class="o">(</span><span class="n">String</span> <span class="n">emailAddress</span><span class="o">)</span> <span class="o">{</span>
+    <span class="k">this</span><span class="o">.</span><span class="na">emailAddress</span> <span class="o">=</span> <span class="n">emailAddress</span><span class="o">;</span>
+  <span class="o">}</span>
+
+  <span class="nd">@Override</span>
+  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">init</span><span class="o">(</span><span class="n">ProcessorContext</span> <span class="n">context</span><span class="o">)</span> <span class="o">{</span>
+    <span class="k">this</span><span class="o">.</span><span class="na">context</span> <span class="o">=</span> <span class="n">context</span><span class="o">;</span>
+
+    <span class="c1">// Here you would perform any additional initializations such as setting up an email client.</span>
+  <span class="o">}</span>
+
+  <span class="nd">@Override</span>
+  <span class="kt">void</span> <span class="nf">process</span><span class="o">(</span><span class="n">PageId</span> <span class="n">pageId</span><span class="o">,</span> <span class="n">Long</span> <span class="n">count</span><span class="o">)</span> <span class="o">{</span>
+    <span class="c1">// Here you would format and send the alert email.</span>
+    <span class="c1">//</span>
+    <span class="c1">// In this specific example, you would be able to include information about the page&#39;s ID and its view count</span>
+    <span class="c1">// (because the class implements `Processor&lt;PageId, Long&gt;`).</span>
+  <span class="o">}</span>
+
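+  <span class="nd">@Override</span>
+  <span class="nd">@Deprecated</span>
+  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">punctuate</span><span class="o">(</span><span class="kt">long</span> <span class="n">timestamp</span><span class="o">)</span> <span class="o">{</span>
+    <span class="c1">// No-op.  The `Processor` interface in Kafka 1.0 still declares the deprecated `punctuate()`</span>
+    <span class="c1">// method, so an (empty) implementation is provided here.  For scheduled actions, prefer</span>
+    <span class="c1">// `ProcessorContext#schedule()` with a `Punctuator` instead.</span>
+  <span class="o">}</span>
+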
+  <span class="nd">@Override</span>
+  <span class="kt">void</span> <span class="nf">close</span><span class="o">()</span> <span class="o">{</span>
+    <span class="c1">// Any code for clean up would go here.  This processor instance will not be used again after this call.</span>
+  <span class="o">}</span>
+
+<span class="o">}</span>
+</pre></div>
+                </div>
+                <div class="admonition tip">
+                    <p><b>Tip</b></p>
+                    <p class="last">Even though we do not demonstrate it in this example, a stream processor can access any available state stores by
+                        calling <code class="docutils literal"><span class="pre">ProcessorContext#getStateStore()</span></code>.  Only such state stores are available that (1) have been named in the
+                        corresponding <code class="docutils literal"><span class="pre">KStream#process()</span></code> method call (note that this is a different method than <code class="docutils literal"><span class="pre">Processor#process()</span></code>),
+                        plus (2) all global stores.  Note that global stores do not need to be attached explicitly;  however, they only
+                        allow for read-only access.</p>
+                </div>
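+                <p>For illustration only, here is a minimal sketch of how such a store could be attached and then
+                    accessed from within the processor.  The store name <code class="docutils literal"><span class="pre">&quot;email-alert-state&quot;</span></code>, its key/value types, and the
+                    <code class="docutils literal"><span class="pre">store</span></code> field are hypothetical and not part of the example above (as are the <code class="docutils literal"><span class="pre">builder</span></code> and
+                    <code class="docutils literal"><span class="pre">stream</span></code> variables):</p>
+                <div class="highlight-java"><div class="highlight"><pre>// Hypothetical setup: register a state store with the topology ...
+StoreBuilder&lt;KeyValueStore&lt;String, Long&gt;&gt; alertStore =
+    Stores.keyValueStoreBuilder(
+        Stores.persistentKeyValueStore(&quot;email-alert-state&quot;),
+        Serdes.String(),
+        Serdes.Long());
+builder.addStateStore(alertStore);
+
+// ... name the store in the KStream#process() call so that the processor may access it ...
+stream.process(() -&gt; new PopularPageEmailAlert(&quot;alerts@yourcompany.com&quot;), &quot;email-alert-state&quot;);
+
+// ... and retrieve it inside Processor#init() via the ProcessorContext
+// (assumes a field: private KeyValueStore&lt;String, Long&gt; store;).
+@Override
+public void init(ProcessorContext context) {
+  this.context = context;
+  this.store = (KeyValueStore&lt;String, Long&gt;) context.getStateStore(&quot;email-alert-state&quot;);
+}
+</pre></div>
+                </div>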
+                <p>Then we can leverage the <code class="docutils literal"><span class="pre">PopularPageEmailAlert</span></code> processor in the DSL via <code class="docutils literal"><span class="pre">KStream#process</span></code>.</p>
+                <p>In Java 8+, using lambda expressions:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">pageViews</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Send an email notification when the view count of a page reaches one thousand.</span>
+<span class="n">pageViews</span><span class="o">.</span><span class="na">groupByKey</span><span class="o">()</span>
+         <span class="o">.</span><span class="na">count</span><span class="o">()</span>
+         <span class="o">.</span><span class="na">filter</span><span class="o">((</span><span class="n">PageId</span> <span class="n">pageId</span><span class="o">,</span> <span class="n">Long</span> <span class="n">viewCount</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">viewCount</span> <span class="o">==</span> <span class="mi">1000</span><span class="o">)</span>
+         <span class="c1">// PopularPageEmailAlert is your custom processor that implements the</span>
+         <span class="c1">// `Processor` interface, see further down below.</span>
+         <span class="o">.</span><span class="na">process</span><span class="o">(()</span> <span class="o">-&gt;</span> <span class="k">new</span> <span class="n">PopularPageEmailAlert</span><span class="o">(</span><span class="s">&quot;alerts@yourcompany.com&quot;</span><span class="o">));</span>
+</pre></div>
+                </div>
+                <p>In Java 7:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Send an email notification when the view count of a page reaches one thousand.</span>
+<span class="n">pageViews</span><span class="o">.</span><span class="na">groupByKey</span><span class="o">().</span>
+         <span class="o">.</span><span class="na">count</span><span class="o">()</span>
+         <span class="o">.</span><span class="na">filter</span><span class="o">(</span>
+            <span class="k">new</span> <span class="n">Predicate</span><span class="o">&lt;</span><span class="n">PageId</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span>
+              <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">test</span><span class="o">(</span><span class="n">PageId</span> <span class="n">pageId</span><span class="o">,</span> <span class="n">Long</span> <span class="n">viewCount</span><span class="o">)</span> <span class="o">{</span>
+                <span class="k">return</span> <span class="n">viewCount</span> <span class="o">==</span> <span class="mi">1000</span><span class="o">;</span>
+              <span class="o">}</span>
+            <span class="o">})</span>
+         <span class="o">.</span><span class="na">process</span><span class="o">(</span>
+           <span class="k">new</span> <span class="n">ProcessorSupplier</span><span class="o">&lt;</span><span class="n">PageId</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;()</span> <span class="o">{</span>
+             <span class="kd">public</span> <span class="n">Processor</span><span class="o">&lt;</span><span class="n">PageId</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="nf">get</span><span class="o">()</span> <span class="o">{</span>
+               <span class="c1">// PopularPageEmailAlert is your custom processor that implements</span>
+               <span class="c1">// the `Processor` interface, see further down below.</span>
+               <span class="k">return</span> <span class="k">new</span> <span class="n">PopularPageEmailAlert</span><span class="o">(</span><span class="s">&quot;alerts@yourcompany.com&quot;</span><span class="o">);</span>
+             <span class="o">}</span>
+           <span class="o">});</span>
+</pre></div>
+                </div>
+            </div>
+        </div>
+        <div class="section" id="writing-streams-back-to-kafka">
+            <span id="streams-developer-guide-dsl-destinations"></span><h2><a class="toc-backref" href="#id25">Writing streams back to Kafka</a><a class="headerlink" href="#writing-streams-back-to-kafka" title="Permalink to this headline"></a></h2>
+            <p>Any streams and tables may be (continuously) written back to a Kafka topic.  As we will describe in more detail below, the output data might be
+                re-partitioned on its way to Kafka, depending on the situation.</p>
+            <table border="1" class="non-scrolling-table width-100-percent docutils">
+                <colgroup>
+                    <col width="22%" />
+                    <col width="78%" />
+                </colgroup>
+                <thead valign="bottom">
+                <tr class="row-odd"><th class="head">Writing to Kafka</th>
+                    <th class="head">Description</th>
+                </tr>
+                </thead>
+                <tbody valign="top">
+                <tr class="row-even"><td><p class="first"><strong>To</strong></p>
+                    <ul class="last simple">
+                        <li>KStream -&gt; void</li>
+                    </ul>
+                </td>
+                    <td><p class="first"><strong>Terminal operation.</strong>  Write the records to a Kafka topic.
+                        (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#to(java.lang.String)">KStream details</a>)</p>
+                        <p>When to provide SerDes explicitly:</p>
+                        <ul class="simple">
+                            <li>If you do not specify SerDes explicitly, the default SerDes from the
+                                <a class="reference internal" href="config-streams.html#streams-developer-guide-configuration"><span class="std std-ref">configuration</span></a> are used.</li>
+                            <li>You <strong>must specify SerDes explicitly</strong> via the <code class="docutils literal"><span class="pre">Produced</span></code> class if the key and/or value types of the
+                                <code class="docutils literal"><span class="pre">KStream</span></code> do not match the configured default SerDes.</li>
+                            <li>See <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data Types and Serialization</span></a> for information about configuring default SerDes, available SerDes,
+                                and implementing your own custom SerDes.</li>
+                        </ul>
+                        <p>A variant of <code class="docutils literal"><span class="pre">to</span></code> exists that enables you to specify how the data is produced by using a <code class="docutils literal"><span class="pre">Produced</span></code>
+                            instance to specify, for example, a <code class="docutils literal"><span class="pre">StreamPartitioner</span></code> that gives you control over
+                            how output records are distributed across the partitions of the output topic.</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="o">...;</span>
+
+
+<span class="c1">// Write the stream to the output topic, using the configured default key</span>
+<span class="c1">// and value serdes of your `StreamsConfig`.</span>
+<span class="n">stream</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;my-stream-output-topic&quot;</span><span class="o">);</span>
+
+<span class="c1">// Same for table</span>
+<span class="n">table</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;my-table-output-topic&quot;</span><span class="o">);</span>
+
+<span class="c1">// Write the stream to the output topic, using explicit key and value serdes,</span>
+<span class="c1">// (thus overriding the defaults of your `StreamsConfig`).</span>
+<span class="n">stream</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;my-stream-output-topic&quot;</span><span class="o">,</span> <span class="n">Produced</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span cla [...]
+</pre></div>
+                        </div>
+                        <p><strong>Causes data re-partitioning if any of the following conditions is true:</strong></p>
+                        <ol class="last arabic simple">
+                            <li>If the output topic has a different number of partitions than the stream/table.</li>
+                            <li>If the <code class="docutils literal"><span class="pre">KStream</span></code> was marked for re-partitioning.</li>
+                            <li>If you provide a custom <code class="docutils literal"><span class="pre">StreamPartitioner</span></code> to explicitly control how to distribute the output records
+                                across the partitions of the output topic.</li>
+                            <li>If the key of an output record is <code class="docutils literal"><span class="pre">null</span></code>.</li>
+                        </ol>
+                    </td>
+                </tr>
+                <tr class="row-odd"><td><p class="first"><strong>Through</strong></p>
+                    <ul class="last simple">
+                        <li>KStream -&gt; KStream</li>
+                        <li>KTable -&gt; KTable</li>
+                    </ul>
+                </td>
+                    <td><p class="first">Write the records to a Kafka topic and create a new stream/table from that topic.
+                        Essentially a shorthand for <code class="docutils literal"><span class="pre">KStream#to()</span></code> followed by <code class="docutils literal"><span class="pre">StreamsBuilder#stream()</span></code>, same for tables.
+                        (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#through(java.lang.String)">KStream details</a>)</p>
+                        <p>When to provide SerDes explicitly:</p>
+                        <ul class="simple">
+                            <li>If you do not specify SerDes explicitly, the default SerDes from the
+                                <a class="reference internal" href="config-streams.html#streams-developer-guide-configuration"><span class="std std-ref">configuration</span></a> are used.</li>
+                            <li>You <strong>must specify SerDes explicitly</strong> if the key and/or value types of the <code class="docutils literal"><span class="pre">KStream</span></code> or <code class="docutils literal"><span class="pre">KTable</span></code> do not
+                                match the configured default SerDes.</li>
+                            <li>See <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data Types and Serialization</span></a> for information about configuring default SerDes, available SerDes,
+                                and implementing your own custom SerDes.</li>
+                        </ul>
+                        <p>A variant of <code class="docutils literal"><span class="pre">through</span></code> exists that enables you to specify how the data is produced by using a <code class="docutils literal"><span class="pre">Produced</span></code>
+                            instance to specify, for example, a <code class="docutils literal"><span class="pre">StreamPartitioner</span></code> that gives you control over
+                            how output records are distributed across the partitions of the output topic.</p>
+                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Variant 1: Imagine that your application needs to continue reading and processing</span>
+<span class="c1">// the records after they have been written to a topic via ``to()``.  Here, one option</span>
+<span class="c1">// is to write to an output topic, then read from the same topic by constructing a</span>
+<span class="c1">// new stream from it, and then begin processing it (here: via `map`, for example).</span>
+<span class="n">stream</span><span class="o">.</span><span class="na">to</span><span class="o">(</span><span class="s">&quot;my-stream-output-topic&quot;</span><span class="o">);</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">newStream</span> <span class="o">=</span> <span class="n">builder</span><span class="o">.</span><span class="na">stream</span><span class="o">(</span><span class="s">&quot;my-stream-output-topic&quot;</span><span class="o">).</span><span class="na">map</span><span class="o">(...);</span>
+
+<span class="c1">// Variant 2 (better): Since the above is a common pattern, the DSL provides the</span>
+<span class="c1">// convenience method ``through`` that is equivalent to the code above.</span>
+<span class="c1">// Note that you may need to specify key and value serdes explicitly, which is</span>
+<span class="c1">// not shown in this simple example.</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">newStream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">through</span><span class="o">(</span><span class="s">&quot;user-clicks-topic&quot;</span><span class="o">).</span><span class="na">map</span><span class="o">(...);</span>
+
+<span class="c1">// ``through`` is also available for tables</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">newTable</span> <span class="o">=</span> <span class="n">table</span><span class="o">.</span><span class="na">through</span><span class="o">(</span><span class="s">&quot;my-table-output-topic&quot;</span><span class="o">).</span><span class="na">map</span><span class="o">(...);</span>
+</pre></div>
+                        </div>
+                        <p><strong>Causes data re-partitioning if any of the following conditions is true:</strong></p>
+                        <ol class="last arabic simple">
+                            <li>If the output topic has a different number of partitions than the stream/table.</li>
+                            <li>If the <code class="docutils literal"><span class="pre">KStream</span></code> was marked for re-partitioning.</li>
+                            <li>If you provide a custom <code class="docutils literal"><span class="pre">StreamPartitioner</span></code> to explicitly control how to distribute the output records
+                                across the partitions of the output topic.</li>
+                            <li>If the key of an output record is <code class="docutils literal"><span class="pre">null</span></code>.</li>
+                        </ol>
+                    </td>
+                </tr>
+                </tbody>
+            </table>
+            <div class="admonition note">
+                <p class="first admonition-title">Note</p>
+                <p class="last"><strong>When you want to write to systems other than Kafka:</strong>
+                    Besides writing the data back to Kafka, you can also apply a
+                    <a class="reference internal" href="#streams-developer-guide-dsl-process"><span class="std std-ref">custom processor</span></a> as a stream sink at the end of the processing to, for
+                    example, write to external databases.  Be aware that doing so is not a recommended pattern &#8211; we strongly suggest using the
+                    <a class="reference internal" href="../../connect/index.html#kafka-connect"><span class="std std-ref">Kafka Connect API</span></a> instead.  However, if you do use such a sink processor, please be aware that
+                    it is now your responsibility to guarantee message delivery semantics when talking to such external systems (e.g., to
+                    retry on delivery failure or to prevent message duplication).</p>
+</div>
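+            <p>For illustration, here is a minimal sketch of the <code class="docutils literal"><span class="pre">Produced</span></code> variant with a custom
+                <code class="docutils literal"><span class="pre">StreamPartitioner</span></code> described in the table above.  The routing rule (keys with a
+                <code class="docutils literal"><span class="pre">&quot;hot-&quot;</span></code> prefix are pinned to partition 0) is hypothetical:</p>
+            <div class="highlight-java"><div class="highlight"><pre>// Hypothetical partitioner: pin &quot;hot&quot; keys to partition 0, spread the rest by key hash.
+StreamPartitioner&lt;String, Long&gt; partitioner = (key, value, numPartitions) -&gt;
+    key.startsWith(&quot;hot-&quot;) ? 0 : (key.hashCode() &amp; 0x7fffffff) % numPartitions;
+
+// Write with explicit serdes and the custom partitioner.
+stream.to(&quot;my-stream-output-topic&quot;,
+          Produced.with(Serdes.String(), Serdes.Long(), partitioner));
+</pre></div>
+            </div>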
+</div>
+</div>
+
+
+               </div>
+              </div>
+  <div class="pagination">
+    <a href="/{{version}}/documentation/streams/developer-guide/config-streams" class="pagination__btn pagination__btn__prev">Previous</a>
+    <a href="/{{version}}/documentation/streams/developer-guide/processor-api" class="pagination__btn pagination__btn__next">Next</a>
+  </div>
+</script>
+
+<!--#include virtual="../../../includes/_header.htm" -->
+<!--#include virtual="../../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+  <!--#include virtual="../../../includes/_nav.htm" -->
+  <div class="right">
+    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <ul class="breadcrumbs">
+      <li><a href="/documentation">Documentation</a></li>
+      <li><a href="/documentation/streams">Kafka Streams</a></li>
+      <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+    </ul>
+    <div class="p-content"></div>
+  </div>
+</div>
+<!--#include virtual="../../../includes/_footer.htm" -->
+<script>
+    $(function() {
+        // Show selected style on nav item
+        $('.b-nav__streams').addClass('selected');
+
+        //sticky secondary nav
+        var $navbar = $(".sub-nav-sticky"),
+            y_pos = $navbar.offset().top,
+            height = $navbar.height();
+
+        $(window).scroll(function() {
+            var scrollTop = $(window).scrollTop();
+
+            if (scrollTop > y_pos - height) {
+                $navbar.addClass("navbar-fixed")
+            } else if (scrollTop <= y_pos) {
+                $navbar.removeClass("navbar-fixed")
+            }
+        });
+
+        // Display docs subnav items
+        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+    });
+</script>
diff --git a/docs/streams/developer-guide/index.html b/docs/streams/developer-guide/index.html
new file mode 100644
index 0000000..443ad7d
--- /dev/null
+++ b/docs/streams/developer-guide/index.html
@@ -0,0 +1,104 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <h1>Developer Guide for Kafka Streams</h1>
+    <div class="sub-nav-sticky">
+        <div class="sticky-top">
+            <div style="height:35px">
+                <a href="/{{version}}/documentation/streams/">Introduction</a>
+                <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+                <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
+                <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
+                <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
+            </div>
+        </div>
+    </div>
+
+                
+  <div class="section" id="developer-guide">
+<!-- span id="streams-developer-guide"></span><h1>Developer Guide<a class="headerlink" href="#developer-guide" title="Permalink to this headline"></a></h1 -->
+<p>This developer guide describes how to write, configure, and execute a Kafka Streams application.</p>
+<div class="toctree-wrapper compound">
+<ul>
+<li class="toctree-l1"><a class="reference internal" href="write-streams.html">Writing a Streams Application</a></li>
+<li class="toctree-l1"><a class="reference internal" href="config-streams.html">Configuring a Streams Application</a></li>
+<li class="toctree-l1"><a class="reference internal" href="dsl-api.html">Streams DSL</a></li>
+<li class="toctree-l1"><a class="reference internal" href="processor-api.html">Processor API</a></li>
+<li class="toctree-l1"><a class="reference internal" href="datatypes.html">Data Types and Serialization</a></li>
+<li class="toctree-l1"><a class="reference internal" href="interactive-queries.html">Interactive Queries</a></li>
+<li class="toctree-l1"><a class="reference internal" href="memory-mgmt.html">Memory Management</a></li>
+<li class="toctree-l1"><a class="reference internal" href="running-app.html">Running Streams Applications</a></li>
+<li class="toctree-l1"><a class="reference internal" href="manage-topics.html">Managing Streams Application Topics</a></li>
+<li class="toctree-l1"><a class="reference internal" href="security.html">Streams Security</a></li>
+<li class="toctree-l1"><a class="reference internal" href="app-reset-tool.html">Application Reset Tool</a></li>
+</ul>
+</div>
+</div>
+
+
+               </div>
+              </div>
+
+    <div class="pagination">
+        <a href="/{{version}}/documentation/streams/architecture" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/developer-guide/write-streams" class="pagination__btn pagination__btn__next">Next</a>
+    </div>
+</script>
+
+<!--#include virtual="../../../includes/_header.htm" -->
+<!--#include virtual="../../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+    <!--#include virtual="../../../includes/_nav.htm" -->
+    <div class="right">
+        <!--#include virtual="../../../includes/_docs_banner.htm" -->
+        <ul class="breadcrumbs">
+            <li><a href="/documentation">Documentation</a></li>
+            <li><a href="/documentation/streams">Kafka Streams</a></li>
+        </ul>
+        <div class="p-content"></div>
+    </div>
+</div>
+<!--#include virtual="../../../includes/_footer.htm" -->
+<script>
+    $(function() {
+        // Show selected style on nav item
+        $('.b-nav__streams').addClass('selected');
+
+        //sticky secondary nav
+        var $navbar = $(".sub-nav-sticky"),
+            y_pos = $navbar.offset().top,
+            height = $navbar.height();
+
+        $(window).scroll(function() {
+            var scrollTop = $(window).scrollTop();
+
+            if (scrollTop > y_pos - height) {
+                $navbar.addClass("navbar-fixed")
+            } else if (scrollTop <= y_pos) {
+                $navbar.removeClass("navbar-fixed")
+            }
+        });
+
+        // Display docs subnav items
+        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+    });
+</script>
diff --git a/docs/streams/developer-guide/interactive-queries.html b/docs/streams/developer-guide/interactive-queries.html
new file mode 100644
index 0000000..f93d2d6
--- /dev/null
+++ b/docs/streams/developer-guide/interactive-queries.html
@@ -0,0 +1,530 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+
+    <div class="section" id="interactive-queries">
+        <span id="streams-developer-guide-interactive-queries"></span><h1>Interactive Queries<a class="headerlink" href="#interactive-queries" title="Permalink to this headline"></a></h1>
+        <p>Interactive queries allow you to leverage the state of your application from outside your application. The Kafka Streams API enables your applications to be queryable.</p>
+        <div class="contents local topic" id="table-of-contents">
+            <p class="topic-title first"><b>Table of Contents</b></p>
+            <ul class="simple">
+                <li><a class="reference internal" href="#querying-local-state-stores-for-an-app-instance" id="id3">Querying local state stores for an app instance</a><ul>
+                    <li><a class="reference internal" href="#querying-local-key-value-stores" id="id4">Querying local key-value stores</a></li>
+                    <li><a class="reference internal" href="#querying-local-window-stores" id="id5">Querying local window stores</a></li>
+                    <li><a class="reference internal" href="#querying-local-custom-state-stores" id="id6">Querying local custom state stores</a></li>
+                </ul>
+                </li>
+                <li><a class="reference internal" href="#querying-remote-state-stores-for-the-entire-app" id="id7">Querying remote state stores for the entire app</a><ul>
+                    <li><a class="reference internal" href="#adding-an-rpc-layer-to-your-application" id="id8">Adding an RPC layer to your application</a></li>
+                    <li><a class="reference internal" href="#exposing-the-rpc-endpoints-of-your-application" id="id9">Exposing the RPC endpoints of your application</a></li>
+                    <li><a class="reference internal" href="#discovering-and-accessing-application-instances-and-their-local-state-stores" id="id10">Discovering and accessing application instances and their local state stores</a></li>
+                </ul>
+                </li>
+                <li><a class="reference internal" href="#demo-applications" id="id11">Demo applications</a></li>
+            </ul>
+        </div>
+        <p>The full state of your application is typically <a class="reference internal" href="../architecture.html#streams-architecture-state"><span class="std std-ref">split across many distributed instances of your application</span></a>, and across many state stores that are managed locally by these application instances.</p>
+        <div class="figure align-center">
+            <a class="reference internal image-reference" href="../../../images/streams-interactive-queries-03.png"><img alt="../../../images/streams-interactive-queries-03.png" src="../../../images/streams-interactive-queries-03.png" style="width: 400pt; height: 400pt;" /></a>
+        </div>
+        <p>There are local and remote components to interactively querying the state of your application.</p>
+        <dl class="docutils">
+            <dt>Local state</dt>
+            <dd>An application instance can query the locally managed portion of the state and directly query its own local state stores.  You can use the corresponding local data in other parts of your application code, as long as it doesn&#8217;t require calling the Kafka Streams API.  Querying state stores is always read-only to guarantee that the underlying state stores will never be mutated out-of-band (e.g., you cannot add new entries). State stores should only be mutated by the corresponding processor topology and the input data it operates on.</dd>
+            <dt>Remote state</dt>
+            <dd><p class="first">To query the full state of your application, you must connect the various fragments of the state, including:</p>
+                <ul class="simple">
+                    <li>query local state stores</li>
+                    <li>discover all running instances of your application in the network and their state stores</li>
+                    <li>communicate with these instances over the network (e.g., an RPC layer)</li>
+                </ul>
+                <p class="last">Connecting these fragments enables communication between instances of the same app and communication from other applications for interactive queries. For more information, see <a class="reference internal" href="#streams-developer-guide-interactive-queries-discovery"><span class="std std-ref">Querying remote state stores for the entire app</span></a>.</p>
+            </dd>
+        </dl>
+        <p>Kafka Streams natively provides all of the required functionality for interactively querying the state of your application, with one exception: exposing the full state of your application across its instances over the network. To allow application instances to communicate over the network, you must add a Remote Procedure Call (RPC) layer to your application (e.g., a REST API).</p>
+        <p>This table shows the Kafka Streams native communication support for various procedures.</p>
+        <table border="1" class="docutils">
+            <colgroup>
+                <col width="42%" />
+                <col width="27%" />
+                <col width="31%" />
+            </colgroup>
+            <thead valign="bottom">
+            <tr class="row-odd"><th class="head">Procedure</th>
+                <th class="head">Application instance</th>
+                <th class="head">Entire application</th>
+            </tr>
+            </thead>
+            <tbody valign="top">
+            <tr class="row-even"><td>Query local state stores of an app instance</td>
+                <td>Supported</td>
+                <td>Supported</td>
+            </tr>
+            <tr class="row-odd"><td>Make an app instance discoverable to others</td>
+                <td>Supported</td>
+                <td>Supported</td>
+            </tr>
+            <tr class="row-even"><td>Discover all running app instances and their state stores</td>
+                <td>Supported</td>
+                <td>Supported</td>
+            </tr>
+            <tr class="row-odd"><td>Communicate with app instances over the network (RPC)</td>
+                <td>Supported</td>
+                <td>Not supported (you must provide it yourself, e.g., via an RPC layer)</td>
+            </tr>
+            </tbody>
+        </table>
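+        <p>Regarding the last row: Kafka Streams does not provide the RPC layer itself, but it does let each
+            instance advertise the endpoint of whatever RPC layer you add, so that other instances can discover it.
+            A minimal sketch ahead of the detailed discussion below (the host and port are placeholders):</p>
+        <div class="highlight-java"><div class="highlight"><pre>Properties props = new Properties();
+// Placeholder endpoint: the host and port at which this instance&#39;s own RPC layer listens.
+props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, &quot;myhost.example.com:7070&quot;);
+</pre></div>
+        </div>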
+        <div class="section" id="querying-local-state-stores-for-an-app-instance">
+            <span id="streams-developer-guide-interactive-queries-local-stores"></span><h2><a class="toc-backref" href="#id3">Querying local state stores for an app instance</a><a class="headerlink" href="#querying-local-state-stores-for-an-app-instance" title="Permalink to this headline"></a></h2>
+            <p>A Kafka Streams application typically runs on multiple instances.  The state that is locally available on any given instance is only a subset of the <a class="reference internal" href="../architecture.html#streams-architecture-state"><span class="std std-ref">application&#8217;s entire state</span></a>.  Querying the local stores on an instance will only return data locally available on that particular instance.</p>
+            <p>The method <code class="docutils literal"><span class="pre">KafkaStreams#store(...)</span></code> finds an application instance&#8217;s local state stores by name and type.</p>
+            <div class="figure align-center" id="id1">
+                <a class="reference internal image-reference" href="../../../images/streams-interactive-queries-api-01.png"><img alt="../../../images/streams-interactive-queries-api-01.png" src="../../../images/streams-interactive-queries-api-01.png" style="width: 500pt;" /></a>
+                <p class="caption"><span class="caption-text">Every application instance can directly query any of its local state stores.</span></p>
+            </div>
+            <p>The <em>name</em> of a state store is defined when you create the store. You can create the store explicitly by using the Processor API or implicitly by using stateful operations in the DSL.</p>
+            <p>The <em>type</em> of a state store is defined by <code class="docutils literal"><span class="pre">QueryableStoreType</span></code>. You can access the built-in types via the class <code class="docutils literal"><span class="pre">QueryableStoreTypes</span></code>.
+                Kafka Streams currently has two built-in types:</p>
+            <ul class="simple">
+                <li>A key-value store <code class="docutils literal"><span class="pre">QueryableStoreTypes#keyValueStore()</span></code>, see <a class="reference internal" href="#streams-developer-guide-interactive-queries-local-key-value-stores"><span class="std std-ref">Querying local key-value stores</span></a>.</li>
+                <li>A window store <code class="docutils literal"><span class="pre">QueryableStoreTypes#windowStore()</span></code>, see <a class="reference internal" href="#streams-developer-guide-interactive-queries-local-window-stores"><span class="std std-ref">Querying local window stores</span></a>.</li>
+            </ul>
+            <p>You can also <a class="reference internal" href="#streams-developer-guide-interactive-queries-custom-stores"><span class="std std-ref">implement your own QueryableStoreType</span></a> as described in section <a class="reference internal" href="#streams-developer-guide-interactive-queries-custom-stores"><span class="std std-ref">Querying local custom state stores</span></a>.</p>
+            <div class="admonition note">
+                <p class="first admonition-title">Note</p>
+                <p class="last">Kafka Streams materializes one state store per stream partition. This means your application will potentially manage
+                    many underlying state stores.  The API enables you to query all of the underlying stores without having to know which
+                    partition the data is in.</p>
+            </div>
+            <div class="section" id="querying-local-key-value-stores">
+                <span id="streams-developer-guide-interactive-queries-local-key-value-stores"></span><h3><a class="toc-backref" href="#id4">Querying local key-value stores</a><a class="headerlink" href="#querying-local-key-value-stores" title="Permalink to this headline"></a></h3>
+                <p>To query a local key-value store, you must first create a topology with a key-value store. This example creates a key-value
+                    store named &#8220;CountsKeyValueStore&#8221;. This store will hold the latest count for any word that is found on the topic &#8220;word-count-input&#8221;.</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">textLines</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Define the processing topology (here: WordCount)</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedByWord</span> <span class="o">=</span> <span class="n">textLines</span>
+  <span class="o">.</span><span class="na">flatMapValues</span><span class="o">(</span><span class="n">value</span> <span class="o">-&gt;</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">().</span><span class="na">split</span><span class="o">(</span><span class="s">&quot;\\W+&quot;</span><span class="o">)))</span>
+  <span class="o">.</span><span class="na">groupBy</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">word</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">word</span><span class="o">,</span> <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">stringSerde</span><span class="o">,</span> <span class="n">stringSerde</span><span class="o">));</span>
+
+<span class="c1">// Create a key-value store named &quot;CountsKeyValueStore&quot; for the all-time word counts</span>
+<span class="n">groupedByWord</span><span class="o">.</span><span class="na">count</span><span class="o">(</span><span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span clas [...]
+
+<span class="c1">// Start an instance of the topology</span>
+<span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">builder</span><span class="o">,</span> <span class="n">config</span><span class="o">);</span>
+<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span>
+</pre></div>
+                </div>
+                <p>After the application has started, you can get access to &#8220;CountsKeyValueStore&#8221; and then query it via the <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java">ReadOnlyKeyValueStore</a> API:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Get the key-value store CountsKeyValueStore</span>
+<span class="n">ReadOnlyKeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">keyValueStore</span> <span class="o">=</span>
+    <span class="n">streams</span><span class="o">.</span><span class="na">store</span><span class="o">(</span><span class="s">&quot;CountsKeyValueStore&quot;</span><span class="o">,</span> <span class="n">QueryableStoreTypes</span><span class="o">.</span><span class="na">keyValueStore</span><span class="o">());</span>
+
+<span class="c1">// Get value by key</span>
+<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;count for hello:&quot;</span> <span class="o">+</span> <span class="n">keyValueStore</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">&quot;hello&quot;</span><span class="o">));</span>
+
+<span class="c1">// Get the values for a range of keys available in this application instance</span>
+<span class="n">KeyValueIterator</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">range</span> <span class="o">=</span> <span class="n">keyValueStore</span><span class="o">.</span><span class="na">range</span><span class="o">(</span><span class="s">&quot;all&quot;</span><span class="o">,</span> <span class="s">&quot;streams&quot;</span><span class="o">);</span>
+<span class="k">while</span> <span class="o">(</span><span class="n">range</span><span class="o">.</span><span class="na">hasNext</span><span class="o">())</span> <span class="o">{</span>
+  <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">next</span> <span class="o">=</span> <span class="n">range</span><span class="o">.</span><span class="na">next</span><span class="o">();</span>
+  <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;count for &quot;</span> <span class="o">+</span> <span class="n">next</span><span class="o">.</span><span class="na">key</span> <span class="o">+</span> <span class="s">&quot;: &quot;</span> <span class="o">+</span> <span class="n">value</span><span class="o">);</span>
+<span class="o">}</span>
+
+<span class="c1">// Get the values for all of the keys available in this application instance</span>
+<span class="n">KeyValueIterator</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">range</span> <span class="o">=</span> <span class="n">keyValueStore</span><span class="o">.</span><span class="na">all</span><span class="o">();</span>
+<span class="k">while</span> <span class="o">(</span><span class="n">range</span><span class="o">.</span><span class="na">hasNext</span><span class="o">())</span> <span class="o">{</span>
+  <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">next</span> <span class="o">=</span> <span class="n">range</span><span class="o">.</span><span class="na">next</span><span class="o">();</span>
+  <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;count for &quot;</span> <span class="o">+</span> <span class="n">next</span><span class="o">.</span><span class="na">key</span> <span class="o">+</span> <span class="s">&quot;: &quot;</span> <span class="o">+</span> <span class="n">value</span><span class="o">);</span>
+<span class="o">}</span>
+</pre></div>
+                </div>
+                <p>You can also materialize the results of stateless operators by using the overloaded methods that take a <code class="docutils literal"><span class="pre">queryableStoreName</span></code>
+                    as shown in the example below:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">regionCounts</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// materialize the result of filtering corresponding to odd numbers</span>
+<span class="c1">// the &quot;queryableStoreName&quot; can be subsequently queried.</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">oddCounts</span> <span class="o">=</span> <span class="n">numberLines</span><span class="o">.</span><span class="na">filter</span><span class="o">((</span><span class="n">region</span><span class="o">,</span> <span class="n">count</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">( [...]
+  <span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;queryableStoreName&quot;</span><span class="o">));</span>
+
+<span class="c1">// do not materialize the result of filtering corresponding to even numbers</span>
+<span class="c1">// this means that these results will not be materialized and cannot be queried.</span>
+<span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">oddCounts</span> <span class="o">=</span> <span class="n">numberLines</span><span class="o">.</span><span class="na">filter</span><span class="o">((</span><span class="n">region</span><span class="o">,</span> <span class="n">count</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">( [...]
+</pre></div>
+                </div>
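+                <p>Once materialized under that name, the filtered result can be retrieved like any other key-value
+                    store.  A short sketch, reusing the running <code class="docutils literal"><span class="pre">streams</span></code> instance from above (the key
+                    <code class="docutils literal"><span class="pre">&quot;some-region&quot;</span></code> is a placeholder):</p>
+                <div class="highlight-java"><div class="highlight"><pre>// Retrieve the store by the name that was passed to Materialized.as(...)
+ReadOnlyKeyValueStore&lt;String, Integer&gt; oddCountsStore =
+    streams.store(&quot;queryableStoreName&quot;, QueryableStoreTypes.keyValueStore());
+Integer count = oddCountsStore.get(&quot;some-region&quot;);  // placeholder key
+</pre></div>
+                </div>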
+            </div>
+            <div class="section" id="querying-local-window-stores">
+                <span id="streams-developer-guide-interactive-queries-local-window-stores"></span><h3><a class="toc-backref" href="#id5">Querying local window stores</a><a class="headerlink" href="#querying-local-window-stores" title="Permalink to this headline"></a></h3>
+                <p>A window store will potentially have many results for any given key because the key can be present in multiple windows.
+                    However, there is only one result per window for a given key.</p>
+                <p>To query a local window store, you must first create a topology with a window store. This example creates a window store
+                    named &#8220;CountsWindowStore&#8221; that contains the counts for words in 1-minute windows.</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">textLines</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Define the processing topology (here: WordCount)</span>
+<span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedByWord</span> <span class="o">=</span> <span class="n">textLines</span>
+  <span class="o">.</span><span class="na">flatMapValues</span><span class="o">(</span><span class="n">value</span> <span class="o">-&gt;</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">().</span><span class="na">split</span><span class="o">(</span><span class="s">&quot;\\W+&quot;</span><span class="o">)))</span>
+  <span class="o">.</span><span class="na">groupBy</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">word</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">word</span><span class="o">,</span> <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">stringSerde</span><span class="o">,</span> <span class="n">stringSerde</span><span class="o">));</span>
+
+<span class="c1">// Create a window state store named &quot;CountsWindowStore&quot; that contains the word counts for every minute</span>
+<span class="n">groupedByWord</span><span class="o">.</span><span class="na">windowedBy</span><span class="o">(</span><span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="mi">60000</span><span class="o">))</span>
+  <span class="o">.</span><span class="na">count</span><span class="o">(</span><span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">WindowStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class="o">(</span><span class="s">&quot;Co [...]
+</pre></div>
+                </div>
+                <p>After the application has started, you can get access to &#8220;CountsWindowStore&#8221; and then query it via the <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyWindowStore.java">ReadOnlyWindowStore</a> API:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Get the window store named &quot;CountsWindowStore&quot;</span>
+<span class="n">ReadOnlyWindowStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">windowStore</span> <span class="o">=</span>
+    <span class="n">streams</span><span class="o">.</span><span class="na">store</span><span class="o">(</span><span class="s">&quot;CountsWindowStore&quot;</span><span class="o">,</span> <span class="n">QueryableStoreTypes</span><span class="o">.</span><span class="na">windowStore</span><span class="o">());</span>
+
+<span class="c1">// Fetch values for the key &quot;world&quot; for all of the windows available in this application instance.</span>
+<span class="c1">// To get *all* available windows we fetch windows from the beginning of time until now.</span>
+<span class="kt">long</span> <span class="n">timeFrom</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="c1">// beginning of time = oldest available</span>
+<span class="kt">long</span> <span class="n">timeTo</span> <span class="o">=</span> <span class="n">System</span><span class="o">.</span><span class="na">currentTimeMillis</span><span class="o">();</span> <span class="c1">// now (in processing-time)</span>
+<span class="n">WindowStoreIterator</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">iterator</span> <span class="o">=</span> <span class="n">windowStore</span><span class="o">.</span><span class="na">fetch</span><span class="o">(</span><span class="s">&quot;world&quot;</span><span class="o">,</span> <span class="n">timeFrom</span><span class="o">,</span> <span class="n">timeTo</span><span class="o">);</span>
+<span class="k">while</span> <span class="o">(</span><span class="n">iterator</span><span class="o">.</span><span class="na">hasNext</span><span class="o">())</span> <span class="o">{</span>
+  <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">next</span> <span class="o">=</span> <span class="n">iterator</span><span class="o">.</span><span class="na">next</span><span class="o">();</span>
+  <span class="kt">long</span> <span class="n">windowTimestamp</span> <span class="o">=</span> <span class="n">next</span><span class="o">.</span><span class="na">key</span><span class="o">;</span>
+  <span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Count of &#39;world&#39; @ time &quot;</span> <span class="o">+</span> <span class="n">windowTimestamp</span> <span class="o">+</span> <span class="s">&quot; is &quot;</span> <span class="o">+</span> <span class="n">next</span><span class="o">.</span><span class="na">value</span><span class="o">);</span>
+<span class="o">}</span>
+</pre></div>
+                </div>
+            </div>
+            <div class="section" id="querying-local-custom-state-stores">
+                <span id="streams-developer-guide-interactive-queries-custom-stores"></span><h3><a class="toc-backref" href="#id6">Querying local custom state stores</a><a class="headerlink" href="#querying-local-custom-state-stores" title="Permalink to this headline"></a></h3>
+                <div class="admonition note">
+                    <p class="first admonition-title">Note</p>
+                    <p class="last">Only the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> supports custom state stores.</p>
+                </div>
+                <p>Before querying the custom state stores you must implement these interfaces:</p>
+                <ul class="simple">
+                    <li>Your custom state store must implement <code class="docutils literal"><span class="pre">StateStore</span></code>.</li>
+                    <li>You must have an interface to represent the operations available on the store.</li>
+                    <li>You must provide an implementation of <code class="docutils literal"><span class="pre">StoreBuilder</span></code> for creating instances of your store.</li>
+                    <li>It is recommended that you provide an interface that restricts access to read-only operations. This prevents users of this API from mutating the state of your running Kafka Streams application out-of-band.</li>
+                </ul>
+                <p>The class/interface hierarchy for your custom store might look something like:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="kd">implements</span> <span class="n">StateStore</span><span class="o">,</span> <span class="n">MyWriteableCustomStore</span><span class="o">&lt;</span><span class="n">K [...]
+  <span class="c1">// implementation of the actual store</span>
+<span class="o">}</span>
+
+<span class="c1">// Read-write interface for MyCustomStore</span>
+<span class="kd">public</span> <span class="kd">interface</span> <span class="nc">MyWriteableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="kd">extends</span> <span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="o">{</span>
+  <span class="kt">void</span> <span class="nf">write</span><span class="o">(</span><span class="n">K</span> <span class="n">Key</span><span class="o">,</span> <span class="n">V</span> <span class="n">value</span><span class="o">);</span>
+<span class="o">}</span>
+
+<span class="c1">// Read-only interface for MyCustomStore</span>
+<span class="kd">public</span> <span class="kd">interface</span> <span class="nc">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="o">{</span>
+  <span class="n">V</span> <span class="nf">read</span><span class="o">(</span><span class="n">K</span> <span class="n">key</span><span class="o">);</span>
+<span class="o">}</span>
+
+<span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyCustomStoreBuilder</span> <span class="kd">implements</span> <span class="n">StoreBuilder</span> <span class="o">{</span>
+  <span class="c1">// implementation of the supplier for MyCustomStore</span>
+<span class="o">}</span>
+</pre></div>
+                </div>
+                <p>To make this store queryable you must:</p>
+                <ul class="simple">
+                    <li>Provide an implementation of <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/QueryableStoreType.java">QueryableStoreType</a>.</li>
+                    <li>Provide a wrapper class that has access to all of the underlying instances of the store and is used for querying.</li>
+                </ul>
+                <p>Here is how to implement <code class="docutils literal"><span class="pre">QueryableStoreType</span></code>:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyCustomStoreType</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="kd">implements</span> <span class="n">QueryableStoreType</span><span class="o">&lt;</span><span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><spa [...]
+
+  <span class="c1">// Only accept StateStores that are of type MyCustomStore</span>
+  <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">accepts</span><span class="o">(</span><span class="kd">final</span> <span class="n">StateStore</span> <span class="n">stateStore</span><span class="o">)</span> <span class="o">{</span>
+    <span class="k">return</span> <span class="n">stateStore</span> <span class="n">instanceOf</span> <span class="n">MyCustomStore</span><span class="o">;</span>
+  <span class="o">}</span>
+
+  <span class="kd">public</span> <span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="nf">create</span><span class="o">(</span><span class="kd">final</span> <span class="n">StateStoreProvider</span> <span class="n">storeProvider</span><span class="o">,</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">storeName</span><span cla [...]
+      <span class="k">return</span> <span class="k">new</span> <span class="n">MyCustomStoreTypeWrapper</span><span class="o">(</span><span class="n">storeProvider</span><span class="o">,</span> <span class="n">storeName</span><span class="o">,</span> <span class="k">this</span><span class="o">);</span>
+  <span class="o">}</span>
+
+<span class="o">}</span>
+</pre></div>
+                </div>
+                <p>A wrapper class is required because each instance of a Kafka Streams application may run multiple stream tasks and manage
+                    multiple local instances of a particular state store.  The wrapper class hides this complexity and lets you query a &#8220;logical&#8221;
+                    state store by name without having to know about all of the underlying local instances of that state store.</p>
+                <p>When implementing your wrapper class you must use the
+                    <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/internals/StateStoreProvider.java">StateStoreProvider</a>
+                    interface to get access to the underlying instances of your store.
+                    <code class="docutils literal"><span class="pre">StateStoreProvider#stores(String</span> <span class="pre">storeName,</span> <span class="pre">QueryableStoreType&lt;T&gt;</span> <span class="pre">queryableStoreType)</span></code> returns a <code class="docutils literal"><span class="pre">List</span></code> of state
+                    stores with the given storeName and of the type as defined by <code class="docutils literal"><span class="pre">queryableStoreType</span></code>.</p>
+                <p>Here is an example implementation of the wrapper (Java 8+):</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// We strongly recommended implementing a read-only interface</span>
+<span class="c1">// to restrict usage of the store to safe read operations!</span>
+<span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyCustomStoreTypeWrapper</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="kd">implements</span> <span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="o">{</span>
+
+  <span class="kd">private</span> <span class="kd">final</span> <span class="n">QueryableStoreType</span><span class="o">&lt;</span><span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">V</span><span class="o">&gt;&gt;</span> <span class="n">customStoreType</span><span class="o">;</span>
+  <span class="kd">private</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">storeName</span><span class="o">;</span>
+  <span class="kd">private</span> <span class="kd">final</span> <span class="n">StateStoreProvider</span> <span class="n">provider</span><span class="o">;</span>
+
+  <span class="kd">public</span> <span class="nf">CustomStoreTypeWrapper</span><span class="o">(</span><span class="kd">final</span> <span class="n">StateStoreProvider</span> <span class="n">provider</span><span class="o">,</span>
+                                <span class="kd">final</span> <span class="n">String</span> <span class="n">storeName</span><span class="o">,</span>
+                                <span class="kd">final</span> <span class="n">QueryableStoreType</span><span class="o">&lt;</span><span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">V</span><span class="o">&gt;&gt;</span> <span class="n">customStoreType</span><span class="o">)</span> <span class="o">{</span>
+
+    <span class="c1">// ... assign fields ...</span>
+  <span class="o">}</span>
+
+  <span class="c1">// Implement a safe read method</span>
+  <span class="nd">@Override</span>
+  <span class="kd">public</span> <span class="n">V</span> <span class="nf">read</span><span class="o">(</span><span class="kd">final</span> <span class="n">K</span> <span class="n">key</span><span class="o">)</span> <span class="o">{</span>
+    <span class="c1">// Get all the stores with storeName and of customStoreType</span>
+    <span class="kd">final</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">V</span><span class="o">&gt;&gt;</span> <span class="n">stores</span> <span class="o">=</span> <span class="n">provider</span><span class="o">.</span><span class="na">getStores</span><span class="o">(</span><span class="n">storeName</span><span class="o">,</span> <spa [...]
+    <span class="c1">// Try and find the value for the given key</span>
+    <span class="kd">final</span> <span class="n">Optional</span><span class="o">&lt;</span><span class="n">V</span><span class="o">&gt;</span> <span class="n">value</span> <span class="o">=</span> <span class="n">stores</span><span class="o">.</span><span class="na">stream</span><span class="o">().</span><span class="na">filter</span><span class="o">(</span><span class="n">store</span> <span class="o">-&gt;</span> <span class="n">store</span><span class="o">.</span><span class="na">read [...]
+    <span class="c1">// Return the value if it exists</span>
+    <span class="k">return</span> <span class="n">value</span><span class="o">.</span><span class="na">orElse</span><span class="o">(</span><span class="kc">null</span><span class="o">);</span>
+  <span class="o">}</span>
+
+<span class="o">}</span>
+</pre></div>
+                </div>
+                <p>You can now find and query your custom store:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">Topology</span> <span class="n">topology</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">ProcessorSupplier</span> <span class="n">processorSuppler</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Create CustomStoreSupplier for store name the-custom-store</span>
+<span class="n">MyCustomStoreBuilder</span> <span class="n">customStoreBuilder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">MyCustomStoreBuilder</span><span class="o">(</span><span class="s">&quot;the-custom-store&quot;</span><span class="o">)</span> <span class="c1">//...;</span>
+<span class="c1">// Add the source topic</span>
+<span class="n">topology</span><span class="o">.</span><span class="na">addSource</span><span class="o">(</span><span class="s">&quot;input&quot;</span><span class="o">,</span> <span class="s">&quot;inputTopic&quot;</span><span class="o">);</span>
+<span class="c1">// Add a custom processor that reads from the source topic</span>
+<span class="n">topology</span><span class="o">.</span><span class="na">addProcessor</span><span class="o">(</span><span class="s">&quot;the-processor&quot;</span><span class="o">,</span> <span class="n">processorSupplier</span><span class="o">,</span> <span class="s">&quot;input&quot;</span><span class="o">);</span>
+<span class="c1">// Connect your custom state store to the custom processor above</span>
+<span class="n">topology</span><span class="o">.</span><span class="na">addStateStore</span><span class="o">(</span><span class="n">customStoreBuilder</span><span class="o">,</span> <span class="s">&quot;the-processor&quot;</span><span class="o">);</span>
+
+<span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">topology</span><span class="o">,</span> <span class="n">config</span><span class="o">);</span>
+<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span>
+
+<span class="c1">// Get access to the custom store</span>
+<span class="n">MyReadableCustomStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">store</span> <span class="o">=</span> <span class="n">streams</span><span class="o">.</span><span class="na">store</span><span class="o">(</span><span class="s">&quot;the-custom-store&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="n">MyCustomStoreType</span><span c [...]
+<span class="c1">// Query the store</span>
+<span class="n">String</span> <span class="n">value</span> <span class="o">=</span> <span class="n">store</span><span class="o">.</span><span class="na">read</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">);</span>
+</pre></div>
+                </div>
+            </div>
+        </div>
+        <div class="section" id="querying-remote-state-stores-for-the-entire-app">
+            <span id="streams-developer-guide-interactive-queries-discovery"></span><h2><a class="toc-backref" href="#id7">Querying remote state stores for the entire app</a><a class="headerlink" href="#querying-remote-state-stores-for-the-entire-app" title="Permalink to this headline"></a></h2>
+            <p>To query remote states for the entire app, you must expose the application&#8217;s full state to other applications, including
+                applications that are running on different machines.</p>
+            <p>For example, you have a Kafka Streams application that processes user events in a multi-player video game, and you want to retrieve the latest status of each user directly and display it in a mobile app. Here are the required steps to make the full state of your application queryable:</p>
+            <ol class="arabic simple">
+                <li><a class="reference internal" href="#streams-developer-guide-interactive-queries-rpc-layer"><span class="std std-ref">Add an RPC layer to your application</span></a> so that
+                    the instances of your application can be interacted with via the network (e.g., a REST API, Thrift, a custom protocol,
+                    and so on). The instances must respond to interactive queries. You can follow the reference examples provided to get
+                    started.</li>
+                <li><a class="reference internal" href="#streams-developer-guide-interactive-queries-expose-rpc"><span class="std std-ref">Expose the RPC endpoints</span></a> of
+                    your application&#8217;s instances via the <code class="docutils literal"><span class="pre">application.server</span></code> configuration setting of Kafka Streams.  Because RPC
+                    endpoints must be unique within a network, each instance has its own value for this configuration setting.
+                    This makes an application instance discoverable by other instances.</li>
+                <li>In the RPC layer, <a class="reference internal" href="#streams-developer-guide-interactive-queries-discover-app-instances-and-stores"><span class="std std-ref">discover remote application instances</span></a> and their state stores and <a class="reference internal" href="#streams-developer-guide-interactive-queries-local-stores"><span class="std std-ref">query locally available state stores</span></a> to make the full state of your application queryable. The remote application instances can forward queries to other app instances if a particular instance lacks the local data to respond to a query.</li>
+            </ol>
+            <div class="figure align-center" id="id2">
+                <a class="reference internal image-reference" href="../../../images/streams-interactive-queries-api-02.png"><img alt="../../../images/streams-interactive-queries-api-02.png" src="../../../images/streams-interactive-queries-api-02.png" style="width: 500pt;" /></a>
+                <p class="caption"><span class="caption-text">Discover any running instances of the same application as well as the respective RPC endpoints they expose for
+interactive queries</span></p>
+            </div>
+            <div class="section" id="adding-an-rpc-layer-to-your-application">
+                <span id="streams-developer-guide-interactive-queries-rpc-layer"></span><h3><a class="toc-backref" href="#id8">Adding an RPC layer to your application</a><a class="headerlink" href="#adding-an-rpc-layer-to-your-application" title="Permalink to this headline"></a></h3>
+                <p>There are many ways to add an RPC layer. The only requirements are that the RPC layer is embedded within the Kafka Streams
+                    application and that it exposes an endpoint that other application instances and applications can connect to.</p>
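+                <p>For illustration only, here is a minimal sketch of embedding an HTTP endpoint with the JDK&#8217;s
+                    built-in <code class="docutils literal"><span class="pre">com.sun.net.httpserver.HttpServer</span></code>. The
+                    <code class="docutils literal"><span class="pre">lookupCountFor</span></code> helper is a hypothetical placeholder for code that queries the local state
+                    stores; any RPC technology can be used instead, as long as the endpoint runs inside the same JVM as the Kafka Streams instance:</p>
+                <div class="highlight-java"><div class="highlight"><pre>import com.sun.net.httpserver.HttpServer;
+import java.io.OutputStream;
+import java.net.InetSocketAddress;
+
+// A sketch only: embed a tiny HTTP server next to the KafkaStreams instance.
+// The host and port must match the endpoint announced via `application.server`.
+HttpServer server = HttpServer.create(new InetSocketAddress("host1", 4460), 0);
+server.createContext("/word-count", exchange -&gt; {
+  // lookupCountFor() is a hypothetical helper that queries this instance's local state stores.
+  byte[] body = lookupCountFor(exchange.getRequestURI()).getBytes("UTF-8");
+  exchange.sendResponseHeaders(200, body.length);
+  try (OutputStream out = exchange.getResponseBody()) {
+    out.write(body);
+  }
+});
+server.start();
+</pre></div>
+                </div>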
+            </div>
+            <div class="section" id="exposing-the-rpc-endpoints-of-your-application">
+                <span id="streams-developer-guide-interactive-queries-expose-rpc"></span><h3><a class="toc-backref" href="#id9">Exposing the RPC endpoints of your application</a><a class="headerlink" href="#exposing-the-rpc-endpoints-of-your-application" title="Permalink to this headline"></a></h3>
+                <p>To enable remote state store discovery in a distributed Kafka Streams application, you must set the <a class="reference internal" href="config-streams.html#streams-developer-guide-required-configs"><span class="std std-ref">configuration property</span></a> in <code class="docutils literal"><span class="pre">StreamsConfig</span></code>.
+                    The <code class="docutils literal"><span class="pre">application.server</span></code> property defines a unique <code class="docutils literal"><span class="pre">host:port</span></code> pair that points to the RPC endpoint of the respective instance of a Kafka Streams application.
+                    The value of this configuration property will vary across the instances of your application.
+                    When this property is set, Kafka Streams will keep track of the RPC endpoint information for every instance of an application, its state stores, and assigned stream partitions through instances of <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a>.</p>
+                <div class="admonition tip">
+                    <p><b>Tip</b></p>
+                    <p class="last">Consider leveraging the exposed RPC endpoints of your application for further functionality, such as
+                        piggybacking additional inter-application communication that goes beyond interactive queries.</p>
+                </div>
+                <p>This example shows how to configure and run a Kafka Streams application that supports the discovery of its state stores.</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">props</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="c1">// Set the unique RPC endpoint of this application instance through which it</span>
+<span class="c1">// can be interactively queried.  In a real application, the value would most</span>
+<span class="c1">// probably not be hardcoded but derived dynamically.</span>
+<span class="n">String</span> <span class="n">rpcEndpoint</span> <span class="o">=</span> <span class="s">&quot;host1:4460&quot;</span><span class="o">;</span>
+<span class="n">props</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">APPLICATION_SERVER_CONFIG</span><span class="o">,</span> <span class="n">rpcEndpoint</span><span class="o">);</span>
+<span class="c1">// ... further settings may follow here ...</span>
+
+<span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsConfig</span><span class="o">(</span><span class="n">props</span><span class="o">);</span>
+<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsBuilder</span><span class="o">();</span>
+
+<span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">textLines</span> <span class="o">=</span> <span class="n">builder</span><span class="o">.</span><span class="na">stream</span><span class="o">(</span><span class="n">stringSerde</span><span class="o">,</span> <span class="n">stringSerde</span><span class="o">,</span> <span class="s">&quot;word-count-input&q [...]
+
+<span class="kd">final</span> <span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">groupedByWord</span> <span class="o">=</span> <span class="n">textLines</span>
+    <span class="o">.</span><span class="na">flatMapValues</span><span class="o">(</span><span class="n">value</span> <span class="o">-&gt;</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">value</span><span class="o">.</span><span class="na">toLowerCase</span><span class="o">().</span><span class="na">split</span><span class="o">(</span><span class="s">&quot;\\W+&quot;</span><span class="o">)))</span>
+    <span class="o">.</span><span class="na">groupBy</span><span class="o">((</span><span class="n">key</span><span class="o">,</span> <span class="n">word</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">word</span><span class="o">,</span> <span class="n">Serialized</span><span class="o">.</span><span class="na">with</span><span class="o">(</span><span class="n">stringSerde</span><span class="o">,</span> <span class="n">stringSerde</span><span class="o">));</span>
+
+<span class="c1">// This call to `count()` creates a state store named &quot;word-count&quot;.</span>
+<span class="c1">// The state store is discoverable and can be queried interactively.</span>
+<span class="n">groupedByWord</span><span class="o">.</span><span class="na">count</span><span class="o">(</span><span class="n">Materialized</span><span class="o">.&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">,</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">Bytes</span><span class="o">,</span> <span class="kt">byte</span><span class="o">[]&gt;</span><span class="n">as</span><span class= [...]
+
+<span class="c1">// Start an instance of the topology</span>
+<span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">builder</span><span class="o">,</span> <span class="n">streamsConfiguration</span><span class="o">);</span>
+<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span>
+
+<span class="c1">// Then, create and start the actual RPC service for remote access to this</span>
+<span class="c1">// application instance&#39;s local state stores.</span>
+<span class="c1">//</span>
+<span class="c1">// This service should be started on the same host and port as defined above by</span>
+<span class="c1">// the property `StreamsConfig.APPLICATION_SERVER_CONFIG`.  The example below is</span>
+<span class="c1">// fictitious, but we provide end-to-end demo applications (such as KafkaMusicExample)</span>
+<span class="c1">// that showcase how to implement such a service to get you started.</span>
+<span class="n">MyRPCService</span> <span class="n">rpcService</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="n">rpcService</span><span class="o">.</span><span class="na">listenAt</span><span class="o">(</span><span class="n">rpcEndpoint</span><span class="o">);</span>
+</pre></div>
+                </div>
+            </div>
+            <div class="section" id="discovering-and-accessing-application-instances-and-their-local-state-stores">
+                <span id="streams-developer-guide-interactive-queries-discover-app-instances-and-stores"></span><h3><a class="toc-backref" href="#id10">Discovering and accessing application instances and their local state stores</a><a class="headerlink" href="#discovering-and-accessing-application-instances-and-their-local-state-stores" title="Permalink to this headline"></a></h3>
+                <p>The following methods return <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a> objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.</p>
+                <ul class="simple">
+                    <li><code class="docutils literal"><span class="pre">KafkaStreams#allMetadata()</span></code>: find all instances of this application</li>
+                    <li><code class="docutils literal"><span class="pre">KafkaStreams#allMetadataForStore(String</span> <span class="pre">storeName)</span></code>: find those applications instances that manage local instances of the state store &#8220;storeName&#8221;</li>
+                    <li><code class="docutils literal"><span class="pre">KafkaStreams#metadataForKey(String</span> <span class="pre">storeName,</span> <span class="pre">K</span> <span class="pre">key,</span> <span class="pre">Serializer&lt;K&gt;</span> <span class="pre">keySerializer)</span></code>: using the default stream partitioning strategy, find the one application instance that holds the data for the given key in the given state store</li>
+                    <li><code class="docutils literal"><span class="pre">KafkaStreams#metadataForKey(String</span> <span class="pre">storeName,</span> <span class="pre">K</span> <span class="pre">key,</span> <span class="pre">StreamPartitioner&lt;K,</span> <span class="pre">?&gt;</span> <span class="pre">partitioner)</span></code>: using <code class="docutils literal"><span class="pre">partitioner</span></code>, find the one application instance that holds the data for the given key in t [...]
+                </ul>
+                <div class="admonition attention">
+                    <p class="first admonition-title">Attention</p>
+                    <p class="last">If <code class="docutils literal"><span class="pre">application.server</span></code> is not configured for an application instance, then the above methods will not find any <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a> for it.</p>
+                </div>
+                <p>For example, we can now find the <code class="docutils literal"><span class="pre">StreamsMetadata</span></code> for the state store named &#8220;word-count&#8221; that we defined in the
+                    code example shown in the previous section:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="o">...;</span>
+<span class="c1">// Find all the locations of local instances of the state store named &quot;word-count&quot;</span>
+<span class="n">Collection</span><span class="o">&lt;</span><span class="n">StreamsMetadata</span><span class="o">&gt;</span> <span class="n">wordCountHosts</span> <span class="o">=</span> <span class="n">streams</span><span class="o">.</span><span class="na">allMetadataForStore</span><span class="o">(</span><span class="s">&quot;word-count&quot;</span><span class="o">);</span>
+
+<span class="c1">// For illustrative purposes, we assume using an HTTP client to talk to remote app instances.</span>
+<span class="n">HttpClient</span> <span class="n">http</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="c1">// Get the word count for word (aka key) &#39;alice&#39;: Approach 1</span>
+<span class="c1">//</span>
+<span class="c1">// We first find the one app instance that manages the count for &#39;alice&#39; in its local state stores.</span>
+<span class="n">StreamsMetadata</span> <span class="n">metadata</span> <span class="o">=</span> <span class="n">streams</span><span class="o">.</span><span class="na">metadataForKey</span><span class="o">(</span><span class="s">&quot;word-count&quot;</span><span class="o">,</span> <span class="s">&quot;alice&quot;</span><span class="o">,</span> <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">().</span><span class="na">serializer</span><s [...]
+<span class="c1">// Then, we query only that single app instance for the latest count of &#39;alice&#39;.</span>
+<span class="c1">// Note: The RPC URL shown below is fictitious and only serves to illustrate the idea.  Ultimately,</span>
+<span class="c1">// the URL (or, in general, the method of communication) will depend on the RPC layer you opted to</span>
+<span class="c1">// implement.  Again, we provide end-to-end demo applications (such as KafkaMusicExample) that showcase</span>
+<span class="c1">// how to implement such an RPC layer.</span>
+<span class="n">Long</span> <span class="n">result</span> <span class="o">=</span> <span class="n">http</span><span class="o">.</span><span class="na">getLong</span><span class="o">(</span><span class="s">&quot;http://&quot;</span> <span class="o">+</span> <span class="n">metadata</span><span class="o">.</span><span class="na">host</span><span class="o">()</span> <span class="o">+</span> <span class="s">&quot;:&quot;</span> <span class="o">+</span> <span class="n">metadata</span><span cl [...]
+
+<span class="c1">// Get the word count for word (aka key) &#39;alice&#39;: Approach 2</span>
+<span class="c1">//</span>
+<span class="c1">// Alternatively, we could also choose (say) a brute-force approach where we query every app instance</span>
+<span class="c1">// until we find the one that happens to know about &#39;alice&#39;.</span>
+<span class="n">Optional</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">result</span> <span class="o">=</span> <span class="n">streams</span><span class="o">.</span><span class="na">allMetadataForStore</span><span class="o">(</span><span class="s">&quot;word-count&quot;</span><span class="o">)</span>
+    <span class="o">.</span><span class="na">stream</span><span class="o">()</span>
+    <span class="o">.</span><span class="na">map</span><span class="o">(</span><span class="n">streamsMetadata</span> <span class="o">-&gt;</span> <span class="o">{</span>
+        <span class="c1">// Construct the (fictituous) full endpoint URL to query the current remote application instance</span>
+        <span class="n">String</span> <span class="n">url</span> <span class="o">=</span> <span class="s">&quot;http://&quot;</span> <span class="o">+</span> <span class="n">streamsMetadata</span><span class="o">.</span><span class="na">host</span><span class="o">()</span> <span class="o">+</span> <span class="s">&quot;:&quot;</span> <span class="o">+</span> <span class="n">streamsMetadata</span><span class="o">.</span><span class="na">port</span><span class="o">()</span> <span class="o" [...]
+        <span class="c1">// Read and return the count for &#39;alice&#39;, if any.</span>
+        <span class="k">return</span> <span class="n">http</span><span class="o">.</span><span class="na">getLong</span><span class="o">(</span><span class="n">url</span><span class="o">);</span>
+    <span class="o">})</span>
+    <span class="o">.</span><span class="na">filter</span><span class="o">(</span><span class="n">s</span> <span class="o">-&gt;</span> <span class="n">s</span> <span class="o">!=</span> <span class="kc">null</span><span class="o">)</span>
+    <span class="o">.</span><span class="na">findFirst</span><span class="o">();</span>
+</pre></div>
+                </div>
+                <p>At this point the full state of the application is interactively queryable:</p>
+                <ul class="simple">
+                    <li>You can discover the running instances of the application and the state stores they manage locally.</li>
+                    <li>Through the RPC layer that was added to the application, you can communicate with these application instances over the
+                        network and query them for locally available state.</li>
+                    <li>The application instances are able to serve such queries because they can directly query their own local state stores
+                        and respond via the RPC layer.</li>
+                    <li>Collectively, this allows us to query the full state of the entire application.</li>
+                </ul>
+                <p>To see an end-to-end application with interactive queries, review the
+                    <a class="reference internal" href="#streams-developer-guide-interactive-queries-demos"><span class="std std-ref">demo applications</span></a>.</p>
+            </div>
+        </div>
+</div>
+</div>
+
+
+               </div>
+              </div>
+              <div class="pagination">
+                <a href="/{{version}}/documentation/streams/developer-guide/datatypes" class="pagination__btn pagination__btn__prev">Previous</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/memory-mgmt" class="pagination__btn pagination__btn__next">Next</a>
+              </div>
+                </script>
+
+                <!--#include virtual="../../../includes/_header.htm" -->
+                <!--#include virtual="../../../includes/_top.htm" -->
+                    <div class="content documentation documentation--current">
+                    <!--#include virtual="../../../includes/_nav.htm" -->
+                    <div class="right">
+                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <ul class="breadcrumbs">
+                    <li><a href="/documentation">Documentation</a></li>
+                    <li><a href="/documentation/streams">Kafka Streams</a></li>
+                    <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+                </ul>
+                <div class="p-content"></div>
+                    </div>
+                    </div>
+                    <!--#include virtual="../../../includes/_footer.htm" -->
+                    <script>
+                    $(function() {
+                        // Show selected style on nav item
+                        $('.b-nav__streams').addClass('selected');
+
+                        //sticky secondary nav
+                        var $navbar = $(".sub-nav-sticky"),
+                            y_pos = $navbar.offset().top,
+                            height = $navbar.height();
+
+                        $(window).scroll(function() {
+                            var scrollTop = $(window).scrollTop();
+
+                            if (scrollTop > y_pos - height) {
+                                $navbar.addClass("navbar-fixed")
+                            } else if (scrollTop <= y_pos) {
+                                $navbar.removeClass("navbar-fixed")
+                            }
+                        });
+
+                        // Display docs subnav items
+                        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+                    });
+              </script>
diff --git a/docs/streams/developer-guide/memory-mgmt.html b/docs/streams/developer-guide/memory-mgmt.html
new file mode 100644
index 0000000..b9ee1f3
--- /dev/null
+++ b/docs/streams/developer-guide/memory-mgmt.html
@@ -0,0 +1,241 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+
+  <div class="section" id="memory-management">
+    <span id="streams-developer-guide-memory-management"></span><h1>Memory Management<a class="headerlink" href="#memory-management" title="Permalink to this headline"></a></h1>
+    <p>You can specify the total memory (RAM) size used for internal caching and compacting of records. This caching happens
+      before the records are written to state stores or forwarded downstream to other nodes.</p>
+    <p>The record caches are implemented slightly differently in the DSL and the Processor API.</p>
+    <div class="contents local topic" id="table-of-contents">
+      <p class="topic-title first"><b>Table of Contents</b></p>
+      <ul class="simple">
+        <li><a class="reference internal" href="#record-caches-in-the-dsl" id="id1">Record caches in the DSL</a></li>
+        <li><a class="reference internal" href="#record-caches-in-the-processor-api" id="id2">Record caches in the Processor API</a></li>
+        <li><a class="reference internal" href="#other-memory-usage" id="id3">Other memory usage</a></li>
+      </ul>
+    </div>
+    <div class="section" id="record-caches-in-the-dsl">
+      <span id="streams-developer-guide-memory-management-record-cache"></span><h2><a class="toc-backref" href="#id1">Record caches in the DSL</a><a class="headerlink" href="#record-caches-in-the-dsl" title="Permalink to this headline"></a></h2>
+      <p>You can specify the total memory (RAM) size of the record cache for an instance of the processing topology. It is leveraged
+        by the following <code class="docutils literal"><span class="pre">KTable</span></code> instances:</p>
+      <ul class="simple">
+        <li>Source <code class="docutils literal"><span class="pre">KTable</span></code>: <code class="docutils literal"><span class="pre">KTable</span></code> instances that are created via <code class="docutils literal"><span class="pre">StreamsBuilder#table()</span></code> or <code class="docutils literal"><span class="pre">StreamsBuilder#globalTable()</span></code>.</li>
+        <li>Aggregation <code class="docutils literal"><span class="pre">KTable</span></code>: instances of <code class="docutils literal"><span class="pre">KTable</span></code> that are created as a result of <a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregations</span></a>.</li>
+      </ul>
+      <p>For such <code class="docutils literal"><span class="pre">KTable</span></code> instances, the record cache is used for:</p>
+      <ul class="simple">
+        <li>Internal caching and compacting of output records before they are written by the underlying stateful
+          <a class="reference internal" href="../concepts.html#streams-concepts-processor"><span class="std std-ref">processor node</span></a> to its internal state stores.</li>
+        <li>Internal caching and compacting of output records before they are forwarded from the underlying stateful
+          <a class="reference internal" href="../concepts.html#streams-concepts-processor"><span class="std std-ref">processor node</span></a> to any of its downstream processor nodes.</li>
+      </ul>
+      <p>Use the following example to understand the behaviors with and without record caching. In this example, the input is a
+        <code class="docutils literal"><span class="pre">KStream&lt;String,</span> <span class="pre">Integer&gt;</span></code> with the records <code class="docutils literal"><span class="pre">&lt;K,V&gt;:</span> <span class="pre">&lt;A,</span> <span class="pre">1&gt;,</span> <span class="pre">&lt;D,</span> <span class="pre">5&gt;,</span> <span class="pre">&lt;A,</span> <span class="pre">20&gt;,</span> <span class="pre">&lt;A,</span> <span class="pre">300&gt;</span></code>. The focus in  [...]
+        on the records with key == <code class="docutils literal"><span class="pre">A</span></code>.</p>
+      <ul>
+        <li><p class="first">An <a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregation</span></a> computes the sum of record values, grouped by key, for
+          the input and returns a <code class="docutils literal"><span class="pre">KTable&lt;String,</span> <span class="pre">Integer&gt;</span></code>.</p>
+          <blockquote>
+            <div><ul class="simple">
+              <li><strong>Without caching</strong>: a sequence of output records is emitted for key <code class="docutils literal"><span class="pre">A</span></code> that represent changes in the
+                resulting aggregation table. The parentheses (<code class="docutils literal"><span class="pre">()</span></code>) denote changes, the left number is the new aggregate value
+                and the right number is the old aggregate value: <code class="docutils literal"><span class="pre">&lt;A,</span> <span class="pre">(1,</span> <span class="pre">null)&gt;,</span> <span class="pre">&lt;A,</span> <span class="pre">(21,</span> <span class="pre">1)&gt;,</span> <span class="pre">&lt;A,</span> <span class="pre">(321,</span> <span class="pre">21)&gt;</span></code>.</li>
+              <li><strong>With caching</strong>: a single output record is emitted for key <code class="docutils literal"><span class="pre">A</span></code> that would likely be compacted in the cache,
+                leading to a single output record of <code class="docutils literal"><span class="pre">&lt;A,</span> <span class="pre">(321,</span> <span class="pre">null)&gt;</span></code>. This record is written to the aggregation&#8217;s internal state
+                store and forwarded to any downstream operations.</li>
+            </ul>
+            </div></blockquote>
+        </li>
+      </ul>
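+      <p>For reference, a minimal sketch of the summing aggregation described above might look as follows (the topic name
+        <code class="docutils literal"><span class="pre">input-topic</span></code> is illustrative, and default serdes matching
+        <code class="docutils literal"><span class="pre">&lt;String,</span> <span class="pre">Integer&gt;</span></code> records are assumed):</p>
+      <div class="highlight-java"><div class="highlight"><pre>// A sketch only, under the assumptions stated above.
+KStream&lt;String, Integer&gt; input = builder.stream("input-topic");
+// Sum the values per key; this emits the update stream for key A described above.
+KTable&lt;String, Integer&gt; sums = input
+    .groupByKey()
+    .reduce((aggValue, newValue) -&gt; aggValue + newValue);
+</pre></div>
+      </div>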
+      <p>The cache size is specified through the <code class="docutils literal"><span class="pre">cache.max.bytes.buffering</span></code> parameter, which is a global setting per
+        processing topology:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Enable record cache of size 10 MB.</span>
+<span class="n">Properties</span> <span class="n">streamsConfiguration</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="n">streamsConfiguration</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">10</span> <span class="o">*</span> <span class="mi">1024</span> <span class="o">*</span> <span class="mi">1024L</span><span class="o">);</span>
+</pre></div>
+      </div>
+      <p>This parameter controls the number of bytes allocated for caching. Specifically, for a processor topology instance with
+        <code class="docutils literal"><span class="pre">T</span></code> threads and <code class="docutils literal"><span class="pre">C</span></code> bytes allocated for caching, each thread will have an even <code class="docutils literal"><span class="pre">C/T</span></code> bytes to construct its own
+        cache and use as it sees fit among its tasks. This means that there are as many caches as there are threads, but no sharing of
+        caches across threads happens.</p>
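+      <p>For example (the sizes and thread count below are illustrative), with <code class="docutils literal"><span class="pre">C</span></code> = 10 MB and
+        <code class="docutils literal"><span class="pre">T</span></code> = 5 threads, each thread manages its own 2 MB cache:</p>
+      <div class="highlight-java"><div class="highlight"><pre>Properties streamsConfiguration = new Properties();
+// C = 10 MB of cache in total for this topology instance
+streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
+// T = 5 threads, so each thread builds its own cache of C/T = 2 MB
+streamsConfiguration.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 5);
+</pre></div>
+      </div>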
+      <p>The basic API for the cache is made of <code class="docutils literal"><span class="pre">put()</span></code> and <code class="docutils literal"><span class="pre">get()</span></code> calls.  Records are
+        evicted using a simple LRU scheme after the cache size is reached.  The first time a keyed record <code class="docutils literal"><span class="pre">R1</span> <span class="pre">=</span> <span class="pre">&lt;K1,</span> <span class="pre">V1&gt;</span></code>
+        finishes processing at a node, it is marked as dirty in the cache.  Any other keyed record <code class="docutils literal"><span class="pre">R2</span> <span class="pre">=</span> <span class="pre">&lt;K1,</span> <span class="pre">V2&gt;</span></code> with the
+        same key <code class="docutils literal"><span class="pre">K1</span></code> that is processed on that node during that time will overwrite <code class="docutils literal"><span class="pre">&lt;K1,</span> <span class="pre">V1&gt;</span></code>; this is referred to as
+        &#8220;being compacted&#8221;.  This has the same effect as
+        <a class="reference external" href="https://kafka.apache.org/documentation.html#compaction">Kafka&#8217;s log compaction</a>, but happens earlier, while the
+        records are still in memory, and within your client-side application, rather than on the server-side (i.e. the Kafka
+        broker).  After flushing, <code class="docutils literal"><span class="pre">R2</span></code> is forwarded to the next processing node and then written to the local state store.</p>
+      <p>The semantics of caching are that data is flushed to the state store and forwarded to the next downstream processor node
+        whenever the earlier of the <code class="docutils literal"><span class="pre">commit.interval.ms</span></code> interval elapsing or the <code class="docutils literal"><span class="pre">cache.max.bytes.buffering</span></code> limit being reached (cache pressure) occurs.  Both
+        <code class="docutils literal"><span class="pre">commit.interval.ms</span></code> and <code class="docutils literal"><span class="pre">cache.max.bytes.buffering</span></code> are global parameters. As such, it is not possible to specify
+        different parameters for individual nodes.</p>
+      <p>Here are example settings for both parameters based on desired scenarios.</p>
+      <ul>
+        <li><p class="first">To turn off caching the cache size can be set to zero:</p>
+          <blockquote>
+            <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Disable record cache</span>
+<span class="n">Properties</span> <span class="n">streamsConfiguration</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="n">streamsConfiguration</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">0</span><span class="o">);</span>
+</pre></div>
+            </div>
+              <p>Turning off caching might result in high write traffic for the underlying RocksDB store.
+                With default settings caching is enabled within Kafka Streams but RocksDB caching is disabled.
+                Thus, to avoid high write traffic it is recommended to enable RocksDB caching if Kafka Streams caching is turned off.</p>
+              <p>For example, the RocksDB Block Cache could be set to 100 MB and the Write Buffer size to 32 MB. For more information, see
+                the <a class="reference internal" href="config-streams.html#streams-developer-guide-rocksdb-config"><span class="std std-ref">RocksDB config</span></a>.</p>
+            </div></blockquote>
+        </li>
+        <li><p class="first">To enable caching but still have an upper bound on how long records will be cached, you can set the commit interval. In this example, it is set to 1000 milliseconds:</p>
+          <blockquote>
+            <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsConfiguration</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="c1">// Enable record cache of size 10 MB.</span>
+<span class="n">streamsConfiguration</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">CACHE_MAX_BYTES_BUFFERING_CONFIG</span><span class="o">,</span> <span class="mi">10</span> <span class="o">*</span> <span class="mi">1024</span> <span class="o">*</span> <span class="mi">1024L</span><span class="o">);</span>
+<span class="c1">// Set commit interval to 1 second.</span>
+<span class="n">streamsConfiguration</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">COMMIT_INTERVAL_MS_CONFIG</span><span class="o">,</span> <span class="mi">1000</span><span class="o">);</span>
+</pre></div>
+            </div>
+            </div></blockquote>
+        </li>
+      </ul>
+      <p>The effect of these two configurations is described in the figure below. The records are shown using 4 keys: blue, red, yellow, and green. Assume the cache has space for only 3 keys.</p>
+      <ul>
+        <li><p class="first">When the cache is disabled (a), all of the input records will be output.</p>
+        </li>
+        <li><p class="first">When the cache is enabled (b):</p>
+          <blockquote>
+            <div><ul class="simple">
+              <li>Most records are output at the end of commit intervals (e.g., at <code class="docutils literal"><span class="pre">t1</span></code> a single blue record is output, which is the final over-write of the blue key up to that time).</li>
+              <li>Some records are output because of cache pressure (i.e. before the end of a commit interval). For example, see the red record before <code class="docutils literal"><span class="pre">t2</span></code>. With smaller cache sizes we expect cache pressure to be the primary factor that dictates when records are output. With large cache sizes, the commit interval will be the primary factor.</li>
+              <li>The total number of records output has been reduced from 15 to 8.</li>
+            </ul>
+            </div></blockquote>
+        </li>
+      </ul>
+      <div class="figure align-center">
+        <a class="reference internal image-reference" href="../../../images/streams-cache-and-commit-interval.png"><img alt="../../../images/streams-cache-and-commit-interval.png" src="../../../images/streams-cache-and-commit-interval.png" style="width: 500pt; height: 400pt;" /></a>
+      </div>
+    </div>
+    <div class="section" id="record-caches-in-the-processor-api">
+      <span id="streams-developer-guide-memory-management-state-store-cache"></span><h2><a class="toc-backref" href="#id2">Record caches in the Processor API</a><a class="headerlink" href="#record-caches-in-the-processor-api" title="Permalink to this headline"></a></h2>
+      <p>You can specify the total memory (RAM) size of the record cache for an instance of the processing topology. It is used
+        for internal caching and compacting of output records before they are written from a stateful processor node to its
+        state stores.</p>
+      <p>The record cache in the Processor API does not cache or compact any output records that are being forwarded downstream.
+        This means that all downstream processor nodes can see all records, whereas the state stores see a reduced number of records.
+        This does not impact correctness of the system, but is a performance optimization for the state stores. For example, with the
+        Processor API you can store a record in a state store while forwarding a different value downstream.</p>
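+      <p>As a sketch of this pattern (the processor and the store name <code class="docutils literal"><span class="pre">raw-values</span></code> are hypothetical),
+        the following processor stores the raw value in its state store while forwarding only the value&#8217;s length downstream:</p>
+      <div class="highlight-java"><div class="highlight"><pre>import org.apache.kafka.streams.processor.AbstractProcessor;
+import org.apache.kafka.streams.processor.ProcessorContext;
+import org.apache.kafka.streams.state.KeyValueStore;
+
+public class LengthProcessor extends AbstractProcessor&lt;String, String&gt; {
+  private KeyValueStore&lt;String, String&gt; store;
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public void init(final ProcessorContext context) {
+    super.init(context);
+    // "raw-values" must have been attached to this processor via Topology#addStateStore()
+    store = (KeyValueStore&lt;String, String&gt;) context.getStateStore("raw-values");
+  }
+
+  @Override
+  public void process(final String key, final String value) {
+    store.put(key, value);                   // the state store sees the raw value
+    context().forward(key, value.length());  // downstream processors see a derived value
+  }
+}
+</pre></div>
+      </div>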
+      <p>Following from the example first shown in section <a class="reference internal" href="processor-api.html#streams-developer-guide-state-store"><span class="std std-ref">State Stores</span></a>, to enable caching, you can
+        add the <code class="docutils literal"><span class="pre">withCachingEnabled</span></code> call (note that caches are disabled by default and there is no explicit <code class="docutils literal"><span class="pre">withDisableCaching</span></code>
+        call).</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">StoreBuilder</span> <span class="n">countStoreBuilder</span> <span class="o">=</span>
+  <span class="n">Stores</span><span class="o">.</span><span class="na">keyValueStoreBuilder</span><span class="o">(</span>
+    <span class="n">Stores</span><span class="o">.</span><span class="na">persistentKeyValueStore</span><span class="o">(</span><span class="s">&quot;Counts&quot;</span><span class="o">),</span>
+    <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
+    <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
+  <span class="o">.</span><span class="na">withCachingEnabled</span><span class="o">()</span>
+</pre></div>
+      </div>
+    </div>
+    <div class="section" id="other-memory-usage">
+      <h2><a class="toc-backref" href="#id3">Other memory usage</a><a class="headerlink" href="#other-memory-usage" title="Permalink to this headline"></a></h2>
+      <p>There are other modules inside Apache Kafka that allocate memory during runtime. They include the following:</p>
+      <ul class="simple">
+        <li>Producer buffering, managed by the producer config <code class="docutils literal"><span class="pre">buffer.memory</span></code>.</li>
+        <li>Consumer buffering, currently not strictly managed, but can be indirectly controlled by fetch size, i.e.,
+          <code class="docutils literal"><span class="pre">fetch.max.bytes</span></code> and <code class="docutils literal"><span class="pre">fetch.max.wait.ms</span></code>.</li>
+        <li>Both producer and consumer also have separate TCP send / receive buffers that are not counted as the buffering memory.
+          These are controlled by the <code class="docutils literal"><span class="pre">send.buffer.bytes</span></code> / <code class="docutils literal"><span class="pre">receive.buffer.bytes</span></code> configs.</li>
+        <li>Deserialized objects buffering: after <code class="docutils literal"><span class="pre">consumer.poll()</span></code> returns records, they are deserialized to extract
+          their timestamps and are buffered internally by Kafka Streams. Currently this is only indirectly controlled by
+          <code class="docutils literal"><span class="pre">buffered.records.per.partition</span></code>.</li>
+        <li>RocksDB&#8217;s own memory usage, both on-heap and off-heap; critical configs (for RocksDB version 4.1.0) include
+          <code class="docutils literal"><span class="pre">block_cache_size</span></code>, <code class="docutils literal"><span class="pre">write_buffer_size</span></code> and <code class="docutils literal"><span class="pre">max_write_buffer_number</span></code>.  These can be specified through the
+          <code class="docutils literal"><span class="pre">rocksdb.config.setter</span></code> configuration.</li>
+      </ul>
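+      <p>As an illustration, the following is a minimal sketch of such a RocksDB config setter. The class name and the
+        specific values are made up for this example and are not tuning recommendations.</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span>// Sketch: bounding RocksDB memory usage via the rocksdb.config.setter config.
+import java.util.Map;
+import org.rocksdb.BlockBasedTableConfig;
+import org.rocksdb.Options;
+import org.apache.kafka.streams.state.RocksDBConfigSetter;
+
+public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
+  @Override
+  public void setConfig(final String storeName, final Options options, final Map&lt;String, Object&gt; configs) {
+    // cap the block cache (illustrative value)
+    BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
+    tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
+    options.setTableFormatConfig(tableConfig);
+    // cap the size and number of memtables (illustrative values)
+    options.setWriteBufferSize(8 * 1024 * 1024L);
+    options.setMaxWriteBufferNumber(2);
+  }
+}
+
+// Register the setter via StreamsConfig:
+// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedMemoryRocksDBConfig.class);
+</pre></div>
+      </div>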
+      <div class="admonition tip">
+        <p><b>Tip</b></p>
+        <p><strong>Iterators should be closed explicitly to release resources:</strong> Store iterators (e.g., <code class="docutils literal"><span class="pre">KeyValueIterator</span></code> and <code class="docutils literal"><span class="pre">WindowStoreIterator</span></code>) must be closed explicitly when you are done with them, to release resources such as open file handles and in-memory read buffers. Alternatively, because these iterator types implement <code class="docutils literal"><span class="pre">Closeable</span></code>, you can use them in a try-with-resources statement (available since JDK 7).</p>
+        <p class="last">Otherwise, the stream application&#8217;s memory usage keeps growing until it eventually hits an OOM error.</p>
+</div>
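+      <p>For example, a store iterator can be closed automatically via try-with-resources. The following is a minimal
+        sketch; it assumes a <code class="docutils literal"><span class="pre">KeyValueStore&lt;String,</span> <span class="pre">Long&gt;</span></code> named <code class="docutils literal"><span class="pre">kvStore</span></code> is in scope:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span>// Sketch: the iterator is closed automatically at the end of the block,
+// releasing open file handles and in-memory read buffers.
+try (KeyValueIterator&lt;String, Long&gt; iter = kvStore.all()) {
+    while (iter.hasNext()) {
+        KeyValue&lt;String, Long&gt; entry = iter.next();
+        // ... use entry.key and entry.value ...
+    }
+}
+</pre></div>
+      </div>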
+</div>
+
+
+               </div>
+              </div>
+  <div class="pagination">
+    <a href="/{{version}}/documentation/streams/developer-guide/interactive-queries" class="pagination__btn pagination__btn__prev">Previous</a>
+    <a href="/{{version}}/documentation/streams/developer-guide/running-app" class="pagination__btn pagination__btn__next">Next</a>
+  </div>
+</script>
+
+<!--#include virtual="../../../includes/_header.htm" -->
+<!--#include virtual="../../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+  <!--#include virtual="../../../includes/_nav.htm" -->
+  <div class="right">
+    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <ul class="breadcrumbs">
+      <li><a href="/documentation">Documentation</a></li>
+      <li><a href="/documentation/streams">Kafka Streams</a></li>
+      <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+    </ul>
+    <div class="p-content"></div>
+  </div>
+</div>
+<!--#include virtual="../../../includes/_footer.htm" -->
+<script>
+    $(function() {
+        // Show selected style on nav item
+        $('.b-nav__streams').addClass('selected');
+
+        //sticky secondary nav
+        var $navbar = $(".sub-nav-sticky"),
+            y_pos = $navbar.offset().top,
+            height = $navbar.height();
+
+        $(window).scroll(function() {
+            var scrollTop = $(window).scrollTop();
+
+            if (scrollTop > y_pos - height) {
+                $navbar.addClass("navbar-fixed")
+            } else if (scrollTop <= y_pos) {
+                $navbar.removeClass("navbar-fixed")
+            }
+        });
+
+        // Display docs subnav items
+        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+    });
+</script>
\ No newline at end of file
diff --git a/docs/streams/developer-guide/processor-api.html b/docs/streams/developer-guide/processor-api.html
new file mode 100644
index 0000000..5ed569a
--- /dev/null
+++ b/docs/streams/developer-guide/processor-api.html
@@ -0,0 +1,437 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+
+    <div class="section" id="processor-api">
+        <span id="streams-developer-guide-processor-api"></span><h1>Processor API<a class="headerlink" href="#processor-api" title="Permalink to this headline"></a></h1>
+        <p>The Processor API allows developers to define and connect custom processors and to interact with state stores. With the
+            Processor API, you can define arbitrary stream processors that process one received record at a time, and connect these
+            processors with their associated state stores to compose the processor topology that represents a customized processing
+            logic.</p>
+        <div class="contents local topic" id="table-of-contents">
+            <p class="topic-title first"><b>Table of Contents</b></p>
+            <ul class="simple">
+                <li><a class="reference internal" href="#overview" id="id1">Overview</a></li>
+                <li><a class="reference internal" href="#defining-a-stream-processor" id="id2">Defining a Stream Processor</a></li>
+                <li><a class="reference internal" href="#state-stores" id="id3">State Stores</a><ul>
+                    <li><a class="reference internal" href="#defining-and-creating-a-state-store" id="id4">Defining and creating a State Store</a></li>
+                    <li><a class="reference internal" href="#fault-tolerant-state-stores" id="id5">Fault-tolerant State Stores</a></li>
+                    <li><a class="reference internal" href="#enable-or-disable-fault-tolerance-of-state-stores-store-changelogs" id="id6">Enable or Disable Fault Tolerance of State Stores (Store Changelogs)</a></li>
+                    <li><a class="reference internal" href="#implementing-custom-state-stores" id="id7">Implementing Custom State Stores</a></li>
+                </ul>
+                </li>
+                <li><a class="reference internal" href="#connecting-processors-and-state-stores" id="id8">Connecting Processors and State Stores</a></li>
+            </ul>
+        </div>
+        <div class="section" id="overview">
+            <h2><a class="toc-backref" href="#id1">Overview</a><a class="headerlink" href="#overview" title="Permalink to this headline"></a></h2>
+            <p>The Processor API can be used to implement both <strong>stateless</strong> as well as <strong>stateful</strong> operations, where the latter is
+                achieved through the use of <a class="reference internal" href="#streams-developer-guide-state-store"><span class="std std-ref">state stores</span></a>.</p>
+            <div class="admonition tip">
+                <p><b>Tip</b></p>
+                <p class="last"><strong>Combining the DSL and the Processor API:</strong>
+                    You can combine the convenience of the DSL with the power and flexibility of the Processor API as described in the
+                    section <a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl-process"><span class="std std-ref">Applying processors and transformers (Processor API integration)</span></a>.</p>
+            </div>
+            <p>For a complete list of available API functionality, see the <a class="reference internal" href="../javadocs.html#streams-javadocs"><span class="std std-ref">Kafka Streams API docs</span></a>.</p>
+        </div>
+        <div class="section" id="defining-a-stream-processor">
+            <span id="streams-developer-guide-stream-processor"></span><h2><a class="toc-backref" href="#id2">Defining a Stream Processor</a><a class="headerlink" href="#defining-a-stream-processor" title="Permalink to this headline"></a></h2>
+            <p>A <a class="reference internal" href="../concepts.html#streams-concepts"><span class="std std-ref">stream processor</span></a> is a node in the processor topology that represents a single processing step.
+                With the Processor API, you can define arbitrary stream processors that process one received record at a time, and connect
+                these processors with their associated state stores to compose the processor topology.</p>
+            <p>You can define a customized stream processor by implementing the <code class="docutils literal"><span class="pre">Processor</span></code> interface, which provides the <code class="docutils literal"><span class="pre">process()</span></code> API method.
+                The <code class="docutils literal"><span class="pre">process()</span></code> method is called on each of the received records.</p>
+            <p>The <code class="docutils literal"><span class="pre">Processor</span></code> interface also has an <code class="docutils literal"><span class="pre">init()</span></code> method, which is called by the Kafka Streams library during task construction
+                phase. Processor instances should perform any required initialization in this method. The <code class="docutils literal"><span class="pre">init()</span></code> method receives a <code class="docutils literal"><span class="pre">ProcessorContext</span></code>
+                instance, which provides access to the metadata of the currently processed record, including its source Kafka topic and partition,
+                its corresponding message offset, and other such metadata. You can also use this context instance to schedule a punctuation
+                function (via <code class="docutils literal"><span class="pre">ProcessorContext#schedule()</span></code>), to forward a new record as a key-value pair to the downstream processors (via <code class="docutils literal"><span class="pre">ProcessorContext#forward()</span></code>),
+                and to commit the current processing progress (via <code class="docutils literal"><span class="pre">ProcessorContext#commit()</span></code>).</p>
+            <p>Specifically, <code class="docutils literal"><span class="pre">ProcessorContext#schedule()</span></code> accepts a user <code class="docutils literal"><span class="pre">Punctuator</span></code> callback interface, which triggers its <code class="docutils literal"><span class="pre">punctuate()</span></code>
+                API method periodically based on the <code class="docutils literal"><span class="pre">PunctuationType</span></code>. The <code class="docutils literal"><span class="pre">PunctuationType</span></code> determines what notion of time is used
+                for the punctuation scheduling: either <a class="reference internal" href="../concepts.html#streams-concepts-time"><span class="std std-ref">stream-time</span></a> or wall-clock-time (by default, stream-time
+                is configured to represent event-time via <code class="docutils literal"><span class="pre">TimestampExtractor</span></code>). When stream-time is used, <code class="docutils literal"><span class="pre">punctuate()</span></code> is triggered purely
+                by data because stream-time is determined (and advanced forward) by the timestamps derived from the input data. When there
+                is no new input data arriving, stream-time is not advanced and thus <code class="docutils literal"><span class="pre">punctuate()</span></code> is not called.</p>
+            <p>For example, if you schedule a <code class="docutils literal"><span class="pre">Punctuator</span></code> function every 10 seconds based on <code class="docutils literal"><span class="pre">PunctuationType.STREAM_TIME</span></code> and if you
+                process a stream of 60 records with consecutive timestamps from 1 (first record) to 60 seconds (last record),
+                then <code class="docutils literal"><span class="pre">punctuate()</span></code> would be called 6 times. This happens regardless of the time required to actually process those records. <code class="docutils literal"><span class="pre">punctuate()</span></code>
+                would be called 6 times regardless of whether processing these 60 records takes a second, a minute, or an hour.</p>
+            <p>When wall-clock-time (i.e. <code class="docutils literal"><span class="pre">PunctuationType.WALL_CLOCK_TIME</span></code>) is used, <code class="docutils literal"><span class="pre">punctuate()</span></code> is triggered purely by the wall-clock time.
+                Reusing the example above, if the <code class="docutils literal"><span class="pre">Punctuator</span></code> function is scheduled based on <code class="docutils literal"><span class="pre">PunctuationType.WALL_CLOCK_TIME</span></code>, and if these
+                60 records were processed within 20 seconds, <code class="docutils literal"><span class="pre">punctuate()</span></code> is called 2 times (one time every 10 seconds). If these 60 records
+                were processed within 5 seconds, then no <code class="docutils literal"><span class="pre">punctuate()</span></code> is called at all. Note that you can schedule multiple <code class="docutils literal"><span class="pre">Punctuator</span></code>
+                callbacks with different <code class="docutils literal"><span class="pre">PunctuationType</span></code> types within the same processor by calling <code class="docutils literal"><span class="pre">ProcessorContext#schedule()</span></code> multiple
+                times inside <code class="docutils literal"><span class="pre">init()</span></code> method.</p>
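+            <p>As a minimal sketch (assuming <code class="docutils literal"><span class="pre">context</span></code> is the <code class="docutils literal"><span class="pre">ProcessorContext</span></code> passed to <code class="docutils literal"><span class="pre">init()</span></code>), registering two punctuators with different time semantics could look as follows:</p>
+            <div class="highlight-java"><div class="highlight"><pre><span></span>// Sketch: two punctuators in the same processor, one per PunctuationType.
+context.schedule(1000, PunctuationType.STREAM_TIME, timestamp -&gt; {
+    // triggered as stream-time advances past each 1000 ms boundary
+});
+context.schedule(1000, PunctuationType.WALL_CLOCK_TIME, timestamp -&gt; {
+    // triggered roughly every 1000 ms of wall-clock time
+});
+</pre></div>
+            </div>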
+            <div class="admonition attention">
+                <p class="first admonition-title">Attention</p>
+                <p class="last">Stream-time is only advanced if all input partitions over all input topics have new data (with newer timestamps) available.
+                    If at least one partition does not have any new data available, stream-time will not be advanced and thus <code class="docutils literal"><span class="pre">punctuate()</span></code> will not be triggered if <code class="docutils literal"><span class="pre">PunctuationType.STREAM_TIME</span></code> was specified.
+                    This behavior is independent of the configured timestamp extractor, i.e., using <code class="docutils literal"><span class="pre">WallclockTimestampExtractor</span></code> does not enable wall-clock triggering of <code class="docutils literal"><span class="pre">punctuate()</span></code>.</p>
+            </div>
+            <p>The following example <code class="docutils literal"><span class="pre">Processor</span></code> implements a simple word-count algorithm and performs the following actions:</p>
+            <ul class="simple">
+                <li>In the <code class="docutils literal"><span class="pre">init()</span></code> method, schedule the punctuation every 1000 time units (the time unit is normally milliseconds, which in this example would translate to punctuation every 1 second) and retrieve the local state store by its name &#8220;Counts&#8221;.</li>
+                <li>In the <code class="docutils literal"><span class="pre">process()</span></code> method, upon each received record, split the value string into words, and update their counts into the state store (we will talk about this later in this section).</li>
+                <li>In the <code class="docutils literal"><span class="pre">punctuate()</span></code> method, iterate the local state store and send the aggregated counts to the downstream processor (we will talk about downstream processors later in this section), and commit the current stream state.</li>
+            </ul>
+            <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kd">public</span> <span class="kd">class</span> <span class="nc">WordCountProcessor</span> <span class="kd">implements</span> <span class="n">Processor</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="o">{</span>
+
+  <span class="kd">private</span> <span class="n">ProcessorContext</span> <span class="n">context</span><span class="o">;</span>
+  <span class="kd">private</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">kvStore</span><span class="o">;</span>
+
+  <span class="nd">@Override</span>
+  <span class="nd">@SuppressWarnings</span><span class="o">(</span><span class="s">&quot;unchecked&quot;</span><span class="o">)</span>
+  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">init</span><span class="o">(</span><span class="n">ProcessorContext</span> <span class="n">context</span><span class="o">)</span> <span class="o">{</span>
+      <span class="c1">// keep the processor context locally because we need it in punctuate() and commit()</span>
+      <span class="k">this</span><span class="o">.</span><span class="na">context</span> <span class="o">=</span> <span class="n">context</span><span class="o">;</span>
+
+      <span class="c1">// retrieve the key-value store named &quot;Counts&quot;</span>
+      <span class="n">kvStore</span> <span class="o">=</span> <span class="o">(</span><span class="n">KeyValueStore</span><span class="o">)</span> <span class="n">context</span><span class="o">.</span><span class="na">getStateStore</span><span class="o">(</span><span class="s">&quot;Counts&quot;</span><span class="o">);</span>
+
+      <span class="c1">// schedule a punctuate() method every 1000 milliseconds based on stream-time</span>
+      <span class="k">this</span><span class="o">.</span><span class="na">context</span><span class="o">.</span><span class="na">schedule</span><span class="o">(</span><span class="mi">1000</span><span class="o">,</span> <span class="n">PunctuationType</span><span class="o">.</span><span class="na">STREAM_TIME</span><span class="o">,</span> <span class="o">(</span><span class="n">timestamp</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">{</span>
+          <span class="n">KeyValueIterator</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">iter</span> <span class="o">=</span> <span class="k">this</span><span class="o">.</span><span class="na">kvStore</span><span class="o">.</span><span class="na">all</span><span class="o">();</span>
+          <span class="k">while</span> <span class="o">(</span><span class="n">iter</span><span class="o">.</span><span class="na">hasNext</span><span class="o">())</span> <span class="o">{</span>
+              <span class="n">KeyValue</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">entry</span> <span class="o">=</span> <span class="n">iter</span><span class="o">.</span><span class="na">next</span><span class="o">();</span>
+              <span class="n">context</span><span class="o">.</span><span class="na">forward</span><span class="o">(</span><span class="n">entry</span><span class="o">.</span><span class="na">key</span><span class="o">,</span> <span class="n">entry</span><span class="o">.</span><span class="na">value</span><span class="o">.</span><span class="na">toString</span><span class="o">());</span>
+          <span class="o">}</span>
+          <span class="n">iter</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
+
+          <span class="c1">// commit the current processing progress</span>
+          <span class="n">context</span><span class="o">.</span><span class="na">commit</span><span class="o">();</span>
+      <span class="o">});</span>
+  <span class="o">}</span>
+
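+  @Override
+  public void process(String key, String value) {
+      // upon each received record, split the value string into words
+      String[] words = value.toLowerCase().split(&quot; &quot;);
+
+      // and update the running count for each word in the state store
+      for (String word : words) {
+          Long oldValue = this.kvStore.get(word);
+          if (oldValue == null) {
+              this.kvStore.put(word, 1L);
+          } else {
+              this.kvStore.put(word, oldValue + 1L);
+          }
+      }
+  }
+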
+  <span class="nd">@Override</span>
+  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">punctuate</span><span class="o">(</span><span class="kt">long</span> <span class="n">timestamp</span><span class="o">)</span> <span class="o">{</span>
+      <span class="c1">// this method is deprecated and should not be used anymore</span>
+  <span class="o">}</span>
+
+  <span class="nd">@Override</span>
+  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">close</span><span class="o">()</span> <span class="o">{</span>
+      <span class="c1">// close the key-value store</span>
+      <span class="n">kvStore</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
+  <span class="o">}</span>
+
+<span class="o">}</span>
+</pre></div>
+            </div>
+            <div class="admonition note">
+                <p class="first admonition-title">Note</p>
+                <p class="last"><strong>Stateful processing with state stores:</strong>
+                    The <code class="docutils literal"><span class="pre">WordCountProcessor</span></code> defined above can access the currently received record in its <code class="docutils literal"><span class="pre">process()</span></code> method, and it can
+                    leverage <a class="reference internal" href="#streams-developer-guide-state-store"><span class="std std-ref">state stores</span></a> to maintain processing states to, for example, remember recently
+                    arrived records for stateful processing needs like aggregations and joins. For more information, see the <a class="reference internal" href="#streams-developer-guide-state-store"><span class="std std-ref">state stores</span></a> documentation.</p>
+            </div>
+        </div>
+        <div class="section" id="state-stores">
+            <span id="streams-developer-guide-state-store"></span><h2><a class="toc-backref" href="#id3">State Stores</a><a class="headerlink" href="#state-stores" title="Permalink to this headline"></a></h2>
+            <p>To implement a <strong>stateful</strong> <code class="docutils literal"><span class="pre">Processor</span></code> or <code class="docutils literal"><span class="pre">Transformer</span></code>, you must provide one or more state stores to the processor
+                or transformer (<em>stateless</em> processors or transformers do not need state stores).  State stores can be used to remember
+                recently received input records, to track rolling aggregates, to de-duplicate input records, and more.
+                Another feature of state stores is that they can be
+                <a class="reference internal" href="interactive-queries.html#streams-developer-guide-interactive-queries"><span class="std std-ref">interactively queried</span></a> from other applications, such as a
+                NodeJS-based dashboard or a microservice implemented in Scala or Go.</p>
+            <p>The
+                <a class="reference internal" href="#streams-developer-guide-state-store-defining"><span class="std std-ref">available state store types</span></a> in Kafka Streams have
+                <a class="reference internal" href="#streams-developer-guide-state-store-fault-tolerance"><span class="std std-ref">fault tolerance</span></a> enabled by default.</p>
+            <div class="section" id="defining-and-creating-a-state-store">
+                <span id="streams-developer-guide-state-store-defining"></span><h3><a class="toc-backref" href="#id4">Defining and creating a State Store</a><a class="headerlink" href="#defining-and-creating-a-state-store" title="Permalink to this headline"></a></h3>
+                <p>You can either use one of the available store types or
+                    <a class="reference internal" href="#streams-developer-guide-state-store-custom"><span class="std std-ref">implement your own custom store type</span></a>.
+                    It&#8217;s common practice to leverage an existing store type via the <code class="docutils literal"><span class="pre">Stores</span></code> factory.</p>
+                <p>Note that, when using Kafka Streams, you normally don&#8217;t create or instantiate state stores directly in your code.
+                    Rather, you define state stores indirectly by creating a so-called <code class="docutils literal"><span class="pre">StoreBuilder</span></code>.  This builder is used by
+                    Kafka Streams as a factory to instantiate the actual state stores locally in application instances when and where
+                    needed.</p>
+                <p>The following store types are available out of the box.</p>
+                <table border="1" class="non-scrolling-table width-100-percent docutils">
+                    <colgroup>
+                        <col width="19%" />
+                        <col width="11%" />
+                        <col width="18%" />
+                        <col width="51%" />
+                    </colgroup>
+                    <thead valign="bottom">
+                    <tr class="row-odd"><th class="head">Store Type</th>
+                        <th class="head">Storage Engine</th>
+                        <th class="head">Fault-tolerant?</th>
+                        <th class="head">Description</th>
+                    </tr>
+                    </thead>
+                    <tbody valign="top">
+                    <tr class="row-even"><td>Persistent
+                        <code class="docutils literal"><span class="pre">KeyValueStore&lt;K,</span> <span class="pre">V&gt;</span></code></td>
+                        <td>RocksDB</td>
+                        <td>Yes (enabled by default)</td>
+                        <td><ul class="first simple">
+                            <li><strong>The recommended store type for most use cases.</strong></li>
+                            <li>Stores its data on local disk.</li>
+                            <li>Storage capacity:
+                                managed local state can be larger than the memory (heap space) of an
+                                application instance, but must fit into the available local disk
+                                space.</li>
+                            <li>RocksDB settings can be fine-tuned, see
+                                <a class="reference internal" href="config-streams.html#streams-developer-guide-rocksdb-config"><span class="std std-ref">RocksDB configuration</span></a>.</li>
+                            <li>Available <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/Stores.PersistentKeyValueFactory.html">store variants</a>:
+                                time window key-value store, session window key-value store.</li>
+                        </ul>
+                            <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Creating a persistent key-value store:</span>
+<span class="c1">// here, we create a `KeyValueStore&lt;String, Long&gt;` named &quot;persistent-counts&quot;.</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.processor.StateStoreSupplier</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.state.Stores</span><span class="o">;</span>
+
+<span class="c1">// Note: The `Stores` factory returns a supplier for the state store,</span>
+<span class="c1">// because that&#39;s what you typically need to pass as API parameter.</span>
+<span class="n">StateStoreSupplier</span> <span class="n">countStoreSupplier</span> <span class="o">=</span>
+  <span class="n">Stores</span><span class="o">.</span><span class="na">create</span><span class="o">(</span><span class="s">&quot;persistent-counts&quot;</span><span class="o">)</span>
+    <span class="o">.</span><span class="na">withKeys</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span>
+    <span class="o">.</span><span class="na">withValues</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
+    <span class="o">.</span><span class="na">persistent</span><span class="o">()</span>
+    <span class="o">.</span><span class="na">build</span><span class="o">();</span>
+</pre></div>
+                            </div>
+                            <p class="last">See
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/Stores.PersistentKeyValueFactory.html">PersistentKeyValueFactory</a> for
+                                detailed factory options.</p>
+                        </td>
+                    </tr>
+                    <tr class="row-odd"><td>In-memory
+                        <code class="docutils literal"><span class="pre">KeyValueStore&lt;K,</span> <span class="pre">V&gt;</span></code></td>
+                        <td>-</td>
+                        <td>Yes (enabled by default)</td>
+                        <td><ul class="first simple">
+                            <li>Stores its data in memory.</li>
+                            <li>Storage capacity:
+                                managed local state must fit into memory (heap space) of an
+                                application instance.</li>
+                            <li>Useful when application instances run in an environment where local
+                                disk space is either not available or local disk space is wiped
+                                in-between app instance restarts.</li>
+                        </ul>
+                            <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Creating an in-memory key-value store:</span>
+<span class="c1">// here, we create a `KeyValueStore&lt;String, Long&gt;` named &quot;inmemory-counts&quot;.</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.processor.StateStoreSupplier</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.state.Stores</span><span class="o">;</span>
+
+<span class="c1">// Note: The `Stores` factory returns a supplier for the state store,</span>
+<span class="c1">// because that&#39;s what you typically need to pass as API parameter.</span>
+<span class="n">StateStoreSupplier</span> <span class="n">countStoreSupplier</span> <span class="o">=</span>
+  <span class="n">Stores</span><span class="o">.</span><span class="na">create</span><span class="o">(</span><span class="s">&quot;inmemory-counts&quot;</span><span class="o">)</span>
+    <span class="o">.</span><span class="na">withKeys</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">())</span>
+    <span class="o">.</span><span class="na">withValues</span><span class="o">(</span><span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
+    <span class="o">.</span><span class="na">inMemory</span><span class="o">()</span>
+    <span class="o">.</span><span class="na">build</span><span class="o">();</span>
+</pre></div>
+                            </div>
+                            <p class="last">See
+                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/Stores.InMemoryKeyValueFactory.html">InMemoryKeyValueFactory</a> for
+                                detailed factory options.</p>
+                        </td>
+                    </tr>
+                    </tbody>
+                </table>
+            </div>
+            <div class="section" id="fault-tolerant-state-stores">
+                <span id="streams-developer-guide-state-store-fault-tolerance"></span><h3><a class="toc-backref" href="#id5">Fault-tolerant State Stores</a><a class="headerlink" href="#fault-tolerant-state-stores" title="Permalink to this headline"></a></h3>
+                <p>To make state stores fault-tolerant and to allow for state store migration without data loss, a state store can be
+                    continuously backed up to a Kafka topic behind the scenes; this makes it possible, for example, to migrate a stateful stream task from one
+                    machine to another when <a class="reference internal" href="running-app.html#streams-developer-guide-execution-scaling"><span class="std std-ref">elastically adding or removing capacity from your application</span></a>.
+                    This topic is sometimes referred to as the state store&#8217;s associated <em>changelog topic</em>, or its <em>changelog</em>.  For example, if
+                    you experience machine failure, the state store and the application&#8217;s state can be fully restored from its changelog. You can
+                    <a class="reference internal" href="#streams-developer-guide-state-store-enable-disable-fault-tolerance"><span class="std std-ref">enable or disable this backup feature</span></a> for a
+                    state store.</p>
+                <p>By default, persistent key-value stores are fault-tolerant.  They are backed by a
+                    <a class="reference external" href="https://kafka.apache.org/documentation.html#compaction">compacted</a> changelog topic.  The purpose of compacting this
+                    topic is to prevent the topic from growing indefinitely, to reduce the storage consumed in the associated Kafka cluster,
+                    and to minimize recovery time if a state store needs to be restored from its changelog topic.</p>
+                <p>Similarly, persistent window stores are fault-tolerant.  They are backed by a topic that uses both compaction and
+                    deletion. Because of the structure of the message keys that are being sent to the changelog topics, this combination of
+                    deletion and compaction is required for the changelog topics of window stores. For window stores, the message keys are
+                    composite keys that include the &#8220;normal&#8221; key and window timestamps.  For these types of composite keys it would not
+                    be sufficient to only enable compaction to prevent a changelog topic from growing out of bounds.  With deletion
+                    enabled, old windows that have expired will be cleaned up by Kafka&#8217;s log cleaner as the log segments expire.  The
+                    default retention setting is <code class="docutils literal"><span class="pre">Windows#maintainMs()</span></code> + 1 day.  You can override this setting by specifying
+                    <code class="docutils literal"><span class="pre">StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG</span></code> in the <code class="docutils literal"><span class="pre">StreamsConfig</span></code>.</p>
+                <p>When you open an <code class="docutils literal"><span class="pre">Iterator</span></code> from a state store you must call <code class="docutils literal"><span class="pre">close()</span></code> on the iterator when you are done working with
+                    it to reclaim resources; or you can use the iterator from within a try-with-resources statement. If you do not close an iterator,
+                    you may encounter an OOM error.</p>
+            </div>
+            <div class="section" id="enable-or-disable-fault-tolerance-of-state-stores-store-changelogs">
+                <span id="streams-developer-guide-state-store-enable-disable-fault-tolerance"></span><h3><a class="toc-backref" href="#id6">Enable or Disable Fault Tolerance of State Stores (Store Changelogs)</a><a class="headerlink" href="#enable-or-disable-fault-tolerance-of-state-stores-store-changelogs" title="Permalink to this headline"></a></h3>
+                <p>You can enable or disable fault tolerance for a state store by enabling or disabling the change logging
+                    of the store through <code class="docutils literal"><span class="pre">enableLogging()</span></code> and <code class="docutils literal"><span class="pre">disableLogging()</span></code>.
+                    You can also fine-tune the associated topic’s configuration if needed.</p>
+                <p>Example for disabling fault-tolerance:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.streams.state.StoreBuilder</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.state.Stores</span><span class="o">;</span>
+
+<span class="n">StoreBuilder</span><span class="o">&lt;</span><span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;&gt;</span> <span class="n">countStoreSupplier</span> <span class="o">=</span> <span class="n">Stores</span><span class="o">.</span><span class="na">keyValueStoreBuilder</span><span class="o">(</span>
+  <span class="n">Stores</span><span class="o">.</span><span class="na">persistentKeyValueStore</span><span class="o">(</span><span class="s">&quot;Counts&quot;</span><span class="o">),</span>
+    <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
+    <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
+  <span class="o">.</span><span class="na">withLoggingDisabled</span><span class="o">();</span> <span class="c1">// disable backing up the store to a changelog topic</span>
+</pre></div>
+                </div>
+                <div class="admonition attention">
+                    <p class="first admonition-title">Attention</p>
+                    <p class="last">If the changelog is disabled then the attached state store is no longer fault tolerant and it can&#8217;t have any <a class="reference internal" href="config-streams.html#streams-developer-guide-standby-replicas"><span class="std std-ref">standby replicas</span></a>.</p>
+                </div>
+                <p>Here is an example of enabling fault tolerance, with additional changelog-topic configuration.
+                    You can add any log config from <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/core/src/main/scala/kafka/log/LogConfig.scala#L61">kafka.log.LogConfig</a>.
+                    Unrecognized configs will be ignored.</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.streams.state.StoreBuilder</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.state.Stores</span><span class="o">;</span>
+
+<span class="n">Map</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">changelogConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HashMap</span><span class="o">();</span>
+<span class="c1">// override min.insync.replicas</span>
+<span class="n">changelogConfig</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;min.insyc.replicas&quot;</span><span class="o">,</span> <span class="s">&quot;1&quot;</span><span class="o">)</span>
+
+<span class="n">StoreBuilder</span><span class="o">&lt;</span><span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;&gt;</span> <span class="n">countStoreSupplier</span> <span class="o">=</span> <span class="n">Stores</span><span class="o">.</span><span class="na">keyValueStoreBuilder</span><span class="o">(</span>
+  <span class="n">Stores</span><span class="o">.</span><span class="na">persistentKeyValueStore</span><span class="o">(</span><span class="s">&quot;Counts&quot;</span><span class="o">),</span>
+    <span class="n">Serdes</span><span class="o">.</span><span class="na">String</span><span class="o">(),</span>
+    <span class="n">Serdes</span><span class="o">.</span><span class="na">Long</span><span class="o">())</span>
+  <span class="o">.</span><span class="na">withLoggingEnabled</span><span class="o">(</span><span class="n">changlogConfig</span><span class="o">);</span> <span class="c1">// enable changelogging, with custom changelog settings</span>
+</pre></div>
+                </div>
+            </div>
+            <div class="section" id="implementing-custom-state-stores">
+                <span id="streams-developer-guide-state-store-custom"></span><h3><a class="toc-backref" href="#id7">Implementing Custom State Stores</a><a class="headerlink" href="#implementing-custom-state-stores" title="Permalink to this headline"></a></h3>
+                <p>You can use the <a class="reference internal" href="#streams-developer-guide-state-store-defining"><span class="std std-ref">built-in state store types</span></a> or  implement your own.
+                    The primary interface to implement for the store is
+                    <code class="docutils literal"><span class="pre">org.apache.kafka.streams.processor.StateStore</span></code>.  Kafka Streams also has a few extended interfaces such
+                    as <code class="docutils literal"><span class="pre">KeyValueStore</span></code>.</p>
+                <p>You also need to provide a &#8220;factory&#8221; for the store by implementing the
+                    <code class="docutils literal"><span class="pre">org.apache.kafka.streams.processor.StateStoreSupplier</span></code> interface, which Kafka Streams uses to create instances of
+                    your store.</p>
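+                <p>As a rough sketch (a skeleton only, not a working store), a custom store implements the
+                    <code class="docutils literal"><span class="pre">StateStore</span></code> lifecycle methods; the class name is made up for this example:</p>
+                <div class="highlight-java"><div class="highlight"><pre><span></span>// Sketch: the minimal skeleton of a custom state store.
+import org.apache.kafka.streams.processor.ProcessorContext;
+import org.apache.kafka.streams.processor.StateStore;
+
+public class MyCustomStore implements StateStore {
+  private final String name;
+  private volatile boolean open = false;
+
+  public MyCustomStore(final String name) { this.name = name; }
+
+  @Override public String name() { return name; }
+  @Override public void init(final ProcessorContext context, final StateStore root) { open = true; }
+  @Override public void flush() { /* write any buffered data to the backing storage */ }
+  @Override public void close() { open = false; }
+  @Override public boolean persistent() { return false; }
+  @Override public boolean isOpen() { return open; }
+}
+</pre></div>
+                </div>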
+            </div>
+        </div>
+        <div class="section" id="connecting-processors-and-state-stores">
+            <h2><a class="toc-backref" href="#id8">Connecting Processors and State Stores</a><a class="headerlink" href="#connecting-processors-and-state-stores" title="Permalink to this headline"></a></h2>
+            <p>Now that a <a class="reference internal" href="#streams-developer-guide-stream-processor"><span class="std std-ref">processor</span></a> (WordCountProcessor) and the
+                state stores have been defined, you can construct the processor topology by connecting these processors and state stores together
+                using a <code class="docutils literal"><span class="pre">Topology</span></code> instance.  In addition, you can add source processors with the specified Kafka topics
+                to generate input data streams into the topology, and sink processors with the specified Kafka topics to generate
+                output data streams out of the topology.</p>
+            <p>Here is an example implementation:</p>
+            <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Topology</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Topology</span><span class="o">();</span>
+
+<span class="c1">// add the source processor node that takes Kafka topic &quot;source-topic&quot; as input</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">addSource</span><span class="o">(</span><span class="s">&quot;Source&quot;</span><span class="o">,</span> <span class="s">&quot;source-topic&quot;</span><span class="o">)</span>
+
+    <span class="c1">// add the WordCountProcessor node which takes the source processor as its upstream processor</span>
+    <span class="o">.</span><span class="na">addProcessor</span><span class="o">(</span><span class="s">&quot;Process&quot;</span><span class="o">,</span> <span class="o">()</span> <span class="o">-&gt;</span> <span class="k">new</span> <span class="n">WordCountProcessor</span><span class="o">(),</span> <span class="s">&quot;Source&quot;</span><span class="o">)</span>
+
+    <span class="c1">// add the count store associated with the WordCountProcessor processor</span>
+    <span class="o">.</span><span class="na">addStateStore</span><span class="o">(</span><span class="n">countStoreBuilder</span><span class="o">,</span> <span class="s">&quot;Process&quot;</span><span class="o">)</span>
+
+    <span class="c1">// add the sink processor node that takes Kafka topic &quot;sink-topic&quot; as output</span>
+    <span class="c1">// and the WordCountProcessor node as its upstream processor</span>
+    <span class="o">.</span><span class="na">addSink</span><span class="o">(</span><span class="s">&quot;Sink&quot;</span><span class="o">,</span> <span class="s">&quot;sink-topic&quot;</span><span class="o">,</span> <span class="s">&quot;Process&quot;</span><span class="o">);</span>
+</pre></div>
+            </div>
+            <p>Here is a quick explanation of this example:</p>
+            <ul class="simple">
+                <li>A source processor node named <code class="docutils literal"><span class="pre">&quot;Source&quot;</span></code> is added to the topology using the <code class="docutils literal"><span class="pre">addSource</span></code> method, with one Kafka topic
+                    <code class="docutils literal"><span class="pre">&quot;source-topic&quot;</span></code> fed to it.</li>
+                <li>A processor node named <code class="docutils literal"><span class="pre">&quot;Process&quot;</span></code> with the pre-defined <code class="docutils literal"><span class="pre">WordCountProcessor</span></code> logic is then added as the downstream
+                    processor of the <code class="docutils literal"><span class="pre">&quot;Source&quot;</span></code> node using the <code class="docutils literal"><span class="pre">addProcessor</span></code> method.</li>
+                <li>A predefined persistent key-value state store is created and associated with the <code class="docutils literal"><span class="pre">&quot;Process&quot;</span></code> node, using
+                    <code class="docutils literal"><span class="pre">countStoreBuilder</span></code>.</li>
+                <li>A sink processor node is then added to complete the topology using the <code class="docutils literal"><span class="pre">addSink</span></code> method, taking the <code class="docutils literal"><span class="pre">&quot;Process&quot;</span></code> node
+                    as its upstream processor and writing to a separate <code class="docutils literal"><span class="pre">&quot;sink-topic&quot;</span></code> Kafka topic.</li>
+            </ul>
+            <p>In this topology, the <code class="docutils literal"><span class="pre">&quot;Process&quot;</span></code> stream processor node is considered a downstream processor of the <code class="docutils literal"><span class="pre">&quot;Source&quot;</span></code> node, and an
+                upstream processor of the <code class="docutils literal"><span class="pre">&quot;Sink&quot;</span></code> node.  As a result, whenever the <code class="docutils literal"><span class="pre">&quot;Source&quot;</span></code> node forwards a newly fetched record from
+                Kafka to its downstream <code class="docutils literal"><span class="pre">&quot;Process&quot;</span></code> node, the <code class="docutils literal"><span class="pre">WordCountProcessor#process()</span></code> method is triggered to process the record and
+                update the associated state store. Whenever <code class="docutils literal"><span class="pre">context#forward()</span></code> is called in the
+                <code class="docutils literal"><span class="pre">WordCountProcessor#punctuate()</span></code> method, the aggregate key-value pair will be sent via the <code class="docutils literal"><span class="pre">&quot;Sink&quot;</span></code> processor node to
+                the Kafka topic <code class="docutils literal"><span class="pre">&quot;sink-topic&quot;</span></code>.  Note that in the <code class="docutils literal"><span class="pre">WordCountProcessor</span></code> implementation, you must refer to the
+                same store name <code class="docutils literal"><span class="pre">&quot;Counts&quot;</span></code> when accessing the key-value store, otherwise an exception will be thrown at runtime,
+                indicating that the state store cannot be found. If the state store is not associated with the processor
+                in the <code class="docutils literal"><span class="pre">Topology</span></code> code, accessing it in the processor&#8217;s <code class="docutils literal"><span class="pre">init()</span></code> method will also throw an exception at
+                runtime, indicating the state store is not accessible from this processor.</p>
+            <p>Now that you have fully defined your processor topology in your application, you can proceed to
+                <a class="reference internal" href="running-app.html#streams-developer-guide-execution"><span class="std std-ref">running the Kafka Streams application</span></a>.</p>
+</div>
+</div>
+
+
+               </div>
+              </div>
+              <div class="pagination">
+                <a href="/{{version}}/documentation/streams/developer-guide/dsl-api" class="pagination__btn pagination__btn__prev">Previous</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/datatypes" class="pagination__btn pagination__btn__next">Next</a>
+              </div>
+                </script>
+
+                <!--#include virtual="../../../includes/_header.htm" -->
+                <!--#include virtual="../../../includes/_top.htm" -->
+                    <div class="content documentation documentation--current">
+                    <!--#include virtual="../../../includes/_nav.htm" -->
+                    <div class="right">
+                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <ul class="breadcrumbs">
+                    <li><a href="/documentation">Documentation</a></li>
+                    <li><a href="/documentation/streams">Kafka Streams</a></li>
+                    <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+                </ul>
+                <div class="p-content"></div>
+                    </div>
+                    </div>
+                    <!--#include virtual="../../../includes/_footer.htm" -->
+                    <script>
+                    $(function() {
+                        // Show selected style on nav item
+                        $('.b-nav__streams').addClass('selected');
+
+                        //sticky secondary nav
+                        var $navbar = $(".sub-nav-sticky"),
+                            y_pos = $navbar.offset().top,
+                            height = $navbar.height();
+
+                        $(window).scroll(function() {
+                            var scrollTop = $(window).scrollTop();
+
+                            if (scrollTop > y_pos - height) {
+                                $navbar.addClass("navbar-fixed")
+                            } else if (scrollTop <= y_pos) {
+                                $navbar.removeClass("navbar-fixed")
+                            }
+                        });
+
+                        // Display docs subnav items
+                        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+                    });
+              </script>
diff --git a/docs/streams/developer-guide/running-app.html b/docs/streams/developer-guide/running-app.html
new file mode 100644
index 0000000..253866b
--- /dev/null
+++ b/docs/streams/developer-guide/running-app.html
@@ -0,0 +1,197 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+                
+  <div class="section" id="running-streams-applications">
+<span id="streams-developer-guide-execution"></span><h1>Running Streams Applications<a class="headerlink" href="#running-streams-applications" title="Permalink to this headline"></a></h1>
+<p>You can run Java applications that use the Kafka Streams library without any additional configuration or requirements.</p>
+<div class="contents local topic" id="table-of-contents">
+<p class="topic-title first"><b>Table of Contents</b></p>
+<ul class="simple">
+<li><a class="reference internal" href="#starting-a-kafka-streams-application" id="id3">Starting a Kafka Streams application</a></li>
+<li><a class="reference internal" href="#elastic-scaling-of-your-application" id="id4">Elastic scaling of your application</a><ul>
+<li><a class="reference internal" href="#adding-capacity-to-your-application" id="id5">Adding capacity to your application</a></li>
+<li><a class="reference internal" href="#removing-capacity-from-your-application" id="id6">Removing capacity from your application</a></li>
+<li><a class="reference internal" href="#state-restoration-during-workload-rebalance" id="id7">State restoration during workload rebalance</a></li>
+<li><a class="reference internal" href="#determining-how-many-application-instances-to-run" id="id8">Determining how many application instances to run</a></li>
+</ul>
+</li>
+</ul>
+</div>
+      <div class="section" id="running-streams-applications">
+          <span id="streams-developer-guide-execution"></span><h1>Running Streams Applications<a class="headerlink" href="#running-streams-applications" title="Permalink to this headline"></a></h1>
+          <p>You can run Java applications that use the Kafka Streams library without any additional configuration or requirements. Kafka Streams
+              also provides the ability to receive notification of the various states of the application. The ability to monitor the runtime
+              status is discussed in <a class="reference internal" href="../monitoring.html#streams-monitoring"><span class="std std-ref">the monitoring guide</span></a>.</p>
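+          <p>For example, a minimal sketch of registering such a notification callback (assuming <code class="docutils literal"><span class="pre">streams</span></code> is your <code class="docutils literal"><span class="pre">KafkaStreams</span></code> instance):</p>
+          <div class="highlight-java"><div class="highlight"><pre><span></span>// Sketch: log every runtime state transition of the application.
+streams.setStateListener((newState, oldState) -&gt;
+    System.out.println(&quot;State changed from &quot; + oldState + &quot; to &quot; + newState));
+streams.start();
+</pre></div>
+          </div>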
+          <div class="contents local topic" id="table-of-contents">
+              <p class="topic-title first"><b>Table of Contents</b></p>
+              <ul class="simple">
+                  <li><a class="reference internal" href="#starting-a-kafka-streams-application" id="id3">Starting a Kafka Streams application</a></li>
+                  <li><a class="reference internal" href="#elastic-scaling-of-your-application" id="id4">Elastic scaling of your application</a><ul>
+                      <li><a class="reference internal" href="#adding-capacity-to-your-application" id="id5">Adding capacity to your application</a></li>
+                      <li><a class="reference internal" href="#removing-capacity-from-your-application" id="id6">Removing capacity from your application</a></li>
+                      <li><a class="reference internal" href="#state-restoration-during-workload-rebalance" id="id7">State restoration during workload rebalance</a></li>
+                      <li><a class="reference internal" href="#determining-how-many-application-instances-to-run" id="id8">Determining how many application instances to run</a></li>
+                  </ul>
+                  </li>
+              </ul>
+          </div>
+          <div class="section" id="starting-a-kafka-streams-application">
+              <span id="streams-developer-guide-execution-starting"></span><h2><a class="toc-backref" href="#id3">Starting a Kafka Streams application</a><a class="headerlink" href="#starting-a-kafka-streams-application" title="Permalink to this headline"></a></h2>
+              <p>You can package your Java application as a fat JAR file and then start the application like this:</p>
+              <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Start the application in class `com.example.MyStreamsApp`</span>
+<span class="c1"># from the fat JAR named `path-to-app-fatjar.jar`.</span>
+$ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
+</pre></div>
+              </div>
+              <p>For more information about how you can package your application in this way, see the
+                  <a class="reference internal" href="../code-examples.html#streams-code-examples"><span class="std std-ref">Streams code examples</span></a>.</p>
+              <p>When you start your application you are launching a Kafka Streams instance of your application. You can run multiple
+                  instances of your application, and a common scenario is to run several instances in
+                  parallel. For more information, see <a class="reference internal" href="../architecture.html#streams-architecture-parallelism-model"><span class="std std-ref">Parallelism Model</span></a>.</p>
+              <p>When the application instance starts running, the defined processor topology will be initialized as one or more stream tasks.
+                  If the processor topology defines any state stores, these are also constructed during the initialization period. For
+                  more information, see the <a class="reference internal" href="#streams-developer-guide-execution-scaling-state-restoration"><span class="std std-ref">State restoration during workload rebalance</span></a> section.</p>
+          </div>
+          <div class="section" id="elastic-scaling-of-your-application">
+              <span id="streams-developer-guide-execution-scaling"></span><h2><a class="toc-backref" href="#id4">Elastic scaling of your application</a><a class="headerlink" href="#elastic-scaling-of-your-application" title="Permalink to this headline"></a></h2>
+              <p>Kafka Streams makes your stream processing applications elastic and scalable.  You can add and remove processing capacity
+                  dynamically during application runtime without any downtime or data loss.  This makes your applications
+                  resilient in the face of failures and allows you to perform maintenance as needed (e.g. rolling upgrades).</p>
+              <p>For more information about this elasticity, see the <a class="reference internal" href="../architecture.html#streams-architecture-parallelism-model"><span class="std std-ref">Parallelism Model</span></a> section. Kafka Streams
+                  leverages the Kafka group management functionality, which is built right into the <a class="reference external" href="https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol">Kafka wire protocol</a>. It is the foundation that enables the
+                  elasticity of Kafka Streams applications: members of a group coordinate and collaborate jointly on the consumption and
+                  processing of data in Kafka.  Additionally, Kafka Streams provides stateful processing and allows for fault-tolerant
+                  state in environments where application instances may come and go at any time.</p>
+              <div class="section" id="adding-capacity-to-your-application">
+                  <h3><a class="toc-backref" href="#id5">Adding capacity to your application</a><a class="headerlink" href="#adding-capacity-to-your-application" title="Permalink to this headline"></a></h3>
+                  <p>If you need more processing capacity for your stream processing application, you can simply start another instance of your stream processing application, e.g. on another machine, in order to scale out.  The instances of your application will become aware of each other and automatically begin to share the processing work.  More specifically, what will be handed over from the existing instances to the new instances is (some of) the stream tasks that have been run by the existing instances.</p>
+                  <p>The various instances of your application each run in their own JVM process, which means that each instance can leverage all the processing capacity that is available to its respective JVM process (minus the capacity that any non-Kafka-Streams part of your application may be using).  This explains why running additional instances will grant your application additional processing capacity.  The exact capacity you will be adding by running a new instance depends of course on the environment in which the new instance runs: available CPU cores, available main memory and Java heap space, local storage, network bandwidth, and so on.</p>
+                  <div class="figure align-center" id="id1">
+                      <a class="reference internal image-reference" href="../../../images/streams-elastic-scaling-1.png"><img alt="../../../images/streams-elastic-scaling-1.png" src="../../../images/streams-elastic-scaling-1.png" style="width: 500pt; height: 400pt;" /></a>
+                      <p class="caption"><span class="caption-text">Before adding capacity: only a single instance of your Kafka Streams application is running.  At this point the corresponding Kafka consumer group of your application contains only a single member (this instance).  All data is being read and processed by this single instance.</span></p>
+                  </div>
+                  <div class="figure align-center" id="id2">
+                      <a class="reference internal image-reference" href="../../../images/streams-elastic-scaling-2.png"><img alt="../../../images/streams-elastic-scaling-2.png" src="../../../images/streams-elastic-scaling-2.png" style="width: 500pt; height: 400pt;" /></a>
+                      <p class="caption"><span class="caption-text">After adding capacity: now two additional instances of your Kafka Streams application are running, and they have automatically joined the application&#8217;s Kafka consumer group for a total of three current members. These three instances are automatically splitting the processing work between each other. The splitting is based on the Kafka topic partitions from which data is being read.</span></p>
+                  </div>
+              </div>
+              <div class="section" id="removing-capacity-from-your-application">
+                  <h3><a class="toc-backref" href="#id6">Removing capacity from your application</a><a class="headerlink" href="#removing-capacity-from-your-application" title="Permalink to this headline"></a></h3>
+                  <p>To remove processing capacity, you can stop running stream processing application instances (e.g., shut down two of
+                      the four instances). The stopped instances will automatically leave the application&#8217;s consumer group, and the remaining instances of
+                      your application will automatically take over the processing work. The remaining instances take over the stream tasks that
+                      were run by the stopped instances.  Moving stream tasks from one instance to another results in moving the processing
+                      work plus any internal state of these stream tasks. The state of a stream task is recreated in the target instance
+                      from its changelog topic.</p>
+                  <div class="figure align-center">
+                      <a class="reference internal image-reference" href="../../../images/streams-elastic-scaling-3.png"><img alt="../../../images/streams-elastic-scaling-3.png" src="../../../images/streams-elastic-scaling-3.png" style="width: 500pt; height: 400pt;" /></a>
+                  </div>
+              </div>
+              <div class="section" id="state-restoration-during-workload-rebalance">
+                  <span id="streams-developer-guide-execution-scaling-state-restoration"></span><h3><a class="toc-backref" href="#id7">State restoration during workload rebalance</a><a class="headerlink" href="#state-restoration-during-workload-rebalance" title="Permalink to this headline"></a></h3>
+                  <p>When a task is migrated, the task processing state is fully restored before the application instance resumes
+                      processing. This guarantees the correct processing results. In Kafka Streams, state restoration is usually done by
+                      replaying the corresponding changelog topic to reconstruct the state store. To minimize the latency of this changelog-based
+                      restoration, you can configure standby replicas (replicated local state stores) by setting <code class="docutils literal"><span class="pre">num.standby.replicas</span></code>; a configuration sketch follows the list below. When a stream task is
+                      initialized or re-initialized on the application instance, its state store is restored like this:</p>
+                  <ul class="simple">
+                      <li>If no local state store exists, the changelog is replayed from the earliest to the current offset. This reconstructs the local state store to the most recent snapshot.</li>
+                      <li>If a local state store exists, the changelog is replayed from the previously checkpointed offset. The changes are applied and the state is restored to the most recent snapshot. This method takes less time because it is applying a smaller portion of the changelog.</li>
+                  </ul>
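+                  <p>For example, a minimal sketch of enabling one standby replica per state store through <code class="docutils literal"><span class="pre">StreamsConfig</span></code>:</p>
+                  <div class="highlight-java"><div class="highlight"><pre><span></span>
+import java.util.Properties;
+import org.apache.kafka.streams.StreamsConfig;
+
+// Keep one warm replica of each local state store on another instance,
+// which shortens changelog-based restoration after a rebalance.
+Properties settings = new Properties();
+settings.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
+</pre></div>
+                  </div>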
+                  <p>For more information, see <a class="reference internal" href="config-streams.html#streams-developer-guide-standby-replicas"><span class="std std-ref">Standby Replicas</span></a>.</p>
+              </div>
+              <div class="section" id="determining-how-many-application-instances-to-run">
+                  <h3><a class="toc-backref" href="#id8">Determining how many application instances to run</a><a class="headerlink" href="#determining-how-many-application-instances-to-run" title="Permalink to this headline"></a></h3>
+                  <p>The parallelism of a Kafka Streams application is primarily determined by how many partitions the input topics have. For
+                      example, if your application reads from a single topic that has ten partitions, then you can run up to ten instances
+                      of your application. You can run further instances, but these will be idle.</p>
+                  <p>The number of topic partitions is the upper limit for the parallelism of your Kafka Streams application and for the
+                      number of running instances of your application.</p>
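+                  <p>For example, you can check how many partitions an input topic has with the topic tooling that ships with Kafka; the topic name and ZooKeeper address below are placeholders:</p>
+                  <div class="highlight-bash"><div class="highlight"><pre><span></span># Describe the topic to see its partition count.
+$ bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-input-topic
+</pre></div>
+                  </div>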
+                  <p>To achieve balanced workload processing across application instances and to prevent processing hotspots, you should
+                      distribute data and processing workloads:</p>
+                  <ul class="simple">
+                      <li>Data should be equally distributed across topic partitions. For example, two topic partitions with 1 million messages each are better than one partition with 2 million messages and another with none.</li>
+                      <li>Processing workload should be equally distributed across topic partitions. For example, if the time to process messages varies widely, then it is better to spread the processing-intensive messages across partitions rather than concentrating them in a single partition.</li>
+                  </ul>
+</div>
+</div>
+</div>
+
+
+               </div>
+              </div>
+              <div class="pagination">
+                <a href="/{{version}}/documentation/streams/developer-guide/memory-mgmt" class="pagination__btn pagination__btn__prev">Previous</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/manage-topics" class="pagination__btn pagination__btn__next">Next</a>
+              </div>
+                </script>
+
+                <!--#include virtual="../../../includes/_header.htm" -->
+                <!--#include virtual="../../../includes/_top.htm" -->
+                    <div class="content documentation documentation--current">
+                    <!--#include virtual="../../../includes/_nav.htm" -->
+                    <div class="right">
+                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <ul class="breadcrumbs">
+                    <li><a href="/documentation">Documentation</a></li>
+                    <li><a href="/documentation/streams">Kafka Streams</a></li>
+                    <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+                </ul>
+                <div class="p-content"></div>
+                    </div>
+                    </div>
+                    <!--#include virtual="../../../includes/_footer.htm" -->
+                    <script>
+                    $(function() {
+                        // Show selected style on nav item
+                        $('.b-nav__streams').addClass('selected');
+
+                        //sticky secondary nav
+                        var $navbar = $(".sub-nav-sticky"),
+                            y_pos = $navbar.offset().top,
+                            height = $navbar.height();
+
+                        $(window).scroll(function() {
+                            var scrollTop = $(window).scrollTop();
+
+                            if (scrollTop > y_pos - height) {
+                                $navbar.addClass("navbar-fixed")
+                            } else if (scrollTop <= y_pos) {
+                                $navbar.removeClass("navbar-fixed")
+                            }
+                        });
+
+                        // Display docs subnav items
+                        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+                    });
+              </script>
\ No newline at end of file
diff --git a/docs/streams/developer-guide/security.html b/docs/streams/developer-guide/security.html
new file mode 100644
index 0000000..2e9b387
--- /dev/null
+++ b/docs/streams/developer-guide/security.html
@@ -0,0 +1,176 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+
+    <div class="section" id="streams-security">
+        <span id="streams-developer-guide-security"></span><h1>Streams Security<a class="headerlink" href="#streams-security" title="Permalink to this headline"></a></h1>
+        <div class="contents local topic" id="table-of-contents">
+            <p class="topic-title first"><b>Table of Contents</b></p>
+            <ul class="simple">
+                <li><a class="reference internal" href="#required-acl-setting-for-secure-kafka-clusters" id="id1">Required ACL setting for secure Kafka clusters</a></li>
+                <li><a class="reference internal" href="#security-example" id="id2">Security example</a></li>
+            </ul>
+        </div>
+        <p>Kafka Streams natively integrates with <a class="reference internal" href="../../kafka/security.html#kafka-security"><span class="std std-ref">Kafka&#8217;s security features</span></a> and supports all of the
+            client-side security features in Kafka.  Streams leverages the <a class="reference internal" href="../../clients/index.html#kafka-clients"><span class="std std-ref">Java Producer and Consumer API</span></a>.</p>
+        <p>To secure your Stream processing applications, configure the security settings in the corresponding Kafka producer
+            and consumer clients, and then specify the corresponding configuration settings in your Kafka Streams application.</p>
+        <p>Kafka supports cluster encryption and authentication, including a mix of authenticated and unauthenticated,
+            and encrypted and non-encrypted clients. Using security is optional.</p>
+        <p>Here are a few relevant client-side security features:</p>
+        <dl class="docutils">
+            <dt>Encrypt data-in-transit between your applications and Kafka brokers</dt>
+            <dd>You can enable the encryption of the client-server communication between your applications and the Kafka brokers.
+                For example, you can configure your applications to always use encryption when reading and writing data to and from
+                Kafka. This is critical when reading and writing data across security domains such as internal network, public
+                internet, and partner networks.</dd>
+            <dt>Client authentication</dt>
+            <dd>You can enable client authentication for connections from your application to Kafka brokers. For example, you can
+                define that only specific applications are allowed to connect to your Kafka cluster.</dd>
+            <dt>Client authorization</dt>
+            <dd>You can enable client authorization of read and write operations by your applications. For example, you can define
+                that only specific applications are allowed to read from a Kafka topic.  You can also restrict write access to Kafka
+                topics to prevent data pollution or fraudulent activities.</dd>
+        </dl>
+        <p>For more information about the security features in Apache Kafka, see <a class="reference internal" href="../../kafka/security.html#kafka-security"><span class="std std-ref">Kafka Security</span></a>.</p>
+        <div class="section" id="required-acl-setting-for-secure-kafka-clusters">
+            <span id="streams-developer-guide-security-acls"></span><h2><a class="toc-backref" href="#id1">Required ACL setting for secure Kafka clusters</a><a class="headerlink" href="#required-acl-setting-for-secure-kafka-clusters" title="Permalink to this headline"></a></h2>
+            <p>When applications are run against a secured Kafka cluster, the principal running the application must have the ACL
+                <code class="docutils literal"><span class="pre">--cluster</span> <span class="pre">--operation</span> <span class="pre">Create</span></code> set so that the application has the permissions to create
+                <a class="reference internal" href="manage-topics.html#streams-developer-guide-topics-internal"><span class="std std-ref">internal topics</span></a>.</p>
+        </div>
+        <div class="section" id="security-example">
+            <span id="streams-developer-guide-security-example"></span><h2><a class="toc-backref" href="#id2">Security example</a><a class="headerlink" href="#security-example" title="Permalink to this headline"></a></h2>
+            <p>This example shows how to configure a Kafka Streams application to enable client authentication and encrypt data-in-transit when
+                communicating with its Kafka cluster.</p>
+            <p>This example assumes that the Kafka brokers in the cluster already have their security set up and that the necessary SSL
+                certificates are available to the application in the local filesystem locations. For example, if you are using Docker,
+                then you must also include these SSL certificates in the correct locations within the Docker image.</p>
+            <p>The snippet below shows the settings to enable client authentication and SSL encryption for data-in-transit between your
+                Kafka Streams application and the Kafka cluster it is reading from and writing to:</p>
+            <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Essential security settings to enable client authentication and SSL encryption</span>
+bootstrap.servers<span class="o">=</span>kafka.example.com:9093
+security.protocol<span class="o">=</span>SSL
+ssl.truststore.location<span class="o">=</span>/etc/security/tls/kafka.client.truststore.jks
+ssl.truststore.password<span class="o">=</span>test1234
+ssl.keystore.location<span class="o">=</span>/etc/security/tls/kafka.client.keystore.jks
+ssl.keystore.password<span class="o">=</span>test1234
+ssl.key.password<span class="o">=</span>test1234
+</pre></div>
+            </div>
+            <p>Configure these settings in your application&#8217;s <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance. These settings will encrypt any
+                data-in-transit that is being read from or written to Kafka, and your application will authenticate itself against the
+                Kafka brokers that it is communicating with. Note that this example does not cover client authorization.</p>
+            <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Code of your Java application that uses the Kafka Streams library</span>
+<span class="n">Properties</span> <span class="n">settings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">APPLICATION_ID_CONFIG</span><span class="o">,</span> <span class="s">&quot;secure-kafka-streams-app&quot;</span><span class="o">);</span>
+<span class="c1">// Where to find secure Kafka brokers.  Here, it&#39;s on port 9093.</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">BOOTSTRAP_SERVERS_CONFIG</span><span class="o">,</span> <span class="s">&quot;kafka.example.com:9093&quot;</span><span class="o">);</span>
+<span class="c1">//</span>
+<span class="c1">// ...further non-security related settings may follow here...</span>
+<span class="c1">//</span>
+<span class="c1">// Security settings.</span>
+<span class="c1">// 1. These settings must match the security settings of the secure Kafka cluster.</span>
+<span class="c1">// 2. The SSL trust store and key store files must be locally accessible to the application.</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">CommonClientConfigs</span><span class="o">.</span><span class="na">SECURITY_PROTOCOL_CONFIG</span><span class="o">,</span> <span class="s">&quot;SSL&quot;</span><span class="o">);</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_TRUSTSTORE_LOCATION_CONFIG</span><span class="o">,</span> <span class="s">&quot;/etc/security/tls/kafka.client.truststore.jks&quot;</span><span class="o">);</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_TRUSTSTORE_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEYSTORE_LOCATION_CONFIG</span><span class="o">,</span> <span class="s">&quot;/etc/security/tls/kafka.client.keystore.jks&quot;</span><span class="o">);</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEYSTORE_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEY_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
+<span class="n">StreamsConfig</span> <span class="n">streamsConfiguration</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsConfig</span><span class="o">(</span><span class="n">settings</span><span class="o">);</span>
+</pre></div>
+            </div>
+            <p>If you incorrectly configure a security setting in your application, it will fail at runtime, typically right after you
+                start it.  For example, if you enter an incorrect password for the <code class="docutils literal"><span class="pre">ssl.keystore.password</span></code> setting, an error message
+                similar to this would be logged and then the application would terminate:</p>
+            <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Misconfigured ssl.keystore.password</span>
+Exception in thread <span class="s2">&quot;main&quot;</span> org.apache.kafka.common.KafkaException: Failed to construct kafka producer
+<span class="o">[</span>...snip...<span class="o">]</span>
+Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException:
+   java.io.IOException: Keystore was tampered with, or password was incorrect
+<span class="o">[</span>...snip...<span class="o">]</span>
+Caused by: java.security.UnrecoverableKeyException: Password verification failed
+</pre></div>
+            </div>
+            <p>Monitor your Kafka Streams application log files for such error messages to spot any misconfigured applications quickly.</p>
+</div>
+</div>
+
+
+               </div>
+              </div>
+              <div class="pagination">
+                <a href="/{{version}}/documentation/streams/developer-guide/manage-topics" class="pagination__btn pagination__btn__prev">Previous</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/app-reset-tool" class="pagination__btn pagination__btn__next">Next</a>
+              </div>
+                </script>
+
+                <!--#include virtual="../../../includes/_header.htm" -->
+                <!--#include virtual="../../../includes/_top.htm" -->
+                    <div class="content documentation documentation--current">
+                    <!--#include virtual="../../../includes/_nav.htm" -->
+                    <div class="right">
+                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <ul class="breadcrumbs">
+                    <li><a href="/documentation">Documentation</a></li>
+                    <li><a href="/documentation/streams">Kafka Streams</a></li>
+                    <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+                </ul>
+                <div class="p-content"></div>
+                    </div>
+                    </div>
+                    <!--#include virtual="../../../includes/_footer.htm" -->
+                    <script>
+                    $(function() {
+                        // Show selected style on nav item
+                        $('.b-nav__streams').addClass('selected');
+
+                        //sticky secondary nav
+                        var $navbar = $(".sub-nav-sticky"),
+                            y_pos = $navbar.offset().top,
+                            height = $navbar.height();
+
+                        $(window).scroll(function() {
+                            var scrollTop = $(window).scrollTop();
+
+                            if (scrollTop > y_pos - height) {
+                                $navbar.addClass("navbar-fixed")
+                            } else if (scrollTop <= y_pos) {
+                                $navbar.removeClass("navbar-fixed")
+                            }
+                        });
+
+                        // Display docs subnav items
+                        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+                    });
+              </script>
\ No newline at end of file
diff --git a/docs/streams/developer-guide/write-streams.html b/docs/streams/developer-guide/write-streams.html
new file mode 100644
index 0000000..c9ca49c
--- /dev/null
+++ b/docs/streams/developer-guide/write-streams.html
@@ -0,0 +1,248 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <!-- div class="sticky-top">
+      <div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div>
+    </div -->
+  </div>
+
+  <div class="section" id="writing-a-streams-application">
+    <span id="streams-write-app"></span><h1>Writing a Streams Application<a class="headerlink" href="#writing-a-streams-application" title="Permalink to this headline"></a></h1>
+      <p class="topic-title first"><b>Table of Contents</b></p>
+      <ul class="simple">
+          <li><a class="reference internal" href="#libraries-and-maven-artifacts" id="id1">Libraries and Maven artifacts</a></li>
+          <li><a class="reference internal" href="#using-kafka-streams-within-your-application-code" id="id2">Using Kafka Streams within your application code</a></li>
+      </ul>
+    <p>Any Java application that makes use of the Kafka Streams library is considered a Kafka Streams application.
+      The computational logic of a Kafka Streams application is defined as a <a class="reference internal" href="../core-concepts.html#streams-concepts"><span class="std std-ref">processor topology</span></a>,
+      which is a graph of stream processors (nodes) and streams (edges).</p>
+    <p>You can define the processor topology with the Kafka Streams APIs:</p>
+    <dl class="docutils">
+      <dt><a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl"><span class="std std-ref">Kafka Streams DSL</span></a></dt>
+      <dd>A high-level API that provides the most common data transformation operations such as <code class="docutils literal"><span class="pre">map</span></code>, <code class="docutils literal"><span class="pre">filter</span></code>, <code class="docutils literal"><span class="pre">join</span></code>, and <code class="docutils literal"><span class="pre">aggregations</span></code> out of the box. The DSL is the recommended starting point for developers new to Kafka Streams, and should cover many use cases and stream processing needs; see the short sketch after this list.</dd>
+      <dt><a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a></dt>
+      <dd>A low-level API that lets you add and connect processors as well as interact directly with state stores. The Processor API provides you with even more flexibility than the DSL but at the expense of requiring more manual work on the side of the application developer (e.g., more lines of code).</dd>
+    </dl>
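+    <p>As a first impression of the DSL, here is a minimal sketch of a topology that reads from one topic, transforms each record value, and writes to another topic (topic names are placeholders; both APIs are covered in detail in the sections linked above):</p>
+    <div class="highlight-java"><div class="highlight"><pre><span></span>
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.Topology;
+import org.apache.kafka.streams.kstream.KStream;
+
+// A sketch: read from "input-topic", upper-case each record value,
+// and write the result to "output-topic".
+StreamsBuilder builder = new StreamsBuilder();
+KStream&lt;String, String&gt; source = builder.stream("input-topic");
+source.mapValues(value -&gt; value.toUpperCase()).to("output-topic");
+Topology topology = builder.build();
+</pre></div>
+    </div>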
+      <div class="section" id="libraries-and-maven-artifacts">
+          <span id="streams-developer-guide-maven"></span><h2>Libraries and Maven artifacts</h2>
+          <p>This section lists the Kafka Streams related libraries that are available for writing your Kafka Streams applications.</p>
+          <p>You can define dependencies on the following libraries for your Kafka Streams applications.</p>
+          <table border="1" class="non-scrolling-table docutils">
+              <colgroup>
+                  <col width="14%" />
+                  <col width="19%" />
+                  <col width="12%" />
+                  <col width="55%" />
+              </colgroup>
+              <thead valign="bottom">
+              <tr class="row-odd"><th class="head">Group ID</th>
+                  <th class="head">Artifact ID</th>
+                  <th class="head">Version</th>
+                  <th class="head">Description</th>
+              </tr>
+              </thead>
+              <tbody valign="top">
+              <tr class="row-even"><td><code class="docutils literal"><span class="pre">org.apache.kafka</span></code></td>
+                  <td><code>kafka-streams</code></td>
+                  <td><code class="docutils literal"><span class="pre">1.0.0</span></code></td>
+                  <td>(Required) Base library for Kafka Streams.</td>
+              </tr>
+              <tr class="row-odd"><td><code class="docutils literal"><span class="pre">org.apache.kafka</span></code></td>
+                  <td><code class="docutils literal"><span class="pre">kafka-clients</span></code></td>
+                  <td><code class="docutils literal"><span class="pre">1.0.0</span></code></td>
+                  <td>(Required) Kafka client library.  Contains built-in serializers/deserializers.</td>
+              </tr>
+              </tbody>
+          </table>
+          <div class="admonition tip">
+              <p><b>Tip</b></p>
+              <p class="last">See the section <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data Types and Serialization</span></a> for more information about Serializers/Deserializers.</p>
+          </div>
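+          <p>For example, a minimal sketch of setting the default serializers/deserializers through <code class="docutils literal"><span class="pre">StreamsConfig</span></code>, using the built-in String serde that ships with <code class="docutils literal"><span class="pre">kafka-clients</span></code>:</p>
+          <div class="highlight-java"><div class="highlight"><pre><span></span>
+import java.util.Properties;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.StreamsConfig;
+
+// Use the built-in String serde for record keys and values by default.
+Properties settings = new Properties();
+settings.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+settings.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+</pre></div>
+          </div>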
+          <p>Example <code class="docutils literal"><span class="pre">pom.xml</span></code> snippet when using Maven:</p>
+          <div class="highlight-xml"><div class="highlight"><pre><span></span><span class="nt">&lt;dependency&gt;</span>
+    <span class="nt">&lt;groupId&gt;</span>org.apache.kafka<span class="nt">&lt;/groupId&gt;</span>
+    <span class="nt">&lt;artifactId&gt;</span>kafka-streams<span class="nt">&lt;/artifactId&gt;</span>
+    <span class="nt">&lt;version&gt;</span>1.0.0<span class="nt">&lt;/version&gt;</span>
+<span class="nt">&lt;/dependency&gt;</span>
+<span class="nt">&lt;dependency&gt;</span>
+    <span class="nt">&lt;groupId&gt;</span>org.apache.kafka<span class="nt">&lt;/groupId&gt;</span>
+    <span class="nt">&lt;artifactId&gt;</span>kafka-clients<span class="nt">&lt;/artifactId&gt;</span>
+    <span class="nt">&lt;version&gt;</span>1.0.0<span class="nt">&lt;/version&gt;</span>
+<span class="nt">&lt;/dependency&gt;</span>
+</pre></div>
+          </div>
+      </div>
+    <div class="section" id="using-kafka-streams-within-your-application-code">
+      <h2>Using Kafka Streams within your application code<a class="headerlink" href="#using-kafka-streams-within-your-application-code" title="Permalink to this headline"></a></h2>
+      <p>You can call Kafka Streams from anywhere in your application code, but usually these calls are made within the <code class="docutils literal"><span class="pre">main()</span></code> method of
+        your application, or some variant thereof.  The basic elements of defining a processing topology within your application
+        are described below.</p>
+      <p>First, you must create an instance of <code class="docutils literal"><span class="pre">KafkaStreams</span></code>.</p>
+      <ul class="simple">
+        <li>The first argument of the <code class="docutils literal"><span class="pre">KafkaStreams</span></code> constructor takes the topology that defines your processing logic: either the result of
+          <code class="docutils literal"><span class="pre">StreamsBuilder#build()</span></code> for the <a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl"><span class="std std-ref">DSL</span></a> or a manually constructed <code class="docutils literal"><span class="pre">Topology</span></code> for the
+          <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a>.</li>
+        <li>The second argument is an instance of <code class="docutils literal"><span class="pre">StreamsConfig</span></code>, which defines the configuration for this specific topology.</li>
+      </ul>
+      <p>Code example:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.streams.KafkaStreams</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.StreamsConfig</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.StreamsBuilder</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.processor.Topology</span><span class="o">;</span>
+
+<span class="c1">// Use the builders to define the actual processing topology, e.g. to specify</span>
+<span class="c1">// from which input topics to read, which stream operations (filter, map, etc.)</span>
+<span class="c1">// should be called, and so on.  We will cover this in detail in the subsequent</span>
+<span class="c1">// sections of this Developer Guide.</span>
+
+<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="o">...;</span>  <span class="c1">// when using the DSL</span>
+<span class="n">Topology</span> <span class="n">topology</span> <span class="o">=</span> <span class="n">builder</span><span class="o">.</span><span class="na">build</span><span class="o">();</span>
+<span class="c1">//</span>
+<span class="c1">// OR</span>
+<span class="c1">//</span>
+<span class="n">Topology</span> <span class="n">topology</span> <span class="o">=</span> <span class="o">...;</span> <span class="c1">// when using the Processor API</span>
+
+<span class="c1">// Use the configuration to tell your application where the Kafka cluster is,</span>
+<span class="c1">// which Serializers/Deserializers to use by default, to specify security settings,</span>
+<span class="c1">// and so on.</span>
+<span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="o">...;</span>
+
+<span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">topology</span><span class="o">,</span> <span class="n">config</span><span class="o">);</span>
+</pre></div>
+      </div>
+      <p>At this point, internal structures are initialized, but the processing is not started yet.
+        You have to explicitly start the Kafka Streams threads by calling the <code class="docutils literal"><span class="pre">KafkaStreams#start()</span></code> method:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Start the Kafka Streams threads</span>
+<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span>
+</pre></div>
+      </div>
+      <p>If there are other instances of this stream processing application running elsewhere (e.g., on another machine), Kafka
+        Streams transparently re-assigns tasks from the existing instances to the new instance that you just started.
+        For more information, see <a class="reference internal" href="../architecture.html#streams-architecture-tasks"><span class="std std-ref">Stream Partitions and Tasks</span></a> and <a class="reference internal" href="../architecture.html#streams-architecture-threads"><span class="std std-ref">Threading Model</span></a>.</p>
+      <p>To catch any unexpected exceptions, you can set a <code class="docutils literal"><span class="pre">java.lang.Thread.UncaughtExceptionHandler</span></code> before you start the
+        application.  This handler is called whenever a stream thread is terminated by an unexpected exception:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Java 8+, using lambda expressions</span>
+<span class="n">streams</span><span class="o">.</span><span class="na">setUncaughtExceptionHandler</span><span class="o">((</span><span class="n">Thread</span> <span class="n">thread</span><span class="o">,</span> <span class="n">Throwable</span> <span class="n">throwable</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">{</span>
+  <span class="c1">// here you should examine the throwable/exception and perform an appropriate action!</span>
+<span class="o">});</span>
+
+
+<span class="c1">// Java 7</span>
+<span class="n">streams</span><span class="o">.</span><span class="na">setUncaughtExceptionHandler</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">.</span><span class="na">UncaughtExceptionHandler</span><span class="o">()</span> <span class="o">{</span>
+  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">uncaughtException</span><span class="o">(</span><span class="n">Thread</span> <span class="n">thread</span><span class="o">,</span> <span class="n">Throwable</span> <span class="n">throwable</span><span class="o">)</span> <span class="o">{</span>
+    <span class="c1">// here you should examine the throwable/exception and perform an appropriate action!</span>
+  <span class="o">}</span>
+<span class="o">});</span>
+</pre></div>
+      </div>
+      <p>To stop the application instance, call the <code class="docutils literal"><span class="pre">KafkaStreams#close()</span></code> method:</p>
+      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Stop the Kafka Streams threads</span>
+<span class="n">streams</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
+</pre></div>
+      </div>
+      <p>To allow your application to shut down gracefully in response to SIGTERM, it is recommended that you add a shutdown hook
+        and call <code class="docutils literal"><span class="pre">KafkaStreams#close</span></code>.</p>
+      <ul>
+        <li><p class="first">Here is a shutdown hook example in Java 8+:</p>
+          <blockquote>
+            <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Add shutdown hook to stop the Kafka Streams threads.</span>
+<span class="c1">// You can optionally provide a timeout to `close`.</span>
+<span class="n">Runtime</span><span class="o">.</span><span class="na">getRuntime</span><span class="o">().</span><span class="na">addShutdownHook</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">(</span><span class="n">streams</span><span class="o">::</span><span class="n">close</span><span class="o">));</span>
+</pre></div>
+            </div>
+            </div></blockquote>
+        </li>
+        <li><p class="first">Here is a shutdown hook example in Java 7:</p>
+          <blockquote>
+            <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Add shutdown hook to stop the Kafka Streams threads.</span>
+<span class="c1">// You can optionally provide a timeout to `close`.</span>
+<span class="n">Runtime</span><span class="o">.</span><span class="na">getRuntime</span><span class="o">().</span><span class="na">addShutdownHook</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">(</span><span class="k">new</span> <span class="n">Runnable</span><span class="o">()</span> <span class="o">{</span>
+  <span class="nd">@Override</span>
+  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">run</span><span class="o">()</span> <span class="o">{</span>
+      <span class="n">streams</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
+  <span class="o">}</span>
+<span class="o">}));</span>
+</pre></div>
+            </div>
+            </div></blockquote>
+        </li>
+      </ul>
+      <p>After an application instance is stopped, Kafka Streams migrates any tasks that were running in that instance to the available remaining
+        instances.</p>
+</div>
+</div>
+
+
+               </div>
+              </div>
+  <div class="pagination">
+    <a href="/{{version}}/documentation/streams/developer-guide/" class="pagination__btn pagination__btn__prev">Previous</a>
+    <a href="/{{version}}/documentation/streams/developer-guide/config-streams" class="pagination__btn pagination__btn__next">Next</a>
+  </div>
+</script>
+
+<!--#include virtual="../../../includes/_header.htm" -->
+<!--#include virtual="../../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+  <!--#include virtual="../../../includes/_nav.htm" -->
+  <div class="right">
+    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+    <ul class="breadcrumbs">
+      <li><a href="/documentation">Documentation</a></li>
+      <li><a href="/documentation/streams">Kafka Streams</a></li>
+      <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+    </ul>
+    <div class="p-content"></div>
+  </div>
+</div>
+<!--#include virtual="../../../includes/_footer.htm" -->
+<script>
+    $(function() {
+        // Show selected style on nav item
+        $('.b-nav__streams').addClass('selected');
+
+        //sticky secondary nav
+        var $navbar = $(".sub-nav-sticky"),
+            y_pos = $navbar.offset().top,
+            height = $navbar.height();
+
+        $(window).scroll(function() {
+            var scrollTop = $(window).scrollTop();
+
+            if (scrollTop > y_pos - height) {
+                $navbar.addClass("navbar-fixed")
+            } else if (scrollTop <= y_pos) {
+                $navbar.removeClass("navbar-fixed")
+            }
+        });
+
+        // Display docs subnav items
+        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+    });
+</script>
\ No newline at end of file
diff --git a/docs/streams/index.html b/docs/streams/index.html
index ab72c87..1cbd9be 100644
--- a/docs/streams/index.html
+++ b/docs/streams/index.html
@@ -16,19 +16,21 @@
   <!--#include virtual="../js/templateData.js" -->
 </script>
 <script id="streams-template" type="text/x-handlebars-template">
-  <h1>Kafka Streams API</h1>
+  <h1>Kafka Streams</h1>
        <div class="sub-nav-sticky">
           <div class="sticky-top">
              <div style="height:35px">
                 <a  class="active-menu-item" href="/{{version}}/documentation/streams/">Introduction</a>
-                <a href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
                 <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
                 <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
-             </div>
-           </div>
-       </div>
-       <h3 class="streams_intro">The easiest way to write mission-critical real-time applications and microservices</h3>
+                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
+                <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
+            </div>
+        </div>
+    </div>
+    <h3 class="streams_intro">The easiest way to write mission-critical real-time applications and microservices</h3>
        <p class="streams__description">Kafka Streams is a client library for building applications and microservices, where the input and output data are stored in Kafka clusters. It combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology.</p>
        <div class="video__series__grid">
           <div class="yt__video__block">
@@ -287,7 +289,7 @@
        </div>
        
        <div class="pagination">
-           <a href="#" class="pagination__btn pagination__btn__prev pagination__btn--disabled">Previous</a>
+           <a href="/{{version}}/documentation" class="pagination__btn pagination__btn__prev">Previous</a>
            <a href="/{{version}}/documentation/streams/quickstart" class="pagination__btn pagination__btn__next">Next</a>
        </div>
      
diff --git a/docs/streams/quickstart.html b/docs/streams/quickstart.html
index 314bce3..b73b3f9 100644
--- a/docs/streams/quickstart.html
+++ b/docs/streams/quickstart.html
@@ -19,17 +19,19 @@
 <script id="content-template" type="text/x-handlebars-template">
 
   <h1>Run Streams Demo Application</h1>
-  <div class="sub-nav-sticky">
-      <div class="sticky-top">
-        <div style="height:35px">
-          <a href="/{{version}}/documentation/streams/">Introduction</a>
-          <a href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-          <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-          <a class="active-menu-item" href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-          <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+    <div class="sub-nav-sticky">
+        <div class="sticky-top">
+            <div style="height:35px">
+                <a href="/{{version}}/documentation/streams/">Introduction</a>
+                <a class="active-menu-item" href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+                <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
+                <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
+            </div>
         </div>
-      </div>
-  </div> 
+    </div>
 <p>
   This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. However, if you have already started Kafka and
   ZooKeeper, feel free to skip the first two steps.
diff --git a/docs/streams/tutorial.html b/docs/streams/tutorial.html
index 71c9ca3..0bc7314 100644
--- a/docs/streams/tutorial.html
+++ b/docs/streams/tutorial.html
@@ -19,16 +19,18 @@
 <script id="content-template" type="text/x-handlebars-template">
     <h1>Tutorial: Write a Streams Application</h1>
     <div class="sub-nav-sticky">
-      <div class="sticky-top">
-        <div style="height:35px">
-          <a href="/{{version}}/documentation/streams/">Introduction</a>
-          <a href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-          <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-          <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-          <a class="active-menu-item" href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+        <div class="sticky-top">
+            <div style="height:35px">
+                <a href="/{{version}}/documentation/streams/">Introduction</a>
+                <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+                <a class="active-menu-item" href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
+                <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
+            </div>
         </div>
-      </div>
-  </div> 
+    </div>
     <p>
         In this guide we will start from scratch on setting up your own project to write a stream processing application using Kafka Streams.
         It is highly recommended to read the <a href="/{{version}}/documentation/streams/quickstart">quickstart</a> first on how to run a Streams application written in Kafka Streams if you have not done so.
@@ -617,7 +619,7 @@
 
     <div class="pagination">
         <a href="/{{version}}/documentation/streams/quickstart" class="pagination__btn pagination__btn__prev">Previous</a>
-        <a href="/{{version}}/documentation/streams/developer-guide" class="pagination__btn pagination__btn__next">Next</a>
+        <a href="/{{version}}/documentation/streams/core-concepts" class="pagination__btn pagination__btn__next">Next</a>
     </div>
 </script>
 
diff --git a/docs/streams/upgrade-guide.html b/docs/streams/upgrade-guide.html
index 2974058..87d34f1 100644
--- a/docs/streams/upgrade-guide.html
+++ b/docs/streams/upgrade-guide.html
@@ -19,6 +19,19 @@
 
 <script id="content-template" type="text/x-handlebars-template">
     <h1>Upgrade Guide &amp; API Changes</h1>
+    <div class="sub-nav-sticky">
+        <div class="sticky-top">
+            <div style="height:35px">
+                <a href="/{{version}}/documentation/streams/">Introduction</a>
+                <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+                <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
+                <a class="active-menu-item" href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
+            </div>
+        </div>
+    </div>
 
     <p>
         If you want to upgrade from 0.11.0.x to 1.0.0 you don't need to do any code changes as the public API is fully backward compatible.
@@ -360,7 +373,7 @@
     </ul>
 
     <div class="pagination">
-        <a href="/{{version}}/documentation/streams/architecture" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/developer-guide/app-reset-tool" class="pagination__btn pagination__btn__prev">Previous</a>
         <a href="#" class="pagination__btn pagination__btn__next pagination__btn--disabled">Next</a>
     </div>
 </script>

-- 
To stop receiving notification emails like this one, please contact
"commits@kafka.apache.org" <commits@kafka.apache.org>.
