kafka-commits mailing list archives

From damian...@apache.org
Subject [01/10] kafka-site git commit: Update site for 0.11.0.1 release
Date Wed, 13 Sep 2017 12:25:18 GMT
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 8c85a0e4f -> 067552523


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/streams/core-concepts.html
----------------------------------------------------------------------
diff --git a/0110/streams/core-concepts.html b/0110/streams/core-concepts.html
index b50495d..7349c3a 100644
--- a/0110/streams/core-concepts.html
+++ b/0110/streams/core-concepts.html
@@ -21,6 +21,28 @@
     <h1>Core Concepts</h1>
 
     <p>
+        Kafka Streams is a client library for processing and analyzing data stored in Kafka.
+        It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management of application state.
+    </p>
+    <p>
+        Kafka Streams has a <b>low barrier to entry</b>: You can quickly write and run a small-scale proof-of-concept on a single machine; and you only need to run additional instances of your application on multiple machines to scale up to high-volume production workloads.
+        Kafka Streams transparently handles the load balancing of multiple instances of the same application by leveraging Kafka's parallelism model.
+    </p>
+    <p>
+        Some highlights of Kafka Streams:
+    </p>
+
+    <ul>
+        <li>Designed as a <b>simple and lightweight client library</b>, which can be easily embedded in any Java application and integrated with any existing packaging, deployment and operational tools that users have for their streaming applications.</li>
+        <li>Has <b>no external dependencies on systems other than Apache Kafka itself</b> as the internal messaging layer; notably, it uses Kafka's partitioning model to horizontally scale processing while maintaining strong ordering guarantees.</li>
+        <li>Supports <b>fault-tolerant local state</b>, which enables very fast and efficient stateful operations like windowed joins and aggregations.</li>
+        <li>Supports <b>exactly-once</b> processing semantics to guarantee that each record will be processed once and only once even when there is a failure on either Streams clients or Kafka brokers in the middle of processing.</li>
+        <li>Employs <b>one-record-at-a-time processing</b> to achieve millisecond processing latency, and supports <b>event-time based windowing operations</b> with late arrival of records.</li>
+        <li>Offers necessary stream processing primitives, along with a <b>high-level Streams DSL</b> and a <b>low-level Processor API</b>.</li>
+
+    </ul>
+
+    <p>
         We first summarize the key concepts of Kafka Streams.
     </p>
 
@@ -131,11 +153,11 @@
         To read more details on how this is done inside Kafka Streams, readers are recommended to read <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-129%3A+Streams+Exactly-Once+Semantics">KIP-129</a>.
 
         In order to achieve exactly-once semantics when running Kafka Streams applications, users can simply set the <code>processing.guarantee</code> config value to <b>exactly_once</b> (default value is <b>at_least_once</b>).
-        More details can be found in the <a href="/{{version}}/documentation#streamsconfigs">Kafka Streams Configs</a> section.
+        More details can be found in the <a href="/{{version}}/documentation#streamsconfigs"><b>Kafka Streams Configs</b></a> section.
     </p>
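+
+    <p>
+        For illustration, a minimal configuration sketch (the application id and bootstrap servers below are placeholders):
+    </p>
+
+    <pre class="brush: java;">
+        Properties props = new Properties();
+        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-exactly-once-app"); // placeholder name
+        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+        // Opt into exactly-once processing; the default is at_least_once.
+        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
+    </pre>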
 
     <div class="pagination">
-        <a href="/{{version}}/documentation/streams" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/developer-guide" class="pagination__btn pagination__btn__prev">Previous</a>
         <a href="/{{version}}/documentation/streams/architecture" class="pagination__btn pagination__btn__next">Next</a>
     </div>
 </script>
@@ -148,7 +170,7 @@
         <!--#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
-            <li><a href="/documentation/streams">Streams</a></li>
+            <li><a href="/documentation/streams">Kafka Streams API</a></li>
         </ul>
         <div class="p-content"></div>
     </div>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/streams/developer-guide.html
----------------------------------------------------------------------
diff --git a/0110/streams/developer-guide.html b/0110/streams/developer-guide.html
index 6c75172..15298a7 100644
--- a/0110/streams/developer-guide.html
+++ b/0110/streams/developer-guide.html
@@ -18,7 +18,7 @@
 <script><!--#include virtual="../js/templateData.js" --></script>
 
 <script id="content-template" type="text/x-handlebars-template">
-    <h1>Developer Guide</h1>
+    <h1>Developer Manual</h1>
 
     <p>
         There is a <a href="/{{version}}/documentation/#quickstart_kafkastreams">quickstart</a> example that shows how to run a stream processing program coded in the Kafka Streams library.
@@ -505,7 +505,7 @@
         A Kafka Streams application is typically running on many instances.
         The state that is locally available on any given instance is only a subset of the application's entire state.
         Querying the local stores on an instance will, by definition, <i>only return data locally available on that particular instance</i>.
-        We explain how to access data in state stores that are not locally available in section <a href="#streams_developer-guide_interactive-queries_discovery">Querying remote state stores (for the entire application)</a>.
+        We explain how to access data in state stores that are not locally available in section <a href="#streams_developer-guide_interactive-queries_discovery"><b>Querying remote state stores</b></a> (for the entire application).
     </p>
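+    <p>
+        For illustration, a rough sketch of querying a local key-value store, assuming a running <code>KafkaStreams</code> instance named <code>streams</code> whose topology defines a store named "Counts":
+    </p>
+
+    <pre class="brush: java;">
+        // Obtain a read-only handle on the local "Counts" store of this application instance.
+        ReadOnlyKeyValueStore&lt;String, Long&gt; localStore =
+            streams.store("Counts", QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
+
+        // Returns the locally available count for "hello", or null if this instance does not host that key.
+        Long helloCount = localStore.get("hello");
+    </pre>
+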
 
     <p>
@@ -536,7 +536,7 @@
         This read-only constraint is important to guarantee that the underlying state stores will never be mutated (e.g. new entries added) out-of-band, i.e. only the corresponding processing topology of Kafka Streams is allowed to mutate and update the state stores in order to ensure data consistency.
     </p>
     <p>
-        You can also implement your own <code>QueryableStoreType</code> as described in section <a href="#streams_developer-guide_interactive-queries_custom-stores#">Querying local custom stores</a>
+        You can also implement your own <code>QueryableStoreType</code> as described in section <a href="#streams_developer-guide_interactive-queries_custom-stores"><b>Querying local custom stores</b></a>.
     </p>
 
     <p>
@@ -924,6 +924,181 @@
         <li>Collectively, this allows us to query the full state of the entire application</li>
     </ul>
 
+    <h3><a id="streams_developer-guide_memory-management" href="#streams_developer-guide_memory-management">Memory Management</a></h3>
+
+
+    <h4><a id="streams_developer-guide_memory-management_record-cache" href="#streams_developer-guide_memory-management_record-cache">Record caches in the DSL</a></h4>
+    <p>
+    Developers of an application using the DSL have the option to specify, for an instance of a processing topology, the
+    total memory (RAM) size of a record cache that is leveraged by the following <code>KTable</code> instances:
+    </p>
+
+    <ol>
+        <li>Source <code>KTable</code>, i.e. <code>KTable</code> instances that are created via <code>KStreamBuilder#table()</code> or <code>KStreamBuilder#globalTable()</code>.</li>
+        <li>Aggregation <code>KTable</code>, i.e. instances of <code>KTable</code> that are created as a result of aggregations.</li>
+    </ol>
+    <p>
+        For such <code>KTable</code> instances, the record cache is used for:
+    </p>
+    <ol>
+        <li>Internal caching and compacting of output records before they are written by the underlying stateful processor node to its internal state store.</li>
+        <li>Internal caching and compacting of output records before they are forwarded from the underlying stateful processor node to any of its downstream processor nodes.</li>
+    </ol>
+    <p>
+        Here is a motivating example (a DSL sketch of such an aggregation follows the list):
+    </p>
+
+    <ul>
+        <li>Imagine the input is a <code>KStream&lt;String, Integer&gt;</code> with the records <code>&lt;A, 1&gt;, &lt;D, 5&gt;, &lt;A, 20&gt;, &lt;A, 300&gt;</code>.
+            Note that the focus in this example is on the records with key == <code>A</code>.
+        </li>
+        <li>
+            An aggregation computes the sum of record values, grouped by key, for the input above and returns a <code>KTable&lt;String, Integer&gt;</code>.
+            <ul>
+                <li><b>Without caching</b>, what is emitted for key <code>A</code> is a sequence of output records that represent changes in the
+                    resulting aggregation table (here, the parentheses denote changes, where the left and right numbers denote the new
+                    aggregate value and the previous aggregate value, respectively):
+                    <code>&lt;A, (1, null)&gt;, &lt;A, (21, 1)&gt;, &lt;A, (321, 21)&gt;</code>.</li>
+                <li>
+                    <b>With caching</b>, the aforementioned three output records for key <code>A</code> would likely be compacted in the cache,
+                    leading to a single output record <code>&lt;A, (321, null)&gt;</code> that is written to the aggregation's internal state store
+                    and forwarded to any downstream operations.
+                </li>
+            </ul>
+        </li>
+    </ul>
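+
+    <p>
+        As a rough DSL sketch of such a summing aggregation (assuming a <code>KStreamBuilder</code> named <code>builder</code>; the topic and store names are illustrative):
+    </p>
+
+    <pre class="brush: java;">
+        // Sum the Integer values per key; "sums-store" names the aggregation's state store.
+        KStream&lt;String, Integer&gt; input = builder.stream(Serdes.String(), Serdes.Integer(), "input-topic");
+        KTable&lt;String, Integer&gt; sums = input
+            .groupByKey(Serdes.String(), Serdes.Integer())
+            .reduce((aggValue, newValue) -> aggValue + newValue, "sums-store");
+    </pre>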
+
+    <p>
+        The cache size is specified through the <code>cache.max.bytes.buffering</code> parameter, which is a global setting per processing topology:
+    </p>
+
+    <pre class="brush: java;">
+        // Enable record cache of size 10 MB.
+        Properties streamsConfiguration = new Properties();
+        streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
+    </pre>
+
+    <p>
+        This parameter controls the number of bytes allocated for caching.
+        Specifically, for a processor topology instance with <code>T</code> threads and <code>C</code> bytes allocated for caching,
+        each thread will have an even share of <code>C/T</code> bytes to construct its own cache and use as it sees fit among its tasks.
+        That is, there are as many caches as there are threads, but caches are not shared across threads.
+        The basic API for the cache is made of <code>put()</code> and <code>get()</code> calls.
+        Records are evicted using a simple LRU scheme once the cache size is reached.
+        The first time a keyed record <code>R1 = &lt;K1, V1&gt;</code> finishes processing at a node, it is marked as dirty in the cache.
+        Any other keyed record <code>R2 = &lt;K1, V2&gt;</code> with the same key <code>K1</code> that is processed on that node during that time will overwrite <code>&lt;K1, V1&gt;</code>, which we also refer to as "being compacted".
+        Note that this has the same effect as <a href="https://kafka.apache.org/documentation.html#compaction">Kafka's log compaction</a>, but happens (a) earlier, while the
+        records are still in memory, and (b) within your client-side application rather than on the server-side aka the Kafka broker.
+        Upon flushing, <code>R2</code> is (1) forwarded to the next processing node and (2) written to the local state store.
+    </p>
+
+    <p>
+        The semantics of caching is that data is flushed to the state store and forwarded to the next downstream processor node
+        whenever the commit interval (<code>commit.interval.ms</code>) elapses or the cache fills up (<code>cache.max.bytes.buffering</code>, i.e. cache pressure), whichever comes first.
+        Both <code>commit.interval.ms</code> and <code>cache.max.bytes.buffering</code> are <b>global</b> parameters: they apply to all processor nodes in
+        the topology, i.e., it is not possible to specify different parameters for each node.
+        Below we provide some example settings for both parameters based on desired scenarios.
+    </p>
+
+    <p>To turn off caching, the cache size can be set to zero:</p>
+    <pre class="brush: java;">
+        // Disable record cache
+        Properties streamsConfiguration = new Properties();
+        streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
+    </pre>
+
+    <p>
+        Turning off caching might result in high write traffic for the underlying RocksDB store.
+        With default settings, caching is enabled within Kafka Streams but RocksDB caching is disabled.
+        Thus, to avoid high write traffic, it is recommended to enable RocksDB caching if Kafka Streams caching is turned off.
+    </p>
+
+    <p>
+        For example, the RocksDB block cache could be set to 100 MB and the write buffer size to 32 MB.
+    </p>
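+    <p>
+        One way to do this, sketched below, is via a custom <code>RocksDBConfigSetter</code> registered through the <code>rocksdb.config.setter</code> config
+        (a sketch only; the exact RocksDB option names depend on the bundled RocksDB version):
+    </p>
+
+    <pre class="brush: java;">
+        public static class CustomRocksDBConfig implements RocksDBConfigSetter {
+            @Override
+            public void setConfig(final String storeName, final Options options, final Map&lt;String, Object&gt; configs) {
+                // Use a 100 MB block cache and 32 MB write buffers, as suggested above.
+                BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
+                tableConfig.setBlockCacheSize(100 * 1024 * 1024L);
+                options.setTableFormatConfig(tableConfig);
+                options.setWriteBufferSize(32 * 1024 * 1024L);
+            }
+        }
+
+        // Register the setter in the Streams configuration.
+        streamsConfiguration.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
+    </pre>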
+    <p>
+        To enable caching but still have an upper bound on how long records will be cached, the commit interval can be set
+        appropriately (in this example, it is set to 1000 milliseconds):
+    </p>
+    <pre class="brush: java;">
+        Properties streamsConfiguration = new Properties();
+        // Enable record cache of size 10 MB.
+        streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
+        // Set commit interval to 1 second.
+        streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
+    </pre>
+
+    <p>
+        The illustration below shows the effect of these two configurations visually.
+        For simplicity we have records with 4 keys: blue, red, yellow and green. Without loss of generality, let's assume the cache has space for only 3 keys.
+        When the cache is disabled, we observe that all the input records will be output. With the cache enabled, we make the following observations.
+        First, most records are output at the end of a commit interval (e.g., at <code>t1</code> one blue record is output, which is the final overwrite of the blue key up to that time).
+        Second, some records are output because of cache pressure, i.e. before the end of a commit interval (cf. the red record right before <code>t2</code>).
+        With smaller cache sizes we expect cache pressure to be the primary factor that dictates when records are output. With large cache sizes, the commit interval will be the primary factor.
+        Third, the number of records output has been reduced (here: from 15 to 8).
+    </p>
+
+    <img class="centered" src="/{{version}}/images/streams-cache-and-commit-interval.png" style="width:500pt;height:400pt;">
+    <h4><a id="streams_developer-guide_memory-management_state-store-cache" href="#streams_developer-guide_memory-management_state-store-cache">State store caches in the Processor API</a></h4>
+
+    <p>
+        Developers of a Kafka Streams application using the Processor API have the option to specify, for an instance of a
+        processing topology, the total memory (RAM) size of the <i>state store cache</i> that is used for:
+    </p>
+
+    <ul><li>Internal <i>caching and compacting</i> of output records before they are written from a <b>stateful</b> processor node to its state stores.</li></ul>
+
+    <p>
+        Note that, unlike <a href="#streams_developer-guide_memory-management_record-cache">record caches</a> in the DSL, the state
+        store cache in the Processor API <i>will not cache or compact</i> any output records that are being forwarded downstream.
+        In other words, downstream processor nodes see all records, whereas the state stores see a reduced number of records.
+        It is important to note that this does not impact correctness of the system but is merely a performance optimization
+        for the state stores.
+    </p>
+    <p>
+        A note on terminology: we use the narrower term <i>state store caches</i> when we refer to the Processor API and the
+        broader term <i>record caches</i> when we are writing about the DSL.
+        We made a conscious choice to not expose the more general record caches to the Processor API so that we keep it simple and flexible.
+        For example, developers using the Processor API might choose to store a record in a state store while forwarding a different value downstream, i.e., they
+        might not want to use the unified record cache for both state store and forwarding downstream.
+    </p>
+    <p>
+        Following from the example first shown in section <a href="#streams_processor_statestore">State Stores</a>, to enable caching, you can
+        add the <code>enableCaching</code> call (note that caches are disabled by default and there is no explicit <code>disableCaching</code>
+        call):
+    </p>
+    <pre class="brush: java;">
+        StateStoreSupplier countStoreSupplier =
+            Stores.create("Counts")
+                .withKeys(Serdes.String())
+                .withValues(Serdes.Long())
+                .persistent()
+                .enableCaching()
+                .build();
+    </pre>
+
+    <h4><a id="streams_developer-guide_memory-management_other_memory_usage" href="#streams_developer-guide_memory-management_other_memory_usage">Other memory usage</a></h4>
+    <p>
+    There are other modules inside Apache Kafka that allocate memory at runtime. They include the following:
+    </p>
+    <ul>
+        <li>Producer buffering, managed by the producer config <code>buffer.memory</code></li>
+
+        <li>Consumer buffering, currently not strictly managed, but can be indirectly controlled by fetch size, i.e.,
+            <code>fetch.max.bytes</code> and <code>fetch.max.wait.ms</code>.</li>
+
+        <li>Both producer and consumer also have separate TCP send / receive buffers that are not counted as the buffering memory.
+            These are controlled by the <code>send.buffer.bytes</code> / <code>receive.buffer.bytes</code> configs.</li>
+
+        <li>Deserialized objects buffering: after <code>consumer.poll()</code> returns records, they will be deserialized to extract
+            the timestamp and buffered within the Streams library.
+            Currently this is only indirectly controlled by <code>buffered.records.per.partition</code>.</li>
+
+        <li>RocksDB's own memory usage, both on-heap and off-heap; critical configs (for RocksDB version 4.1.0) include
+            <code>block_cache_size</code>, <code>write_buffer_size</code> and <code>max_write_buffer_number</code>.
+            These can be specified through the <code>rocksdb.config.setter</code> configuration.</li>
+    </ul>
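+
+    <p>
+        For illustration, a rough sketch of bounding some of these memory-related settings from the Streams configuration
+        (a sketch only; the values are placeholders, and it assumes that plain producer and consumer settings placed in the Streams configuration are passed through to the internal clients):
+    </p>
+
+    <pre class="brush: java;">
+        Properties props = new Properties();
+        // Bound the Streams record caches (see above).
+        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
+        // Bound the number of deserialized records buffered per partition.
+        props.put(StreamsConfig.BUFFERED_RECORDS_PER_PARTITION_CONFIG, 100);
+        // Settings for the internal producer and consumer.
+        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 32 * 1024 * 1024L);
+        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 50 * 1024 * 1024);
+    </pre>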
+
     <h3><a id="streams_configure_execute" href="#streams_configure_execute">Application Configuration and Execution</a></h3>
 
     <p>
@@ -1094,8 +1269,8 @@
     </p>
 
     <div class="pagination">
-        <a href="/{{version}}/documentation/streams/architecture" class="pagination__btn pagination__btn__prev">Previous</a>
-        <a href="/{{version}}/documentation/streams/upgrade-guide" class="pagination__btn pagination__btn__next">Next</a>
+        <a href="/{{version}}/documentation/streams/quickstart" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/core-concepts" class="pagination__btn pagination__btn__next">Next</a>
     </div>
 </script>
 
@@ -1107,7 +1282,7 @@
         <!--#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
-            <li><a href="/documentation/streams">Streams</a></li>
+            <li><a href="/documentation/streams">Kafka Streams API</a></li>
         </ul>
         <div class="p-content"></div>
     </div>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/streams/index.html
----------------------------------------------------------------------
diff --git a/0110/streams/index.html b/0110/streams/index.html
index 2d30169..bcaa831 100644
--- a/0110/streams/index.html
+++ b/0110/streams/index.html
@@ -18,56 +18,195 @@
 <script><!--#include virtual="../js/templateData.js" --></script>
 
 <script id="streams-template" type="text/x-handlebars-template">
-    <h1>Streams</h1>
-
-    <ol class="toc">
-        <li>
-            <a href="/{{version}}/documentation/streams/core-concepts">Core Concepts</a>
-        </li>
-        <li>
-            <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
-        </li>
-        <li>
-            <a href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-            <ul>
-                <li><a href="/{{version}}/documentation/streams/developer-guide#streams_processor">Low-level Processor API</a></li>
-                <li><a href="/{{version}}/documentation/streams/developer-guide#streams_dsl">High-level Streams DSL</a></li>
-                <li><a href="/{{version}}/documentation/streams/developer-guide#streams_interactive_querie">Interactive Queries</a></li>
-                <li><a href="/{{version}}/documentation/streams/developer-guide#streams_execute">Application Configuration and Execution</a></li>
-            </ul>
-        </li>
-        <li>
-            <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade Guide and API Changes</a>
-        </li>
-    </ol>
-
-    <h2>Overview</h2>
-
-    <p>
-        Kafka Streams is a client library for processing and analyzing data stored in Kafka.
-        It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management of application state.
-    </p>
-    <p>
-        Kafka Streams has a <b>low barrier to entry</b>: You can quickly write and run a small-scale proof-of-concept on a single machine; and you only need to run additional instances of your application on multiple machines to scale up to high-volume production workloads.
-        Kafka Streams transparently handles the load balancing of multiple instances of the same application by leveraging Kafka's parallelism model.
-    </p>
-    <p>
-        Some highlights of Kafka Streams:
-    </p>
-
-    <ul>
-        <li>Designed as a <b>simple and lightweight client library</b>, which can be easily embedded in any Java application and integrated with any existing packaging, deployment and operational tools that users have for their streaming applications.</li>
-        <li>Has <b>no external dependencies on systems other than Apache Kafka itself</b> as the internal messaging layer; notably, it uses Kafka's partitioning model to horizontally scale processing while maintaining strong ordering guarantees.</li>
-        <li>Supports <b>fault-tolerant local state</b>, which enables very fast and efficient stateful operations like windowed joins and aggregations.</li>
-        <li>Supports <b>exactly-once</b> processing semantics to guarantee that each record will be processed once and only once even when there is a failure on either Streams clients or Kafka brokers in the middle of processing.</li>
-        <li>Employs <b>one-record-at-a-time processing</b> to achieve millisecond processing latency, and supports <b>event-time based windowing operations</b> with late arrival of records.</li>
-        <li>Offers necessary stream processing primitives, along with a <b>high-level Streams DSL</b> and a <b>low-level Processor API</b>.</li>
+    <h1>Kafka Streams API</h1>
 
+    <h3 style="max-width: 75rem;">The easiest way to write mission-critical real-time applications and microservices with all the benefits of Kafka's server-side cluster technology.</h3>
+
+    <div class="hero">
+        <div class="hero__diagram">
+            <img src="/{{version}}/images/streams-welcome.png" />
+        </div>
+        <div class="hero__cta">
+            <a style="display: none;" href="/{{version}}/documentation/streams/tutorial" class="btn">Write your first app</a>
+            <a href="/{{version}}/documentation/streams/quickstart" class="btn">Play with demo app</a>
+        </div>
+    </div>
+
+    <ul class="feature-list">
+        <li>Write standard Java applications</li>
+        <li>Exactly-once processing semantics</li>
+        <li>No separate processing cluster required</li>
+        <li>Develop on Mac, Linux, Windows</li>
+        <li>Elastic, highly scalable, fault-tolerant</li>
+        <li>Deploy to containers, VMs, bare metal, cloud</li>
+        <li>Equally viable for small, medium, &amp; large use cases</li>
+        <li>Fully integrated with Kafka security</li>
     </ul>
 
+    <div class="cards">
+        <a class="card" href="/{{version}}/documentation/streams/developer-guide">
+            <img class="card__icon" src="/{{version}}/images/icons/documentation.png" />
+            <img class="card__icon card__icon--hover" src="/{{version}}/images/icons/documentation--white.png" />
+            <span class="card__label">Developer manual</span>
+        </a>
+        <a style="display: none;" class="card" href="/{{version}}/documentation/streams/tutorial">
+            <img class="card__icon" src="/{{version}}/images/icons/tutorials.png" />
+            <img class="card__icon card__icon--hover" src="/{{version}}/images/icons/tutorials--white.png" />
+            <span class="card__label">Tutorials</span>
+        </a>
+        <a class="card" href="/{{version}}/documentation/streams/core-concepts">
+            <img class="card__icon" src="/{{version}}/images/icons/architecture.png" />
+            <img class="card__icon card__icon--hover" src="/{{version}}/images/icons/architecture--white.png" />
+            <span class="card__label">Concepts</span>
+        </a>
+    </div>
+
+    <h3>Hello Kafka Streams</h3>
+    <p>The code example below implements a WordCount application that is elastic, highly scalable, fault-tolerant, stateful, and ready to run in production at large scale.</p>
+
+    <div class="code-example">
+        <div class="btn-group">
+            <a class="selected b-java-8" data-section="java-8">Java 8+</a>
+            <a class="b-java-7" data-section="java-7">Java 7</a>
+            <a class="b-scala" data-section="scala">Scala</a>
+        </div>
+
+        <div class="code-example__snippet b-java-8 selected">
+            <pre class="brush: java;">
+                import org.apache.kafka.common.serialization.Serdes;
+                import org.apache.kafka.streams.KafkaStreams;
+                import org.apache.kafka.streams.StreamsConfig;
+                import org.apache.kafka.streams.kstream.KStream;
+                import org.apache.kafka.streams.kstream.KStreamBuilder;
+                import org.apache.kafka.streams.kstream.KTable;
+
+                import java.util.Arrays;
+                import java.util.Properties;
+
+                public class WordCountApplication {
+
+                    public static void main(final String[] args) throws Exception {
+                        Properties config = new Properties();
+                        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application");
+                        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092");
+                        config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+                        config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+                        KStreamBuilder builder = new KStreamBuilder();
+                        KStream&lt;String, String&gt; textLines = builder.stream("TextLinesTopic");
+                        KTable&lt;String, Long&gt; wordCounts = textLines
+                            .flatMapValues(textLine -> Arrays.asList(textLine.toLowerCase().split("\\W+")))
+                            .groupBy((key, word) -> word)
+                            .count("Counts");
+                        wordCounts.to(Serdes.String(), Serdes.Long(), "WordsWithCountsTopic");
+
+                        KafkaStreams streams = new KafkaStreams(builder, config);
+                        streams.start();
+                    }
+
+                }
+            </pre>
+        </div>
+
+        <div class="code-example__snippet b-java-7">
+            <pre class="brush: java;">
+                import org.apache.kafka.common.serialization.Serdes;
+                import org.apache.kafka.streams.KafkaStreams;
+                import org.apache.kafka.streams.StreamsConfig;
+                import org.apache.kafka.streams.kstream.KStream;
+                import org.apache.kafka.streams.kstream.KStreamBuilder;
+                import org.apache.kafka.streams.kstream.KTable;
+                import org.apache.kafka.streams.kstream.KeyValueMapper;
+                import org.apache.kafka.streams.kstream.ValueMapper;
+
+                import java.util.Arrays;
+                import java.util.Properties;
+
+                public class WordCountApplication {
+
+                    public static void main(final String[] args) throws Exception {
+                        Properties config = new Properties();
+                        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application");
+                        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092");
+                        config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+                        config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+                        KStreamBuilder builder = new KStreamBuilder();
+                        KStream&lt;String, String&gt; textLines = builder.stream("TextLinesTopic");
+                        KTable&lt;String, Long&gt; wordCounts = textLines
+                            .flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
+                                @Override
+                                public Iterable&lt;String&gt; apply(String textLine) {
+                                    return Arrays.asList(textLine.toLowerCase().split("\\W+"));
+                                }
+                            })
+                            .groupBy(new KeyValueMapper&lt;String, String, String&gt;() {
+                                @Override
+                                public String apply(String key, String word) {
+                                    return word;
+                                }
+                            })
+                            .count("Counts");
+                        wordCounts.to(Serdes.String(), Serdes.Long(), "WordsWithCountsTopic");
+
+                        KafkaStreams streams = new KafkaStreams(builder, config);
+                        streams.start();
+                    }
+
+                }
+            </pre>
+        </div>
+
+        <div class="code-example__snippet b-scala">
+            <pre class="brush: scala;">
+                import java.lang.Long
+                import java.util.Properties
+                import java.util.concurrent.TimeUnit
+
+                import org.apache.kafka.common.serialization._
+                import org.apache.kafka.streams._
+                import org.apache.kafka.streams.kstream.{KStream, KStreamBuilder, KTable}
+
+                import scala.collection.JavaConverters.asJavaIterableConverter
+
+                object WordCountApplication {
+
+                    def main(args: Array[String]) {
+                        val config: Properties = {
+                            val p = new Properties()
+                            p.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application")
+                            p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092")
+                            p.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
+                            p.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)
+                            p
+                        }
+
+                        val builder: KStreamBuilder = new KStreamBuilder()
+                        val textLines: KStream[String, String] = builder.stream("TextLinesTopic")
+                        val wordCounts: KTable[String, Long] = textLines
+                            .flatMapValues(textLine => textLine.toLowerCase.split("\\W+").toIterable.asJava)
+                            .groupBy((_, word) => word)
+                            .count("Counts")
+                        wordCounts.to(Serdes.String(), Serdes.Long(), "WordsWithCountsTopic")
+
+                        val streams: KafkaStreams = new KafkaStreams(builder, config)
+                        streams.start()
+
+                        Runtime.getRuntime.addShutdownHook(new Thread(() => {
+                            streams.close(10, TimeUnit.SECONDS)
+                        }))
+                    }
+
+                }
+            </pre>
+        </div>
+    </div>
+
+    
+
     <div class="pagination">
         <a href="#" class="pagination__btn pagination__btn__prev pagination__btn--disabled">Previous</a>
-        <a href="/{{version}}/documentation/streams/core-concepts" class="pagination__btn pagination__btn__next">Next</a>
+        <a href="/{{version}}/documentation/streams/quickstart" class="pagination__btn pagination__btn__next">Next</a>
     </div>
 </script>
 
@@ -84,6 +223,7 @@
     </div>
 </div>
 <!--#include virtual="../../includes/_footer.htm" -->
+
 <script>
 $(function() {
   // Show selected style on nav item
@@ -91,5 +231,12 @@ $(function() {
 
   // Display docs subnav items
   $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+
+  // Show selected code example
+  $('.btn-group a').click(function(){
+      var targetClass = '.b-' + $(this).data().section;
+      $('.code-example__snippet, .btn-group a').removeClass('selected');
+      $(targetClass).addClass('selected');
+  });
 });
 </script>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/streams/quickstart.html
----------------------------------------------------------------------
diff --git a/0110/streams/quickstart.html b/0110/streams/quickstart.html
new file mode 100644
index 0000000..977fa5f
--- /dev/null
+++ b/0110/streams/quickstart.html
@@ -0,0 +1,352 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<script><!--#include virtual="../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <h1>Play with a Streams Application</h1>
+
+<p>
+  This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. However, if you have already started Kafka and
+  ZooKeeper, feel free to skip the first two steps.
+</p>
+
+  <p>
+ Kafka Streams is a client library for building mission-critical real-time applications and microservices,
+  where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of
+  writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's
+  server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed,
+ and much more.
+  </p>
+  <p>
+This quickstart example will demonstrate how to run a streaming application coded in this library. Here is the gist
+of the <code><a href="https://github.com/apache/kafka/blob/{{dotVersion}}/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java">WordCountDemo</a></code> example code (converted to use Java 8 lambda expressions for easy reading).
+</p>
+<pre class="brush: java;">
+// Serializers/deserializers (serde) for String and Long types
+final Serde&lt;String&gt; stringSerde = Serdes.String();
+final Serde&lt;Long&gt; longSerde = Serdes.Long();
+
+// Construct a `KStream` from the input topic "streams-plaintext-input", where message values
+// represent lines of text (for the sake of this example, we ignore whatever may be stored
+// in the message keys).
+KStream&lt;String, String&gt; textLines = builder.stream(stringSerde, stringSerde, "streams-plaintext-input");
+
+KTable&lt;String, Long&gt; wordCounts = textLines
+    // Split each text line, by whitespace, into words.
+    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
+
+    // Group the text words as message keys
+    .groupBy((key, value) -> value)
+
+    // Count the occurrences of each word (message key).
+    .count("Counts");
+
+// Store the running counts as a changelog stream to the output topic.
+wordCounts.to(stringSerde, longSerde, "streams-wordcount-output");
+</pre>
+
+<p>
+It implements the WordCount
+algorithm, which computes a word occurrence histogram from the input text. However, unlike other WordCount examples
+you might have seen before that operate on bounded data, the WordCount demo application behaves slightly differently because it is
+designed to operate on an <b>infinite, unbounded stream</b> of data. Similar to the bounded variant, it is a stateful algorithm that
+tracks and updates the counts of words. However, since it must assume potentially
+unbounded input data, it will periodically output its current state and results while continuing to process more data
+because it cannot know when it has processed "all" the input data.
+</p>
+<p>
+  As the first step, we will start Kafka (unless you already have it started) and then we will
+  prepare input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.
+</p>
+
+<h4><a id="quickstart_streams_download" href="#quickstart_streams_download">Step 1: Download the code</a></h4>
+
+<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
+Note that there are multiple downloadable Scala versions and we choose to use the recommended version ({{scalaVersion}}) here:
+
+<pre class="brush: bash;">
+&gt; tar -xzf kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz
+&gt; cd kafka_{{scalaVersion}}-{{fullDotVersion}}
+</pre>
+
+<h4><a id="quickstart_streams_startserver" href="#quickstart_streams_startserver">Step 2: Start the Kafka server</a></h4>
+
+<p>
+Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a> so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.
+</p>
+
+<pre class="brush: bash;">
+&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
+[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+...
+</pre>
+
+<p>Now start the Kafka server:</p>
+<pre class="brush: bash;">
+&gt; bin/kafka-server-start.sh config/server.properties
+[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
+[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
+...
+</pre>
+
+
+<h4><a id="quickstart_streams_prepare" href="#quickstart_streams_prepare">Step 3: Prepare input topic and start Kafka producer</a></h4>
+
+<!--
+
+<pre class="brush: bash;">
+&gt; echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt
+</pre>
+Or on Windows:
+<pre class="brush: bash;">
+&gt; echo all streams lead to kafka> file-input.txt
+&gt; echo hello kafka streams>> file-input.txt
+&gt; echo|set /p=join kafka summit>> file-input.txt
+</pre>
+
+-->
+
+Next, we create the input topic named <b>streams-plaintext-input</b> and the output topic named <b>streams-wordcount-output</b>:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-topics.sh --create \
+    --zookeeper localhost:2181 \
+    --replication-factor 1 \
+    --partitions 1 \
+    --topic streams-plaintext-input
+Created topic "streams-plaintext-input".
+
+&gt; bin/kafka-topics.sh --create \
+    --zookeeper localhost:2181 \
+    --replication-factor 1 \
+    --partitions 1 \
+    --topic streams-wordcount-output
+Created topic "streams-wordcount-output".
+</pre>
+
+The created topics can be described with the same <b>kafka-topics</b> tool:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-topics.sh --zookeeper localhost:2181 --describe
+
+Topic:streams-plaintext-input	PartitionCount:1	ReplicationFactor:1	Configs:
+    Topic: streams-plaintext-input	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
+Topic:streams-wordcount-output	PartitionCount:1	ReplicationFactor:1	Configs:
+	Topic: streams-wordcount-output	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
+</pre>
+
+<h4><a id="quickstart_streams_start" href="#quickstart_streams_start">Step 4: Start the Wordcount Application</a></h4>
+
+The following command starts the WordCount demo application:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
+</pre>
+
+<p>
+The demo application will read from the input topic <b>streams-plaintext-input</b>, perform the computations of the WordCount algorithm on each of the read messages,
+and continuously write its current results to the output topic <b>streams-wordcount-output</b>.
+Hence there won't be any STDOUT output except log entries as the results are written back into Kafka.
+</p>
+
+Now we can start the console producer in a separate terminal to write some input data to this topic:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
+</pre>
+
+and inspect the output of the WordCount demo application by reading from its output topic with the console consumer in a separate terminal:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+    --topic streams-wordcount-output \
+    --from-beginning \
+    --formatter kafka.tools.DefaultMessageFormatter \
+    --property print.key=true \
+    --property print.value=true \
+    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
+    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
+</pre>
+
+
+<h4><a id="quickstart_streams_process" href="#quickstart_streams_process">Step 5: Process some data</a></h4>
+
+Now let's write a message with the console producer into the input topic <b>streams-plaintext-input</b> by entering a single line of text and then hitting &lt;RETURN&gt;.
+This will send a new message to the input topic, where the message key is null and the message value is the string-encoded text line that you just entered
+(in practice, input data for applications will typically be streaming continuously into Kafka, rather than being manually entered as we do in this quickstart):
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
+all streams lead to kafka
+</pre>
+
+<p>
+This message will be processed by the WordCount application and the following output data will be written to the <b>streams-wordcount-output</b> topic and printed by the console consumer:
+</p>
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+    --topic streams-wordcount-output \
+    --from-beginning \
+    --formatter kafka.tools.DefaultMessageFormatter \
+    --property print.key=true \
+    --property print.value=true \
+    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
+    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
+
+all	    1
+streams	1
+lead	1
+to	    1
+kafka	1
+</pre>
+
+<p>
+Here, the first column is the Kafka message key in <code>java.lang.String</code> format and represents a word that is being counted, and the second column is the message value in <code>java.lang.Long</code> format, representing the word's latest count.
+</p>
+
+Now let's continue writing one more message with the console producer into the input topic <b>streams-plaintext-input</b>.
+Enter the text line "hello kafka streams" and hit &lt;RETURN&gt;.
+Your terminal should look as follows:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
+all streams lead to kafka
+hello kafka streams
+</pre>
+
+In your other terminal in which the console consumer is running, you will observe that the WordCount application wrote new output data:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+    --topic streams-wordcount-output \
+    --from-beginning \
+    --formatter kafka.tools.DefaultMessageFormatter \
+    --property print.key=true \
+    --property print.value=true \
+    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
+    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
+
+all	    1
+streams	1
+lead	1
+to	    1
+kafka	1
+hello	1
+kafka	2
+streams	2
+</pre>
+
+Here the last printed lines <b>kafka 2</b> and <b>streams 2</b> indicate updates to the keys <b>kafka</b> and <b>streams</b> whose counts have been incremented from <b>1</b> to <b>2</b>.
+Whenever you write further input messages to the input topic, you will observe new messages being added to the <b>streams-wordcount-output</b> topic,
+representing the most recent word counts as computed by the WordCount application.
+Let's enter one final input text line "join kafka summit" and hit &lt;RETURN&gt; in the console producer to the input topic <b>streams-plaintext-input</b> before we wrap up this quickstart:
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
+all streams lead to kafka
+hello kafka streams
+join kafka summit
+</pre>
+
+The <b>streams-wordcount-output</b> topic will subsequently show the corresponding updated word counts (see last three lines):
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
+    --topic streams-wordcount-output \
+    --from-beginning \
+    --formatter kafka.tools.DefaultMessageFormatter \
+    --property print.key=true \
+    --property print.value=true \
+    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
+    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
+
+all	    1
+streams	1
+lead	1
+to	    1
+kafka	1
+hello	1
+kafka	2
+streams	2
+join	1
+kafka	3
+summit	1
+</pre>
+
+As one can see, the output of the WordCount application is actually a continuous stream of updates, where each output record (i.e. each line in the original output above) is
+an updated count of a single word (the record key, e.g. "kafka"). For multiple records with the same key, each later record is an update of the previous one.
+
+<p>
+The two diagrams below illustrate what is essentially happening behind the scenes.
+The first column shows the evolution of the current state of the <code>KTable&lt;String, Long&gt;</code> that is counting word occurrences (the result of the <code>count</code> operation).
+The second column shows the change records that result from state updates to the KTable and that are being sent to the output Kafka topic <b>streams-wordcount-output</b>.
+</p>
+
+<img src="/{{version}}/images/streams-table-updates-02.png" style="float: right; width: 25%;">
+<img src="/{{version}}/images/streams-table-updates-01.png" style="float: right; width: 25%;">
+
+<p>
+First the text line "all streams lead to kafka" is being processed.
+The <code>KTable</code> is being built up as each new word results in a new table entry (highlighted with a green background), and a corresponding change record is sent to the downstream <code>KStream</code>.
+</p>
+<p>
+When the second text line "hello kafka streams" is processed, we observe, for the first time, that existing entries in the <code>KTable</code> are being updated (here: for the words "kafka" and for "streams"). And again, change records are being sent to the output topic.
+</p>
+<p>
+And so on (we skip the illustration of how the third line is being processed). This explains why the output topic has the contents we showed above, because it contains the full record of changes.
+</p>
+
+<p>
+Looking beyond the scope of this concrete example, what Kafka Streams is doing here is to leverage the duality between a table and a changelog stream (here: table = the KTable, changelog stream = the downstream KStream): you can publish every change of the table to a stream, and if you consume the entire changelog stream from beginning to end, you can reconstruct the contents of the table.
+</p>
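+
+<p>
+As a rough sketch of this duality (the store name below is illustrative), the changelog topic written by the WordCount demo could itself be read back into a table:
+</p>
+
+<pre class="brush: java;">
+// Reading the changelog stream "streams-wordcount-output" back into a table
+// reconstructs the word counts.
+KStreamBuilder builder = new KStreamBuilder();
+KTable&lt;String, Long&gt; restoredCounts =
+    builder.table(Serdes.String(), Serdes.Long(), "streams-wordcount-output", "restored-counts-store");
+</pre>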
+
+<h4><a id="quickstart_streams_stop" href="#quickstart_streams_stop">Step 6: Teardown the application</a></h4>
+
+<p>You can now stop the console consumer, the console producer, the WordCount application, the Kafka broker and the ZooKeeper server, in that order, via <b>Ctrl-C</b>.</p>
+
+ <div class="pagination">
+        <a href="/{{version}}/documentation/streams" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/tutorial" class="pagination__btn pagination__btn__next">Next</a>
+    </div>
+</script>
+
+<div class="p-quickstart-streams"></div>
+
+<!--#include virtual="../../includes/_header.htm" -->
+<!--#include virtual="../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+    <!--#include virtual="../../includes/_nav.htm" -->
+    <div class="right">
+        <!--#include virtual="../../includes/_docs_banner.htm" -->
+        <ul class="breadcrumbs">
+            <li><a href="/documentation">Documentation</a></li>
+            <li><a href="/documentation/streams">Streams</a></li>
+        </ul>
+        <div class="p-content"></div>
+    </div>
+</div>
+<!--#include virtual="../../includes/_footer.htm" -->
+<script>
+$(function() {
+  // Show selected style on nav item
+  $('.b-nav__streams').addClass('selected');
+
+  // Display docs subnav items
+  $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+});
+</script>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/streams/tutorial.html
----------------------------------------------------------------------
diff --git a/0110/streams/tutorial.html b/0110/streams/tutorial.html
new file mode 100644
index 0000000..a1520de
--- /dev/null
+++ b/0110/streams/tutorial.html
@@ -0,0 +1,526 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<script><!--#include virtual="../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+    <h1>Write your own Streams Applications</h1>
+
+    <p>
+        In this guide we will start from scratch on setting up your own project to write a stream processing application using Kafka's Streams API.
+        If you have not done so already, it is highly recommended that you first read the <a href="/{{version}}/documentation/streams/quickstart">quickstart</a>, which shows how to run a Streams application written with Kafka Streams.
+    </p>
+
+    <h4><a id="tutorial_maven_setup" href="#tutorial_maven_setup">Setting up a Maven Project</a></h4>
+
+    <p>
+        We are going to use a Kafka Streams Maven archetype to create a Streams project structure, with the following command:
+    </p>
+
+    <pre class="brush: bash;">
+        mvn archetype:generate \
+            -DarchetypeGroupId=org.apache.kafka \
+            -DarchetypeArtifactId=streams-quickstart-java \
+            -DarchetypeVersion={{fullDotVersion}} \
+            -DgroupId=streams.examples \
+            -DartifactId=streams.examples \
+            -Dversion=0.1 \
+            -Dpackage=myapps
+    </pre>
+
+    <p>
+        You can use different values for the <code>groupId</code>, <code>artifactId</code> and <code>package</code> parameters if you like.
+        Assuming the above parameter values are used, this command will create a project structure that looks like this:
+    </p>
+
+    <pre class="brush: bash;">
+        &gt; tree streams.examples
+        streams.examples
+        |-- pom.xml
+        |-- src
+            |-- main
+                |-- java
+                |   |-- myapps
+                |       |-- LineSplit.java
+                |       |-- Pipe.java
+                |       |-- WordCount.java
+                |-- resources
+                    |-- log4j.properties
+    </pre>
+
+    <p>
+        The <code>pom.xml</code> file included in the project already has the Streams dependency defined,
+        and there are already several example programs written with the Streams library under <code>src/main/java</code>.
+        Since we are going to start writing such programs from scratch, we can now delete these examples:
+    </p>
+
+    <pre class="brush: bash;">
+        &gt; cd streams.examples
+        &gt; rm src/main/java/myapps/*.java
+    </pre>
+
+    <h4><a id="tutorial_code_pipe" href="#tutorial_code_pipe">Writing a first Streams application: Pipe</a></h4>
+
+    It's coding time now! Feel free to open your favorite IDE and import this Maven project, or simply open a text editor and create a Java file under <code>src/main/java</code>.
+    Let's name it <code>Pipe.java</code>:
+
+    <pre class="brush: java;">
+        package myapps;
+
+        public class Pipe {
+
+            public static void main(String[] args) throws Exception {
+
+            }
+        }
+    </pre>
+
+    <p>
+        We are going to fill in the <code>main</code> function to write this pipe program. Note that we will not list the import statements as we go since IDEs can usually add them automatically.
+        However, if you are using a text editor, you need to manually add the imports, and at the end of this section we'll show the complete code snippet with import statements for you.
+    </p>
+
+    <p>
+        The first step to write a Streams application is to create a <code>java.util.Properties</code> map to specify different Streams execution configuration values as defined in <code>StreamsConfig</code>.
+        A couple of important configuration values you need to set are: <code>StreamsConfig.BOOTSTRAP_SERVERS_CONFIG</code>, which specifies a list of host/port pairs to use for establishing the initial connection to the Kafka cluster,
+        and <code>StreamsConfig.APPLICATION_ID_CONFIG</code>, which gives the unique identifier of your Streams application to distinguish it from other applications talking to the same Kafka cluster:
+    </p>
+
+    <pre class="brush: java;">
+        Properties props = new Properties();
+        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
+        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // assuming that the Kafka broker this application is talking to runs on local machine with port 9092
+    </pre>
+
+    <p>
+        In addition, you can customize other configurations in the same map, for example, the default serialization and deserialization classes for the record key-value pairs:
+    </p>
+
+    <pre class="brush: java;">
+        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+    </pre>
+
+    <p>
+        For a full list of configurations of Kafka Streams please refer to this <a href="/{{version}}/documentation/#streamsconfigs">table</a>.
+    </p>
+
+    <p>
+        Next we will define the computational logic of our Streams application.
+        In Kafka Streams this computational logic is defined as a <code>topology</code> of connected processor nodes.
+        We can use a topology builder to construct such a topology:
+    </p>
+
+    <pre class="brush: java;">
+        final KStreamBuilder builder = new KStreamBuilder();
+    </pre>
+
+    <p>
+        And then create a source stream from a Kafka topic named <code>streams-plaintext-input</code> using this topology builder:
+    </p>
+
+    <pre class="brush: java;">
+        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+    </pre>
+
+    <p>
+        Now we have a <code>KStream</code> that continuously generates records from its source Kafka topic <code>streams-plaintext-input</code>.
+        The records are organized as <code>String</code>-typed key-value pairs.
+        The simplest thing we can do with this stream is to write it into another Kafka topic, say <code>streams-pipe-output</code>:
+    </p>
+
+    <pre class="brush: java;">
+        source.to("streams-pipe-output");
+    </pre>
+
+    <p>
+        Note that we can also concatenate the above two lines into a single line as:
+    </p>
+
+    <pre class="brush: java;">
+        builder.stream("streams-plaintext-input").to("streams-pipe-output");
+    </pre>
+
+    <p>
+        We can now construct the Streams client with the two components we have just created above: the configuration map and the topology builder object
+        (one can also construct a <code>StreamsConfig</code> object from the <code>props</code> map and then pass that object to the constructor;
+        <code>KafkaStreams</code> has overloaded constructors that take either type).
+    </p>
+
+    <pre class="brush: java;">
+        final KafkaStreams streams = new KafkaStreams(builder, props);
+    </pre>
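+
+    <p>
+        For illustration, a minimal sketch of the <code>StreamsConfig</code>-based overload mentioned above (the variable name <code>config</code> is just a placeholder):
+    </p>
+
+    <pre class="brush: java;">
+        // wrap the properties map in a StreamsConfig object and pass it to the overloaded constructor
+        final StreamsConfig config = new StreamsConfig(props);
+        final KafkaStreams streams = new KafkaStreams(builder, config);
+    </pre>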
+
+    <p>
+        By calling its <code>start()</code> function we can trigger the execution of this client.
+        The execution won't stop until <code>close()</code> is called on this client.
+        We can, for example, add a shutdown hook with a countdown latch to capture a user interrupt and close the client upon terminating this program:
+    </p>
+
+    <pre class="brush: java;">
+        final CountDownLatch latch = new CountDownLatch(1);
+
+        // attach shutdown handler to catch control-c
+        Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
+            @Override
+            public void run() {
+                streams.close();
+                latch.countDown();
+            }
+        });
+
+        try {
+            streams.start();
+            latch.await();
+        } catch (Throwable e) {
+            System.exit(1);
+        }
+        System.exit(0);
+    </pre>
+
+    <p>
+        The complete code so far looks like this:
+    </p>
+
+    <pre class="brush: java;">
+        package myapps;
+
+        import org.apache.kafka.common.serialization.Serdes;
+        import org.apache.kafka.streams.KafkaStreams;
+        import org.apache.kafka.streams.StreamsConfig;
+        import org.apache.kafka.streams.kstream.KStreamBuilder;
+
+        import java.util.Properties;
+        import java.util.concurrent.CountDownLatch;
+
+        public class Pipe {
+
+            public static void main(String[] args) throws Exception {
+                Properties props = new Properties();
+                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
+                props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+                props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+                props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+                final KStreamBuilder builder = new KStreamBuilder();
+
+                builder.stream("streams-plaintext-input").to("streams-pipe-output");
+
+                final KafkaStreams streams = new KafkaStreams(builder, props);
+                final CountDownLatch latch = new CountDownLatch(1);
+
+                // attach shutdown handler to catch control-c
+                Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
+                    @Override
+                    public void run() {
+                        streams.close();
+                        latch.countDown();
+                    }
+                });
+
+                try {
+                    streams.start();
+                    latch.await();
+                } catch (Throwable e) {
+                    System.exit(1);
+                }
+                System.exit(0);
+            }
+        }
+    </pre>
+
+    <p>
+        If you already have the Kafka broker up and running at <code>localhost:9092</code>,
+        and the topics <code>streams-plaintext-input</code> and <code>streams-pipe-output</code> created on that broker,
+        you can run this code in your IDE or on the command line, using Maven:
+    </p>
+
+    <pre class="brush: bash;">
+        &gt; mvn clean package
+        &gt; mvn exec:java -Dexec.mainClass=myapps.Pipe
+    </pre>
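+
+    <p>
+        If the topics <code>streams-plaintext-input</code> and <code>streams-pipe-output</code> do not exist yet, you can create them as described in the
+        <a href="/{{version}}/documentation/streams/quickstart">Play with a Streams Application</a> section.
+        Alternatively, here is a rough sketch that creates them with the Java <code>AdminClient</code> shipped with 0.11; the single partition and replication factor of 1 are assumptions for a local single-broker setup:
+    </p>
+
+    <pre class="brush: java;">
+        // a sketch only: create the two topics programmatically (requires the org.apache.kafka.clients.admin classes)
+        Properties adminProps = new Properties();
+        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+
+        AdminClient admin = AdminClient.create(adminProps);
+        admin.createTopics(Arrays.asList(
+                new NewTopic("streams-plaintext-input", 1, (short) 1),
+                new NewTopic("streams-pipe-output", 1, (short) 1)))
+             .all().get();    // block until both topics have been created
+        admin.close();
+    </pre>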
+
+    <p>
+        For detailed instructions on how to run a Streams application and observe its computing results,
+        please read the <a href="/{{version}}/documentation/streams/quickstart">Play with a Streams Application</a> section.
+        We will not talk about this in the rest of this section.
+    </p>
+
+    <h4><a id="tutorial_code_linesplit" href="#tutorial_code_linesplit">Writing a second Streams application: Line Split</a></h4>
+
+    <p>
+        We have learned how to construct a Streams client with its two key components: the <code>StreamsConfig</code> and the <code>KStreamBuilder</code>.
+        Now let's move on to add some real processing logic by augmenting the current topology.
+        We can create another program by first copying the existing <code>Pipe.java</code> class:
+    </p>
+
+    <pre class="brush: bash;">
+        &gt; cp src/main/java/myapps/Pipe.java src/main/java/myapps/LineSplit.java
+    </pre>
+
+    <p>
+        And change its class name as well as the application id config to distinguish it from the original program:
+    </p>
+
+    <pre class="brush: java;">
+        public class LineSplit {
+
+            public static void main(String[] args) throws Exception {
+                Properties props = new Properties();
+                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-linesplit");
+                // ...
+            }
+        }
+    </pre>
+
+    <p>
+        Since each of the source stream's records is a <code>String</code>-typed key-value pair,
+        let's treat the value string as a text line and split it into words with a <code>flatMapValues</code> operator:
+    </p>
+
+    <pre class="brush: java;">
+        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+        KStream&lt;String, String&gt; words = source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
+                    @Override
+                    public Iterable&lt;String&gt; apply(String value) {
+                        return Arrays.asList(value.split("\\W+"));
+                    }
+                });
+    </pre>
+
+    <p>
+        The operator takes the <code>source</code> stream as its input and generates a new stream named <code>words</code>:
+        it processes each record from its source stream in order, breaks the record's value string into a list of words,
+        and produces each word as a new record to the output <code>words</code> stream.
+        This is a stateless operator that does not need to keep track of any previously received records or processed results.
+        Note that if you are using JDK 8 you can use a lambda expression to simplify the above code:
+    </p>
+
+    <pre class="brush: java;">
+        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+        KStream&lt;String, String&gt; words = source.flatMapValues(value -> Arrays.asList(value.split("\\W+")));
+    </pre>
+
+    <p>
+        And finally we can write the <code>words</code> stream back into another Kafka topic, say <code>streams-linesplit-output</code>.
+        Again, these two steps can be concatenated as follows (assuming a lambda expression is used):
+    </p>
+
+    <pre class="brush: java;">
+        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+        source.flatMapValues(value -> Arrays.asList(value.split("\\W+")))
+              .to("streams-linesplit-output");
+    </pre>
+
+    <p>
+        The complete code looks like this (assuming a lambda expression is used):
+    </p>
+
+    <pre class="brush: java;">
+        package myapps;
+
+        import org.apache.kafka.common.serialization.Serdes;
+        import org.apache.kafka.streams.KafkaStreams;
+        import org.apache.kafka.streams.StreamsConfig;
+        import org.apache.kafka.streams.kstream.KStream;
+        import org.apache.kafka.streams.kstream.KStreamBuilder;
+        import org.apache.kafka.streams.kstream.ValueMapper;
+
+        import java.util.Arrays;
+        import java.util.Properties;
+        import java.util.concurrent.CountDownLatch;
+
+        public class LineSplit {
+
+            public static void main(String[] args) throws Exception {
+                Properties props = new Properties();
+                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-linesplit");
+                props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+                props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+                props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+                final KStreamBuilder builder = new KStreamBuilder();
+
+                KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+                source.flatMapValues(value -> Arrays.asList(value.split("\\W+")))
+                      .to("streams-linesplit-output");
+
+                final KafkaStreams streams = new KafkaStreams(builder, props);
+                final CountDownLatch latch = new CountDownLatch(1);
+
+                // ... same as Pipe.java below
+            }
+        }
+    </pre>
+
+    <h4><a id="tutorial_code_wordcount" href="#tutorial_code_wordcount">Writing a third Streams application: WordCount</a></h4>
+
+    <p>
+        Let's now take a step further and add some "stateful" computations to the topology by counting the occurrences of the words split from the source text stream.
+        Following similar steps, let's create another program based on the <code>LineSplit.java</code> class:
+    </p>
+
+    <pre class="brush: java;">
+        public class WordCount {
+
+            public static void main(String[] args) throws Exception {
+                Properties props = new Properties();
+                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
+                // ...
+            }
+        }
+    </pre>
+
+    <p>
+        In order to count the words, we can first modify the <code>flatMapValues</code> operator to treat all of them as lower case:
+    </p>
+
+    <pre class="brush: java;">
+        source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
+                    @Override
+                    public Iterable&lt;String&gt; apply(String value) {
+                        return Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+"));
+                    }
+                });
+    </pre>
+
+    <p>
+        In order to do the counting aggregation we have to first specify that we want to key the stream on the value string, i.e. the lower-cased word, with a <code>groupBy</code> operator.
+        This operator generates a new grouped stream, which can then be aggregated by a <code>count</code> operator that maintains a running count for each of the grouped keys:
+    </p>
+
+    <pre class="brush: java;">
+        KTable&lt;String, Long&gt; counts =
+        source.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
+                    @Override
+                    public Iterable&lt;String&gt; apply(String value) {
+                        return Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+"));
+                    }
+                })
+              .groupBy(new KeyValueMapper&lt;String, String, String&gt;() {
+                   @Override
+                   public String apply(String key, String value) {
+                       return value;
+                   }
+                })
+              .count("Counts");
+    </pre>
+
+    <p>
+        Note that the <code>count</code> operator takes a <code>String</code>-typed parameter, <code>Counts</code>, which names the state store
+        that holds the running counts; this store keeps being updated as more records are piped and processed from the source Kafka topic.
+        The <code>Counts</code> store can be queried in real time, with details described in the <a href="/{{version}}/documentation/streams/developer-guide#streams_interactive_queries">Developer Manual</a>.
+    </p>
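+
+    <p>
+        For illustration only (the Developer Manual covers this in depth), a rough sketch of such a query against a running <code>streams</code> instance could look like the following; the word <code>"kafka"</code> is just a made-up key:
+    </p>
+
+    <pre class="brush: java;">
+        // query the "Counts" store of a running KafkaStreams instance; this may throw
+        // InvalidStateStoreException while the store is still initializing or migrating
+        ReadOnlyKeyValueStore&lt;String, Long&gt; keyValueStore =
+            streams.store("Counts", QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
+
+        Long count = keyValueStore.get("kafka");    // null if the word has not been counted yet
+    </pre>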
+
+    <p>
+        We can also write the <code>counts</code> KTable's changelog stream back into another Kafka topic, say <code>streams-wordcount-output</code>.
+        Note that this time the value type is no longer <code>String</code> but <code>Long</code>, so the default serialization classes are no longer viable for writing it to Kafka.
+        We need to provide explicit serdes for the <code>Long</code> value type, otherwise a runtime exception will be thrown:
+    </p>
+
+    <pre class="brush: java;">
+        counts.to(Serdes.String(), Serdes.Long(), "streams-wordcount-output");
+    </pre>
+
+    <p>
+        Note that in order to read the changelog stream from topic <code>streams-wordcount-output</code>,
+        one needs to set the value deserializer to <code>org.apache.kafka.common.serialization.LongDeserializer</code> (see the consumer sketch at the end of this section).
+        Details of this can be found in the <a href="/{{version}}/documentation/streams/quickstart">Play with a Streams Application</a> section.
+        Assuming lambda expressions from JDK 8 can be used, the above code can be simplified as:
+    </p>
+
+    <pre class="brush: java;">
+        KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+        source.flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
+              .groupBy((key, value) -> value)
+              .count("Counts")
+              .to(Serdes.String(), Serdes.Long(), "streams-wordcount-output");
+    </pre>
+
+    <p>
+        The complete code looks like this (assuming lambda expressions are used):
+    </p>
+
+    <pre class="brush: java;">
+        package myapps;
+
+        import org.apache.kafka.common.serialization.Serdes;
+        import org.apache.kafka.streams.KafkaStreams;
+        import org.apache.kafka.streams.StreamsConfig;
+        import org.apache.kafka.streams.kstream.KeyValueMapper;
+        import org.apache.kafka.streams.kstream.KStream;
+        import org.apache.kafka.streams.kstream.KStreamBuilder;
+        import org.apache.kafka.streams.kstream.ValueMapper;
+
+        import java.util.Arrays;
+        import java.util.Locale;
+        import java.util.Properties;
+        import java.util.concurrent.CountDownLatch;
+
+        public class WordCount {
+
+            public static void main(String[] args) throws Exception {
+                Properties props = new Properties();
+                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
+                props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+                props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+                props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+                final KStreamBuilder builder = new KStreamBuilder();
+
+                KStream&lt;String, String&gt; source = builder.stream("streams-plaintext-input");
+                source.flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
+                      .groupBy((key, value) -> value)
+                      .count("Counts")
+                      .to(Serdes.String(), Serdes.Long(), "streams-wordcount-output");
+
+                final KafkaStreams streams = new KafkaStreams(builder, props);
+                final CountDownLatch latch = new CountDownLatch(1);
+
+                // ... same as Pipe.java below
+            }
+        }
+    </pre>
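+
+    <p>
+        To inspect what gets written to <code>streams-wordcount-output</code>, you can use the console consumer as shown in the
+        <a href="/{{version}}/documentation/streams/quickstart">Play with a Streams Application</a> section.
+        As a rough sketch only, a plain Java consumer configured with the <code>LongDeserializer</code> mentioned above would also work (the group id <code>wordcount-inspector</code> is just a made-up name):
+    </p>
+
+    <pre class="brush: java;">
+        // a sketch of reading the counts back, overriding the value deserializer for Long values
+        Properties consumerProps = new Properties();
+        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "wordcount-inspector");
+        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+
+        KafkaConsumer&lt;String, Long&gt; consumer =
+            new KafkaConsumer&lt;&gt;(consumerProps, new StringDeserializer(), new LongDeserializer());
+        consumer.subscribe(Collections.singletonList("streams-wordcount-output"));
+
+        while (true) {
+            for (ConsumerRecord&lt;String, Long&gt; record : consumer.poll(1000)) {
+                System.out.println(record.key() + " : " + record.value());
+            }
+        }
+    </pre>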
+
+    <div class="pagination">
+        <a href="/{{version}}/documentation/streams/quickstart" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/developer-guide" class="pagination__btn pagination__btn__next">Next</a>
+    </div>
+</script>
+
+<div class="p-quickstart-streams"></div>
+
+<!--#include virtual="../../includes/_header.htm" -->
+<!--#include virtual="../../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+    <!--#include virtual="../../includes/_nav.htm" -->
+    <div class="right">
+        <!--#include virtual="../../includes/_docs_banner.htm" -->
+        <ul class="breadcrumbs">
+            <li><a href="/documentation">Documentation</a></li>
+            <li><a href="/documentation/streams">Streams</a></li>
+        </ul>
+        <div class="p-content"></div>
+    </div>
+</div>
+<!--#include virtual="../../includes/_footer.htm" -->
+<script>
+$(function() {
+  // Show selected style on nav item
+  $('.b-nav__streams').addClass('selected');
+
+  // Display docs subnav items
+  $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+});
+</script>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/streams/upgrade-guide.html
----------------------------------------------------------------------
diff --git a/0110/streams/upgrade-guide.html b/0110/streams/upgrade-guide.html
index 8ec3e22..7f2c9f6 100644
--- a/0110/streams/upgrade-guide.html
+++ b/0110/streams/upgrade-guide.html
@@ -27,13 +27,13 @@
     </p>
 
     <p>
-        If you want to upgrade from 0.10.1.x to 0.10.2, see the <a href="/{{version}}/documentation/#upgrade_1020_streams">Upgrade Section for 0.10.2</a>.
+        If you want to upgrade from 0.10.1.x to 0.10.2, see the <a href="/{{version}}/documentation/#upgrade_1020_streams"><b>Upgrade Section for 0.10.2</b></a>.
         It highlights incompatible changes you need to consider to upgrade your code and application.
         See <a href="#streams_api_changes_0102">below</a> a complete list of 0.10.2 API and semantical changes that allow you to advance your application and/or simplify your code base, including the usage of new features.
     </p>
 
     <p>
-        If you want to upgrade from 0.10.0.x to 0.10.1, see the <a href="/{{version}}/documentation/#upgrade_1010_streams">Upgrade Section for 0.10.1</a>.
+        If you want to upgrade from 0.10.0.x to 0.10.1, see the <a href="/{{version}}/documentation/#upgrade_1010_streams"><b>Upgrade Section for 0.10.1</b></a>.
         It highlights incompatible changes you need to consider to upgrade your code and application.
         See <a href="#streams_api_changes_0101">below</a> a complete list of 0.10.1 API changes that allow you to advance your application and/or simplify your code base, including the usage of new features.
     </p>
@@ -100,7 +100,7 @@
         <li> at-least-once (default): <code>[client.Id]-StreamThread-[sequence-number]</code> </li>
         <li> exactly-once: <code>[client.Id]-StreamThread-[sequence-number]-[taskId]</code> </li>
     </ul>
-    <p> <code>[client.Id]</code> is either set via Streams configuration parameter <code>client.id<code> or defaults to <code>[application.id]-[processId]</code> (<code>[processId]</code> is a random UUID). </p>
+    <p> <code>[client.Id]</code> is either set via Streams configuration parameter <code>client.id</code> or defaults to <code>[application.id]-[processId]</code> (<code>[processId]</code> is a random UUID). </p>
 
     <h3><a id="streams_api_changes_01021" href="#streams_api_changes_01021">Notable changes in 0.10.2.1</a></h3>
 
@@ -218,7 +218,7 @@
     </ul>
 
     <div class="pagination">
-        <a href="/{{version}}/documentation/streams/developer-guide" class="pagination__btn pagination__btn__prev">Previous</a>
+        <a href="/{{version}}/documentation/streams/architecture" class="pagination__btn pagination__btn__prev">Previous</a>
         <a href="#" class="pagination__btn pagination__btn__next pagination__btn--disabled">Next</a>
     </div>
 </script>
@@ -231,7 +231,7 @@
         <!--#include virtual="../../includes/_docs_banner.htm" -->
         <ul class="breadcrumbs">
             <li><a href="/documentation">Documentation</a></li>
-            <li><a href="/documentation/streams">Streams</a></li>
+            <li><a href="/documentation/streams">Kafka Streams API</a></li>
         </ul>
         <div class="p-content"></div>
     </div>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/toc.html
----------------------------------------------------------------------
diff --git a/0110/toc.html b/0110/toc.html
index e26023c..e7d939e 100644
--- a/0110/toc.html
+++ b/0110/toc.html
@@ -33,7 +33,7 @@
             <ul>
                 <li><a href="#producerapi">2.1 Producer API</a>
                 <li><a href="#consumerapi">2.2 Consumer API</a>
-                <li><a href="#streamsapi">2.3 Streams API</a>
+                <li><a href="/{{version}}/documentation/streams">2.3 Streams API</a>
                 <li><a href="#connectapi">2.4 Connect API</a>
                 <li><a href="#adminapi">2.5 AdminClient API</a>
                 <li><a href="#legacyapis">2.6 Legacy APIs</a>
@@ -42,15 +42,16 @@
         <li><a href="#configuration">3. Configuration</a>
             <ul>
                 <li><a href="#brokerconfigs">3.1 Broker Configs</a>
-                <li><a href="#producerconfigs">3.2 Producer Configs</a>
-                <li><a href="#consumerconfigs">3.3 Consumer Configs</a>
+                <li><a href="#topicconfigs">3.2 Topic Configs</a>
+                <li><a href="#producerconfigs">3.3 Producer Configs</a>
+                <li><a href="#consumerconfigs">3.4 Consumer Configs</a>
                     <ul>
-                        <li><a href="#newconsumerconfigs">3.3.1 New Consumer Configs</a>
-                        <li><a href="#oldconsumerconfigs">3.3.2 Old Consumer Configs</a>
+                        <li><a href="#newconsumerconfigs">3.4.1 New Consumer Configs</a>
+                        <li><a href="#oldconsumerconfigs">3.4.2 Old Consumer Configs</a>
                     </ul>
-                <li><a href="#connectconfigs">3.4 Kafka Connect Configs</a>
-                <li><a href="#streamsconfigs">3.5 Kafka Streams Configs</a>
-                <li><a href="#adminclientconfigs">3.6 AdminClient Configs</a>
+                <li><a href="#connectconfigs">3.5 Kafka Connect Configs</a>
+                <li><a href="#streamsconfigs">3.6 Kafka Streams Configs</a>
+                <li><a href="#adminclientconfigs">3.7 AdminClient Configs</a>
             </ul>
         </li>
         <li><a href="#design">4. Design</a>
@@ -142,16 +143,12 @@
         </li>
         <li><a href="/{{version}}/documentation/streams">9. Kafka Streams</a>
             <ul>
-                <li><a href="/{{version}}/documentation/streams#streams_overview">9.1 Overview</a></li>
-                <li><a href="/{{version}}/documentation/streams#streams_concepts">9.2 Core Concepts</a></li>
-                <li><a href="/{{version}}/documentation/streams#streams_architecture">9.3 Architecture</a></li>
-                <li><a href="/{{version}}/documentation/streams#streams_developer">9.4 Developer Guide</a></li>
-                <ul>
-                    <li><a href="/{{version}}/documentation/streams#streams_processor">Low-Level Processor API</a></li>
-                    <li><a href="/{{version}}/documentation/streams#streams_dsl">High-Level Streams DSL</a></li>
-                    <li><a href="/{{version}}/documentation/streams#streams_execute">Application Configuration and Execution</a></li>
-                </ul>
-                <li><a href="/{{version}}/documentation/streams#streams_upgrade_and_api">9.5 Upgrade Guide and API Changes</a></li>
+                <li><a href="/{{version}}/documentation/streams/quickstart">9.1 Play with a Streams Application</a></li>
+                <li><a href="/{{version}}/documentation/streams/tutorial">9.2 Write your own Streams Applications</a></li>
+                <li><a href="/{{version}}/documentation/streams/developer-guide">9.3 Developer Manual</a></li>
+                <li><a href="/{{version}}/documentation/streams/core-concepts">9.4 Core Concepts</a></li>
+                <li><a href="/{{version}}/documentation/streams/architecture">9.5 Architecture</a></li>
+                <li><a href="/{{version}}/documentation/streams/upgrade-guide">9.6 Upgrade Guide</a></li>
             </ul>
         </li>
     </ul>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/0110/upgrade.html
----------------------------------------------------------------------
diff --git a/0110/upgrade.html b/0110/upgrade.html
index 98c749c..9f0dbdf 100644
--- a/0110/upgrade.html
+++ b/0110/upgrade.html
@@ -67,7 +67,7 @@
 <h5><a id="upgrade_1100_notable" href="#upgrade_1100_notable">Notable changes in 0.11.0.0</a></h5>
 <ul>
     <li>Unclean leader election is now disabled by default. The new default favors durability over availability. Users who wish to
-        to retain the previous behavior should set the broker config <code>unclean.leader.election.enabled</code> to <code>false</code>.</li>
+        retain the previous behavior should set the broker config <code>unclean.leader.election.enable</code> to <code>true</code>.</li>
     <li>Producer configs <code>block.on.buffer.full</code>, <code>metadata.fetch.timeout.ms</code> and <code>timeout.ms</code> have been
         removed. They were initially deprecated in Kafka 0.9.0.0.</li>
     <li>The <code>offsets.topic.replication.factor</code> broker config is now enforced upon auto topic creation. Internal

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a1278d06/downloads.html
----------------------------------------------------------------------
diff --git a/downloads.html b/downloads.html
index 82f81eb..0b2ab87 100644
--- a/downloads.html
+++ b/downloads.html
@@ -4,11 +4,28 @@
 	<!--#include virtual="includes/_nav.htm" -->
 	<div class="right">
 		<h1>Download</h1>
-    <p>0.11.0.0 is the latest release. The current stable version is 0.11.0.0.</p>
+    <p>0.11.0.1 is the latest release. The current stable version is 0.11.0.1.</p>
 
     <p>
     You can verify your download by following these <a href="http://www.apache.org/info/verification.html">procedures</a> and using these <a href="http://kafka.apache.org/KEYS">KEYS</a>.
     </p>
+				<h3>0.11.0.1</h3>
+				<ul>
+						<li>
+								<a href="https://archive.apache.org/dist/kafka/0.11.0.1/RELEASE_NOTES.html">Release Notes</a>
+						</li>
+						<li>
+								Source download: <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz">kafka-0.11.0.1-src.tgz</a> (<a href="https://dist.apache.org/repos/dist/release/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz.asc">asc</a>, <a href="https://dist.apache.org/repos/dist/release/kafka/0.11.0.0/kafka-0.11.0.0-src.tgz.md5">md5</a>)
+						</li>
+						<li>
+								Binary downloads:
+								<ul>
+										<li>Scala 2.11 &nbsp;- <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz">kafka_2.11-0.11.0.1.tgz</a> (<a href="https://dist.apache.org/repos/dist/release/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz.asc">asc</a>, <a href="https://dist.apache.org/repos/dist/release/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz.md5">md5</a>)</li>
+										<li>Scala 2.12 &nbsp;- <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz">kafka_2.12-0.11.0.1.tgz</a> (<a href="https://dist.apache.org/repos/dist/release/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz.asc">asc</a>, <a href="https://dist.apache.org/repos/dist/release/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz.md5">md5</a>)</li>
+								</ul>
+								We build for multiple versions of Scala. This only matters if you are using Scala and you want a version built for the same Scala version you use. Otherwise any version should work (2.11 is recommended).
+						</li>
+				</ul>
 
         <h3>0.11.0.0</h3>
         <ul>

