kafka-commits mailing list archives

From gwens...@apache.org
Subject [5/6] kafka-site git commit: new design
Date Tue, 04 Oct 2016 22:56:21 GMT
new design

more style updates

style tweaks

format tables better

updated logo

revert header on old docs

host js locally

mobile adjustments


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/1b8cdf48
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/1b8cdf48
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/1b8cdf48

Branch: refs/heads/asf-site
Commit: 1b8cdf484d0c6ce8f00b3871e9325b476ba7235f
Parents: 0ba7502
Author: Derrick Or <derrickor@gmail.com>
Authored: Thu Sep 29 21:52:13 2016 -0700
Committer: Derrick Or <derrickor@gmail.com>
Committed: Mon Oct 3 23:15:16 2016 -0700

----------------------------------------------------------------------
 0100/documentation.html     |   4 +-
 0100/introduction.html      | 200 ++++++++------
 0100/protocol.html          | 299 ++++++++++-----------
 0100/quickstart.html        |   8 +-
 0100/uses.html              |   2 +-
 07/documentation.html       |  24 +-
 08/documentation.html       | 174 +++++++------
 081/documentation.html      | 206 +++++++--------
 082/documentation.html      | 206 +++++++--------
 090/documentation.html      | 278 ++++++++++----------
 code.html                   |  47 ++--
 coding-guide.html           | 187 ++++++-------
 committers.html             | 317 +++++++++++-----------
 contact.html                |  69 ++---
 contributing.html           |  95 +++----
 documentation.html          |  15 +-
 downloads.html              | 475 ++++++++++++++++-----------------
 images/kafka_diagram.png    | Bin 0 -> 307631 bytes
 images/kafka_logo.png       | Bin 10648 -> 4584 bytes
 images/logo.png             | Bin 0 -> 16171 bytes
 images/twitter_logo.png     | Bin 0 -> 3927 bytes
 includes/footer.html        |  52 +++-
 includes/header.html        |  65 +----
 includes/nav.html           |  34 +++
 includes/top.html           |   5 +
 index.html                  |  41 ++-
 intro.html                  |  11 +-
 js/jquery.min.js            |   5 +
 js/jquery.sticky-kit.min.js |   9 +
 performance.html            |  14 +-
 project-security.html       |  33 ++-
 project.html                |  63 +++++
 quickstart.html             |  11 +-
 styles.css                  | 548 +++++++++++++++++++++++++++++++--------
 uses.html                   |  10 +
 35 files changed, 2054 insertions(+), 1453 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1b8cdf48/0100/documentation.html
----------------------------------------------------------------------
diff --git a/0100/documentation.html b/0100/documentation.html
index f4f1ddc..b20c0de 100644
--- a/0100/documentation.html
+++ b/0100/documentation.html
@@ -15,9 +15,7 @@
  limitations under the License.
 -->
 
-<!--#include virtual="../includes/header.html" -->
-
-<h1>Kafka 0.10.0 Documentation</h1>
+<h3>Kafka 0.10.0</h3>
 Prior releases: <a href="/07/documentation.html">0.7.x</a>, <a href="/08/documentation.html">0.8.0</a>, <a href="/081/documentation.html">0.8.1.X</a>, <a href="/082/documentation.html">0.8.2.X</a>, <a href="/090/documentation.html">0.9.0.X</a>.
 </ul>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1b8cdf48/0100/introduction.html
----------------------------------------------------------------------
diff --git a/0100/introduction.html b/0100/introduction.html
index 147210f..09df9f6 100644
--- a/0100/introduction.html
+++ b/0100/introduction.html
@@ -14,156 +14,192 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-Kafka is <i>a distributed streaming platform</i>. What exactly does that mean?
-<p>
-We think of a streaming platform as having three key capabilities:
+<h3>Kafka is <i>a distributed streaming platform</i>. What exactly does that mean?</h3>
+<p>We think of a streaming platform as having three key capabilities:</p>
 <ol>
 	<li>It lets you publish and subscribe to streams of records. In this respect it is similar to a message queue or enterprise messaging system.
 	<li>It lets you store streams of records in a fault-tolerant way.
 	<li>It lets you process streams of records as they occur.
 </ol>
-<p>
-What is Kafka good for?
-<p>
-It gets used for two broad classes of application:
+<p>What is Kafka good for?</p>
+<p>It gets used for two broad classes of application:</p>
 <ol>
   <li>Building real-time streaming data pipelines that reliably get data between systems or applications
   <li>Building real-time streaming applications that transform or react to the streams of data
 </ol>
-<p>
-To understand how Kafka does these things, let's dive in and explore Kafka's capabilities from the bottom up.
-<p>
-First a few concepts:
+<p>To understand how Kafka does these things, let's dive in and explore Kafka's capabilities from the bottom up.</p>
+<p>First a few concepts:</p>
 <ul>
 	<li>Kafka is run as a cluster on one or more servers.
     <li>The Kafka cluster stores streams of <i>records</i> in categories called <i>topics</i>.
 	<li>Each record consists of a key, a value, and a timestamp.
 </ul>
-Kafka has four core APIs:
-<div style="float: right">
-  <img src="images/kafka-apis.png" style="width:400px">
+
+<p>Kafka has four core APIs:</p>
+
+<div style="overflow: hidden;">
+	<ul style="float: left; width: 40%;">
+	    <li>The <a href="/documentation.html#producerapi">Producer API</a> allows an application to publish a stream of records to one or more Kafka topics.
+	    <li>The <a href="/documentation.html#consumerapi">Consumer API</a> allows an application to subscribe to one or more topics and process the stream of records produced to them.
+		<li>The <a href="/documentation.html#streams">Streams API</a> allows an application to act as a <i>stream processor</i>, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
+		<li>The <a href="/documentation.html#connect">Connector API</a> allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
+	</ul>
+	<img src="images/kafka-apis.png" style="float: right; width: 50%;">
 </div>
-<ul>
-    <li>The <a href="/documentation.html#producerapi">Producer API</a> allows an application to publish a stream records to one or more Kafka topics.
-    <li>The <a href="/documentation.html#consumerapi">Consumer API</a> allows an application to subscribe to one or more topics and process the stream of records produced to them.
-	<li>The <a href="/documentation.html#streams">Streams API</a> allows an application to act as a <i>stream processor</i>, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
-	<li>The <a href="/documentation.html#connect">Connector API</a> allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to 
-</ul>
+
 <p>
-In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older version. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.
+	In Kafka the communication between the clients and the servers is done with a simple, high-performance, language-agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.
+</p>
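
As a minimal illustration of the Producer and Consumer APIs, here is a sketch using the standard Java kafka-clients library; the broker address localhost:9092, the topic "my-topic", and the group name are assumptions made for the example:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerConsumerSketch {
        public static void main(String[] args) {
            // Producer API: publish a stream of records to a topic.
            Properties pprops = new Properties();
            pprops.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            pprops.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            pprops.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(pprops)) {
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            }

            // Consumer API: subscribe to the topic and process the records produced to it.
            Properties cprops = new Properties();
            cprops.put("bootstrap.servers", "localhost:9092");
            cprops.put("group.id", "example-group");           // assumed group name
            cprops.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            cprops.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cprops)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
            }
        }
    }
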
 
 <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
-Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.
-<p>
-A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.
+<p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.</p>
+<p>A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.</p>
+<p>For each topic, the Kafka cluster maintains a partitioned log that looks like this:</p>
+<img class="centered" src="images/log_anatomy.png">
+
 <p>
-For each topic, the Kafka cluster maintains a partitioned log that looks like this:
-<div style="text-align: center; width: 100%">
-  <img src="images/log_anatomy.png">
-</div>
-Each partition is an ordered, immutable sequence of records that is continually appended to&mdash;a structured commit log. The records in the partitions are each assigned a sequential id number called the <i>offset</i> that uniquely identifies each record within the partition.
+	Each partition is an ordered, immutable sequence of records that is continually appended to&mdash;a structured commit log. The records in the partitions are each assigned a sequential id number called the <i>offset</i> that uniquely identifies each record within the partition.
+</p>
 <p>
-The Kafka cluster retains all published records&mdash;whether or not they have been consumed&mdash;using a configurable retention period. For example if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.
+	The Kafka cluster retains all published records&mdash;whether or not they have been consumed&mdash;using a configurable retention period. For example if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.
+</p>
+<img class="centered" src="images/log_consumer.png" style="width:400px">
 <p>
-<div style="float:right">
-  <img src="images/log_consumer.png" style="width:400px">
-</div>
-In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".
+	In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".
+</p>
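
Because the position is owned by the consumer, rewinding or skipping ahead is just a seek. A small sketch, building on the Java consumer from the earlier example, with partition 0 of "my-topic" assumed:

    import java.util.Collections;
    import org.apache.kafka.common.TopicPartition;

    TopicPartition tp = new TopicPartition("my-topic", 0);
    consumer.assign(Collections.singletonList(tp));          // manage the assignment manually
    consumer.seekToBeginning(Collections.singletonList(tp)); // reprocess data from the past
    consumer.seek(tp, 42L);                                  // or jump to a specific offset
    consumer.seekToEnd(Collections.singletonList(tp));       // or start consuming from "now"
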
 <p>
-This combination of features means that Kafka consumers are very cheap&mdash;they can come and go without much impact on the cluster or on other consumers. For example, you can use our command line tools to "tail" the contents of any topic without changing what is consumed by any existing consumers.
+	This combination of features means that Kafka consumers are very cheap&mdash;they can come and go without much impact on the cluster or on other consumers. For example, you can use our command line tools to "tail" the contents of any topic without changing what is consumed by any existing consumers.
+</p>
 <p>
-The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second they act as the unit of parallelism&mdash;more on that in a bit.
+	The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second they act as the unit of parallelism&mdash;more on that in a bit.
+</p>
 
 <h4><a id="intro_distribution" href="#intro_distribution">Distribution</a></h4>
 
-The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.
 <p>
-Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.
+	The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.
+</p>
+<p>
+	Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.
+</p>
 
 <h4><a id="intro_producers" href="#intro_producers">Producers</a></h4>
 
-Producers publish data to the topics of their choice. The producer is responsible for choosing which record to assign to which partition within the topic. This can be done in a round-robin fashion simply to balance load or it can be done according to some semantic partition function (say based on some key in the record). More on the use of partitioning in a second!
+<p>
+	Producers publish data to the topics of their choice. The producer is responsible for choosing which record to assign to which partition within the topic. This can be done in a round-robin fashion simply to balance load or it can be done according to some semantic partition function (say based on some key in the record). More on the use of partitioning in a second!
+</p>
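
In the Java producer (continuing the earlier sketch) that choice shows up directly in how the record is constructed; the topic name, keys, and partition number below are illustrative assumptions:

    // No key: the producer spreads records across partitions to balance load.
    producer.send(new ProducerRecord<>("page-views", "some value"));
    // With a key: records with the same key always land in the same partition.
    producer.send(new ProducerRecord<>("page-views", "user-42", "some value"));
    // Explicit partition: the application picks the partition itself (partition 3 here).
    producer.send(new ProducerRecord<>("page-views", 3, "user-42", "some value"));
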
 
 <h4><a id="intro_consumers" href="#intro_consumers">Consumers</a></h4>
 
-Consumers label themselves with a <i>consumer group</i> name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
 <p>
-If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.
+	Consumers label themselves with a <i>consumer group</i> name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
+</p>
 <p>
-If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
-<div style="float: right; margin: 20px; width: 500px" class="caption">
-  <img src="images/consumer-groups.png"><br>
-  A two server Kafka cluster hosting four partitions (P0-P3) with two consumer groups. Consumer group A has two consumer instances and group B has four.
-</div>
+	If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.
+</p>
 <p>
-More commonly, however, we have found that topics have a small number of consumer groups, one for each "logical subscriber". Each group is composed of many consumer instances for scalability and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber is a cluster of consumers instead of a single process.
+	If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
+</p>
+<img class="centered" src="images/consumer-groups.png">
 <p>
-The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining instances.
-<p>
-Kafka only provides a total order over records <i>within</i> a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
+	A two server Kafka cluster hosting four partitions (P0-P3) with two consumer groups. Consumer group A has two consumer instances and group B has four.
+</p>
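
In client terms, load balancing versus broadcast comes down to the group.id each instance uses. A sketch with the Java consumer, again with assumed broker, topic, and group names:

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    props.put("group.id", "group-A");                     // two instances in group A...
    KafkaConsumer<String, String> a1 = new KafkaConsumer<>(props);
    KafkaConsumer<String, String> a2 = new KafkaConsumer<>(props);
    a1.subscribe(Collections.singletonList("my-topic"));  // ...split the topic's partitions
    a2.subscribe(Collections.singletonList("my-topic"));

    props.put("group.id", "group-B");                     // a different group...
    KafkaConsumer<String, String> b1 = new KafkaConsumer<>(props);
    b1.subscribe(Collections.singletonList("my-topic"));  // ...independently sees every record
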
 
+<p>
+	More commonly, however, we have found that topics have a small number of consumer groups, one for each "logical subscriber". Each group is composed of many consumer instances for scalability and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber is a cluster of consumers instead of a single process.
+</p>
+<p>
+	The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining instances.
+</p>
+<p>
+	Kafka only provides a total order over records <i>within</i> a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
+</p>
 <h4><a id="intro_guarantees" href="#intro_guarantees">Guarantees</a></h4>
 
-At a high-level Kafka gives the following guarantees:
+<p>
+	At a high-level Kafka gives the following guarantees:
+</p>
 <ul>
   <li>Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.
   <li>A consumer instance sees records in the order they are stored in the log.
   <li>For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log.
 </ul>
-More details on these guarantees are given in the design section of the documentation.
-
+<p>
+	More details on these guarantees are given in the design section of the documentation.
+</p>
 <h4><a id="kafka_mq" href="#kafka_mq">Kafka as a Messaging System</a></h4>
-
-How does Kafka's notion of streams compare to a traditional enterprise messaging system?
 <p>
-Messaging traditionally has two models: <a href="http://en.wikipedia.org/wiki/Message_queue">queuing</a> and <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern">publish-subscribe</a>. In a queue, a pool of consumers may read from a server and each record goes to one of them; in publish-subscribe the record is broadcast to all consumers. Each of these two models has a strength and a weakness. The strength of queuing is that it allows you to divide up the processing of data over multiple consumer instances, which lets you scale your processing. Unfortunately queues aren't multi-subscriber&mdash;once one process reads the data it's gone. Publish-subscribe allows you broadcast data to multiple processes, but has no way of scaling processing since every message goes to every subscriber.
+	How does Kafka's notion of streams compare to a traditional enterprise messaging system?
+</p>
 <p>
-The consumer group concept in Kafka generalizes these two concepts. As with a queue the consumer group allows you to divide up processing over a collection of processes (the members of the consumer group). As with publish-subscribe, Kafka allows you to broadcast messages to multiple consumer groups.
+	Messaging traditionally has two models: <a href="http://en.wikipedia.org/wiki/Message_queue">queuing</a> and <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern">publish-subscribe</a>. In a queue, a pool of consumers may read from a server and each record goes to one of them; in publish-subscribe the record is broadcast to all consumers. Each of these two models has a strength and a weakness. The strength of queuing is that it allows you to divide up the processing of data over multiple consumer instances, which lets you scale your processing. Unfortunately queues aren't multi-subscriber&mdash;once one process reads the data it's gone. Publish-subscribe allows you to broadcast data to multiple processes, but has no way of scaling processing since every message goes to every subscriber.
+</p>
 <p>
-The advantage of Kafka's model is that every topic has both these properties&mdash;it can scale processing and is also multi-subscriber&mdash;there is no need to choose one or the other.
+	The consumer group concept in Kafka generalizes these two concepts. As with a queue the consumer group allows you to divide up processing over a collection of processes (the members of the consumer group). As with publish-subscribe, Kafka allows you to broadcast messages to multiple consumer groups.
+</p>
 <p>
-Kafka has stronger ordering guarantees than a traditional messaging system, too.
+	The advantage of Kafka's model is that every topic has both these properties&mdash;it can scale processing and is also multi-subscriber&mdash;there is no need to choose one or the other.
+</p>
 <p>
-A traditional queue retains records in-order on the server, and if multiple consumers consume from the queue then the server hands out records in the order they are stored. However, although the server hands out records in order, the records are delivered asynchronously to consumers, so they may arrive out of order on different consumers. This effectively means the ordering of the records is lost in the presence of parallel consumption. Messaging systems often work around this by having a notion of "exclusive consumer" that allows only one process to consume from a queue, but of course this means that there is no parallelism in processing.
+	Kafka has stronger ordering guarantees than a traditional messaging system, too.
+</p>
 <p>
-Kafka does it better. By having a notion of parallelism&mdash;the partition&mdash;within the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool of consumer processes. This is achieved by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. By doing this we ensure that the consumer is the only reader of that partition and consumes the data in order. Since there are many partitions this still balances the load over many consumer instances. Note however that there cannot be more consumer instances in a consumer group than partitions.
-
+	A traditional queue retains records in-order on the server, and if multiple consumers consume from the queue then the server hands out records in the order they are stored. However, although the server hands out records in order, the records are delivered asynchronously to consumers, so they may arrive out of order on different consumers. This effectively means the ordering of the records is lost in the presence of parallel consumption. Messaging systems often work around this by having a notion of "exclusive consumer" that allows only one process to consume from a queue, but of course this means that there is no parallelism in processing.
+</p>
+<p>
+	Kafka does it better. By having a notion of parallelism&mdash;the partition&mdash;within the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool of consumer processes. This is achieved by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. By doing this we ensure that the consumer is the only reader of that partition and consumes the data in order. Since there are many partitions this still balances the load over many consumer instances. Note however that there cannot be more consumer instances in a consumer group than partitions.
+</p>
 <h4>Kafka as a Storage System</h4>
-
-Any message queue that allows publishing messages decoupled from consuming them is effectively acting as a storage system for the in-flight messages. What is different about Kafka is that it is a very good storage system.
 <p>
-Data written to Kafka is written to disk and replicated for fault-tolerance. Kafka allows producers to wait on acknowledgement so that a write isn't considered complete until it is fully replicated and guaranteed to persist even if the server written to fails.
+	Any message queue that allows publishing messages decoupled from consuming them is effectively acting as a storage system for the in-flight messages. What is different about Kafka is that it is a very good storage system.
+</p>
 <p>
-The disk structures Kafka uses scale well&mdash;Kafka will perform the same whether you have 50 KB or 50 TB of persistent data on the server.
+	Data written to Kafka is written to disk and replicated for fault-tolerance. Kafka allows producers to wait on acknowledgement so that a write isn't considered complete until it is fully replicated and guaranteed to persist even if the server written to fails.
+</p>
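
With the Java producer this guarantee is requested through the acks setting; a sketch with assumed topic and values, and with error handling elided:

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("acks", "all");   // the write is not acknowledged until it is fully replicated
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    producer.send(new ProducerRecord<>("payments", "order-1", "captured"))
            .get();             // block until the replicated write is acknowledged
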
 <p>
-As a result of taking storage seriously and allowing the clients to control their read position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance, low-latency commit log storage, replication, and propagation.
-
+	The disk structures Kafka uses scale well&mdash;Kafka will perform the same whether you have 50 KB or 50 TB of persistent data on the server.
+</p>
+<p>
+	As a result of taking storage seriously and allowing the clients to control their read position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance, low-latency commit log storage, replication, and propagation.
+</p>
 <h4>Kafka for Stream Processing</h4>
 <p>
-It isn't enough to just read, write, and store streams of data, the purpose is to enable real-time processing of streams.
+	It isn't enough to just read, write, and store streams of data; the purpose is to enable real-time processing of streams.
+</p>
 <p>
-In Kafka a stream processor is anything that takes continual streams of  data from input topics, performs some processing on this input, and produces continual streams of data to output topics.
+	In Kafka a stream processor is anything that takes continual streams of data from input topics, performs some processing on this input, and produces continual streams of data to output topics.
+</p>
 <p>
-For example a retail application might take in input streams of sales and shipments, and output a stream of reorders and price adjustments computed off this data.
+	For example a retail application might take in input streams of sales and shipments, and output a stream of reorders and price adjustments computed off this data.
+</p>
 <p>
-It is possible to do simple processing directly using the producer and consumer APIs. However for more complex transformations Kafka provides a fully integrated <a href="/streams.html">Streams API</a>. This allows building applications that do non-trivial processing that compute aggregations off of streams or join streams together.
+	It is possible to do simple processing directly using the producer and consumer APIs. However for more complex transformations Kafka provides a fully integrated <a href="/streams.html">Streams API</a>. This allows building applications that do non-trivial processing that compute aggregations off of streams or join streams together.
+</p>
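
A sketch of such a transformation with the Streams API, written against the StreamsBuilder style of more recent Kafka Streams releases (the 0.10.0 release shipped an equivalent KStreamBuilder); the topic and application names are assumptions:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "reorder-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    builder.<String, String>stream("sales")            // continual input stream
           .mapValues(value -> value.toUpperCase())    // some per-record processing
           .to("reorders");                            // continual output stream
    new KafkaStreams(builder.build(), props).start();
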
 <p>
-This facility helps solve the hard problems this type of application faces: handling out-of-order data, reprocessing input as code changes, performing stateful computations, etc.
+	This facility helps solve the hard problems this type of application faces: handling out-of-order data, reprocessing input as code changes, performing stateful computations, etc.
+</p>
 <p>
-The streams API builds on the core primitives Kafka provides: it uses the producer and consumer APIs for input, uses Kafka for stateful storage, and uses the same group mechanism for fault tolerance among the stream processor instances.
-
+	The streams API builds on the core primitives Kafka provides: it uses the producer and consumer APIs for input, uses Kafka for stateful storage, and uses the same group mechanism for fault tolerance among the stream processor instances.
+</p>
 <h4>Putting the Pieces Together</h4>
-
-This combination of messaging, storage, and stream processing may seem unusual but it is essential to Kafka's role as a streaming platform.
 <p>
-A distributed file system like HDFS allows storing static files for batch processing. Effectively a system like this allows storing and processing <i>historical</i> data from the past.
+	This combination of messaging, storage, and stream processing may seem unusual but it is essential to Kafka's role as a streaming platform.
+</p>
+<p>
+	A distributed file system like HDFS allows storing static files for batch processing. Effectively a system like this allows storing and processing <i>historical</i> data from the past.
+</p>
 <p>
-A traditional enterprise messaging system allows processing future messages that will arrive after you subscribe. Applications built in this way process future data as it arrives.
+	A traditional enterprise messaging system allows processing future messages that will arrive after you subscribe. Applications built in this way process future data as it arrives.
+</p>
 <p>
-Kafka combines both of these capabilities, and the combination is critical both for Kafka usage as a platform for streaming applications as well as for streaming data pipelines.
+	Kafka combines both of these capabilities, and the combination is critical both for Kafka usage as a platform for streaming applications as well as for streaming data pipelines.
+</p>
 <p>
-By combining storage and low-latency subscriptions, streaming applications can treat both past and future data the same way. That is a single application can process historical, stored data but rather than ending when it reaches the last record it can keep processing as future data arrives. This is a generalized notion of stream processing that subsumes batch processing as well as message-driven applications.
+	By combining storage and low-latency subscriptions, streaming applications can treat both past and future data the same way. That is, a single application can process historical, stored data, but rather than ending when it reaches the last record it can keep processing as future data arrives. This is a generalized notion of stream processing that subsumes batch processing as well as message-driven applications.
+</p>
 <p>
-Likewise for streaming data pipelines the combination of subscription to real-time events make it possible to use Kafka for very low-latency pipelines; but the ability to store data reliably make it possible to use it for critical data where the delivery of data must be guaranteed or for integration with offline systems that load data only periodically or may go down for extended periods of time for maintenance. The stream processing facilities make it possible to transform data as it arrives.
+	Likewise for streaming data pipelines the combination of subscription to real-time events makes it possible to use Kafka for very low-latency pipelines; but the ability to store data reliably makes it possible to use it for critical data where the delivery of data must be guaranteed or for integration with offline systems that load data only periodically or may go down for extended periods of time for maintenance. The stream processing facilities make it possible to transform data as it arrives.
+</p>
 <p>
 For more information on the guarantees, apis, and capabilities Kafka provides see the rest of the <a href="/documentation.html">documentation</a>.
+</p>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1b8cdf48/0100/protocol.html
----------------------------------------------------------------------
diff --git a/0100/protocol.html b/0100/protocol.html
index e28b0a8..5b1dd0b 100644
--- a/0100/protocol.html
+++ b/0100/protocol.html
@@ -16,208 +16,215 @@
 -->
 
 <!--#include virtual="../includes/header.html" -->
+<!--#include virtual="../includes/top.html" -->
+<div class="content">
+	<!--#include virtual="../includes/nav.html" -->
+	<div class="right">
+		<h1>Kafka protocol guide</h1>
+    <p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="https://kafka.apache.org/documentation.html#design">here</a>.</p>
 
-<h3><a id="protocol" href="#protocol">Kafka Wire Protocol</a></h3>
+    <ul class="toc">
+        <li><a href="#protocol_preliminaries">Preliminaries</a>
+            <ul>
+                <li><a href="#protocol_network">Network</a>
+                <li><a href="#protocol_partitioning">Partitioning and bootstrapping</a>
+                <li><a href="#protocol_partitioning_strategies">Partitioning Strategies</a>
+                <li><a href="#protocol_batching">Batching</a>
+                <li><a href="#protocol_compatibility">Versioning and Compatibility</a>
+            </ul>
+        </li>
+        <li><a href="#protocol_details">The Protocol</a>
+            <ul>
+                <li><a href="#protocol_types">Protocol Primitive Types</a>
+                <li><a href="#protocol_grammar">Notes on reading the request format grammars</a>
+                <li><a href="#protocol_common">Common Request and Response Structure</a>
+                <li><a href="#protocol_message_sets">Message Sets</a>
+            </ul>
+        </li>
+        <li><a href="#protocol_constants">Constants</a>
+            <ul>
+                <li><a href="#protocol_error_codes">Error Codes</a>
+                <li><a href="#protocol_api_keys">Api Keys</a>
+            </ul>
+        </li>
+        <li><a href="#protocol_messages">The Messages</a></li>
+        <li><a href="#protocol_philosophy">Some Common Philosophical Questions</a></li>
+    </ul>
+
+    <h4><a id="protocol_preliminaries" href="#protocol_preliminaries">Preliminaries</a></h4>
+
+    <h5><a id="protocol_network" href="#protocol_network">Network</a></h5>
+
+    <p>Kafka uses a binary protocol over TCP. The protocol defines all apis as request response message pairs. All messages are size delimited and are made up of the following primitive types.</p>
+
+    <p>The client initiates a socket connection and then writes a sequence of request messages and reads back the corresponding response message. No handshake is required on connection or disconnection. TCP is happier if you maintain persistent connections used for many requests to amortize the cost of the TCP handshake, but beyond this penalty connecting is pretty cheap.</p>
 
-<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="https://kafka.apache.org/documentation.html#design">here</a></p>
+    <p>The client will likely need to maintain a connection to multiple brokers, as data is partitioned and the clients will need to talk to the server that has their data. However it should not generally be necessary to maintain multiple connections to a single broker from a single client instance (i.e. connection pooling).</p>
 
-<ul class="toc">
-    <li><a href="#protocol_preliminaries">Preliminaries</a>
-        <ul>
-            <li><a href="#protocol_network">Network</a>
-            <li><a href="#protocol_partitioning">Partitioning and bootstrapping</a>
-            <li><a href="#protocol_partitioning_strategies">Partitioning Strategies</a>
-            <li><a href="#protocol_batching">Batching</a>
-            <li><a href="#protocol_compatibility">Versioning and Compatibility</a>
-        </ul>
-    </li>
-    <li><a href="#protocol_details">The Protocol</a>
-        <ul>
-            <li><a href="#protocol_types">Protocol Primitive Types</a>
-            <li><a href="#protocol_grammar">Notes on reading the request format grammars</a>
-            <li><a href="#protocol_common">Common Request and Response Structure</a>
-            <li><a href="#protocol_message_sets">Message Sets</a>
-        </ul>
-    </li>
-    <li><a href="#protocol_constants">Constants</a>
-        <ul>
-            <li><a href="#protocol_error_codes">Error Codes</a>
-            <li><a href="#protocol_api_keys">Api Keys</a>
-        </ul>
-    </li>
-    <li><a href="#protocol_messages">The Messages</a></li>
-    <li><a href="#protocol_philosophy">Some Common Philosophical Questions</a></li>
-</ul>
+    <p>The server guarantees that on a single TCP connection, requests will be processed in the order they are sent and responses will return in that order as well. The broker's request processing allows only a single in-flight request per connection in order to guarantee this ordering. Note that clients can (and ideally should) use non-blocking IO to implement request pipelining and achieve higher throughput. i.e., clients can send requests even while awaiting responses for preceding requests since the outstanding requests will be buffered in the underlying OS socket buffer. All requests are initiated by the client, and result in a corresponding response message from the server except where noted.</p>
 
-<h4><a id="protocol_preliminaries" href="#protocol_preliminaries">Preliminaries</a></h4>
+    <p>The server has a configurable maximum limit on request size and any request that exceeds this limit will result in the socket being disconnected.</p>
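
For orientation, a request frame might be assembled roughly as follows. This is only a sketch: it assumes the common request header fields (api_key, api_version, correlation_id, client_id) covered later in this guide and a body already encoded by the caller:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    class FrameSketch {
        // Size-delimited request: a 4-byte size, then the request header, then the api-specific body.
        static ByteBuffer frame(short apiKey, short apiVersion, int correlationId, String clientId, byte[] body) {
            byte[] id = clientId.getBytes(StandardCharsets.UTF_8);
            int size = 2 + 2 + 4 + 2 + id.length + body.length; // header fields + client_id string + body
            ByteBuffer buf = ByteBuffer.allocate(4 + size);
            buf.putInt(size);                 // size prefix does not count itself
            buf.putShort(apiKey);             // which api is being invoked
            buf.putShort(apiVersion);         // format of the request and expected response
            buf.putInt(correlationId);        // echoed back in the response
            buf.putShort((short) id.length);  // STRING: int16 length, then the bytes
            buf.put(id);
            buf.put(body);
            buf.flip();
            return buf;                       // ready to write to the TCP socket
        }
    }
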
 
-<h5><a id="protocol_network" href="#protocol_network">Network</a></h5>
+    <h5><a id="protocol_partitioning" href="#protocol_partitioning">Partitioning and bootstrapping</a></h5>
 
-<p>Kafka uses a binary protocol over TCP. The protocol defines all apis as request response message pairs. All messages are size delimited and are made up of the following primitive types.</p>
+    <p>Kafka is a partitioned system so not all servers have the complete data set. Instead recall that topics are split into a pre-defined number of partitions, P, and each partition is replicated with some replication factor, N. Topic partitions themselves are just ordered "commit logs" numbered 0, 1, ..., P-1.</p>
 
-<p>The client initiates a socket connection and then writes a sequence of request messages and reads back the corresponding response message. No handshake is required on connection or disconnection. TCP is happier if you maintain persistent connections used for many requests to amortize the cost of the TCP handshake, but beyond this penalty connecting is pretty cheap.</p>
+    <p>All systems of this nature have the question of how a particular piece of data is assigned to a particular partition. Kafka clients directly control this assignment, the brokers themselves enforce no particular semantics of which messages should be published to a particular partition. Rather, to publish messages the client directly addresses messages to a particular partition, and when fetching messages, fetches from a particular partition. If two clients want to use the same partitioning scheme they must use the same method to compute the mapping of key to partition.</p>
 
-<p>The client will likely need to maintain a connection to multiple brokers, as data is partitioned and the clients will need to talk to the server that has their data. However it should not generally be necessary to maintain multiple connections to a single broker from a single client instance (i.e. connection pooling).</p>
+    <p>These requests to publish or fetch data must be sent to the broker that is currently acting as the leader for a given partition. This condition is enforced by the broker, so a request for a particular partition to the wrong broker will result in the NotLeaderForPartition error code (described below).</p>
 
-<p>The server guarantees that on a single TCP connection, requests will be processed in the order they are sent and responses will return in that order as well. The broker's request processing allows only a single in-flight request per connection in order to guarantee this ordering. Note that clients can (and ideally should) use non-blocking IO to implement request pipelining and achieve higher throughput. i.e., clients can send requests even while awaiting responses for preceding requests since the outstanding requests will be buffered in the underlying OS socket buffer. All requests are initiated by the client, and result in a corresponding response message from the server except where noted.</p>
+    <p>How can the client find out which topics exist, what partitions they have, and which brokers currently host those partitions so that it can direct its requests to the right hosts? This information is dynamic, so you can't just configure each client with some static mapping file. Instead all Kafka brokers can answer a metadata request that describes the current state of the cluster: what topics there are, which partitions those topics have, which broker is the leader for those partitions, and the host and port information for these brokers.</p>
 
-<p>The server has a configurable maximum limit on request size and any request that exceeds this limit will result in the socket being disconnected.</p>
+    <p>In other words, the client needs to somehow find one broker and that broker will tell the client about all the other brokers that exist and what partitions they host. This first broker may itself go down so the best practice for a client implementation is to take a list of two or three urls to bootstrap from. The user can then choose to use a load balancer or just statically configure two or three of their kafka hosts in the clients.</p>
 
-<h5><a id="protocol_partitioning" href="#protocol_partitioning">Partitioning and bootstrapping</a></h5>
+    <p>The client does not need to keep polling to see if the cluster has changed; it can fetch metadata once when it is instantiated and cache that metadata until it receives an error indicating that the metadata is out of date. This error can come in two forms: (1) a socket error indicating the client cannot communicate with a particular broker, (2) an error code in the response to a request indicating that this broker no longer hosts the partition for which data was requested.</p>
+    <ol>
+        <li>Cycle through a list of "bootstrap" kafka urls until we find one we can connect to. Fetch cluster metadata.</li>
+        <li>Process fetch or produce requests, directing them to the appropriate broker based on the topic/partitions they send to or fetch from.</li>
+        <li>If we get an appropriate error, refresh the metadata and try again.</li>
+    </ol>
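
Expressed as rough Java-flavoured pseudocode, the sequence above might look like this; Cluster, Broker, fetchMetadata and the exception types are hypothetical placeholders, not kafka-clients API:

    Cluster cluster = null;
    for (String url : bootstrapUrls) {                   // 1. cycle through the bootstrap urls
        try { cluster = fetchMetadata(url); break; }     //    first reachable broker describes the cluster
        catch (IOException e) { /* try the next url */ }
    }
    while (running) {
        try {
            Broker leader = cluster.leaderFor(topic, partition);
            send(leader, request);                       // 2. direct produce/fetch requests to the leader
        } catch (NotLeaderException | StaleMetadataException e) {
            cluster = fetchMetadata(cluster.anyBroker()); // 3. refresh the metadata and try again
        }
    }
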
 
-<p>Kafka is a partitioned system so not all servers have the complete data set. Instead recall that topics are split into a pre-defined number of partitions, P, and each partition is replicated with some replication factor, N. Topic partitions themselves are just ordered "commit logs" numbered 0, 1, ..., P.</p>
+    <h5><a id="protocol_partitioning_strategies" href="#protocol_partitioning_strategies">Partitioning Strategies</a></h5>
 
-<p>All systems of this nature have the question of how a particular piece of data is assigned to a particular partition. Kafka clients directly control this assignment, the brokers themselves enforce no particular semantics of which messages should be published to a particular partition. Rather, to publish messages the client directly addresses messages to a particular partition, and when fetching messages, fetches from a particular partition. If two clients want to use the same partitioning scheme they must use the same method to compute the mapping of key to partition.</p>
+    <p>As mentioned above the assignment of messages to partitions is something the producing client controls. That said, how should this functionality be exposed to the end-user?</p>
 
-<p>These requests to publish or fetch data must be sent to the broker that is currently acting as the leader for a given partition. This condition is enforced by the broker, so a request for a particular partition to the wrong broker will result in an the NotLeaderForPartition error code (described below).</p>
+    <p>Partitioning really serves two purposes in Kafka:</p>
+    <ol>
+        <li>It balances data and request load over brokers</li>
+        <li>It serves as a way to divvy up processing among consumer processes while allowing local state and preserving order within the partition. We call this semantic partitioning.</li>
+    </ol>
 
-<p>How can the client find out which topics exist, what partitions they have, and which brokers currently host those partitions so that it can direct its requests to the right hosts? This information is dynamic, so you can't just configure each client with some static mapping file. Instead all Kafka brokers can answer a metadata request that describes the current state of the cluster: what topics there are, which partitions those topics have, which broker is the leader for those partitions, and the host and port information for these brokers.</p>
+    <p>For a given use case you may care about only one of these or both.</p>
 
-<p>In other words, the client needs to somehow find one broker and that broker will tell the client about all the other brokers that exist and what partitions they host. This first broker may itself go down so the best practice for a client implementation is to take a list of two or three urls to bootstrap from. The user can then choose to use a load balancer or just statically configure two or three of their kafka hosts in the clients.</p>
+    <p>To accomplish simple load balancing, a straightforward approach would be for the client to just round robin requests over all brokers. Another alternative, in an environment where there are many more producers than brokers, would be to have each client choose a single partition at random and publish to that. This latter strategy will result in far fewer TCP connections.</p>
 
-<p>The client does not need to keep polling to see if the cluster has changed; it can fetch metadata once when it is instantiated cache that metadata until it receives an error indicating that the metadata is out of date. This error can come in two forms: (1) a socket error indicating the client cannot communicate with a particular broker, (2) an error code in the response to a request indicating that this broker no longer hosts the partition for which data was requested.</p>
-<ol>
-    <li>Cycle through a list of "bootstrap" kafka urls until we find one we can connect to. Fetch cluster metadata.</li>
-    <li>Process fetch or produce requests, directing them to the appropriate broker based on the topic/partitions they send to or fetch from.</li>
-    <li>If we get an appropriate error, refresh the metadata and try again.</li>
-</ol>
+    <p>Semantic partitioning means using some key in the message to assign messages to partitions. For example if you were processing a click message stream you might want to partition the stream by the user id so that all data for a particular user would go to a single consumer. To accomplish this the client can take a key associated with the message and use some hash of this key to choose the partition to which to deliver the message.</p>
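
A minimal sketch of that key-to-partition mapping; the Java client's default partitioner uses a murmur2 hash of the serialized key, but any stable hash works as long as all producers agree on it:

    // Map a record key onto one of numPartitions partitions.
    static int partitionFor(byte[] key, int numPartitions) {
        int hash = java.util.Arrays.hashCode(key);    // stand-in for a real hash such as murmur2
        return (hash & 0x7fffffff) % numPartitions;   // clear the sign bit, then take the modulo
    }
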
 
-<h5><a id="protocol_partitioning_strategies" href="#protocol_partitioning_strategies">Partitioning Strategies</a></h5>
+    <h5><a id="protocol_batching" href="#protocol_batching">Batching</a></h5>
 
-<p>As mentioned above the assignment of messages to partitions is something the producing client controls. That said, how should this functionality be exposed to the end-user?</p>
+    <p>Our apis encourage batching small things together for efficiency. We have found this is a very significant performance win. Both our API to send messages and our API to fetch messages always work with a sequence of messages not a single message to encourage this. A clever client can make use of this and support an "asynchronous" mode in which it batches together messages sent individually and sends them in larger clumps. We go even further with this and allow the batching across multiple topics and partitions, so a produce request may contain data to append to many partitions and a fetch request may pull data from many partitions all at once.</p>
 
-<p>Partitioning really serves two purposes in Kafka:</p>
-<ol>
-    <li>It balances data and request load over brokers</li>
-    <li>It serves as a way to divvy up processing among consumer processes while allowing local state and preserving order within the partition. We call this semantic partitioning.</li>
-</ol>
+    <p>The client implementer can choose to ignore this and send everything one at a time if they like.</p>
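
In the Java producer, batching is largely configuration; batch.size and linger.ms are real producer settings, though the values below are only illustrative:

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("batch.size", "32768");   // collect up to 32 KB of records per partition per request
    props.put("linger.ms", "10");       // wait up to 10 ms for a batch to fill before sending
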
 
-<p>For a given use case you may care about only one of these or both.</p>
+    <h5><a id="protocol_compatibility" href="#protocol_compatibility">Versioning and Compatibility</a></h5>
 
-<p>To accomplish simple load balancing a simple approach would be for the client to just round robin requests over all brokers. Another alternative, in an environment where there are many more producers than brokers, would be to have each client chose a single partition at random and publish to that. This later strategy will result in far fewer TCP connections.</p>
+    <p>The protocol is designed to enable incremental evolution in a backward compatible fashion. Our versioning is on a per-api basis, each version consisting of a request and response pair. Each request contains an API key that identifies the API being invoked and a version number that indicates the format of the request and the expected format of the response.</p>
 
-<p>Semantic partitioning means using some key in the message to assign messages to partitions. For example if you were processing a click message stream you might want to partition the stream by the user id so that all data for a particular user would go to a single consumer. To accomplish this the client can take a key associated with the message and use some hash of this key to choose the partition to which to deliver the message.</p>
+    <p>The intention is that clients would implement a particular version of the protocol, and indicate this version in their requests. Our goal is primarily to allow API evolution in an environment where downtime is not allowed and clients and servers cannot all be changed at once.</p>
 
-<h5><a id="protocol_batching" href="#protocol_batching">Batching</a></h5>
+    <p>The server will reject requests with a version it does not support, and will always respond to the client with exactly the protocol format it expects based on the version it included in its request. The intended upgrade path is that new features would first be rolled out on the server (with the older clients not making use of them) and then as newer clients are deployed these new features would gradually be taken advantage of.</p>
 
-<p>Our apis encourage batching small things together for efficiency. We have found this is a very significant performance win. Both our API to send messages and our API to fetch messages always work with a sequence of messages not a single message to encourage this. A clever client can make use of this and support an "asynchronous" mode in which it batches together messages sent individually and sends them in larger clumps. We go even further with this and allow the batching across multiple topics and partitions, so a produce request may contain data to append to many partitions and a fetch request may pull data from many partitions all at once.</p>
+    <p>Currently all versions are baselined at 0; as we evolve these APIs we will indicate the format for each version individually.</p>
 
-<p>The client implementer can choose to ignore this and send everything one at a time if they like.</p>
+    <h5><a id="api_versions" href="#api_versions">Retrieving Supported API versions</a></h5>
+    <p>In order for a client to successfully talk to a broker, it must use request versions supported by the broker. Clients
+        may work against multiple broker versions; however, to do so the clients need to know what versions of various APIs a
+        broker supports. Starting from 0.10.0.0, brokers provide information on various versions of APIs they support. Details
+        of this new capability can be found <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version">here</a>.
+        Clients may use the supported API versions information to take appropriate actions such as propagating an unsupported
+        API version error to the application or choosing an API request/response version supported by both the client and broker.
+        The following sequence may be used by a client to obtain supported API versions from a broker.</p>
+    <ol>
+        <li>Client sends <code>ApiVersionsRequest</code> to a broker after connection has been established with the broker. If SSL is enabled,
+            this happens after SSL connection has been established.</li>
+        <li>On receiving <code>ApiVersionsRequest</code>, a broker returns its full list of supported ApiKeys and
+            versions regardless of current authentication state (e.g., before SASL authentication on a SASL listener, do note that no
+            Kafka protocol requests may take place on an SSL listener before the SSL handshake is finished). If this is considered to
+            leak information about the broker version, a workaround is to use SSL with client authentication, which is performed at an
+            earlier stage of the connection where the <code>ApiVersionsRequest</code> is not available. Also, note that broker versions older
+            than 0.10.0.0 do not support this API and will either ignore the request or close the connection in response to the request.</li>
+        <li>If multiple versions of an API are supported by broker and client, clients are recommended to use the latest version supported
+            by the broker and itself.</li>
+        <li>Deprecation of a protocol version is done by marking an API version as deprecated in protocol documentation.</li>
+        <li>Supported API versions obtained from a broker, is valid only for current connection on which that information is obtained.
+            In the event of disconnection, the client should obtain the information from broker again, as the broker might have
+            upgraded/downgraded in the mean time.</li>
+    </ol>
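+
+    <p>As an illustration of step 3, here is a minimal, hypothetical sketch (in Python; it is not part of this guide or of any
+        particular Kafka client) of choosing the highest version supported by both sides, given the per-API version ranges a broker
+        reports in its <code>ApiVersionsResponse</code> and the ranges the client itself implements:</p>
+
+    <pre>
+    # Hypothetical helper: pick the latest request version supported by both
+    # the broker and this client for a given API key, or raise if there is none.
+    def pick_api_version(broker_ranges, client_ranges, api_key):
+        # broker_ranges / client_ranges: dict mapping api_key to (min_version, max_version)
+        if api_key not in broker_ranges or api_key not in client_ranges:
+            raise ValueError("API key %d not supported by both peers" % api_key)
+        b_min, b_max = broker_ranges[api_key]
+        c_min, c_max = client_ranges[api_key]
+        lo, hi = max(b_min, c_min), min(b_max, c_max)
+        if lo > hi:
+            raise ValueError("no common version for API key %d" % api_key)
+        return hi  # the latest version both sides understand
+
+    # e.g. a broker reporting versions 0-2 for some API and a client implementing 0-3 would use version 2
+    assert pick_api_version({1: (0, 2)}, {1: (0, 3)}, 1) == 2
+    </pre>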
 
-<h5><a id="protocol_compatibility" href="#protocol_compatibility">Versioning and Compatibility</a></h5>
 
-<p>The protocol is designed to enable incremental evolution in a backward compatible fashion. Our versioning is on a per-api basis, each version consisting of a request and response pair. Each request contains an API key that identifies the API being invoked and a version number that indicates the format of the request and the expected format of the response.</p>
+    <h5><a id="sasl_handshake" href="#sasl_handshake">SASL Authentication Sequence</a></h5>
+    <p>The following sequence is used for SASL authentication:
+    <ol>
+      <li>Kafka <code>ApiVersionsRequest</code> may be sent by the client to obtain the version ranges of requests supported by the broker. This is optional.</li>
+      <li>Kafka <code>SaslHandshakeRequest</code> containing the SASL mechanism for authentication is sent by the client. If the requested mechanism is not enabled
+        in the server, the server responds with the list of supported mechanisms and closes the client connection. If the mechanism is enabled
+        in the server, the server sends a successful response and continues with SASL authentication.
+      <li>The actual SASL authentication is now performed. A series of SASL client and server tokens corresponding to the mechanism are sent as opaque
+        packets. These packets contain a 32-bit size followed by the token, as defined by the protocol for the SASL mechanism (see the framing sketch at the end of this section).
+      <li>If authentication succeeds, subsequent packets are handled as Kafka API requests. Otherwise, the client connection is closed.
+    </ol>
+    <p>For interoperability with 0.9.0.x clients, the first packet received by the server is handled as a SASL/GSSAPI client token if it is not a valid
+    Kafka request. SASL/GSSAPI authentication is performed starting with this packet, skipping the first two steps above.</p>
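+
+    <p>As a minimal sketch of the token framing in step 3 (Python; assumed and illustrative only, not part of the protocol guide
+        or of any Kafka client), each token is written as a 32-bit big-endian size followed by the opaque, mechanism-specific bytes:</p>
+
+    <pre>
+    import struct
+
+    # "sock" is assumed to be an already-connected TCP (or TLS-wrapped) socket.
+    def send_sasl_token(sock, token):
+        sock.sendall(struct.pack(">i", len(token)) + token)  # int32 size, then the token bytes
+
+    # Receiving mirrors this: read 4 bytes, unpack the size, then read exactly
+    # that many bytes to obtain the peer's next token.
+    </pre>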
 
-<p>The intention is that clients would implement a particular version of the protocol, and indicate this version in their requests. Our goal is primarily to allow API evolution in an environment where downtime is not allowed and clients and servers cannot all be changed at once.</p>
 
-<p>The server will reject requests with a version it does not support, and will always respond to the client with exactly the protocol format it expects based on the version it included in its request. The intended upgrade path is that new features would first be rolled out on the server (with the older clients not making use of them) and then as newer clients are deployed these new features would gradually be taken advantage of.</p>
+    <h4><a id="protocol_details" href="#protocol_details">The Protocol</a></h4>
 
-<p>Currently all versions are baselined at 0, as we evolve these APIs we will indicate the format for each version individually.</p>
+    <h5><a id="protocol_types" href="#protocol_types">Protocol Primitive Types</a></h5>
 
-<h5><a id="api_versions" href="#api_versions">Retrieving Supported API versions</a></h5>
-<p>In order for a client to successfully talk to a broker, it must use request versions supported by the broker. Clients
-    may work against multiple broker versions, however to do so the clients need to know what versions of various APIs a
-    broker supports. Starting from 0.10.0.0, brokers provide information on various versions of APIs they support. Details
-    of this new capability can be found <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version">here</a>.
-    Clients may use the supported API versions information to take appropriate actions such as propagating an unsupported
-    API version error to application or choose an API request/response version supported by both the client and broker.
-    The following sequence maybe used by a client to obtain supported API versions from a broker.</p>
-<ol>
-    <li>Client sends <code>ApiVersionsRequest</code> to a broker after connection has been established with the broker. If SSL is enabled,
-        this happens after SSL connection has been established.</li>
-    <li>On receiving <code>ApiVersionsRequest</code>, a broker returns its full list of supported ApiKeys and
-        versions regardless of current authentication state (e.g., before SASL authentication on an SASL listener, do note that no
-        Kafka protocol requests may take place on a SSL listener before the SSL handshake is finished). If this is considered to
-        leak information about the broker version a workaround is to use SSL with client authentication which is performed at an
-        earlier stage of the connection where the <code>ApiVersionRequest</code> is not available. Also, note that broker versions older
-        than 0.10.0.0 do not support this API and will either ignore the request or close connection in response to the request.</li>
-    <li>If multiple versions of an API are supported by broker and client, clients are recommended to use the latest version supported
-        by the broker and itself.</li>
-    <li>Deprecation of a protocol version is done by marking an API version as deprecated in protocol documentation.</li>
-    <li>Supported API versions obtained from a broker, is valid only for current connection on which that information is obtained.
-        In the event of disconnection, the client should obtain the information from broker again, as the broker might have
-        upgraded/downgraded in the mean time.</li>
-</ol>
+    <p>The protocol is built out of the following primitive types.</p>
 
+    <p><b>Fixed Width Primitives</b></p>
 
-<h5><a id="sasl_handshake" href="#sasl_handshake">SASL Authentication Sequence</a></h5>
-<p>The following sequence is used for SASL authentication:
-<ol>
-  <li>Kafka <code>ApiVersionsRequest</code> may be sent by the client to obtain the version ranges of requests supported by the broker. This is optional.</li>
-  <li>Kafka <code>SaslHandshakeRequest</code> containing the SASL mechanism for authentication is sent by the client. If the requested mechanism is not enabled
-    in the server, the server responds with the list of supported mechanisms and closes the client connection. If the mechanism is enabled
-    in the server, the server sends a successful response and continues with SASL authentication.
-  <li>The actual SASL authentication is now performed. A series of SASL client and server tokens corresponding to the mechanism are sent as opaque
-    packets. These packets contain a 32-bit size followed by the token as defined by the protocol for the SASL mechanism.
-  <li>If authentication succeeds, subsequent packets are handled as Kafka API requests. Otherwise, the client connection is closed.
-</ol>
-<p>For interoperability with 0.9.0.x clients, the first packet received by the server is handled as a SASL/GSSAPI client token if it is not a valid
-Kafka request. SASL/GSSAPI authentication is performed starting with this packet, skipping the first two steps above.</p>
+    <p>int8, int16, int32, int64 - Signed integers with the given precision (in bits) stored in big endian order.</p>
 
+    <p><b>Variable Length Primitives</b></p>
 
-<h4><a id="protocol_details" href="#protocol_details">The Protocol</a></h4>
+    <p>bytes, string - These types consist of a signed integer giving a length N followed by N bytes of content. A length of -1 indicates null. string uses an int16 for its size, and bytes uses an int32.</p>
 
-<h5><a id="protocol_types" href="#protocol_types">Protocol Primitive Types</a></h5>
+    <p><b>Arrays</b></p>
 
-<p>The protocol is built out of the following primitive types.</p>
+    <p>This is a notation for handling repeated structures. These will always be encoded as an int32 size containing the length N followed by N repetitions of the structure which can itself be made up of other primitive types. In the BNF grammars below we will show an array of a structure foo as [foo].</p>
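+
+    <p>The following is a small, hypothetical Python sketch (not part of this guide) of these encodings using the standard
+        <code>struct</code> module: big-endian signed integers, an int16-length string, int32-length bytes, and an int32-count
+        array, with a length of -1 encoding null:</p>
+
+    <pre>
+    import struct
+
+    def encode_string(s):
+        if s is None:
+            return struct.pack(">h", -1)               # null string
+        data = s.encode("utf-8")
+        return struct.pack(">h", len(data)) + data     # int16 size, then content
+
+    def encode_bytes(b):
+        if b is None:
+            return struct.pack(">i", -1)               # null bytes
+        return struct.pack(">i", len(b)) + b           # int32 size, then content
+
+    def encode_array(items, encode_item):
+        out = struct.pack(">i", len(items))            # int32 element count
+        for item in items:
+            out += encode_item(item)                   # N repetitions of the structure
+        return out
+    </pre>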
 
-<p><b>Fixed Width Primitives</b><p>
+    <h5><a id="protocol_grammar" href="#protocol_grammar">Notes on reading the request format grammars</a></h5>
 
-<p>int8, int16, int32, int64 - Signed integers with the given precision (in bits) stored in big endian order.</p>
+    <p>The <a href="https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form">BNF</a>s below give an exact context-free grammar for the request and response binary format. The BNF is intentionally not compact in order to give human-readable names. As always in a BNF, a sequence of productions indicates concatenation. When there are multiple possible productions, these are separated with '|' and may be enclosed in parentheses for grouping. The top-level definition is always given first and subsequent sub-parts are indented.</p>
 
-<p><b>Variable Length Primitives</b><p>
+    <h5><a id="protocol_common" href="#protocol_common">Common Request and Response Structure</a></h5>
 
-<p>bytes, string - These types consist of a signed integer giving a length N followed by N bytes of content. A length of -1 indicates null. string uses an int16 for its size, and bytes uses an int32.</p>
+    <p>All requests and responses originate from the following grammar, which will be incrementally described throughout the rest of this document:</p>
 
-<p><b>Arrays</b><p>
+    <pre>
+    RequestOrResponse => Size (RequestMessage | ResponseMessage)
+    Size => int32
+    </pre>
 
-<p>This is a notation for handling repeated structures. These will always be encoded as an int32 size containing the length N followed by N repetitions of the structure which can itself be made up of other primitive types. In the BNF grammars below we will show an array of a structure foo as [foo].</p>
+    <table class="data-table"><tbody>
+    <tr><th>Field</th><th>Description</th></tr>
+    <tr><td>message_size</td><td>The message_size field gives the size of the subsequent request or response message in bytes. The client can read requests by first reading this 4 byte size as an integer N, and then reading and parsing the subsequent N bytes of the request.</td></tr>
+    </tbody></table>
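+
+    <p>A minimal, assumed Python sketch (not part of this guide) of reading one size-delimited frame as described above:
+        first the 4-byte big-endian size N, then the N bytes of the request or response message:</p>
+
+    <pre>
+    import struct
+
+    def read_frame(sock):
+        size = struct.unpack(">i", read_exactly(sock, 4))[0]  # message_size
+        return read_exactly(sock, size)                       # the N message bytes
+
+    def read_exactly(sock, n):
+        buf = b""
+        while len(buf) != n:
+            chunk = sock.recv(n - len(buf))
+            if not chunk:
+                raise EOFError("connection closed mid-frame")
+            buf += chunk
+        return buf
+    </pre>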
 
-<h5><a id="protocol_grammar" href="#protocol_grammar">Notes on reading the request format grammars</a></h5>
+    <h5><a id="protocol_message_sets" href="#protocol_message_sets">Message Sets</a></h5>
 
-<p>The <a href="https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form">BNF</a>s below give an exact context free grammar for the request and response binary format. The BNF is intentionally not compact in order to give human-readable name. As always in a BNF a sequence of productions indicates concatenation. When there are multiple possible productions these are separated with '|' and may be enclosed in parenthesis for grouping. The top-level definition is always given first and subsequent sub-parts are indented.</p>
+    <p>A description of the message set format can be found <a href="https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Messagesets">here</a>. (KAFKA-3368)</p>
 
-<h5><a id="protocol_common" href="#protocol_common">Common Request and Response Structure</a></h5>
+    <h4><a id="protocol_constants" href="#protocol_constants">Constants</a></h4>
 
-<p>All requests and responses originate from the following grammar which will be incrementally describe through the rest of this document:</p>
+    <h5><a id="protocol_error_codes" href="#protocol_error_codes">Error Codes</a></h5>
+    <p>We use numeric codes to indicate what problem occurred on the server. These can be translated by the client into exceptions, or whatever error handling mechanism is appropriate in the client language. Here is a table of the error codes currently in use:</p>
+    <!--#include virtual="generated/protocol_errors.html" -->
 
-<pre>
-RequestOrResponse => Size (RequestMessage | ResponseMessage)
-Size => int32
-</pre>
+    <h5><a id="protocol_api_keys" href="#protocol_api_keys">Api Keys</a></h5>
+    <p>The following are the numeric codes that the ApiKey in the request can take for each of the request types below.</p>
+    <!--#include virtual="generated/protocol_api_keys.html" -->
 
-<table class="data-table"><tbody>
-<tr><th>Field</th><th>Description</th></tr>
-<tr><td>message_size</td><td>The message_size field gives the size of the subsequent request or response message in bytes. The client can read requests by first reading this 4 byte size as an integer N, and then reading and parsing the subsequent N bytes of the request.</td></tr>
-</table>
+    <h4><a id="protocol_messages" href="#protocol_messages">The Messages</a></h4>
 
-<h5><a id="protocol_message_sets" href="#protocol_message_sets">Message Sets</a></h5>
+    <p>This section gives details on each of the individual API Messages, their usage, their binary format, and the meaning of their fields.</p>
+    <!--#include virtual="generated/protocol_messages.html" -->
 
-<p>A description of the message set format can be found <a href="https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Messagesets">here</a>. (KAFKA-3368)</p>
+    <h4><a id="protocol_philosophy" href="#protocol_philosophy">Some Common Philosophical Questions</a></h4>
 
-<h4><a id="protocol_constants" href="#protocol_constants">Constants</a></h4>
+    <p>Some people have asked why we don't use HTTP. There are a number of reasons, the best being that client implementors can make use of some of the more advanced TCP features--the ability to multiplex requests, the ability to simultaneously poll many connections, etc. We have also found HTTP libraries in many languages to be surprisingly shabby.</p>
 
-<h5><a id="protocol_error_codes" href="#protocol_error_codes">Error Codes</a></h5>
-<p>We use numeric codes to indicate what problem occurred on the server. These can be translated by the client into exceptions or whatever the appropriate error handling mechanism in the client language. Here is a table of the error codes currently in use:</p>
-<!--#include virtual="generated/protocol_errors.html" -->
+    <p>Others have asked if maybe we shouldn't support many different protocols. Our prior experience with this was that it makes it very hard to add and test new features if they have to be ported across many protocol implementations. Our feeling is that most users don't really see multiple protocols as a feature; they just want a good, reliable client in the language of their choice.</p>
 
-<h5><a id="protocol_api_keys" href="#protocol_api_keys">Api Keys</a></h5>
-<p>The following are the numeric codes that the ApiKey in the request can take for each of the below request types.</p>
-<!--#include virtual="generated/protocol_api_keys.html" -->
+    <p>Another question is why we don't adopt XMPP, STOMP, AMQP or an existing protocol. The answer to this varies by protocol, but in general the problem is that the protocol does determine large parts of the implementation and we couldn't do what we are doing if we didn't have control over the protocol. Our belief is that it is possible to do better than existing messaging systems have in providing a truly distributed messaging system, and to do this we need to build something that works differently.</p>
 
-<h4><a id="protocol_messages" href="#protocol_messages">The Messages</a></h4>
+    <p>A final question is why we don't use a system like Protocol Buffers or Thrift to define our request messages. These packages excel at helping you manage lots and lots of serialized messages. However, we have only a few messages, and support across languages is somewhat spotty (depending on the package). The mapping between the binary log format and the wire protocol is also something we manage somewhat carefully, and this would not be possible with these systems. Finally, we prefer the style of versioning APIs explicitly and checking this, over inferring new values as nulls, as it allows more nuanced control of compatibility.</p>
 
-<p>This section gives details on each of the individual API Messages, their usage, their binary format, and the meaning of their fields.</p>
-<!--#include virtual="generated/protocol_messages.html" -->
-
-<h4><a id="protocol_philosophy" href="#protocol_philosophy">Some Common Philosophical Questions</a></h4>
-
-<p>Some people have asked why we don't use HTTP. There are a number of reasons, the best is that client implementors can make use of some of the more advanced TCP features--the ability to multiplex requests, the ability to simultaneously poll many connections, etc. We have also found HTTP libraries in many languages to be surprisingly shabby.</p>
-
-<p>Others have asked if maybe we shouldn't support many different protocols. Prior experience with this was that it makes it very hard to add and test new features if they have to be ported across many protocol implementations. Our feeling is that most users don't really see multiple protocols as a feature, they just want a good reliable client in the language of their choice.</p>
-
-<p>Another question is why we don't adopt XMPP, STOMP, AMQP or an existing protocol. The answer to this varies by protocol, but in general the problem is that the protocol does determine large parts of the implementation and we couldn't do what we are doing if we didn't have control over the protocol. Our belief is that it is possible to do better than existing messaging systems have in providing a truly distributed messaging system, and to do this we need to build something that works differently.</p>
-
-<p>A final question is why we don't use a system like Protocol Buffers or Thrift to define our request messages. These packages excel at helping you to managing lots and lots of serialized messages. However we have only a few messages. Support across languages is somewhat spotty (depending on the package). Finally the mapping between binary log format and wire protocol is something we manage somewhat carefully and this would not be possible with these systems. Finally we prefer the style of versioning APIs explicitly and checking this to inferring new values as nulls as it allows more nuanced control of compatibility.</p>
+  <script>
+	// Show selected style on nav item
+	$(function() { $('.b-nav__project').addClass('selected'); });
+	</script>
 
 <!--#include virtual="../includes/footer.html" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1b8cdf48/0100/quickstart.html
----------------------------------------------------------------------
diff --git a/0100/quickstart.html b/0100/quickstart.html
index 1d28f04..fc3e765 100644
--- a/0100/quickstart.html
+++ b/0100/quickstart.html
@@ -15,7 +15,7 @@
  limitations under the License.
 -->
 
-This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data.
+<p>This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data.</p>
 
 <h4><a id="quickstart_download" href="#quickstart_download">Step 1: Download the code</a></h4>
 
@@ -365,9 +365,9 @@ Note that the output is actually a continuous stream of updates, where each data
 an updated count of a single word, aka record key such as "kafka". For multiple records with the same key, each later record is an update of the previous one.
 
 <p>
-Now you can write more input messages to the <b>streams-file-input</b> topic and observe additional messages added 
-to <b>streams-wordcount-output</b> topic, reflecting updated word counts (e.g., using the console producer and the 
+Now you can write more input messages to the <b>streams-file-input</b> topic and observe additional messages added
+to <b>streams-wordcount-output</b> topic, reflecting updated word counts (e.g., using the console producer and the
 console consumer, as described above).
 </p>
 
-<p>You can stop the console consumer via <b>Ctrl-C</b>.</p>
\ No newline at end of file
+<p>You can stop the console consumer via <b>Ctrl-C</b>.</p>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1b8cdf48/0100/uses.html
----------------------------------------------------------------------
diff --git a/0100/uses.html b/0100/uses.html
index 6214ee6..7b52a59 100644
--- a/0100/uses.html
+++ b/0100/uses.html
@@ -15,7 +15,7 @@
  limitations under the License.
 -->
 
-Here is a description of a few of the popular use cases for Apache Kafka. For an overview of a number of these areas in action, see <a href="http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying">this blog post</a>.
+<p>Here is a description of a few of the popular use cases for Apache Kafka. For an overview of a number of these areas in action, see <a href="http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying">this blog post</a>.</p>
 
 <h4><a id="uses_messaging" href="#uses_messaging">Messaging</a></h4>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1b8cdf48/07/documentation.html
----------------------------------------------------------------------
diff --git a/07/documentation.html b/07/documentation.html
index da4a9e6..6125857 100644
--- a/07/documentation.html
+++ b/07/documentation.html
@@ -1,13 +1,17 @@
 <!--#include virtual="../includes/header.html" -->
+<!--#include virtual="../includes/top.html" -->
+<div class="content">
+	<!--#include virtual="../includes/nav.html" -->
+	<div class="right">
+		<h1>Documentation</h1>
+		<h3>Kafka 0.7</h3>
 
-<h1>Kafka 0.7 Documentation</h1>
+		<ul>
+			<li><a href="/07/quickstart.html">Quickstart</a> &ndash; Get up and running quickly.
+			<li><a href="/07/configuration.html">Configuration</a> &ndash; All the knobs.
+			<li><a href="/07/performance.html">Performance</a> &ndash; Some performance results.
+			<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations">Operations</a> &ndash; Notes on running the system.
+			<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs">API Docs</a> &ndash; Scaladoc for the api.
+		</ul>
 
-<ul>
-	<li><a href="/07/quickstart.html">Quickstart</a> &ndash; Get up and running quickly.
-	<li><a href="/07/configuration.html">Configuration</a> &ndash; All the knobs.
-	<li><a href="/07/performance.html">Performance</a> &ndash; Some performance results.
-	<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations">Operations</a> &ndash; Notes on running the system.
-	<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs">API Docs</a> &ndash; Scaladoc for the api.
-</ul>
-
-<!--#include virtual="../includes/footer.html" -->
\ No newline at end of file
+<!--#include virtual="../includes/footer.html" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1b8cdf48/08/documentation.html
----------------------------------------------------------------------
diff --git a/08/documentation.html b/08/documentation.html
index 6720754..1404239 100644
--- a/08/documentation.html
+++ b/08/documentation.html
@@ -1,101 +1,105 @@
 <!--#include virtual="../includes/header.html" -->
+<!--#include virtual="../includes/top.html" -->
+<div class="content">
+	<!--#include virtual="../includes/nav.html" -->
+	<div class="right">
+		<h1>Documentation</h1>
+    <h3>Kafka 0.8.0</h3>
+    <i>Documentation for the 0.7 release is <a href="/07/documentation.html">here</a>.</i>
+    <ul class="toc">
+        <li><a href="#gettingStarted">1. Getting Started</a>
+             <ul>
+                 <li><a href="#introduction">1.1 Introduction</a>
+    	         <li><a href="#uses">1.2 Use Cases</a>
+                 <li><a href="#quickstart">1.3 Quick Start</a>
+             </ul>
+        <li><a href="#api">2. API</a>
+    	      <ul>
+    		      <li><a href="#producerapi">2.1 Producer API</a>
+    			  <li><a href="#highlevelconsumerapi">2.2 High Level Consumer API</a>
+    			  <li><a href="#simpleconsumerapi">2.3 Simple Consumer API</a>
+    			  <li><a href="#kafkahadoopconsumerapi">2.4 Kafka Hadoop Consumer API</a>
+    		  </ul>
+        <li><a href="#configuration">3. Configuration</a>
+    	    <ul>
+    		     <li><a href="#brokerconfigs">3.1 Broker Configs</a>
+    			 <li><a href="#consumerconfigs">3.2 Consumer Configs</a>
+    		     <li><a href="#producerconfigs">3.3 Producer Configs</a>
+    		</ul>
+        <li><a href="#design">4. Design</a>
+    	    <ul>
+    		     <li><a href="#majordesignelements">4.1 Motivation</a>
+    			 <li><a href="#persistence">4.2 Persistence</a>
+    			 <li><a href="#maximizingefficiency">4.3 Efficiency</a>
+    			 <li><a href="#theproducer">4.4 The Producer</a>
+    			 <li><a href="#theconsumer">4.5 The Consumer</a>
+    			 <li><a href="#semantics">4.6 Message Delivery Semantics</a>
+    			 <li><a href="#replication">4.7 Replication</a>
+    		</ul>
+    	<li><a href="#implementation">5. Implementation</a>
+    		<ul>
+    			  <li><a href="#apidesign">5.1 API Design</a>
+    			  <li><a href="#networklayer">5.2 Network Layer</a>
+    			  <li><a href="#messages">5.3 Messages</a>
+    			  <li><a href="#messageformat">5.4 Message format</a>
+    			  <li><a href="#log">5.5 Log</a>
+    			  <li><a href="#distributionimpl">5.6 Distribution</a>
+    		</ul>
+    	<li><a href="#operations">6. Operations</a>
+    		<ul>
+    			  <li><a href="#datacenters">6.1 Datacenters</a>
+    			  <li><a href="#config">6.2 Config</a>
+    				 <ul>
+    					 <li><a href="#serverconfig">Important Server Configs</a>
+    					 <li><a href="#clientconfig">Important Client Configs</a>
+    					 <li><a href="#prodconfig">A Production Server Configs</a>
+            		 </ul>
+         		  <li><a href="#java">6.3 Java Version</a>
+    	 		  <li><a href="#hwandos">6.4 Hardware and OS</a>
+    				<ul>
+    					<li><a href="#os">OS</a>
+    					<li><a href="#diskandfs">Disks and Filesystems</a>
+    					<li><a href="#appvsosflush">Application vs OS Flush Management</a>
+    					<li><a href="#linuxflush">Linux Flush Behavior</a>
+    					<li><a href="#ext4">Ext4 Notes</a>
+    				</ul>
+    			  <li><a href="#monitoring">6.5 Monitoring</a>
+    			  <li><a href="#zk">6.6 Zookeeper</a>
+    				<ul>
+    					<li><a href="#zkversion">Stable Version</a>
+    					<li><a href="#zkops">Operationalization</a>
+    				</ul>
+    		</ul>
+    	<li><a href="#tools">7. Tools</a>
+    </ul>
 
-<h1>Kafka 0.8.0 Documentation</h1>
-<i>Documentation for the 0.7 release is <a href="/07/documentation.html">here</a>.</i>
-<ul class="toc">
-    <li><a href="#gettingStarted">1. Getting Started</a>
-         <ul>
-             <li><a href="#introduction">1.1 Introduction</a>
-	         <li><a href="#uses">1.2 Use Cases</a>
-             <li><a href="#quickstart">1.3 Quick Start</a>
-         </ul>
-    <li><a href="#api">2. API</a>
-	      <ul>
-		      <li><a href="#producerapi">2.1 Producer API</a>
-			  <li><a href="#highlevelconsumerapi">2.2 High Level Consumer API</a>
-			  <li><a href="#simpleconsumerapi">2.3 Simple Consumer API</a>
-			  <li><a href="#kafkahadoopconsumerapi">2.4 Kafka Hadoop Consumer API</a>
-		  </ul>
-    <li><a href="#configuration">3. Configuration</a>
-	    <ul>
-		     <li><a href="#brokerconfigs">3.1 Broker Configs</a>
-			 <li><a href="#consumerconfigs">3.2 Consumer Configs</a>
-		     <li><a href="#producerconfigs">3.3 Producer Configs</a>
-		</ul>
-    <li><a href="#design">4. Design</a>
-	    <ul>
-		     <li><a href="#majordesignelements">4.1 Motivation</a>
-			 <li><a href="#persistence">4.2 Persistence</a>
-			 <li><a href="#maximizingefficiency">4.3 Efficiency</a>
-			 <li><a href="#theproducer">4.4 The Producer</a>
-			 <li><a href="#theconsumer">4.5 The Consumer</a>
-			 <li><a href="#semantics">4.6 Message Delivery Semantics</a>
-			 <li><a href="#replication">4.7 Replication</a>
-		</ul>
-	<li><a href="#implementation">5. Implementation</a>
-		<ul>
-			  <li><a href="#apidesign">5.1 API Design</a>
-			  <li><a href="#networklayer">5.2 Network Layer</a>
-			  <li><a href="#messages">5.3 Messages</a>
-			  <li><a href="#messageformat">5.4 Message format</a>
-			  <li><a href="#log">5.5 Log</a>
-			  <li><a href="#distributionimpl">5.6 Distribution</a>
-		</ul>
-	<li><a href="#operations">6. Operations</a>
-		<ul>
-			  <li><a href="#datacenters">6.1 Datacenters</a>
-			  <li><a href="#config">6.2 Config</a>
-				 <ul>
-					 <li><a href="#serverconfig">Important Server Configs</a>
-					 <li><a href="#clientconfig">Important Client Configs</a>
-					 <li><a href="#prodconfig">A Production Server Configs</a>
-        		 </ul>
-     		  <li><a href="#java">6.3 Java Version</a>
-	 		  <li><a href="#hwandos">6.4 Hardware and OS</a>
-				<ul>
-					<li><a href="#os">OS</a>
-					<li><a href="#diskandfs">Disks and Filesystems</a>
-					<li><a href="#appvsosflush">Application vs OS Flush Management</a>
-					<li><a href="#linuxflush">Linux Flush Behavior</a>
-					<li><a href="#ext4">Ext4 Notes</a>
-				</ul>
-			  <li><a href="#monitoring">6.5 Monitoring</a>
-			  <li><a href="#zk">6.6 Zookeeper</a>
-				<ul>
-					<li><a href="#zkversion">Stable Version</a>
-					<li><a href="#zkops">Operationalization</a>
-				</ul>
-		</ul>
-	<li><a href="#tools">7. Tools</a>
-</ul>
+    <h2><a id="gettingStarted">1. Getting Started</a></h2>
+    <!--#include virtual="introduction.html" -->
+    <!--#include virtual="uses.html" -->
+    <!--#include virtual="quickstart.html" -->
 
-<h2><a id="gettingStarted">1. Getting Started</a></h2>
-<!--#include virtual="introduction.html" -->
-<!--#include virtual="uses.html" -->
-<!--#include virtual="quickstart.html" -->
+    <h2><a id="api">2. API</a></h2>
 
-<h2><a id="api">2. API</a></h2>
+    <!--#include virtual="api.html" -->
 
-<!--#include virtual="api.html" -->
+    <h2><a id="configuration">3. Configuration</a></h2>
 
-<h2><a id="configuration">3. Configuration</a></h2>
+    <!--#include virtual="configuration.html" -->
 
-<!--#include virtual="configuration.html" -->
+    <h2><a id="design">4. Design</a></h2>
 
-<h2><a id="design">4. Design</a></h2>
+    <!--#include virtual="design.html" -->
 
-<!--#include virtual="design.html" -->
+    <h2><a id="implementation">5. Implementation</a></h2>
 
-<h2><a id="implementation">5. Implementation</a></h2>
+    <!--#include virtual="implementation.html" -->
 
-<!--#include virtual="implementation.html" -->
+    <h2><a id="operations">6. Operations</a></h2>
 
-<h2><a id="operations">6. Operations</a></h2>
+    <!--#include virtual="ops.html" -->
 
-<!--#include virtual="ops.html" -->
+    <h2><a id="tools">7. Tools</a></h2>
 
-<h2><a id="tools">7. Tools</a></h2>
-
-<!--#include virtual="tools.html" -->
+    <!--#include virtual="tools.html" -->
 
 <!--#include virtual="../includes/footer.html" -->

