kafka-commits mailing list archives

From guozh...@apache.org
Subject [4/8] kafka-site git commit: Extract Streams from documentation as standalone docs
Date Wed, 14 Dec 2016 01:35:06 GMT
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/0101/introduction.html
----------------------------------------------------------------------
diff --git a/0101/introduction.html b/0101/introduction.html
index c9ab696..394a415 100644
--- a/0101/introduction.html
+++ b/0101/introduction.html
@@ -14,186 +14,193 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<h3>Kafka&trade; is <i>a distributed streaming platform</i>. What exactly
does that mean?</h3>
-<p>We think of a streaming platform as having three key capabilities:</p>
-<ol>
-	<li>It lets you publish and subscribe to streams of records. In this respect it is
similar to a message queue or enterprise messaging system.
-	<li>It lets you store streams of records in a fault-tolerant way.
-	<li>It lets you process streams of records as they occur.
-</ol>
-<p>What is Kafka good for?</p>
-<p>It gets used for two broad classes of application:</p>
-<ol>
-  <li>Building real-time streaming data pipelines that reliably get data between systems
or applications
-  <li>Building real-time streaming applications that transform or react to the streams
of data
-</ol>
-<p>To understand how Kafka does these things, let's dive in and explore Kafka's capabilities
from the bottom up.</p>
-<p>First a few concepts:</p>
-<ul>
-	<li>Kafka is run as a cluster on one or more servers.
-    <li>The Kafka cluster stores streams of <i>records</i> in categories
called <i>topics</i>.
-	<li>Each record consists of a key, a value, and a timestamp.
-</ul>
-<p>Kafka has four core APIs:</p>
-<div style="overflow: hidden;">
-    <ul style="float: left; width: 40%;">
-    <li>The <a href="/documentation.html#producerapi">Producer API</a>
allows an application to publish a stream records to one or more Kafka topics.
-    <li>The <a href="/documentation.html#consumerapi">Consumer API</a>
allows an application to subscribe to one or more topics and process the stream of records
produced to them.
-	<li>The <a href="/documentation.html#streams">Streams API</a> allows an
application to act as a <i>stream processor</i>, consuming an input stream from
one or more topics and producing an output stream to one or more output topics, effectively
transforming the input streams to output streams.
-	<li>The <a href="/documentation.html#connect">Connector API</a> allows
building and running reusable producers or consumers that connect Kafka topics to existing
applications or data systems. For example, a connector to a relational database might capture
every change to a table.
-</ul>
-    <img src="images/kafka-apis.png" style="float: right; width: 50%;">
-    </div>
-<p>
-In Kafka the communication between the clients and the servers is done with a simple, high-performance,
language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>.
This protocol is versioned and maintains backwards compatibility with older version. We provide
a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many
languages</a>.</p>
 
-<h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
-<p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the
topic.</p>
-<p>A topic is a category or feed name to which records are published. Topics in Kafka
are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe
to the data written to it.</p>
-<p> For each topic, the Kafka cluster maintains a partitioned log that looks like this:
</p>
-<img class="centered" src="images/log_anatomy.png">
+<script><!--#include virtual="js/templateData.js" --></script>
 
-<p> Each partition is an ordered, immutable sequence of records that is continually
appended to&mdash;a structured commit log. The records in the partitions are each assigned
a sequential id number called the <i>offset</i> that uniquely identifies each
record within the partition.
-</p>
-<p>
-The Kafka cluster retains all published records&mdash;whether or not they have been consumed&mdash;using
a configurable retention period. For example, if the retention policy is set to two days,
then for the two days after a record is published, it is available for consumption, after
which it will be discarded to free up space. Kafka's performance is effectively constant with
respect to data size so storing data for a long time is not a problem.
-</p>
-<img class="centered" src="images/log_consumer.png" style="width:400px">
-<p>
-In fact, the only metadata retained on a per-consumer basis is the offset or position of
that consumer in the log. This offset is controlled by the consumer: normally a consumer will
advance its offset linearly as it reads records, but, in fact, since the position is controlled
by the consumer it can consume records in any order it likes. For example a consumer can reset
to an older offset to reprocess data from the past or skip ahead to the most recent record
and start consuming from "now".
-</p>
-<p>
-This combination of features means that Kafka consumers are very cheap&mdash;they can
come and go without much impact on the cluster or on other consumers. For example, you can
use our command line tools to "tail" the contents of any topic without changing what is consumed
by any existing consumers.
-</p>
-<p>
-The partitions in the log serve several purposes. First, they allow the log to scale beyond
a size that will fit on a single server. Each individual partition must fit on the servers
that host it, but a topic may have many partitions so it can handle an arbitrary amount of
data. Second they act as the unit of parallelism&mdash;more on that in a bit.
-</p>
+<script id="introduction-template" type="text/x-handlebars-template">
+  <h3>Kafka&trade; is <i>a distributed streaming platform</i>. What
exactly does that mean?</h3>
+  <p>We think of a streaming platform as having three key capabilities:</p>
+  <ol>
+    <li>It lets you publish and subscribe to streams of records. In this respect it
is similar to a message queue or enterprise messaging system.
+    <li>It lets you store streams of records in a fault-tolerant way.
+    <li>It lets you process streams of records as they occur.
+  </ol>
+  <p>What is Kafka good for?</p>
+  <p>It gets used for two broad classes of application:</p>
+  <ol>
+    <li>Building real-time streaming data pipelines that reliably get data between
systems or applications
+    <li>Building real-time streaming applications that transform or react to the streams
of data
+  </ol>
+  <p>To understand how Kafka does these things, let's dive in and explore Kafka's capabilities
from the bottom up.</p>
+  <p>First a few concepts:</p>
+  <ul>
+    <li>Kafka is run as a cluster on one or more servers.
+    <li>The Kafka cluster stores streams of <i>records</i> in categories
called <i>topics</i>.
+    <li>Each record consists of a key, a value, and a timestamp.
+  </ul>
+  <p>Kafka has four core APIs:</p>
+  <div style="overflow: hidden;">
+      <ul style="float: left; width: 40%;">
+      <li>The <a href="/documentation.html#producerapi">Producer API</a>
allows an application to publish a stream of records to one or more Kafka topics.
+      <li>The <a href="/documentation.html#consumerapi">Consumer API</a>
allows an application to subscribe to one or more topics and process the stream of records
produced to them.
+    <li>The <a href="/documentation.html#streams">Streams API</a> allows
an application to act as a <i>stream processor</i>, consuming an input stream
from one or more topics and producing an output stream to one or more output topics, effectively
transforming the input streams to output streams.
+    <li>The <a href="/documentation.html#connect">Connector API</a> allows
building and running reusable producers or consumers that connect Kafka topics to existing
applications or data systems. For example, a connector to a relational database might capture
every change to a table.
+  </ul>
+      <img src="/{{version}}/images/kafka-apis.png" style="float: right; width: 50%;">
+      </div>
+  <p>
+  In Kafka the communication between the clients and the servers is done with a simple, high-performance,
language-agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>.
This protocol is versioned and maintains backwards compatibility with older versions. We provide
a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many
languages</a>.</p>
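+  <p>
+  As a minimal sketch of the Producer API in that Java client (the broker address, topic
+  name, and serializers below are illustrative values, not defaults), publishing a single
+  record looks like this:
+  </p>
+  <pre>
+import java.util.Properties;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+// Create a producer, publish one record to the "my-topic" topic, then close (which flushes)
+Producer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props);
+producer.send(new ProducerRecord&lt;String, String&gt;("my-topic", "key", "value"));
+producer.close();
+  </pre>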
 
-<h4><a id="intro_distribution" href="#intro_distribution">Distribution</a></h4>
+  <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
+  <p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the
topic.</p>
+  <p>A topic is a category or feed name to which records are published. Topics in Kafka
are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe
to the data written to it.</p>
+  <p> For each topic, the Kafka cluster maintains a partitioned log that looks like
this: </p>
+  <img class="centered" src="/{{version}}/images/log_anatomy.png">
 
-<p>
-The partitions of the log are distributed over the servers in the Kafka cluster with each
server handling data and requests for a share of the partitions. Each partition is replicated
across a configurable number of servers for fault tolerance.
-</p>
-<p>
-Each partition has one server which acts as the "leader" and zero or more servers which act
as "followers". The leader handles all read and write requests for the partition while the
followers passively replicate the leader. If the leader fails, one of the followers will automatically
become the new leader. Each server acts as a leader for some of its partitions and a follower
for others so load is well balanced within the cluster.
-</p>
+  <p> Each partition is an ordered, immutable sequence of records that is continually
appended to&mdash;a structured commit log. The records in the partitions are each assigned
a sequential id number called the <i>offset</i> that uniquely identifies each
record within the partition.
+  </p>
+  <p>
+  The Kafka cluster retains all published records&mdash;whether or not they have been
consumed&mdash;using a configurable retention period. For example, if the retention policy
is set to two days, then for the two days after a record is published, it is available for
consumption, after which it will be discarded to free up space. Kafka's performance is effectively
constant with respect to data size so storing data for a long time is not a problem.
+  </p>
+  <img class="centered" src="/{{version}}/images/log_consumer.png" style="width:400px">
+  <p>
+  In fact, the only metadata retained on a per-consumer basis is the offset or position of
that consumer in the log. This offset is controlled by the consumer: normally a consumer will
advance its offset linearly as it reads records, but, in fact, since the position is controlled
by the consumer it can consume records in any order it likes. For example a consumer can reset
to an older offset to reprocess data from the past or skip ahead to the most recent record
and start consuming from "now".
+  </p>
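+  <p>
+  As a sketch of this offset control with the Java consumer (the topic name and the
+  single-partition assumption are hypothetical), a consumer can rewind to the start of a
+  partition or jump ahead to "now":
+  </p>
+  <pre>
+import java.util.Arrays;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.common.TopicPartition;
+
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+// Assign the partition directly so this consumer fully controls its own position
+KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props);
+TopicPartition partition = new TopicPartition("my-topic", 0);
+consumer.assign(Arrays.asList(partition));
+
+consumer.seek(partition, 0L);                    // reprocess everything from offset 0
+// consumer.seekToEnd(Arrays.asList(partition)); // or skip ahead and consume from "now"
+  </pre>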
+  <p>
+  This combination of features means that Kafka consumers are very cheap&mdash;they can
come and go without much impact on the cluster or on other consumers. For example, you can
use our command line tools to "tail" the contents of any topic without changing what is consumed
by any existing consumers.
+  </p>
+  <p>
+  The partitions in the log serve several purposes. First, they allow the log to scale beyond
a size that will fit on a single server. Each individual partition must fit on the servers
that host it, but a topic may have many partitions so it can handle an arbitrary amount of
data. Second, they act as the unit of parallelism&mdash;more on that in a bit.
+  </p>
 
-<h4><a id="intro_producers" href="#intro_producers">Producers</a></h4>
-<p>
-Producers publish data to the topics of their choice. The producer is responsible for choosing
which record to assign to which partition within the topic. This can be done in a round-robin
fashion simply to balance load or it can be done according to some semantic partition function
(say based on some key in the record). More on the use of partitioning in a second!
-</p>
+  <h4><a id="intro_distribution" href="#intro_distribution">Distribution</a></h4>
 
-<h4><a id="intro_consumers" href="#intro_consumers">Consumers</a></h4>
+  <p>
+  The partitions of the log are distributed over the servers in the Kafka cluster with each
server handling data and requests for a share of the partitions. Each partition is replicated
across a configurable number of servers for fault tolerance.
+  </p>
+  <p>
+  Each partition has one server which acts as the "leader" and zero or more servers which
act as "followers". The leader handles all read and write requests for the partition while
the followers passively replicate the leader. If the leader fails, one of the followers will
automatically become the new leader. Each server acts as a leader for some of its partitions
and a follower for others so load is well balanced within the cluster.
+  </p>
 
-<p>
-Consumers label themselves with a <i>consumer group</i> name, and each record
published to a topic is delivered to one consumer instance within each subscribing consumer
group. Consumer instances can be in separate processes or on separate machines.
-</p>
-<p>
-If all the consumer instances have the same consumer group, then the records will effectively
be load balanced over the consumer instances.</p>
-<p>
-If all the consumer instances have different consumer groups, then each record will be broadcast
to all the consumer processes.
-</p>
-<img class="centered" src="images/consumer-groups.png">
-<p>
-  A two server Kafka cluster hosting four partitions (P0-P3) with two consumer groups. Consumer
group A has two consumer instances and group B has four.
-</p>
+  <h4><a id="intro_producers" href="#intro_producers">Producers</a></h4>
+  <p>
+  Producers publish data to the topics of their choice. The producer is responsible for choosing
which record to assign to which partition within the topic. This can be done in a round-robin
fashion simply to balance load or it can be done according to some semantic partition function
(say based on some key in the record). More on the use of partitioning in a second!
+  </p>
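+  <p>
+  A short sketch with the Java producer (the topic and key are hypothetical, and the
+  producer setup from the earlier example is assumed): omitting the key lets the default
+  partitioner balance records over partitions, while supplying one pins all records for
+  that key to a single partition.
+  </p>
+  <pre>
+// No key: the default partitioner balances records across partitions
+producer.send(new ProducerRecord&lt;String, String&gt;("page-views", "viewed /home"));
+
+// Keyed by user id: every record for "user-42" hashes to the same partition,
+// so a given user's views stay in order
+producer.send(new ProducerRecord&lt;String, String&gt;("page-views", "user-42", "viewed /home"));
+  </pre>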
 
-<p>
-More commonly, however, we have found that topics have a small number of consumer groups,
one for each "logical subscriber". Each group is composed of many consumer instances for scalability
and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber
is a cluster of consumers instead of a single process.
-</p>
-<p>
-The way consumption is implemented in Kafka is by dividing up the partitions in the log over
the consumer instances so that each instance is the exclusive consumer of a "fair share" of
partitions at any point in time. This process of maintaining membership in the group is handled
by the Kafka protocol dynamically. If new instances join the group they will take over some
partitions from other members of the group; if an instance dies, its partitions will be distributed
to the remaining instances.
-</p>
-<p>
-Kafka only provides a total order over records <i>within</i> a partition, not
between different partitions in a topic. Per-partition ordering combined with the ability
to partition data by key is sufficient for most applications. However, if you require a total
order over records this can be achieved with a topic that has only one partition, though this
will mean only one consumer process per consumer group.
-</p>
-<h4><a id="intro_guarantees" href="#intro_guarantees">Guarantees</a></h4>
-<p>
-At a high-level Kafka gives the following guarantees:
-</p>
-<ul>
-  <li>Messages sent by a producer to a particular topic partition will be appended
in the order they are sent. That is, if a record M1 is sent by the same producer as a record
M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the
log.
-  <li>A consumer instance sees records in the order they are stored in the log.
-  <li>For a topic with replication factor N, we will tolerate up to N-1 server failures
without losing any records committed to the log.
-</ul>
-<p>
-More details on these guarantees are given in the design section of the documentation.
-</p>
-<h4><a id="kafka_mq" href="#kafka_mq">Kafka as a Messaging System</a></h4>
-<p>
-How does Kafka's notion of streams compare to a traditional enterprise messaging system?
-</p>
-<p>
-Messaging traditionally has two models: <a href="http://en.wikipedia.org/wiki/Message_queue">queuing</a>
and <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern">publish-subscribe</a>.
In a queue, a pool of consumers may read from a server and each record goes to one of them;
in publish-subscribe the record is broadcast to all consumers. Each of these two models has
a strength and a weakness. The strength of queuing is that it allows you to divide up the
processing of data over multiple consumer instances, which lets you scale your processing.
Unfortunately, queues aren't multi-subscriber&mdash;once one process reads the data it's
gone. Publish-subscribe allows you broadcast data to multiple processes, but has no way of
scaling processing since every message goes to every subscriber.
-</p>
-<p>
-The consumer group concept in Kafka generalizes these two concepts. As with a queue the consumer
group allows you to divide up processing over a collection of processes (the members of the
consumer group). As with publish-subscribe, Kafka allows you to broadcast messages to multiple
consumer groups.
-</p>
-<p>
-The advantage of Kafka's model is that every topic has both these properties&mdash;it
can scale processing and is also multi-subscriber&mdash;there is no need to choose one
or the other.
-</p>
-<p>
-Kafka has stronger ordering guarantees than a traditional messaging system, too.
-</p>
-<p>
-A traditional queue retains records in-order on the server, and if multiple consumers consume
from the queue then the server hands out records in the order they are stored. However, although
the server hands out records in order, the records are delivered asynchronously to consumers,
so they may arrive out of order on different consumers. This effectively means the ordering
of the records is lost in the presence of parallel consumption. Messaging systems often work
around this by having a notion of "exclusive consumer" that allows only one process to consume
from a queue, but of course this means that there is no parallelism in processing.
-</p>
-<p>
-Kafka does it better. By having a notion of parallelism&mdash;the partition&mdash;within
the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool
of consumer processes. This is achieved by assigning the partitions in the topic to the consumers
in the consumer group so that each partition is consumed by exactly one consumer in the group.
By doing this we ensure that the consumer is the only reader of that partition and consumes
the data in order. Since there are many partitions this still balances the load over many
consumer instances. Note however that there cannot be more consumer instances in a consumer
group than partitions.
-</p>
+  <h4><a id="intro_consumers" href="#intro_consumers">Consumers</a></h4>
 
-<h4>Kafka as a Storage System</h4>
+  <p>
+  Consumers label themselves with a <i>consumer group</i> name, and each record
published to a topic is delivered to one consumer instance within each subscribing consumer
group. Consumer instances can be in separate processes or on separate machines.
+  </p>
+  <p>
+  If all the consumer instances have the same consumer group, then the records will effectively
be load balanced over the consumer instances.</p>
+  <p>
+  If all the consumer instances have different consumer groups, then each record will be
broadcast to all the consumer processes.
+  </p>
+  <img class="centered" src="/{{version}}/images/consumer-groups.png">
+  <p>
+    A two-server Kafka cluster hosting four partitions (P0-P3) with two consumer groups.
Consumer group A has two consumer instances and group B has four.
+  </p>
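+  <p>
+  Joining a group like A or B above only requires that instances share a
+  <code>group.id</code>; a minimal sketch with the Java consumer (all names here are
+  hypothetical):
+  </p>
+  <pre>
+import java.util.Arrays;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+props.put("group.id", "group-a"); // instances sharing this id divide the partitions among themselves
+props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props);
+consumer.subscribe(Arrays.asList("my-topic"));
+while (true) {
+    ConsumerRecords&lt;String, String&gt; records = consumer.poll(100);
+    for (ConsumerRecord&lt;String, String&gt; record : records)
+        System.out.printf("partition=%d offset=%d value=%s%n",
+                          record.partition(), record.offset(), record.value());
+}
+  </pre>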
 
-<p>
-Any message queue that allows publishing messages decoupled from consuming them is effectively
acting as a storage system for the in-flight messages. What is different about Kafka is that
it is a very good storage system.
-</p>
-<p>
-Data written to Kafka is written to disk and replicated for fault-tolerance. Kafka allows
producers to wait on acknowledgement so that a write isn't considered complete until it is
fully replicated and guaranteed to persist even if the server written to fails.
-</p>
-<p>
-The disk structures Kafka uses scale well&mdash;Kafka will perform the same whether you
have 50 KB or 50 TB of persistent data on the server.
-</p>
-<p>
-As a result of taking storage seriously and allowing the clients to control their read position,
you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance,
low-latency commit log storage, replication, and propagation.
-</p>
-<h4>Kafka for Stream Processing</h4>
-<p>
-It isn't enough to just read, write, and store streams of data, the purpose is to enable
real-time processing of streams.
-</p>
-<p>
-In Kafka a stream processor is anything that takes continual streams of  data from input
topics, performs some processing on this input, and produces continual streams of data to
output topics.
-</p>
-<p>
-For example, a retail application might take in input streams of sales and shipments, and
output a stream of reorders and price adjustments computed off this data.
-</p>
-<p>
-It is possible to do simple processing directly using the producer and consumer APIs. However
for more complex transformations Kafka provides a fully integrated <a href="/documentation.html#streams">Streams
API</a>. This allows building applications that do non-trivial processing that compute
aggregations off of streams or join streams together.
-</p>
-<p>
-This facility helps solve the hard problems this type of application faces: handling out-of-order
data, reprocessing input as code changes, performing stateful computations, etc.
-</p>
-<p>
-The streams API builds on the core primitives Kafka provides: it uses the producer and consumer
APIs for input, uses Kafka for stateful storage, and uses the same group mechanism for fault
tolerance among the stream processor instances.
-</p>
-<h4>Putting the Pieces Together</h4>
-<p>
-This combination of messaging, storage, and stream processing may seem unusual but it is
essential to Kafka's role as a streaming platform.
-</p>
-<p>
-A distributed file system like HDFS allows storing static files for batch processing. Effectively
a system like this allows storing and processing <i>historical</i> data from the
past.
-</p>
-<p>
-A traditional enterprise messaging system allows processing future messages that will arrive
after you subscribe. Applications built in this way process future data as it arrives.
-</p>
-<p>
-Kafka combines both of these capabilities, and the combination is critical both for Kafka
usage as a platform for streaming applications as well as for streaming data pipelines.
-</p>
-<p>
-By combining storage and low-latency subscriptions, streaming applications can treat both
past and future data the same way. That is a single application can process historical, stored
data but rather than ending when it reaches the last record it can keep processing as future
data arrives. This is a generalized notion of stream processing that subsumes batch processing
as well as message-driven applications.
-</p>
-<p>
-Likewise for streaming data pipelines the combination of subscription to real-time events
make it possible to use Kafka for very low-latency pipelines; but the ability to store data
reliably make it possible to use it for critical data where the delivery of data must be guaranteed
or for integration with offline systems that load data only periodically or may go down for
extended periods of time for maintenance. The stream processing facilities make it possible
to transform data as it arrives.
-</p>
-<p>
-For more information on the guarantees, apis, and capabilities Kafka provides see the rest
of the <a href="/documentation.html">documentation</a>.
-</p>
+  <p>
+  More commonly, however, we have found that topics have a small number of consumer groups,
one for each "logical subscriber". Each group is composed of many consumer instances for scalability
and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber
is a cluster of consumers instead of a single process.
+  </p>
+  <p>
+  The way consumption is implemented in Kafka is by dividing up the partitions in the log
over the consumer instances so that each instance is the exclusive consumer of a "fair share"
of partitions at any point in time. This process of maintaining membership in the group is
handled by the Kafka protocol dynamically. If new instances join the group they will take
over some partitions from other members of the group; if an instance dies, its partitions
will be distributed to the remaining instances.
+  </p>
+  <p>
+  Kafka only provides a total order over records <i>within</i> a partition, not
between different partitions in a topic. Per-partition ordering combined with the ability
to partition data by key is sufficient for most applications. However, if you require a total
order over records this can be achieved with a topic that has only one partition, though this
will mean only one consumer process per consumer group.
+  </p>
+  <h4><a id="intro_guarantees" href="#intro_guarantees">Guarantees</a></h4>
+  <p>
+  At a high level, Kafka gives the following guarantees:
+  </p>
+  <ul>
+    <li>Messages sent by a producer to a particular topic partition will be appended
in the order they are sent. That is, if a record M1 is sent by the same producer as a record
M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the
log (see the sketch after this list).
+    <li>A consumer instance sees records in the order they are stored in the log.
+    <li>For a topic with replication factor N, we will tolerate up to N-1 server failures
without losing any records committed to the log.
+  </ul>
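+  <p>
+  A sketch of the first guarantee using the metadata the Java producer returns (the topic,
+  key, and values are hypothetical; the shared key ensures both records go to the same
+  partition, and the producer setup from the earlier example is assumed):
+  </p>
+  <pre>
+import org.apache.kafka.clients.producer.RecordMetadata;
+
+// Send two records synchronously from the same producer, M1 first
+// (checked exceptions from get() elided for brevity)
+RecordMetadata m1 = producer.send(new ProducerRecord&lt;String, String&gt;("events", "k", "M1")).get();
+RecordMetadata m2 = producer.send(new ProducerRecord&lt;String, String&gt;("events", "k", "M2")).get();
+
+// Same key, so same partition; and M1 was appended first
+assert m1.partition() == m2.partition();
+assert m1.offset() &lt; m2.offset();
+  </pre>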
+  <p>
+  More details on these guarantees are given in the design section of the documentation.
+  </p>
+  <h4><a id="kafka_mq" href="#kafka_mq">Kafka as a Messaging System</a></h4>
+  <p>
+  How does Kafka's notion of streams compare to a traditional enterprise messaging system?
+  </p>
+  <p>
+  Messaging traditionally has two models: <a href="http://en.wikipedia.org/wiki/Message_queue">queuing</a>
and <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern">publish-subscribe</a>.
In a queue, a pool of consumers may read from a server and each record goes to one of them;
in publish-subscribe the record is broadcast to all consumers. Each of these two models has
a strength and a weakness. The strength of queuing is that it allows you to divide up the
processing of data over multiple consumer instances, which lets you scale your processing.
Unfortunately, queues aren't multi-subscriber&mdash;once one process reads the data it's
gone. Publish-subscribe allows you to broadcast data to multiple processes, but has no way of
scaling processing since every message goes to every subscriber.
+  </p>
+  <p>
+  The consumer group concept in Kafka generalizes these two models. As with a queue, the
consumer group allows you to divide up processing over a collection of processes (the members
of the consumer group). As with publish-subscribe, Kafka allows you to broadcast messages
to multiple consumer groups.
+  </p>
+  <p>
+  The advantage of Kafka's model is that every topic has both these properties&mdash;it
can scale processing and is also multi-subscriber&mdash;there is no need to choose one
or the other.
+  </p>
+  <p>
+  Kafka has stronger ordering guarantees than a traditional messaging system, too.
+  </p>
+  <p>
+  A traditional queue retains records in order on the server, and if multiple consumers consume
from the queue then the server hands out records in the order they are stored. However, although
the server hands out records in order, the records are delivered asynchronously to consumers,
so they may arrive out of order on different consumers. This effectively means the ordering
of the records is lost in the presence of parallel consumption. Messaging systems often work
around this by having a notion of "exclusive consumer" that allows only one process to consume
from a queue, but of course this means that there is no parallelism in processing.
+  </p>
+  <p>
+  Kafka does it better. By having a notion of parallelism&mdash;the partition&mdash;within
the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool
of consumer processes. This is achieved by assigning the partitions in the topic to the consumers
in the consumer group so that each partition is consumed by exactly one consumer in the group.
By doing this we ensure that the consumer is the only reader of that partition and consumes
the data in order. Since there are many partitions this still balances the load over many
consumer instances. Note however that there cannot be more consumer instances in a consumer
group than partitions.
+  </p>
+
+  <h4>Kafka as a Storage System</h4>
+
+  <p>
+  Any message queue that allows publishing messages decoupled from consuming them is effectively
acting as a storage system for the in-flight messages. What is different about Kafka is that
it is a very good storage system.
+  </p>
+  <p>
+  Data written to Kafka is written to disk and replicated for fault-tolerance. Kafka allows
producers to wait on acknowledgement so that a write isn't considered complete until it is
fully replicated and guaranteed to persist even if the server written to fails.
+  </p>
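+  <p>
+  Whether a producer waits for that acknowledgement is configurable. A sketch with the Java
+  client (the topic name and retry count are illustrative, not defaults):
+  </p>
+  <pre>
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+props.put("acks", "all");   // a send succeeds only once the write is fully replicated
+props.put("retries", 3);    // retry transient failures rather than dropping the write
+props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+Producer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props);
+// Blocking on the returned future waits for the fully replicated acknowledgement
+// (checked exceptions from get() elided for brevity)
+producer.send(new ProducerRecord&lt;String, String&gt;("critical-events", "key", "value")).get();
+  </pre>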
+  <p>
+  The disk structures Kafka uses scale well&mdash;Kafka will perform the same whether
you have 50 KB or 50 TB of persistent data on the server.
+  </p>
+  <p>
+  As a result of taking storage seriously and allowing the clients to control their read
position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated
to high-performance, low-latency commit log storage, replication, and propagation.
+  </p>
+  <h4>Kafka for Stream Processing</h4>
+  <p>
+  It isn't enough to just read, write, and store streams of data; the purpose is to enable
real-time processing of streams.
+  </p>
+  <p>
+  In Kafka, a stream processor is anything that takes continual streams of data from input
topics, performs some processing on this input, and produces continual streams of data to
output topics.
+  </p>
+  <p>
+  For example, a retail application might take in input streams of sales and shipments, and
output a stream of reorders and price adjustments computed off this data.
+  </p>
+  <p>
+  It is possible to do simple processing directly using the producer and consumer APIs. However,
for more complex transformations, Kafka provides a fully integrated <a href="/documentation.html#streams">Streams
API</a>. This allows building applications that do non-trivial processing: computing
aggregations over streams or joining streams together.
+  </p>
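+  <p>
+  A sketch of the Streams API echoing the retail example above (the topic names, message
+  format, and reorder rule are all hypothetical):
+  </p>
+  <pre>
+import java.util.Properties;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.KStreamBuilder;
+
+Properties props = new Properties();
+props.put(StreamsConfig.APPLICATION_ID_CONFIG, "reorder-app");
+props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+
+// Read each sale (key = product id, value = units left in stock) and
+// emit a reorder record whenever a product runs out
+KStreamBuilder builder = new KStreamBuilder();
+KStream&lt;String, String&gt; sales = builder.stream("sales");
+sales.filter((product, unitsLeft) -&gt; Integer.parseInt(unitsLeft) == 0)
+     .to("reorders");
+
+KafkaStreams streams = new KafkaStreams(builder, props);
+streams.start();
+  </pre>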
+  <p>
+  This facility helps solve the hard problems this type of application faces: handling out-of-order
data, reprocessing input as code changes, performing stateful computations, etc.
+  </p>
+  <p>
+  The Streams API builds on the core primitives Kafka provides: it uses the producer and
consumer APIs for input, uses Kafka for stateful storage, and uses the same group mechanism
for fault tolerance among the stream processor instances.
+  </p>
+  <h4>Putting the Pieces Together</h4>
+  <p>
+  This combination of messaging, storage, and stream processing may seem unusual but it is
essential to Kafka's role as a streaming platform.
+  </p>
+  <p>
+  A distributed file system like HDFS allows storing static files for batch processing. Effectively
a system like this allows storing and processing <i>historical</i> data from the
past.
+  </p>
+  <p>
+  A traditional enterprise messaging system allows processing future messages that will arrive
after you subscribe. Applications built in this way process future data as it arrives.
+  </p>
+  <p>
+  Kafka combines both of these capabilities, and the combination is critical both for Kafka
usage as a platform for streaming applications as well as for streaming data pipelines.
+  </p>
+  <p>
+  By combining storage and low-latency subscriptions, streaming applications can treat both
past and future data the same way. That is, a single application can process historical, stored
data but rather than ending when it reaches the last record it can keep processing as future
data arrives. This is a generalized notion of stream processing that subsumes batch processing
as well as message-driven applications.
+  </p>
+  <p>
+  Likewise, for streaming data pipelines, the combination of subscription to real-time events
makes it possible to use Kafka for very low-latency pipelines; and the ability to store data
reliably makes it possible to use it for critical data where the delivery of data must be guaranteed
or for integration with offline systems that load data only periodically or may go down for
extended periods of time for maintenance. The stream processing facilities make it possible
to transform data as it arrives.
+  </p>
+  <p>
+  For more information on the guarantees, APIs, and capabilities Kafka provides, see the rest
of the <a href="/documentation.html">documentation</a>.
+  </p>
+</script>
+
+<div class="p-introduction"></div>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/0101/js/templateData.js
----------------------------------------------------------------------
diff --git a/0101/js/templateData.js b/0101/js/templateData.js
new file mode 100644
index 0000000..40c5da1
--- /dev/null
+++ b/0101/js/templateData.js
@@ -0,0 +1,21 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Define variables for doc templates
+var context={
+    "version": "0101"
+};
\ No newline at end of file

