kafka-commits mailing list archives

From guozh...@apache.org
Subject kafka git commit: MINOR: Fix few documentation errors in streams quickstart
Date Tue, 05 Jul 2016 02:56:13 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk d7de59a57 -> 87b3ce16c

MINOR: Fix few documentation errors in streams quickstart

Plus a minor enhancement

Author: glikson <glikson@il.ibm.com>

Reviewers: Guozhang Wang <wangguoz@gmail.com>

Closes #1571 from glikson/patch-1

Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/87b3ce16
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/87b3ce16
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/87b3ce16

Branch: refs/heads/trunk
Commit: 87b3ce16c05eb105f1ee0da26b090e02b6c18678
Parents: d7de59a
Author: Alex Glikson <glikson@il.ibm.com>
Authored: Mon Jul 4 19:56:09 2016 -0700
Committer: Guozhang Wang <wangguoz@gmail.com>
Committed: Mon Jul 4 19:56:09 2016 -0700

 docs/quickstart.html | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/docs/quickstart.html b/docs/quickstart.html
index b60812e..7cb1f2d 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -304,7 +304,16 @@ stream data will likely be flowing continuously into Kafka where the
-&gt; <b>cat /tmp/file-input.txt | ./bin/kafka-console-producer --broker-list localhost:9092 --topic streams-file-input</b>
+&gt; <b>bin/kafka-topics.sh --create \</b>
+            <b>--zookeeper localhost:2181 \</b>
+            <b>--replication-factor 1 \</b>
+            <b>--partitions 1 \</b>
+            <b>--topic streams-file-input</b>
+&gt; <b>cat file-input.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input</b>
@@ -312,7 +321,7 @@ We can now run the WordCount demo application to process the input data:
-&gt; <b>./bin/kafka-run-class org.apache.kafka.streams.examples.wordcount.WordCountDemo</b>
+&gt; <b>bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo</b>
@@ -324,18 +333,18 @@ We can now inspect the output of the WordCount demo application by reading
-&gt; <b>./bin/kafka-console-consumer --zookeeper localhost:2181 \</b>
+&gt; <b>bin/kafka-console-consumer.sh --zookeeper localhost:2181 \</b>
             <b>--topic streams-wordcount-output \</b>
             <b>--from-beginning \</b>
             <b>--formatter kafka.tools.DefaultMessageFormatter \</b>
             <b>--property print.key=true \</b>
-            <b>--property print.key=true \</b>
+            <b>--property print.value=true \</b>
             <b>--property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
             <b>--property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer</b>
-with the following output data being printed to the console (You can stop the console consumer via <b>Ctrl-C</b>):
+with the following output data being printed to the console:
@@ -350,7 +359,6 @@ streams 2
 join    1
 kafka   3
 summit  1
@@ -358,3 +366,11 @@ Here, the first column is the Kafka message key, and the second column is the me
 Note that the output is actually a continuous stream of updates, where each data record (i.e. each line in the original output above) is
 an updated count of a single word, aka record key such as "kafka". For multiple records with the same key, each later record is an update of the previous one.
+Now you can write more input messages to the <b>streams-file-input</b> topic and observe additional messages added
+to <b>streams-wordcount-output</b> topic, reflecting updated word counts (e.g., using the console producer and the console consumer, as described above).
+<p>You can stop the console consumer via <b>Ctrl-C</b>.</p>
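
Taken together, the patched quickstart walks through the following command sequence. This is a sketch assembled from the corrected lines of the diff above, assuming the local single-node setup the quickstart uses (ZooKeeper on localhost:2181, a broker on localhost:9092, and `file-input.txt` in the current directory):

```shell
# 1. Create the input topic first (the step this patch adds to the docs)
bin/kafka-topics.sh --create \
            --zookeeper localhost:2181 \
            --replication-factor 1 \
            --partitions 1 \
            --topic streams-file-input

# 2. Produce the sample file into the input topic
#    (script names now carry the .sh suffix present in the distribution)
cat file-input.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 \
            --topic streams-file-input

# 3. Run the WordCount demo application
bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo

# 4. Inspect the output topic; note print.value=true, which replaces the
#    duplicated print.key=true that this patch fixes
bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
            --topic streams-wordcount-output \
            --from-beginning \
            --formatter kafka.tools.DefaultMessageFormatter \
            --property print.key=true \
            --property print.value=true \
            --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
            --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
```

These commands require a running ZooKeeper and Kafka broker, so they are shown as a reference sequence rather than a standalone script.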
