kafka-commits mailing list archives

From guozh...@apache.org
Subject kafka git commit: resolve conflicts
Date Tue, 05 Jul 2016 02:58:12 GMT
Repository: kafka
Updated Branches:
  refs/heads/0.10.0 cdf019a82 -> 00d5becba

resolve conflicts

Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/00d5becb
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/00d5becb
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/00d5becb

Branch: refs/heads/0.10.0
Commit: 00d5becbadd397a595072f91fc687182fe41543a
Parents: cdf019a
Author: Alex Glikson <glikson@il.ibm.com>
Authored: Mon Jul 4 19:56:09 2016 -0700
Committer: Guozhang Wang <wangguoz@gmail.com>
Committed: Mon Jul 4 19:58:05 2016 -0700

 docs/quickstart.html | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/docs/quickstart.html b/docs/quickstart.html
index 73e5d6f..6c090d0 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -304,7 +304,16 @@ stream data will likely be flowing continuously into Kafka where the
-&gt; <b>cat /tmp/file-input.txt | ./bin/kafka-console-producer --broker-list localhost:9092 --topic streams-file-input</b>
+&gt; <b>bin/kafka-topics.sh --create \</b>
+            <b>--zookeeper localhost:2181 \</b>
+            <b>--replication-factor 1 \</b>
+            <b>--partitions 1 \</b>
+            <b>--topic streams-file-input</b>
+&gt; <b>cat file-input.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input</b>
@@ -312,7 +321,7 @@ We can now run the WordCount demo application to process the input data:
-&gt; <b>./bin/kafka-run-class org.apache.kafka.streams.examples.wordcount.WordCountDemo</b>
+&gt; <b>bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo</b>
@@ -324,18 +333,18 @@ We can now inspect the output of the WordCount demo application by reading
-&gt; <b>./bin/kafka-console-consumer --zookeeper localhost:2181 \</b>
+&gt; <b>bin/kafka-console-consumer.sh --zookeeper localhost:2181 \</b>
             <b>--topic streams-wordcount-output \</b>
             <b>--from-beginning \</b>
             <b>--formatter kafka.tools.DefaultMessageFormatter \</b>
             <b>--property print.key=true \</b>
-            <b>--property print.key=true \</b>
+            <b>--property print.value=true \</b>
             <b>--property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \</b>
             <b>--property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer</b>
-with the following output data being printed to the console (You can stop the console consumer via <b>Ctrl-C</b>):
+with the following output data being printed to the console:
@@ -350,11 +359,17 @@ streams 2
 join    1
 kafka   3
 summit  1
 Here, the first column is the Kafka message key, and the second column is the message value, both in <code>java.lang.String</code> format.
 Note that the output is actually a continuous stream of updates, where each data record (i.e. each line in the original output above) is an updated count of a single word, i.e. a record key such as "kafka". For multiple records with the same key, each later record is an update of the previous one.
\ No newline at end of file
+Now you can write more input messages to the <b>streams-file-input</b> topic and observe additional messages added to the <b>streams-wordcount-output</b> topic, reflecting updated word counts (e.g., using the console producer and the console consumer, as described above).
+<p>You can stop the console consumer via <b>Ctrl-C</b>.</p>
\ No newline at end of file
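For readers following the quickstart, the per-word tallies the WordCount demo produces can be approximated locally with standard Unix tools. This is a hedged sketch of the counting logic only, not the actual Streams topology (which emits a continuous stream of updates rather than a single batch result); the input lines are the ones the quickstart assumes in file-input.txt:

```shell
# Recreate the quickstart's sample input (assumed contents of file-input.txt).
printf '%s\n' \
  'all streams lead to kafka' \
  'hello kafka streams' \
  'join kafka summit' > file-input.txt

# Approximate the WordCount topology: split each line into one word per line,
# then count occurrences of each distinct word, highest counts first.
# For this input, "kafka" appears 3 times and "streams" twice, matching
# the counts shown in the diff above.
tr -s ' ' '\n' < file-input.txt | sort | uniq -c | sort -rn
```

Unlike this one-shot pipeline, the Streams application keeps running and re-emits an updated count for a key each time a new record with that key arrives.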
