kafka-commits mailing list archives

From j...@apache.org
Subject kafka git commit: MINOR: Update Quickstart in documentation to account for Windows platforms
Date Sun, 09 Oct 2016 17:32:00 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk 44d18d273 -> 179d06985


MINOR: Update Quickstart in documentation to account for Windows platforms

Author: Vahid Hashemian <vahidhashemian@us.ibm.com>

Reviewers: Jason Gustafson <jason@confluent.io>

Closes #1990 from vahidhashemian/doc/quickstart_update_windows


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/179d0698
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/179d0698
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/179d0698

Branch: refs/heads/trunk
Commit: 179d0698573bfc75b2c91185bdcb8b4d2f3038e0
Parents: 44d18d2
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Authored: Sun Oct 9 10:30:52 2016 -0700
Committer: Jason Gustafson <jason@confluent.io>
Committed: Sun Oct 9 10:32:01 2016 -0700

----------------------------------------------------------------------
 docs/quickstart.html | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/179d0698/docs/quickstart.html
----------------------------------------------------------------------
diff --git a/docs/quickstart.html b/docs/quickstart.html
index 32d6125..3066960 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -18,6 +18,7 @@
 <h3><a id="quickstart" href="#quickstart">1.3 Quick Start</a></h3>
 
 This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data.
+Since the Kafka console scripts are different for Unix-based and Windows platforms, on Windows use <code>bin\windows\</code> instead of <code>bin/</code>, and change the script extension to <code>.bat</code>.
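For a concrete sense of the substitution, a later quickstart command would translate roughly as follows on Windows (a sketch; it assumes the stock scripts shipped under bin\windows\ in the distribution):

Unix-based platforms:
> bin/kafka-server-start.sh config/server.properties
Windows:
> bin\windows\kafka-server-start.bat config\server.properties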
 
 <h4><a id="quickstart_download" href="#quickstart_download">Step 1: Download the code</a></h4>
 
@@ -93,7 +94,7 @@ All of the command line tools have additional options; running the command with
 
 So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).
 <p>
-First we make a config file for each of the brokers:
+First we make a config file for each of the brokers (on Windows use the <code>copy</code> command instead):
 <pre>
 &gt; <b>cp config/server.properties config/server-1.properties</b>
 &gt; <b>cp config/server.properties config/server-2.properties</b>
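For reference, a sketch of the equivalent commands using the Windows copy command (assuming the same config directory layout):

> copy config\server.properties config\server-1.properties
> copy config\server.properties config\server-2.properties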
@@ -173,6 +174,13 @@ Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's k
 &gt; <b>kill -9 7564</b>
 </pre>
 
+On Windows use:
+<pre>
+&gt; <b>wmic process get processid,caption,commandline | find "java.exe" | find "server-1.properties"</b>
+java.exe    java  -Xmx1G -Xms1G -server -XX:+UseG1GC ... build\libs\kafka_2.10-0.10.1.0.jar" kafka.Kafka config\server-1.properties    <i>644</i>
+&gt; <b>taskkill /pid 644 /f</b>
+</pre>
+
 Leadership has switched to one of the slaves and node 1 is no longer in the in-sync replica set:
 <pre>
 &gt; <b>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic</b>
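On Windows the same check would read roughly as follows (a sketch using the bundled .bat script, which takes the same options):

> bin\windows\kafka-topics.bat --describe --zookeeper localhost:2181 --topic my-replicated-topic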
@@ -297,6 +305,12 @@ We will now prepare input data to a Kafka topic, which will subsequently process
 <pre>
 &gt; <b>echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt</b>
 </pre>
+Or on Windows:
+<pre>
+&gt; <b>echo all streams lead to kafka> file-input.txt</b>
+&gt; <b>echo hello kafka streams>> file-input.txt</b>
+&gt; <b>echo|set /p=join kafka summit>> file-input.txt</b>
+</pre>
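The last line uses echo|set /p= rather than a plain echo, presumably so that no trailing newline is written. A quick way to verify the result (illustrative only, not part of the patch):

> type file-input.txt
all streams lead to kafka
hello kafka streams
join kafka summit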
 
 <p>
 Next, we send this input data to the input topic named <b>streams-file-input</b> using the console producer (in practice,
@@ -313,7 +327,7 @@ stream data will likely be flowing continuously into Kafka where the application
 
 
 <pre>
-&gt; <b>cat file-input.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input</b>
+&gt; <b>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input < file-input.txt</b>
 </pre>
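The Windows form of the same step would be roughly (a sketch; the .bat script accepts the same options):

> bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic streams-file-input < file-input.txt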
 
 <p>
@@ -349,12 +363,9 @@ with the following output data being printed to the console:
 
 <pre>
 all     1
-streams 1
 lead    1
 to      1
-kafka   1
 hello   1
-kafka   2
 streams 2
 join    1
 kafka   3

