kafka-commits mailing list archives

From guozh...@apache.org
Subject kafka git commit: MINOR: Fix Streams examples in documentation
Date Tue, 13 Dec 2016 17:44:17 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk 21d7e6f19 -> 6626b058c


MINOR: Fix Streams examples in documentation

Performed minor cleanup and escaped `<` and `>` so code examples are shown correctly
in the browser.
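For context, the escaping amounts to replacing angle brackets (and ampersands first, to avoid double-escaping) with HTML entities so that generic type parameters render literally in the browser instead of being parsed as tags. A minimal Java sketch of the idea; the `escapeAngles` helper name is hypothetical and not part of this commit:

```java
// Minimal sketch of the fix's idea: HTML-escape '&', '<', and '>' so that
// code samples like KStream<String, GenericRecord> display correctly.
// '&' must be replaced first so the inserted entities are not re-escaped.
public class EscapeDemo {
    static String escapeAngles(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        String line = "KStream<String, GenericRecord> source1 = builder.stream(\"topic1\", \"topic2\");";
        System.out.println(escapeAngles(line));
        // KStream&lt;String, GenericRecord&gt; source1 = builder.stream("topic1", "topic2");
    }
}
```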

Author: Vahid Hashemian <vahidhashemian@us.ibm.com>

Reviewers: Matthias J. Sax, Guozhang Wang

Closes #2247 from vahidhashemian/doc/fix_streams_doc


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/6626b058
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/6626b058
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/6626b058

Branch: refs/heads/trunk
Commit: 6626b058c7893ce1192456b3fc7617f98da6f3cc
Parents: 21d7e6f
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Authored: Tue Dec 13 09:44:11 2016 -0800
Committer: Guozhang Wang <wangguoz@gmail.com>
Committed: Tue Dec 13 09:44:11 2016 -0800

----------------------------------------------------------------------
 docs/streams.html | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/6626b058/docs/streams.html
----------------------------------------------------------------------
diff --git a/docs/streams.html b/docs/streams.html
index 74620ec..306b2a5 100644
--- a/docs/streams.html
+++ b/docs/streams.html
@@ -279,8 +279,8 @@ from a single topic).
 <pre>
     KStreamBuilder builder = new KStreamBuilder();
 
-    KStream<String, GenericRecord> source1 = builder.stream("topic1", "topic2");
-    KTable<String, GenericRecord> source2 = builder.table("topic3", "stateStoreName");
+    KStream&lt;String, GenericRecord&gt; source1 = builder.stream("topic1", "topic2");
+    KTable&lt;String, GenericRecord&gt; source2 = builder.table("topic3", "stateStoreName");
 </pre>
 
 <h5><a id="streams_dsl_windowing" href="#streams_dsl_windowing">Windowing a stream</a></h5>
@@ -301,7 +301,7 @@ A <b>join</b> operation merges two streams based on the keys of their data records
   </ul>
 
 Depending on the operands the following join operations are supported: <b>inner joins</b>, <b>outer joins</b> and <b>left joins</b>. Their semantics are similar to the corresponding operators in relational databases.
-a
+
 <h5><a id="streams_dsl_transform" href="#streams_dsl_transform">Transform a stream</a></h5>
 
 <p>
@@ -323,12 +323,12 @@ where users can usually pass a customized function to these functions as a parameter
 
 <pre>
     // written in Java 8+, using lambda expressions
-    KStream<String, GenericRecord> mapped = source1.mapValue(record -> record.get("category"));
+    KStream&lt;String, GenericRecord&gt; mapped = source1.mapValue(record -> record.get("category"));
 </pre>
 
 <p>
 Stateless transformations, by definition, do not depend on any state for processing, and hence implementation-wise
-they do not require a state store associated with the stream processor; Stateful transformations, on the other hand,
+they do not require a state store associated with the stream processor; stateful transformations, on the other hand,
 require accessing an associated state for processing and producing outputs.
 For example, in <code>join</code> and <code>aggregate</code> operations, a windowing state is usually used to store all the received records
 within the defined window boundary so far. The operators can then access these accumulated records in the store and compute
@@ -337,14 +337,14 @@ based on them.
 
 <pre>
     // written in Java 8+, using lambda expressions
-    KTable<Windowed<String>, Long> counts = source1.groupByKey().aggregate(
+    KTable&lt;Windowed&lt;String&gt;, Long&gt; counts = source1.groupByKey().aggregate(
         () -> 0L,  // initial value
         (aggKey, value, aggregate) -> aggregate + 1L,   // aggregating value
         TimeWindows.of("counts", 5000L).advanceBy(1000L), // intervals in milliseconds
         Serdes.Long() // serde for aggregated value
     );
 
-    KStream<String, String> joined = source1.leftJoin(source2,
+    KStream&lt;String, String&gt; joined = source1.leftJoin(source2,
         (record1, record2) -> record1.get("user") + "-" + record2.get("region");
     );
 </pre>
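As an aside, the counting aggregate in the snippet above folds each record into a running count per key, starting from the initializer <code>0L</code>. A plain-Java analogy of those semantics; this is an assumption for illustration only, omits windowing entirely, and is not the Kafka Streams API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java analogy (illustrative assumption, not the Kafka Streams API):
// fold every record for a key into a running count, seeded with 0L,
// mirroring the (aggKey, value, aggregate) -> aggregate + 1L adder above.
public class CountAnalogy {
    static Map<String, Long> countByKey(List<String> keys) {
        Map<String, Long> counts = new HashMap<>();
        for (String k : keys) {
            // merge: start from 1 (i.e., 0L + 1L) on first sight, then add 1L per record
            counts.merge(k, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countByKey(List.of("user1", "user2", "user1")));
    }
}
```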
@@ -369,7 +369,7 @@ Kafka Streams provides a convenience method called <code>through</code>:
     //
     // joined.to("topic4");
     // materialized = builder.stream("topic4");
-    KStream<String, String> materialized = joined.through("topic4");
+    KStream&lt;String, String&gt; materialized = joined.through("topic4");
 </pre>
 
 

