kafka-commits mailing list archives

From gwens...@apache.org
Subject kafka git commit: MINOR: Fix typos in docs
Date Thu, 03 Mar 2016 19:46:21 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk 0d49a5426 -> 291f430d9


MINOR: Fix typos in docs

Author: Sasaki Toru <sasakitoa@nttdata.co.jp>

Reviewers: Gwen Shapira

Closes #1003 from sasakitoa/Fix_typo_in_docs


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/291f430d
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/291f430d
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/291f430d

Branch: refs/heads/trunk
Commit: 291f430d9c13227cc30a11939cc550057c583b20
Parents: 0d49a54
Author: Sasaki Toru <sasakitoa@nttdata.co.jp>
Authored: Thu Mar 3 11:46:16 2016 -0800
Committer: Gwen Shapira <cshapi@gmail.com>
Committed: Thu Mar 3 11:46:16 2016 -0800

----------------------------------------------------------------------
 docs/api.html            | 4 ++--
 docs/connect.html        | 6 +++---
 docs/implementation.html | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/291f430d/docs/api.html
----------------------------------------------------------------------
diff --git a/docs/api.html b/docs/api.html
index 861eec3..2541553 100644
--- a/docs/api.html
+++ b/docs/api.html
@@ -15,7 +15,7 @@
  limitations under the License.
 -->
 
-Apache Kafka includes new java clients (in the org.apache.kafka.clients package). These are meant to supplant the older Scala clients, but for compatability they will co-exist for some time. These clients are available in a seperate jar with minimal dependencies, while the old Scala clients remain packaged with the server.
+Apache Kafka includes new java clients (in the org.apache.kafka.clients package). These are meant to supplant the older Scala clients, but for compatability they will co-exist for some time. These clients are available in a separate jar with minimal dependencies, while the old Scala clients remain packaged with the server.
 
 <h3><a id="producerapi" href="#producerapi">2.1 Producer API</a></h3>
 
@@ -59,7 +59,7 @@ class Consumer {
 
 /**
  *  V: type of the message
- *  K: type of the optional key assciated with the message
+ *  K: type of the optional key associated with the message
  */
 public interface kafka.javaapi.consumer.ConsumerConnector {
   /**

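For context, the new clients mentioned above can be exercised with a minimal, hypothetical producer sketch (the broker address and topic name are assumptions, not anything this commit touches):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            Producer<String, String> producer = new KafkaProducer<>(props);
            // send() is asynchronous; close() flushes any pending records.
            producer.send(new ProducerRecord<>("my_topic", "key", "value"));
            producer.close();
        }
    }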
http://git-wip-us.apache.org/repos/asf/kafka/blob/291f430d/docs/connect.html
----------------------------------------------------------------------
diff --git a/docs/connect.html b/docs/connect.html
index 0a1a867..dc6ad6e 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -121,7 +121,7 @@ In addition to the key and value, records (both those generated by sources and t
 
 <h5><a id="connect_dynamicconnectors" href="#connect_dynamicconnectors">Dynamic Connectors</a></h5>
 
-Not all jobs are static, so <code>Connector</code> implementations are also responsible for monitoring the external system for any changes that might require reconfiguration. For example, in the <code>JDBCSourceConnector</code> example, the <code>Connector</code> might assign a set of tables to each <code>Task</code>. When a new table is created, it must discover this so it can assign the new table to one of the <code>Tasks</code> by updating its configuration. When it notices a change that requires reconfiguration (or a change in the number of <code>Tasks</code>), it notifies the framework and the framework updates anycorresponding <code>Tasks</code>.
+Not all jobs are static, so <code>Connector</code> implementations are also responsible for monitoring the external system for any changes that might require reconfiguration. For example, in the <code>JDBCSourceConnector</code> example, the <code>Connector</code> might assign a set of tables to each <code>Task</code>. When a new table is created, it must discover this so it can assign the new table to one of the <code>Tasks</code> by updating its configuration. When it notices a change that requires reconfiguration (or a change in the number of <code>Tasks</code>), it notifies the framework and the framework updates any corresponding <code>Tasks</code>.
 
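For a rough picture of the monitoring pattern described above, a hypothetical Connector might check the external system from a background thread and ask the framework to reconfigure; requestTaskReconfiguration() is the Connect API hook for this, while listTables() and the field name are invented for the sketch:

    import java.util.HashSet;
    import java.util.Set;

    // Fragment of a hypothetical Connector implementation (boilerplate
    // omitted); "context" is the ConnectorContext the framework provides.
    private volatile Set<String> knownTables = new HashSet<>();

    void checkForNewTables() {
        Set<String> current = listTables();   // invented helper: query the external system
        if (!current.equals(knownTables)) {
            knownTables = current;
            // Notify the framework; it will call taskConfigs() again and
            // update the corresponding Tasks.
            context.requestTaskReconfiguration();
        }
    }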
 
 <h4><a id="connect_developing" href="#connect_developing">Developing a Simple Connector</a></h4>
@@ -240,9 +240,9 @@ public List&lt;SourceRecord&gt; poll() throws InterruptedException {
 }
 </pre>
 
-Again, we've omitted some details, but we can see the important steps: the <code>poll()</code> method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output <code>SourceRecord</code> with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic name, and output value (the line, and we include a schema indicating this value will always be a string). Other variants of the <code>SourceRecord</code> constructor can also inclue a specific output partition and a key.
+Again, we've omitted some details, but we can see the important steps: the <code>poll()</code> method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output <code>SourceRecord</code> with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic name, and output value (the line, and we include a schema indicating this value will always be a string). Other variants of the <code>SourceRecord</code> constructor can also include a specific output partition and a key.
 
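The construction described above can be sketched as follows; the "filename" and "position" map keys are illustrative, not required by the framework:

    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.source.SourceRecord;

    // One SourceRecord per line read from the file.
    Map<String, String> sourcePartition = Collections.singletonMap("filename", filename);
    Map<String, Long> sourceOffset = Collections.singletonMap("position", byteOffset);
    SourceRecord record = new SourceRecord(
        sourcePartition,        // source partition: the single file being read
        sourceOffset,           // source offset: byte offset in the file
        topic,                  // output topic name
        Schema.STRING_SCHEMA,   // this value will always be a string
        line);                  // output value: the line that was read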
-Note that this implementation uses the normal Java <code>InputStream</code>interface and may sleep if data is not avaiable. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic <code>poll()</code>interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.
+Note that this implementation uses the normal Java <code>InputStream</code>interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic <code>poll()</code>interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.
 
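A minimal sketch of that blocking approach inside poll(), assuming a BufferedReader opened in start(); the sleep interval is arbitrary:

    // Block on standard java.io; when the file has no new data yet,
    // sleep briefly and return nothing. The dedicated thread makes
    // sleeping here acceptable.
    String line = reader.readLine();
    if (line == null) {
        Thread.sleep(1000);
        return null;    // no records this time; the framework will call poll() again
    }
    // ...otherwise build a SourceRecord from the line as shown above...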
 <h5><a id="connect_sinktasks" href="#connect_sinktasks">Sink Tasks</a></h5>
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/291f430d/docs/implementation.html
----------------------------------------------------------------------
diff --git a/docs/implementation.html b/docs/implementation.html
index 21cae93..ecd99e7 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -178,7 +178,7 @@ Messages consist of a fixed-size header and variable length opaque byte array pa
 A log for a topic named "my_topic" with two partitions consists of two directories (namely <code>my_topic_0</code> and <code>my_topic_1</code>) populated with data files containing the messages for that topic. The format of the log files is a sequence of "log entries""; each log entry is a 4 byte integer <i>N</i> storing the message length which is followed by the <i>N</i> message bytes. Each message is uniquely identified by a 64-bit integer <i>offset</i> giving the byte position of the start of this message in the stream of all messages ever sent to that topic on that partition. The on-disk format of each message is given below. Each log file is named with the offset of the first message it contains. So the first file created will be 00000000000.kafka, and each additional file will have an integer name roughly <i>S</i> bytes from the previous file where <i>S</i> is the max log file size given in the configuration.
 </p>
 <p>
-The exact binary format for messages is versioned and maintained as a standard interface so message sets can be transfered between producer, broker, and client without recopying or conversion when desirable. This format is as follows:
+The exact binary format for messages is versioned and maintained as a standard interface so message sets can be transferred between producer, broker, and client without recopying or conversion when desirable. This format is as follows:
 </p>
 <pre>
 On-disk format of a message

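To make that framing concrete, a hedged sketch of walking a segment file with standard java.io (the file name is illustrative, and decoding of the versioned message format is elided):

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class LogEntryReader {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream("00000000000.kafka"))) {
                while (true) {
                    final int length;
                    try {
                        length = in.readInt();    // the 4 byte integer N (big-endian)
                    } catch (EOFException end) {
                        break;                    // end of the segment file
                    }
                    byte[] message = new byte[length];
                    in.readFully(message);        // the N message bytes
                    // ...decode the versioned on-disk message format here...
                }
            }
        }
    }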
