kafka-commits mailing list archives

From ewe...@apache.org
Subject kafka git commit: KAFKA-5410; Fix taskClass() method name in Connector and flush() signature in SinkTask
Date Fri, 21 Jul 2017 02:51:31 GMT
Repository: kafka
Updated Branches:
  refs/heads/0.11.0 b0ff6ea9a -> 3b00f8b83


KAFKA-5410; Fix taskClass() method name in Connector and flush() signature in SinkTask

Author: ppatierno <ppatierno@live.com>

Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>

Closes #3269 from ppatierno/connect-doc

(cherry picked from commit 8a81566214a6028f58d35c04cde8ccc7edc71960)
Signed-off-by: Ewen Cheslack-Postava <me@ewencp.org>


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/3b00f8b8
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/3b00f8b8
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/3b00f8b8

Branch: refs/heads/0.11.0
Commit: 3b00f8b83f5d0ce0e2d3d7281164675872331225
Parents: b0ff6ea
Author: ppatierno <ppatierno@live.com>
Authored: Thu Jul 20 19:51:09 2017 -0700
Committer: Ewen Cheslack-Postava <me@ewencp.org>
Committed: Thu Jul 20 19:51:26 2017 -0700

----------------------------------------------------------------------
 docs/connect.html | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/3b00f8b8/docs/connect.html
----------------------------------------------------------------------
diff --git a/docs/connect.html b/docs/connect.html
index 57505d6..7cc72b6 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -253,11 +253,11 @@
         private String topic;
     </pre>
 
-    The easiest method to fill in is <code>getTaskClass()</code>, which defines the class that should be instantiated in worker processes to actually read the data:
+    The easiest method to fill in is <code>taskClass()</code>, which defines the class that should be instantiated in worker processes to actually read the data:
 
     <pre class="brush: java;">
     @Override
-    public Class&lt;? extends Task&gt; getTaskClass() {
+    public Class&lt;? extends Task&gt; taskClass() {
         return FileStreamSourceTask.class;
     }
     </pre>
@@ -372,8 +372,9 @@
         }
 
         public abstract void put(Collection&lt;SinkRecord&gt; records);
-        
-        public abstract void flush(Map&lt;TopicPartition, Long&gt; offsets);
+
+        public void flush(Map&lt;TopicPartition, OffsetAndMetadata&gt; currentOffsets) {
+        }
     </pre>
 
     The <code>SinkTask</code> documentation contains full details, but this interface is nearly as simple as the <code>SourceTask</code>. The <code>put()</code> method should contain most of the implementation, accepting sets of <code>SinkRecords</code>, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The <code>SinkRecords</code> contain essentially the same information as <code>SourceRecords</code>: Kafka topic, partition, offset and the event key and value.
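
For readers following the Connect developer guide, a minimal compilable sketch of the signatures the patched documentation now describes: Connector's taskClass() and SinkTask's non-abstract flush(Map&lt;TopicPartition, OffsetAndMetadata&gt;). The MySourceConnector and MySinkTask class names and their trivial method bodies are illustrative only and not part of this commit; FileStreamSourceTask is the example task from the connect-file module referenced in connect.html.

import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.file.FileStreamSourceTask;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;
import org.apache.kafka.connect.source.SourceConnector;

// Illustrative connector: the method is named taskClass(), not getTaskClass().
public class MySourceConnector extends SourceConnector {
    @Override
    public Class<? extends Task> taskClass() {
        // Worker processes instantiate this class to actually read the data.
        return FileStreamSourceTask.class;
    }

    @Override
    public void start(Map<String, String> props) { }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        return Collections.emptyList();
    }

    @Override
    public void stop() { }

    @Override
    public ConfigDef config() {
        return new ConfigDef();
    }

    @Override
    public String version() {
        return "0.0.1";
    }
}

// Illustrative sink task: flush() is not abstract and receives OffsetAndMetadata
// values, so overriding it is optional.
class MySinkTask extends SinkTask {
    @Override
    public void put(Collection<SinkRecord> records) {
        // Translate the records and write (or buffer) them for the destination system.
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        // Push any buffered records to the destination before these offsets are committed.
    }

    @Override
    public void start(Map<String, String> props) { }

    @Override
    public void stop() { }

    @Override
    public String version() {
        return "0.0.1";
    }
}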

