kafka-commits mailing list archives

From: bbej...@apache.org
Subject: [kafka-site] branch asf-site updated: KAFKA-8208: Fix broken link for out of order data (#197)
Date: Sat, 13 Apr 2019 22:56:39 GMT
This is an automated email from the ASF dual-hosted git repository.

bbejeck pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 044b919  KAFKA-8208: Fix broken link for out of order data (#197)
044b919 is described below

commit 044b919f13357213c5ff0eec9c527ac95138df70
Author: Bill Bejeck <bbejeck@gmail.com>
AuthorDate: Sat Apr 13 18:56:35 2019 -0400

    KAFKA-8208: Fix broken link for out of order data (#197)
    
    Matthias J. Sax <mjsax@apache.org>, Michael Drogalis <michael.drogalis@confluent.io>,
    Victoria Bialas <vicky@confluent.io>
---
 21/streams/core-concepts.html | 2 +-
 22/streams/core-concepts.html | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/21/streams/core-concepts.html b/21/streams/core-concepts.html
index f630772..1e1aeb7 100644
--- a/21/streams/core-concepts.html
+++ b/21/streams/core-concepts.html
@@ -224,7 +224,7 @@
 
     <p>
         Besides the guarantee that each record will be processed exactly-once, another issue that many stream processing application will face is how to
-        handle <a href="tbd">out-of-order data</a> that may impact their business logic. In Kafka Streams, there are two causes that could potentially
+        handle <a href="https://dl.acm.org/citation.cfm?id=3242155">out-of-order data</a> that may impact their business logic. In Kafka Streams, there are two causes that could potentially
         result in out-of-order data arrivals with respect to their timestamps:
     </p>
 
diff --git a/22/streams/core-concepts.html b/22/streams/core-concepts.html
index 79f5b82..1e1aeb7 100644
--- a/22/streams/core-concepts.html
+++ b/22/streams/core-concepts.html
@@ -224,7 +224,7 @@
 
     <p>
         Besides the guarantee that each record will be processed exactly-once, another issue that many stream processing application will face is how to
-        handle <a href="https://www.confluent.io/wp-content/uploads/streams-tables-two-sides-same-coin.pdf">out-of-order data</a> that may impact their business logic. In Kafka Streams, there are two causes that could potentially
+        handle <a href="https://dl.acm.org/citation.cfm?id=3242155">out-of-order data</a> that may impact their business logic. In Kafka Streams, there are two causes that could potentially
         result in out-of-order data arrivals with respect to their timestamps:
     </p>
 

