kafka-commits mailing list archives

From mj...@apache.org
Subject [kafka-site] branch asf-site updated: MINOR: fix typos and mailto-links (#225)
Date Sat, 03 Aug 2019 20:54:51 GMT
This is an automated email from the ASF dual-hosted git repository.

mjsax pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new de31245  MINOR: fix typos and mailto-links (#225)
de31245 is described below

commit de312459502df5d68feae0a38d251594eea8215e
Author: Victoria Bialas <londoncalling@users.noreply.github.com>
AuthorDate: Sat Aug 3 13:54:46 2019 -0700

    MINOR: fix typos and mailto-links (#225)
    
    Reviewers: Jim Galasyn <jim.galasyn@confluent.io>, Matthias J. Sax <matthias@confluent.io>
---
 0100/uses.html    | 2 +-
 0101/uses.html    | 2 +-
 0102/uses.html    | 2 +-
 0110/uses.html    | 2 +-
 081/ops.html      | 2 +-
 10/uses.html      | 2 +-
 11/uses.html      | 2 +-
 20/uses.html      | 2 +-
 21/uses.html      | 2 +-
 22/uses.html      | 2 +-
 23/uses.html      | 2 +-
 contact.html      | 8 ++++----
 contributing.html | 4 ++--
 13 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/0100/uses.html b/0100/uses.html
index 7b52a59..1fe40ce 100644
--- a/0100/uses.html
+++ b/0100/uses.html
@@ -43,7 +43,7 @@ In comparison to log-centric systems like Scribe or Flume, Kafka offers
equally
 
 <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processing</a></h4>
 
-Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed
into new topics for further consumption or follow-up processing. For example, a processing
pipeline for recommending news articles might crawl article content from RSS feeds and publish
it to an "articles" topic; further processing might normalize or deduplicate this content
and published the cle [...]
+Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed
into new topics for further consumption or follow-up processing. For example, a processing
pipeline for recommending news articles might crawl article content from RSS feeds and publish
it to an "articles" topic; further processing might normalize or deduplicate this content
and publish the clean [...]
 
 <h4><a id="uses_eventsourcing" href="#uses_eventsourcing">Event Sourcing</a></h4>
 
diff --git a/0101/uses.html b/0101/uses.html
index 7c8630b..d7f6904 100644
--- a/0101/uses.html
+++ b/0101/uses.html
@@ -43,7 +43,7 @@ In comparison to log-centric systems like Scribe or Flume, Kafka offers
equally
 
 <h4><a id="uses_streamprocessing" href="#uses_streamprocessing">Stream Processing</a></h4>
 
-Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed
into new topics for further consumption or follow-up processing. For example, a processing
pipeline for recommending news articles might crawl article content from RSS feeds and publish
it to an "articles" topic; further processing might normalize or deduplicate this content
and published the cle [...]
+Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed
into new topics for further consumption or follow-up processing. For example, a processing
pipeline for recommending news articles might crawl article content from RSS feeds and publish
it to an "articles" topic; further processing might normalize or deduplicate this content
and publish the clean [...]
 
 <h4><a id="uses_eventsourcing" href="#uses_eventsourcing">Event Sourcing</a></h4>
 
diff --git a/0102/uses.html b/0102/uses.html
index 4e88859..e19873f 100644
--- a/0102/uses.html
+++ b/0102/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/{{version}}/documentation/streams">Kafka Streams</a>
diff --git a/0110/uses.html b/0110/uses.html
index bf134fc..20ca3bc 100644
--- a/0110/uses.html
+++ b/0110/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/documentation/streams">Kafka Streams</a>
diff --git a/081/ops.html b/081/ops.html
index 88c5a25..04ad74e 100644
--- a/081/ops.html
+++ b/081/ops.html
@@ -42,7 +42,7 @@ And finally deleting a topic:
 <pre>
  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --delete --topic my_topic_name
 </pre>
-WARNING: Delete topic functionality is beta in 0.8.1. Please report any bugs that you encounter
on the <a href="mailto: users@kafka.apache.org">mailing list</a> or <a href="https://issues.apache.org/jira/browse/KAFKA">JIRA</a>.
+WARNING: Delete topic functionality is beta in 0.8.1. Please report any bugs that you encounter
on the <a href="mailto:users@kafka.apache.org">mailing list</a> or <a href="https://issues.apache.org/jira/browse/KAFKA">JIRA</a>.
 <p>
 Kafka does not currently support reducing the number of partitions for a topic or changing
the replication factor.
 
diff --git a/10/uses.html b/10/uses.html
index f1c8407..2978c44 100644
--- a/10/uses.html
+++ b/10/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/documentation/streams">Kafka Streams</a>
diff --git a/11/uses.html b/11/uses.html
index 945b896..09bc45f 100644
--- a/11/uses.html
+++ b/11/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/documentation/streams">Kafka Streams</a>
diff --git a/20/uses.html b/20/uses.html
index f1c8407..2978c44 100644
--- a/20/uses.html
+++ b/20/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/documentation/streams">Kafka Streams</a>
diff --git a/21/uses.html b/21/uses.html
index 945b896..09bc45f 100644
--- a/21/uses.html
+++ b/21/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/documentation/streams">Kafka Streams</a>
diff --git a/22/uses.html b/22/uses.html
index 945b896..09bc45f 100644
--- a/22/uses.html
+++ b/22/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/documentation/streams">Kafka Streams</a>
diff --git a/23/uses.html b/23/uses.html
index 945b896..09bc45f 100644
--- a/23/uses.html
+++ b/23/uses.html
@@ -60,7 +60,7 @@ and much lower end-to-end latency.
 Many users of Kafka process data in processing pipelines consisting of multiple stages, where
raw input data is consumed from Kafka topics and then
 aggregated, enriched, or otherwise transformed into new topics for further consumption or
follow-up processing.
 For example, a processing pipeline for recommending news articles might crawl article content
from RSS feeds and publish it to an "articles" topic;
-further processing might normalize or deduplicate this content and published the cleansed
article content to a new topic;
+further processing might normalize or deduplicate this content and publish the cleansed article
content to a new topic;
 a final processing stage might attempt to recommend this content to users.
 Such processing pipelines create graphs of real-time data flows based on the individual topics.
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a
href="/documentation/streams">Kafka Streams</a>
diff --git a/contact.html b/contact.html
index 744e6cc..973b83a 100644
--- a/contact.html
+++ b/contact.html
@@ -11,16 +11,16 @@
 		</p>
 		<ul>
 			<li>
-				<b>User mailing list</b>: A list for general user questions about Kafka&reg;.
To subscribe, send an email to <a href="mailto: users-subscribe@kafka.apache.org">users-subscribe@kafka.apache.org</a>.
Once subscribed, send your emails to <a href="mailto: users@kafka.apache.org">users@kafka.apache.org</a>.
Archives are available <a href="https://lists.apache.org/list.html?users@kafka.apache.org">here</a>.
+				<b>User mailing list</b>: A list for general user questions about Kafka&reg;.
To subscribe, send an email to <a href="mailto:users-subscribe@kafka.apache.org">users-subscribe@kafka.apache.org</a>.
Once subscribed, send your emails to <a href="mailto:users@kafka.apache.org">users@kafka.apache.org</a>.
Archives are available <a href="https://lists.apache.org/list.html?users@kafka.apache.org">here</a>.
 			</li>
 			<li>
-				<b>Developer mailing list</b>: A list for discussion on Kafka&reg; development.
To subscribe, send an email to <a href="mailto: dev-subscribe@kafka.apache.org">dev-subscribe@kafka.apache.org</a>.
Once subscribed, send your emails to <a href="mailto: dev@kafka.apache.org">dev@kafka.apache.org</a>.
Archives are available <a href="https://lists.apache.org/list.html?dev@kafka.apache.org">here</a>.
+				<b>Developer mailing list</b>: A list for discussion on Kafka&reg; development.
To subscribe, send an email to <a href="mailto:dev-subscribe@kafka.apache.org">dev-subscribe@kafka.apache.org</a>.
Once subscribed, send your emails to <a href="mailto:dev@kafka.apache.org">dev@kafka.apache.org</a>.
Archives are available <a href="https://lists.apache.org/list.html?dev@kafka.apache.org">here</a>.
 			</li>
 			<li>
-				<b>JIRA mailing list</b>: A list to track Kafka&reg; <a href="https://issues.apache.org/jira/projects/KAFKA">JIRA</a>
notifications. To subscribe, send an email to <a href="mailto: jira-subscribe@kafka.apache.org">jira-subscribe@kafka.apache.org</a>.
Archives are available <a href="https://lists.apache.org/list.html?jira@kafka.apache.org">here</a>.
+				<b>JIRA mailing list</b>: A list to track Kafka&reg; <a href="https://issues.apache.org/jira/projects/KAFKA">JIRA</a>
notifications. To subscribe, send an email to <a href="mailto:jira-subscribe@kafka.apache.org">jira-subscribe@kafka.apache.org</a>.
Archives are available <a href="https://lists.apache.org/list.html?jira@kafka.apache.org">here</a>.
 			</li>
 			<li>
-				<b>Commit mailing list</b>: A list to track Kafka&reg; commits. To subscribe,
send an email to <a href="mailto: commits-subscribe@kafka.apache.org">commits-subscribe@kafka.apache.org</a>.
Archives are available <a href="http://mail-archives.apache.org/mod_mbox/kafka-commits">here</a>.
+				<b>Commit mailing list</b>: A list to track Kafka&reg; commits. To subscribe,
send an email to <a href="mailto:commits-subscribe@kafka.apache.org">commits-subscribe@kafka.apache.org</a>.
Archives are available <a href="http://mail-archives.apache.org/mod_mbox/kafka-commits">here</a>.
 			</li>
 		</ul>
 
diff --git a/contributing.html b/contributing.html
index 9a78416..5ed674d 100644
--- a/contributing.html
+++ b/contributing.html
@@ -27,7 +27,7 @@
 			<li>Make sure you have observed the recommendations in the <a href="coding-guide.html">style
guide</a>.</li>
 			<li>Follow the detailed instructions in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes">Contributing
Code Changes</a>.</li>
 			<li>Note that if the change is related to user-facing protocols / interface / configs,
etc, you need to make the corresponding change on the documentation as well. For wiki page
changes feel free to edit the page content directly (you may need to contact us to get the
permission first if it is your first time to edit on wiki);  website docs live in the code
repo under `docs` so that changes to that can be done in the same PR as changes to the code.
Website doc change instructions are  [...]
-			<li>It is our job to follow up on patches in a timely fashion. <a href="mailto:
dev@kafka.apache.org">Nag us</a> if we aren't doing our job (sometimes we drop things).</li>
+			<li>It is our job to follow up on patches in a timely fashion. <a href="mailto:dev@kafka.apache.org">Nag
us</a> if we aren't doing our job (sometimes we drop things).</li>
 		</ul>
 
 		<h2>Contributing A Change To The Website</h2>
@@ -36,7 +36,7 @@
 
 		<ul>
 		       <li>Follow the instructions in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes">Contributing
Website Changes</a>.</li>
-		       <li>It is our job to follow up on patches in a timely fashion. <a href="mailto:
dev@kafka.apache.org">Nag us</a> if we aren't doing our job (sometimes we drop things).
If the patch needs improvement, the reviewer will mark the jira back to "In Progress" after
reviewing.</li>
+		       <li>It is our job to follow up on patches in a timely fashion. <a href="mailto:dev@kafka.apache.org">Nag
us</a> if we aren't doing our job (sometimes we drop things). If the patch needs improvement,
the reviewer will mark the jira back to "In Progress" after reviewing.</li>
 		</ul>
 
 		<h2>Finding A Project To Work On</h2>


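The recurring change in this commit is removing the stray space after `mailto:` in anchor hrefs (per RFC 6068, a mailto URI takes the address immediately after the scheme, so `mailto: dev@...` is malformed and breaks some mail clients). A quick way to audit a checkout of the site repo for any remaining occurrences — a hypothetical one-liner, not part of the commit itself — might be:

```shell
# List HTML files that still contain "mailto:" followed by a space,
# i.e. the malformed pattern this commit removes. Run from the repo root.
grep -rln 'mailto: ' --include='*.html' . || echo "no stray mailto links found"
```

`-l` prints only the matching file names, which keeps the output short enough to eyeball before opening each file.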