kafka-commits mailing list archives

From guozh...@apache.org
Subject kafka git commit: MINOR: fixup typo in ops.html
Date Wed, 21 Dec 2016 19:37:36 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk baab43f0d -> 05d766431


MINOR: fixup typo in ops.html

pretty boring docfix, "no" -> "not"

Author: Scott Ferguson <smferguson@gmail.com>

Reviewers: Guozhang Wang <wangguoz@gmail.com>

Closes #2269 from smferguson/fixup_typo_in_add_remove_topics


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/05d76643
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/05d76643
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/05d76643

Branch: refs/heads/trunk
Commit: 05d76643174e64b183cb9c315e9dad94d5775d4f
Parents: baab43f
Author: Scott Ferguson <smferguson@gmail.com>
Authored: Wed Dec 21 11:37:32 2016 -0800
Committer: Guozhang Wang <wangguoz@gmail.com>
Committed: Wed Dec 21 11:37:32 2016 -0800

----------------------------------------------------------------------
 docs/ops.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/05d76643/docs/ops.html
----------------------------------------------------------------------
diff --git a/docs/ops.html b/docs/ops.html
index a034b49..b500d69 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -33,7 +33,7 @@
   </pre>
   The replication factor controls how many servers will replicate each message that is written.
If you have a replication factor of 3 then up to 2 servers can fail before you will lose access
to your data. We recommend you use a replication factor of 2 or 3 so that you can transparently
bounce machines without interrupting data consumption.
   <p>
-  The partition count controls how many logs the topic will be sharded into. There are several
impacts of the partition count. First each partition must fit entirely on a single server.
So if you have 20 partitions the full data set (and read and write load) will be handled by
no more than 20 servers (no counting replicas). Finally the partition count impacts the maximum
parallelism of your consumers. This is discussed in greater detail in the <a href="#intro_consumers">concepts
section</a>.
+  The partition count controls how many logs the topic will be sharded into. There are several
impacts of the partition count. First each partition must fit entirely on a single server.
So if you have 20 partitions the full data set (and read and write load) will be handled by
no more than 20 servers (not counting replicas). Finally the partition count impacts the maximum
parallelism of your consumers. This is discussed in greater detail in the <a href="#intro_consumers">concepts
section</a>.
   <p>
   Each sharded partition log is placed into its own folder under the Kafka log directory.
The name of such folders consists of the topic name, appended by a dash (-) and the partition
id. Since a typical folder name can not be over 255 characters long, there will be a limitation
on the length of topic names. We assume the number of partitions will not ever be above 100,000.
Therefore, topic names cannot be longer than 249 characters. This leaves just enough room
in the folder name for a dash and a potentially 5 digit long partition id.
   <p>

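The folder-name arithmetic in the paragraph above (255-character folder limit, minus one dash, minus a partition id of up to 5 digits, leaving 249 characters for the topic name) can be sketched as follows. This is an illustrative check only; the constant names and the `partition_folder` helper are hypothetical, not Kafka source code.

```python
# Illustrative sketch of the topic-name length limit described above.
MAX_FOLDER_NAME = 255              # typical filesystem folder-name limit
MAX_PARTITIONS = 100_000           # assumed upper bound from the docs

# Partition ids run 0..99999, so at most 5 digits.
partition_digits = len(str(MAX_PARTITIONS - 1))

# Reserve room for the dash separator and the partition id.
max_topic_len = MAX_FOLDER_NAME - len("-") - partition_digits
assert max_topic_len == 249

def partition_folder(topic: str, partition: int) -> str:
    """Folder name: topic name, a dash, and the partition id."""
    return f"{topic}-{partition}"

print(partition_folder("my-topic", 0))   # my-topic-0
```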
