kafka-commits mailing list archives

From srihar...@apache.org
Subject kafka git commit: MINOR: Fix a documentation typo
Date Wed, 15 Mar 2017 18:51:02 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk b9f812491 -> be1127281


MINOR: Fix a documentation typo

Author: Vahid Hashemian <vahidhashemian@us.ibm.com>

Reviewers: Sriharsha Chintalapani <harsha@hortonworks.com>

Closes #2674 from vahidhashemian/minor/fix_typos_1703


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/be112728
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/be112728
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/be112728

Branch: refs/heads/trunk
Commit: be11272818a06eea157ab7f6eee8905855f1cede
Parents: b9f8124
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Authored: Wed Mar 15 11:50:57 2017 -0700
Committer: Sriharsha Chintalapani <harsha@hortonworks.com>
Committed: Wed Mar 15 11:50:57 2017 -0700

----------------------------------------------------------------------
 docs/ops.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/be112728/docs/ops.html
----------------------------------------------------------------------
diff --git a/docs/ops.html b/docs/ops.html
index 9232f65..7be9939 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -33,7 +33,7 @@
   </pre>
   The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3 then up to 2 servers can fail before you will lose access to your data. We recommend you use a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption.
   <p>
-  The partition count controls how many logs the topic will be sharded into. There are several impacts of the partition count. First each partition must fit entirely on a single server. So if you have 20 partitions the full data set (and read and write load) will be handled by no more than 20 servers (noi counting replicas). Finally the partition count impacts the maximum parallelism of your consumers. This is discussed in greater detail in the <a href="#intro_consumers">concepts section</a>.
+  The partition count controls how many logs the topic will be sharded into. There are several impacts of the partition count. First each partition must fit entirely on a single server. So if you have 20 partitions the full data set (and read and write load) will be handled by no more than 20 servers (not counting replicas). Finally the partition count impacts the maximum parallelism of your consumers. This is discussed in greater detail in the <a href="#intro_consumers">concepts section</a>.
   <p>
   Each sharded partition log is placed into its own folder under the Kafka log directory. The name of such folders consists of the topic name, appended by a dash (-) and the partition id. Since a typical folder name can not be over 255 characters long, there will be a limitation on the length of topic names. We assume the number of partitions will not ever be above 100,000. Therefore, topic names cannot be longer than 249 characters. This leaves just enough room in the folder name for a dash and a potentially 5 digit long partition id.
   <p>

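The documentation text being patched describes how replication factor and partition count are chosen when a topic is created. As a hedged sketch of how those settings are applied (topic name and connection address are hypothetical, and this assumes a running Kafka 0.10.x-era cluster, where `kafka-topics.sh` still takes `--zookeeper`):

```shell
# Create a topic with 20 partitions and replication factor 3, so up to
# 2 brokers can fail before access to the data is lost. The topic name
# and ZooKeeper address are illustrative placeholders.
bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --topic my-topic \
  --partitions 20 \
  --replication-factor 3
```

The replication factor here follows the patched text's recommendation of 2 or 3; the partition count caps both the data spread across brokers and the maximum consumer parallelism.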
