kafka-commits mailing list archives

From ij...@apache.org
Subject kafka git commit: MINOR: Typo fix in docs ops
Date Sat, 02 Jul 2016 01:20:51 GMT
Repository: kafka
Updated Branches:
  refs/heads/trunk f1323d4ff -> 0f8b67903


MINOR: Typo fix in docs ops

Author: Thanasis Katsadas <thanasis00@gmail.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>

Closes #1552 from thanasis00/trunk


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/0f8b6790
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/0f8b6790
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/0f8b6790

Branch: refs/heads/trunk
Commit: 0f8b6790351cb6b078ad3ec73e8d36f5bb3ad270
Parents: f1323d4
Author: Thanasis Katsadas <thanasis00@gmail.com>
Authored: Sat Jul 2 02:20:42 2016 +0100
Committer: Ismael Juma <ismael@juma.me.uk>
Committed: Sat Jul 2 02:20:42 2016 +0100

----------------------------------------------------------------------
 docs/ops.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/0f8b6790/docs/ops.html
----------------------------------------------------------------------
diff --git a/docs/ops.html b/docs/ops.html
index 31a5210..d7d7fd7 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -950,7 +950,7 @@ The current stable branch is 3.4 and the latest release of that branch is 3.4.8,
 Operationally, we do the following for a healthy ZooKeeper installation:
 <ul>
   <li>Redundancy in the physical/hardware/network layout: try not to put them all in
the same rack, decent (but don't go nuts) hardware, try to keep redundant power and network
paths, etc. A typical ZooKeeper ensemble has 5 or 7 servers, which tolerates 2 and 3 servers
down, respectively. If you have a small deployment, then using 3 servers is acceptable, but
keep in mind that you'll only be able to tolerate 1 server down in this case. </li>
-  <li>I/O segregation: if you do a lot of write type traffic you'll almost definitely
want the transaction logs on a dedicated disk group. Writes to the transaction log are synchronous
(but batched for performance), and consequently, concurrent writes can significantly affect
performance. ZooKeeper snapshots can be one such a source of concurrent writes, and ideally
should be written on a disk group separate from the transaction log. Snapshots are writtent
to disk asynchronously, so it is typically ok to share with the operating system and message
log files. You can configure a server to use a separate disk group with the dataLogDir parameter.</li>
+  <li>I/O segregation: if you do a lot of write type traffic you'll almost definitely
want the transaction logs on a dedicated disk group. Writes to the transaction log are synchronous
(but batched for performance), and consequently, concurrent writes can significantly affect
performance. ZooKeeper snapshots can be one such a source of concurrent writes, and ideally
should be written on a disk group separate from the transaction log. Snapshots are written
to disk asynchronously, so it is typically ok to share with the operating system and message
log files. You can configure a server to use a separate disk group with the dataLogDir parameter.</li>
   <li>Application segregation: Unless you really understand the application patterns
of other apps that you want to install on the same box, it can be a good idea to run ZooKeeper
in isolation (though this can be a balancing act with the capabilities of the hardware).</li>
   <li>Use care with virtualization: It can work, depending on your cluster layout and
read/write patterns and SLAs, but the tiny overheads introduced by the virtualization layer
can add up and throw off ZooKeeper, as it can be very time sensitive</li>
   <li>ZooKeeper configuration: It's java, make sure you give it 'enough' heap space
(We usually run them with 3-5G, but that's mostly due to the data set size we have here).
Unfortunately we don't have a good formula for it, but keep in mind that allowing for more
ZooKeeper state means that snapshots can become large, and large snapshots affect recovery
time. In fact, if the snapshot becomes too large (a few gigabytes), then you may need to increase
the initLimit parameter to give enough time for servers to recover and join the ensemble.</li>



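For context on the passage this commit touches: the corrected paragraph names two ZooKeeper settings, dataLogDir (to put the transaction log on a dedicated disk group) and initLimit (to give servers more time to recover when snapshots grow large). Both live in zoo.cfg. A minimal sketch follows; the paths and numeric values are illustrative assumptions, not recommendations from the Kafka docs:

```ini
# zoo.cfg -- illustrative values only; paths and timings are assumptions.
tickTime=2000                       # base time unit, in milliseconds
dataDir=/var/zookeeper/snapshots    # snapshots (written asynchronously)
dataLogDir=/var/zookeeper/txlog     # transaction log on its own disk group
initLimit=10                        # ticks a follower may take to connect and
                                    # sync; raise this if snapshots reach a few GB
syncLimit=5                         # ticks a follower may lag behind the leader
```

With dataLogDir set, synchronous transaction-log writes no longer compete with snapshot writes on the same spindles, which is the I/O-segregation point the corrected paragraph makes.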