kafka-commits mailing list archives

From gwens...@apache.org
Subject [51/51] [partial] kafka-site git commit: KAFKA-2425: Initial upload of Kafka documentation to git repository with intent to replace SVN
Date Fri, 02 Oct 2015 19:19:49 GMT
KAFKA-2425: Initial upload of Kafka documentation to git repository with intent to replace SVN


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/ba6c994c
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/ba6c994c
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/ba6c994c

Branch: refs/heads/asf-site
Commit: ba6c994ca09629b047ab9175f882877ba03b92da
Parents: 
Author: Gwen Shapira <cshapi@gmail.com>
Authored: Fri Oct 2 12:16:55 2015 -0700
Committer: Gwen Shapira <cshapi@gmail.com>
Committed: Fri Oct 2 12:16:55 2015 -0700

----------------------------------------------------------------------
 .htaccess                                       |    6 +
 07/configuration.html                           |  355 ++
 07/documentation.html                           |   13 +
 07/images/dataSize.jpg                          |  Bin 0 -> 104895 bytes
 07/images/onlyBatchSize.jpg                     |  Bin 0 -> 29385 bytes
 07/images/onlyConsumer.jpg                      |  Bin 0 -> 26828 bytes
 07/images/onlyProducer.jpg                      |  Bin 0 -> 26066 bytes
 07/images/onlyTopic.jpg                         |  Bin 0 -> 31795 bytes
 07/performance.html                             |   83 +
 07/quickstart.html                              |  309 ++
 08/api.html                                     |  149 +
 08/configuration.html                           |  529 +++
 08/design.html                                  |  232 ++
 08/documentation.html                           |  101 +
 08/implementation.html                          |  346 ++
 08/introduction.html                            |   82 +
 08/migration.html                               |   17 +
 08/ops.html                                     |  334 ++
 08/quickstart.html                              |  164 +
 08/tools.html                                   |    7 +
 08/upgrade.html                                 |    2 +
 08/uses.html                                    |   29 +
 081/api.html                                    |  148 +
 081/configuration.html                          |  760 +++++
 081/design.html                                 |  324 ++
 081/documentation.html                          |  116 +
 081/ecosystem.html                              |    3 +
 081/implementation.html                         |  341 ++
 081/introduction.html                           |   82 +
 081/migration.html                              |   17 +
 081/ops.html                                    |  602 ++++
 081/quickstart.html                             |  172 +
 081/upgrade.html                                |    8 +
 081/uses.html                                   |   39 +
 082/api.html                                    |  138 +
 082/configuration.html                          |  905 ++++++
 082/design.html                                 |  338 ++
 082/documentation.html                          |  116 +
 082/ecosystem.html                              |    3 +
 082/implementation.html                         |  370 +++
 082/introduction.html                           |   82 +
 082/javadoc/allclasses-frame.html               |  109 +
 082/javadoc/allclasses-noframe.html             |  109 +
 082/javadoc/constant-values.html                |  304 ++
 082/javadoc/deprecated-list.html                |  144 +
 082/javadoc/help-doc.html                       |  217 ++
 082/javadoc/index-all.html                      |  864 +++++
 082/javadoc/index.html                          |   73 +
 .../producer/BufferExhaustedException.html      |  245 ++
 .../apache/kafka/clients/producer/Callback.html |  216 ++
 .../kafka/clients/producer/KafkaProducer.html   |  497 +++
 .../kafka/clients/producer/MockProducer.html    |  513 +++
 .../apache/kafka/clients/producer/Producer.html |  322 ++
 .../kafka/clients/producer/ProducerConfig.html  |  710 ++++
 .../kafka/clients/producer/ProducerRecord.html  |  397 +++
 .../kafka/clients/producer/RecordMetadata.html  |  302 ++
 .../kafka/clients/producer/package-frame.html   |   64 +
 .../kafka/clients/producer/package-summary.html |  204 ++
 .../kafka/clients/producer/package-tree.html    |  171 +
 .../org/apache/kafka/common/Cluster.html        |  436 +++
 .../org/apache/kafka/common/Configurable.html   |  210 ++
 .../org/apache/kafka/common/KafkaException.html |  290 ++
 082/javadoc/org/apache/kafka/common/Metric.html |  231 ++
 .../org/apache/kafka/common/MetricName.html     |  498 +++
 082/javadoc/org/apache/kafka/common/Node.html   |  368 +++
 .../org/apache/kafka/common/PartitionInfo.html  |  372 +++
 .../org/apache/kafka/common/TopicPartition.html |  341 ++
 .../kafka/common/errors/ApiException.html       |  323 ++
 .../common/errors/CorruptRecordException.html   |  300 ++
 .../common/errors/InvalidMetadataException.html |  302 ++
 .../common/errors/InvalidTopicException.html    |  298 ++
 .../errors/LeaderNotAvailableException.html     |  257 ++
 .../kafka/common/errors/NetworkException.html   |  301 ++
 .../NotEnoughReplicasAfterAppendException.html  |  301 ++
 .../errors/NotEnoughReplicasException.html      |  299 ++
 .../errors/NotLeaderForPartitionException.html  |  300 ++
 .../common/errors/OffsetMetadataTooLarge.html   |  298 ++
 .../errors/OffsetOutOfRangeException.html       |  299 ++
 .../errors/RecordBatchTooLargeException.html    |  298 ++
 .../common/errors/RecordTooLargeException.html  |  298 ++
 .../kafka/common/errors/RetriableException.html |  301 ++
 .../common/errors/SerializationException.html   |  319 ++
 .../kafka/common/errors/TimeoutException.html   |  299 ++
 .../common/errors/UnknownServerException.html   |  299 ++
 .../UnknownTopicOrPartitionException.html       |  299 ++
 .../kafka/common/errors/package-frame.html      |   66 +
 .../kafka/common/errors/package-summary.html    |  228 ++
 .../kafka/common/errors/package-tree.html       |  166 +
 .../org/apache/kafka/common/package-frame.html  |   64 +
 .../apache/kafka/common/package-summary.html    |  203 ++
 .../org/apache/kafka/common/package-tree.html   |  163 +
 .../serialization/ByteArrayDeserializer.html    |  311 ++
 .../serialization/ByteArraySerializer.html      |  311 ++
 .../common/serialization/Deserializer.html      |  258 ++
 .../kafka/common/serialization/Serializer.html  |  258 ++
 .../serialization/StringDeserializer.html       |  316 ++
 .../common/serialization/StringSerializer.html  |  316 ++
 .../common/serialization/package-frame.html     |   51 +
 .../common/serialization/package-summary.html   |  187 ++
 .../common/serialization/package-tree.html      |  160 +
 082/javadoc/overview-frame.html                 |   48 +
 082/javadoc/overview-summary.html               |  166 +
 082/javadoc/overview-tree.html                  |  180 ++
 082/javadoc/package-list                        |    4 +
 082/javadoc/resources/inherit.gif               |  Bin 0 -> 57 bytes
 082/javadoc/serialized-form.html                |  446 +++
 082/javadoc/stylesheet.css                      |   29 +
 082/migration.html                              |   17 +
 082/ops.html                                    |  860 +++++
 082/quickstart.html                             |  172 +
 082/upgrade.html                                |   13 +
 082/uses.html                                   |   39 +
 090/api.html                                    |  138 +
 090/configuration.html                          |  905 ++++++
 090/design.html                                 |  338 ++
 090/documentation.html                          |  116 +
 090/ecosystem.html                              |    3 +
 090/implementation.html                         |  370 +++
 090/introduction.html                           |   82 +
 090/javadoc/allclasses-frame.html               |   66 +
 090/javadoc/allclasses-noframe.html             |   66 +
 090/javadoc/constant-values.html                |  471 +++
 090/javadoc/deprecated-list.html                |  113 +
 090/javadoc/help-doc.html                       |  214 ++
 090/javadoc/index-all.html                      | 1315 ++++++++
 090/javadoc/index.html                          |   67 +
 .../kafka/clients/consumer/CommitType.html      |  319 ++
 .../apache/kafka/clients/consumer/Consumer.html |  435 +++
 .../kafka/clients/consumer/ConsumerConfig.html  |  647 ++++
 .../consumer/ConsumerRebalanceCallback.html     |  287 ++
 .../kafka/clients/consumer/ConsumerRecord.html  |  360 +++
 .../kafka/clients/consumer/ConsumerRecords.html |  315 ++
 .../kafka/clients/consumer/KafkaConsumer.html   | 1002 ++++++
 .../kafka/clients/consumer/MockConsumer.html    |  559 ++++
 .../consumer/NoOffsetForPartitionException.html |  259 ++
 .../kafka/clients/consumer/package-frame.html   |   36 +
 .../kafka/clients/consumer/package-summary.html |  212 ++
 .../kafka/clients/consumer/package-tree.html    |  176 +
 .../producer/BufferExhaustedException.html      |  260 ++
 .../apache/kafka/clients/producer/Callback.html |  215 ++
 .../kafka/clients/producer/KafkaProducer.html   |  484 +++
 .../kafka/clients/producer/MockProducer.html    |  472 +++
 .../apache/kafka/clients/producer/Producer.html |  294 ++
 .../kafka/clients/producer/ProducerConfig.html  |  648 ++++
 .../kafka/clients/producer/ProducerRecord.html  |  380 +++
 .../kafka/clients/producer/RecordMetadata.html  |  294 ++
 .../kafka/clients/producer/package-frame.html   |   32 +
 .../kafka/clients/producer/package-summary.html |  198 ++
 .../kafka/clients/producer/package-tree.html    |  164 +
 .../org/apache/kafka/common/Cluster.html        |  420 +++
 .../org/apache/kafka/common/Configurable.html   |  208 ++
 .../org/apache/kafka/common/KafkaException.html |  296 ++
 090/javadoc/org/apache/kafka/common/Metric.html |  224 ++
 .../org/apache/kafka/common/MetricName.html     |  453 +++
 090/javadoc/org/apache/kafka/common/Node.html   |  345 ++
 .../org/apache/kafka/common/PartitionInfo.html  |  349 ++
 .../org/apache/kafka/common/TopicPartition.html |  321 ++
 .../kafka/common/errors/ApiException.html       |  334 ++
 .../common/errors/CorruptRecordException.html   |  315 ++
 .../common/errors/InvalidMetadataException.html |  318 ++
 .../common/errors/InvalidTopicException.html    |  309 ++
 .../errors/LeaderNotAvailableException.html     |  282 ++
 .../kafka/common/errors/NetworkException.html   |  320 ++
 .../NotEnoughReplicasAfterAppendException.html  |  315 ++
 .../errors/NotEnoughReplicasException.html      |  314 ++
 .../errors/NotLeaderForPartitionException.html  |  319 ++
 .../common/errors/OffsetMetadataTooLarge.html   |  309 ++
 .../errors/OffsetOutOfRangeException.html       |  314 ++
 .../errors/RecordBatchTooLargeException.html    |  309 ++
 .../common/errors/RecordTooLargeException.html  |  309 ++
 .../kafka/common/errors/RetriableException.html |  313 ++
 .../common/errors/SerializationException.html   |  329 ++
 .../kafka/common/errors/TimeoutException.html   |  314 ++
 .../common/errors/UnknownServerException.html   |  310 ++
 .../UnknownTopicOrPartitionException.html       |  314 ++
 .../kafka/common/errors/package-frame.html      |   36 +
 .../kafka/common/errors/package-summary.html    |  239 ++
 .../kafka/common/errors/package-tree.html       |  168 +
 .../org/apache/kafka/common/package-frame.html  |   32 +
 .../apache/kafka/common/package-summary.html    |  197 ++
 .../org/apache/kafka/common/package-tree.html   |  148 +
 .../serialization/ByteArrayDeserializer.html    |  310 ++
 .../serialization/ByteArraySerializer.html      |  310 ++
 .../common/serialization/Deserializer.html      |  250 ++
 .../kafka/common/serialization/Serializer.html  |  252 ++
 .../serialization/StringDeserializer.html       |  312 ++
 .../common/serialization/StringSerializer.html  |  312 ++
 .../common/serialization/package-frame.html     |   27 +
 .../common/serialization/package-summary.html   |  168 +
 .../common/serialization/package-tree.html      |  134 +
 090/javadoc/overview-frame.html                 |   24 +
 090/javadoc/overview-summary.html               |  143 +
 090/javadoc/overview-tree.html                  |  228 ++
 090/javadoc/package-list                        |    5 +
 090/javadoc/resources/background.gif            |  Bin 0 -> 2313 bytes
 090/javadoc/resources/tab.gif                   |  Bin 0 -> 291 bytes
 090/javadoc/resources/titlebar.gif              |  Bin 0 -> 10701 bytes
 090/javadoc/resources/titlebar_end.gif          |  Bin 0 -> 849 bytes
 090/javadoc/serialized-form.html                |  325 ++
 090/javadoc/stylesheet.css                      |  474 +++
 090/migration.html                              |   17 +
 090/ops.html                                    |  859 +++++
 090/quickstart.html                             |  172 +
 090/upgrade.html                                |   27 +
 090/uses.html                                   |   39 +
 KEYS                                            |  294 ++
 api-docs/0.6/index.html                         |  117 +
 api-docs/0.6/kafka/Kafka$.html                  |  525 +++
 api-docs/0.6/kafka/api/FetchRequest$.html       |  527 +++
 api-docs/0.6/kafka/api/FetchRequest.html        |  601 ++++
 api-docs/0.6/kafka/api/MultiFetchRequest$.html  |  527 +++
 api-docs/0.6/kafka/api/MultiFetchRequest.html   |  577 ++++
 api-docs/0.6/kafka/api/MultiFetchResponse.html  | 1815 +++++++++++
 .../0.6/kafka/api/MultiProducerRequest$.html    |  527 +++
 .../0.6/kafka/api/MultiProducerRequest.html     |  577 ++++
 api-docs/0.6/kafka/api/OffsetRequest$.html      |  575 ++++
 api-docs/0.6/kafka/api/OffsetRequest.html       |  601 ++++
 api-docs/0.6/kafka/api/ProducerRequest$.html    |  535 +++
 api-docs/0.6/kafka/api/ProducerRequest.html     |  609 ++++
 api-docs/0.6/kafka/api/RequestKeys$.html        |  557 ++++
 api-docs/0.6/kafka/api/package.html             |  154 +
 api-docs/0.6/kafka/cluster/Partition$.html      |  527 +++
 api-docs/0.6/kafka/cluster/Partition.html       |  648 ++++
 api-docs/0.6/kafka/cluster/package.html         |   74 +
 api-docs/0.6/kafka/common/ErrorMapping$.html    |  595 ++++
 .../kafka/common/InvalidConfigException.html    |  672 ++++
 .../common/InvalidMessageSizeException.html     |  672 ++++
 .../kafka/common/InvalidPartitionException.html |  672 ++++
 .../0.6/kafka/common/InvalidTopicException.html |  666 ++++
 .../common/MessageSizeTooLargeException.html    |  666 ++++
 .../kafka/common/OffsetOutOfRangeException.html |  672 ++++
 .../common/UnavailableProducerException.html    |  671 ++++
 api-docs/0.6/kafka/common/UnknownException.html |  664 ++++
 api-docs/0.6/kafka/common/package.html          |  136 +
 .../ConsoleConsumer$$MessageFormatter.html      |  548 ++++
 ...onsoleConsumer$$NewlineMessageFormatter.html |  567 ++++
 .../0.6/kafka/consumer/ConsoleConsumer$.html    |  581 ++++
 api-docs/0.6/kafka/consumer/Consumer$.html      |  557 ++++
 .../0.6/kafka/consumer/ConsumerConfig$.html     |  607 ++++
 api-docs/0.6/kafka/consumer/ConsumerConfig.html |  797 +++++
 .../0.6/kafka/consumer/ConsumerConnector.html   |  583 ++++
 .../0.6/kafka/consumer/ConsumerIterator.html    | 1631 ++++++++++
 .../consumer/ConsumerTimeoutException.html      |  658 ++++
 .../0.6/kafka/consumer/FetcherRunnable.html     |  997 ++++++
 .../0.6/kafka/consumer/KafkaMessageStream.html  | 1812 +++++++++++
 api-docs/0.6/kafka/consumer/SimpleConsumer.html |  629 ++++
 .../kafka/consumer/SimpleConsumerStats$.html    |  535 +++
 .../0.6/kafka/consumer/SimpleConsumerStats.html |  611 ++++
 .../consumer/SimpleConsumerStatsMBean.html      |  584 ++++
 .../ZookeeperConsumerConnectorMBean.html        |  588 ++++
 api-docs/0.6/kafka/consumer/package.html        |  182 ++
 .../consumer/storage/MemoryOffsetStorage.html   |  574 ++++
 .../kafka/consumer/storage/OffsetStorage.html   |  564 ++++
 .../0.6/kafka/consumer/storage/package.html     |   82 +
 .../storage/sql/OracleOffsetStorage.html        |  588 ++++
 .../0.6/kafka/consumer/storage/sql/package.html |   64 +
 .../0.6/kafka/javaapi/MultiFetchResponse.html   |  565 ++++
 api-docs/0.6/kafka/javaapi/ProducerRequest.html |  607 ++++
 .../javaapi/consumer/ConsumerConnector$.html    |   54 +
 .../javaapi/consumer/ConsumerConnector.html     |  558 ++++
 .../kafka/javaapi/consumer/SimpleConsumer.html  |  637 ++++
 .../0.6/kafka/javaapi/consumer/package.html     |   83 +
 .../javaapi/message/ByteBufferMessageSet.html   |  644 ++++
 .../0.6/kafka/javaapi/message/MessageSet.html   |  602 ++++
 api-docs/0.6/kafka/javaapi/message/package.html |   71 +
 api-docs/0.6/kafka/javaapi/package.html         |   98 +
 .../0.6/kafka/javaapi/producer/Producer.html    |  611 ++++
 .../kafka/javaapi/producer/ProducerData.html    |  568 ++++
 .../kafka/javaapi/producer/SyncProducer.html    |  576 ++++
 .../producer/async/CallbackHandler$.html        |   54 +
 .../javaapi/producer/async/CallbackHandler.html |  597 ++++
 .../javaapi/producer/async/EventHandler$.html   |   54 +
 .../javaapi/producer/async/EventHandler.html    |  558 ++++
 .../kafka/javaapi/producer/async/package.html   |   90 +
 .../0.6/kafka/javaapi/producer/package.html     |   90 +
 api-docs/0.6/kafka/log/LogStats.html            |  609 ++++
 api-docs/0.6/kafka/log/LogStatsMBean.html       |  584 ++++
 api-docs/0.6/kafka/log/package.html             |   71 +
 .../0.6/kafka/message/ByteBufferMessageSet.html | 1910 +++++++++++
 api-docs/0.6/kafka/message/FileMessageSet.html  | 2069 ++++++++++++
 .../kafka/message/InvalidMessageException.html  |  664 ++++
 api-docs/0.6/kafka/message/LogFlushStats$.html  |  527 +++
 api-docs/0.6/kafka/message/LogFlushStats.html   |  590 ++++
 .../0.6/kafka/message/LogFlushStatsMBean.html   |  571 ++++
 api-docs/0.6/kafka/message/Message$.html        |  581 ++++
 api-docs/0.6/kafka/message/Message.html         |  610 ++++
 .../kafka/message/MessageLengthException.html   |  665 ++++
 api-docs/0.6/kafka/message/MessageSet$.html     |  595 ++++
 api-docs/0.6/kafka/message/MessageSet.html      | 1882 +++++++++++
 api-docs/0.6/kafka/message/package.html         |  152 +
 .../0.6/kafka/network/ConnectionConfig.html     |  575 ++++
 .../kafka/network/InvalidRequestException.html  |  674 ++++
 api-docs/0.6/kafka/network/MultiSend.html       |  649 ++++
 .../0.6/kafka/network/SocketServerStats.html    |  738 +++++
 .../kafka/network/SocketServerStatsMBean.html   |  649 ++++
 api-docs/0.6/kafka/network/package.html         |   96 +
 api-docs/0.6/kafka/package.html                 |  167 +
 .../kafka/producer/DefaultStringEncoder.html    |  541 ++++
 .../0.6/kafka/producer/KafkaLog4jAppender.html  |  860 +++++
 api-docs/0.6/kafka/producer/Partitioner.html    |  538 +++
 api-docs/0.6/kafka/producer/Producer.html       |  614 ++++
 api-docs/0.6/kafka/producer/ProducerConfig.html |  876 +++++
 api-docs/0.6/kafka/producer/ProducerData.html   |  573 ++++
 .../producer/ProducerPool$ProducerPoolData.html |  552 ++++
 api-docs/0.6/kafka/producer/ProducerPool.html   |  631 ++++
 api-docs/0.6/kafka/producer/SyncProducer$.html  |  527 +++
 api-docs/0.6/kafka/producer/SyncProducer.html   |  580 ++++
 .../0.6/kafka/producer/SyncProducerConfig.html  |  645 ++++
 .../producer/SyncProducerConfigShared.html      |  580 ++++
 .../0.6/kafka/producer/SyncProducerStats$.html  |  527 +++
 .../0.6/kafka/producer/SyncProducerStats.html   |  590 ++++
 .../kafka/producer/SyncProducerStatsMBean.html  |  571 ++++
 .../kafka/producer/async/AsyncProducer$.html    |  543 ++++
 .../producer/async/AsyncProducerConfig.html     |  786 +++++
 .../async/AsyncProducerConfigShared.html        |  652 ++++
 ...syncProducerStats$ThroughputTimerThread.html |  541 ++++
 .../producer/async/AsyncProducerStats.html      |  634 ++++
 .../producer/async/AsyncProducerStatsMBean.html |  558 ++++
 .../kafka/producer/async/CallbackHandler.html   |  648 ++++
 .../0.6/kafka/producer/async/EventHandler.html  |  578 ++++
 .../producer/async/MissingConfigException.html  |  666 ++++
 .../producer/async/QueueClosedException.html    |  666 ++++
 .../producer/async/QueueFullException.html      |  666 ++++
 .../0.6/kafka/producer/async/QueueItem.html     |  552 ++++
 api-docs/0.6/kafka/producer/async/package.html  |  147 +
 api-docs/0.6/kafka/producer/package.html        |  178 +
 api-docs/0.6/kafka/serializer/Decoder.html      |  532 +++
 .../0.6/kafka/serializer/DefaultDecoder.html    |  541 ++++
 .../0.6/kafka/serializer/DefaultEncoder.html    |  541 ++++
 api-docs/0.6/kafka/serializer/Encoder.html      |  532 +++
 .../0.6/kafka/serializer/StringDecoder.html     |  541 ++++
 .../0.6/kafka/serializer/StringEncoder.html     |  541 ++++
 api-docs/0.6/kafka/serializer/package.html      |  103 +
 api-docs/0.6/kafka/server/EmbeddedConsumer.html |  544 ++++
 api-docs/0.6/kafka/server/KafkaConfig.html      |  766 +++++
 api-docs/0.6/kafka/server/KafkaServer.html      |  637 ++++
 .../0.6/kafka/server/KafkaServerStartable.html  |  576 ++++
 .../KafkaZooKeeper$SessionExpireListener.html   |  568 ++++
 api-docs/0.6/kafka/server/KafkaZooKeeper.html   |  619 ++++
 api-docs/0.6/kafka/server/package.html          |   96 +
 .../0.6/kafka/tools/ConsumerPerformance$.html   |  531 +++
 api-docs/0.6/kafka/tools/ConsumerShell$.html    |  531 +++
 api-docs/0.6/kafka/tools/GetOffsetShell$.html   |  525 +++
 .../0.6/kafka/tools/ProducerPerformance$.html   |  531 +++
 api-docs/0.6/kafka/tools/ProducerShell$.html    |  531 +++
 .../0.6/kafka/tools/ShutdownableThread.html     |  964 ++++++
 .../kafka/tools/SimpleConsumerPerformance$.html |  531 +++
 .../0.6/kafka/tools/SimpleConsumerShell$.html   |  531 +++
 api-docs/0.6/kafka/tools/ZKConsumerThread.html  |  965 ++++++
 api-docs/0.6/kafka/tools/package.html           |  141 +
 api-docs/0.6/kafka/utils/DONE$.html             |  517 +++
 api-docs/0.6/kafka/utils/DelayedItem.html       |  593 ++++
 api-docs/0.6/kafka/utils/DumpLogSegments$.html  |  525 +++
 api-docs/0.6/kafka/utils/FAILED$.html           |  517 +++
 api-docs/0.6/kafka/utils/IteratorTemplate.html  | 1627 ++++++++++
 api-docs/0.6/kafka/utils/KafkaScheduler.html    |  567 ++++
 api-docs/0.6/kafka/utils/MockTime.html          |  583 ++++
 api-docs/0.6/kafka/utils/NOT_READY$.html        |  517 +++
 api-docs/0.6/kafka/utils/Pool.html              | 1871 +++++++++++
 api-docs/0.6/kafka/utils/READY$.html            |  517 +++
 api-docs/0.6/kafka/utils/Range.html             |  591 ++++
 .../0.6/kafka/utils/SnapshotStats$Stats.html    |  608 ++++
 api-docs/0.6/kafka/utils/SnapshotStats.html     |  597 ++++
 api-docs/0.6/kafka/utils/State.html             |  530 +++
 api-docs/0.6/kafka/utils/StringSerializer$.html |  543 ++++
 api-docs/0.6/kafka/utils/SystemTime$.html       |  562 ++++
 api-docs/0.6/kafka/utils/Throttler$.html        |  535 +++
 api-docs/0.6/kafka/utils/Throttler.html         |  594 ++++
 api-docs/0.6/kafka/utils/Time$.html             |  621 ++++
 api-docs/0.6/kafka/utils/Time.html              |  566 ++++
 .../0.6/kafka/utils/UpdateOffsetsInZK$.html     |  547 ++++
 api-docs/0.6/kafka/utils/Utils$.html            | 1214 +++++++
 api-docs/0.6/kafka/utils/ZKConfig.html          |  594 ++++
 api-docs/0.6/kafka/utils/ZKGroupDirs.html       |  562 ++++
 api-docs/0.6/kafka/utils/ZKGroupTopicDirs.html  |  583 ++++
 api-docs/0.6/kafka/utils/ZkUtils$.html          |  718 ++++
 api-docs/0.6/kafka/utils/immutable.html         |  534 +++
 api-docs/0.6/kafka/utils/nonthreadsafe.html     |  534 +++
 api-docs/0.6/kafka/utils/package.html           |  291 ++
 api-docs/0.6/kafka/utils/threadsafe.html        |  535 +++
 api-docs/0.6/lib/class.png                      |  Bin 0 -> 516 bytes
 api-docs/0.6/lib/class_big.png                  |  Bin 0 -> 3183 bytes
 api-docs/0.6/lib/filter_box_left.png            |  Bin 0 -> 3519 bytes
 api-docs/0.6/lib/filter_box_right.png           |  Bin 0 -> 2977 bytes
 api-docs/0.6/lib/index.css                      |  180 ++
 api-docs/0.6/lib/index.js                       |  277 ++
 api-docs/0.6/lib/jquery.js                      |  154 +
 api-docs/0.6/lib/object.png                     |  Bin 0 -> 518 bytes
 api-docs/0.6/lib/object_big.png                 |  Bin 0 -> 3318 bytes
 api-docs/0.6/lib/package.png                    |  Bin 0 -> 488 bytes
 api-docs/0.6/lib/package_big.png                |  Bin 0 -> 3183 bytes
 api-docs/0.6/lib/remove.png                     |  Bin 0 -> 3186 bytes
 api-docs/0.6/lib/scheduler.js                   |   71 +
 api-docs/0.6/lib/template.css                   |  416 +++
 api-docs/0.6/lib/template.js                    |  130 +
 api-docs/0.6/lib/tools.tooltip.js               |   14 +
 api-docs/0.6/lib/trait.png                      |  Bin 0 -> 494 bytes
 api-docs/0.6/lib/trait_big.png                  |  Bin 0 -> 3088 bytes
 api-docs/0.6/package.html                       |   63 +
 code.html                                       |   22 +
 coding-guide.html                               |  103 +
 committers.html                                 |  120 +
 contact.html                                    |   37 +
 contributing.html                               |   48 +
 diagrams/consumer-groups.graffle                | 1255 +++++++
 diagrams/kafka_multidc_complex.graffle          | 1304 ++++++++
 diagrams/kafka_multidc_simple.graffle           |  Bin 0 -> 2675 bytes
 diagrams/log_anatomy.graffle                    | 2758 ++++++++++++++++
 diagrams/log_compaction.graffle                 | 3052 ++++++++++++++++++
 diagrams/mirror-maker.graffle                   |  655 ++++
 diagrams/producer_consumer.graffle              | 1486 +++++++++
 documentation.html                              |    2 +
 downloads.html                                  |  185 ++
 images/apache_feather.gif                       |  Bin 0 -> 4128 bytes
 images/consumer-groups.png                      |  Bin 0 -> 26820 bytes
 images/david.jpg                                |  Bin 0 -> 6099 bytes
 images/feather-small.png                        |  Bin 0 -> 31327 bytes
 images/guozhang.jpg                             |  Bin 0 -> 12451 bytes
 images/gwenshap.jpg                             |  Bin 0 -> 7172 bytes
 images/jakob.jpg                                |  Bin 0 -> 12012 bytes
 images/jay.jpg                                  |  Bin 0 -> 56077 bytes
 images/joe.jpg                                  |  Bin 0 -> 26132 bytes
 images/joel.jpg                                 |  Bin 0 -> 28469 bytes
 images/jun.jpg                                  |  Bin 0 -> 6869 bytes
 images/kafka_log.png                            |  Bin 0 -> 134321 bytes
 images/kafka_logo.png                           |  Bin 0 -> 10648 bytes
 images/kafka_multidc.png                        |  Bin 0 -> 33959 bytes
 images/kafka_multidc_complex.png                |  Bin 0 -> 38559 bytes
 images/log_anatomy.png                          |  Bin 0 -> 19579 bytes
 images/log_cleaner_anatomy.png                  |  Bin 0 -> 18638 bytes
 images/log_compaction.png                       |  Bin 0 -> 41414 bytes
 images/mirror-maker.png                         |  Bin 0 -> 17054 bytes
 images/neha.jpg                                 |  Bin 0 -> 6501 bytes
 images/prashanth.jpg                            |  Bin 0 -> 36356 bytes
 images/producer_consumer.png                    |  Bin 0 -> 8691 bytes
 images/sriram.jpg                               |  Bin 0 -> 8628 bytes
 images/tracking_high_level.png                  |  Bin 0 -> 82759 bytes
 includes/footer.html                            |    9 +
 includes/header.html                            |   76 +
 index.html                                      |   21 +
 logos/kafka-logo-no-text.png                    |  Bin 0 -> 13592 bytes
 logos/kafka-logo-tall.png                       |  Bin 0 -> 13824 bytes
 logos/kafka-logo-wide.png                       |  Bin 0 -> 13675 bytes
 logos/originals/eps/ICON - Black on White.eps   |  Bin 0 -> 325430 bytes
 logos/originals/eps/ICON - White on Black.eps   |  Bin 0 -> 325830 bytes
 logos/originals/eps/TALL - Black on White.eps   |  Bin 0 -> 319362 bytes
 logos/originals/eps/TALL - White on Black.eps   |  Bin 0 -> 320238 bytes
 logos/originals/eps/WIDE - Black on White.eps   |  Bin 0 -> 317278 bytes
 logos/originals/eps/WIDE - White on Black.eps   |  Bin 0 -> 318410 bytes
 logos/originals/jpg/ICON - Black on White.jpg   |  Bin 0 -> 62663 bytes
 logos/originals/jpg/ICON - White on Black.jpg   |  Bin 0 -> 62667 bytes
 logos/originals/jpg/TALL - Black on White.jpg   |  Bin 0 -> 71335 bytes
 logos/originals/jpg/TALL - White on Black.jpg   |  Bin 0 -> 71446 bytes
 logos/originals/jpg/WIDE - Black on White.jpg   |  Bin 0 -> 68524 bytes
 logos/originals/jpg/WIDE - White on Black.jpg   |  Bin 0 -> 68526 bytes
 .../png/ICON - Black on Transparent.png         |  Bin 0 -> 13592 bytes
 .../png/ICON - White on Transparent.png         |  Bin 0 -> 14520 bytes
 .../png/TALL - Black on Transparent.png         |  Bin 0 -> 13824 bytes
 .../png/TALL - White on Transparent.png         |  Bin 0 -> 14977 bytes
 .../png/WIDE - Black on Transparent.png         |  Bin 0 -> 13675 bytes
 .../png/WIDE - White on Transparent.png         |  Bin 0 -> 15067 bytes
 logos/originals/psd/ICON - Black on White.psd   |  Bin 0 -> 141959 bytes
 logos/originals/psd/ICON - White on Black.psd   |  Bin 0 -> 146009 bytes
 logos/originals/psd/TALL - Black on White.psd   |  Bin 0 -> 153240 bytes
 logos/originals/psd/TALL - White on Black.psd   |  Bin 0 -> 160162 bytes
 logos/originals/psd/WIDE - Black on White.psd   |  Bin 0 -> 148848 bytes
 logos/originals/psd/WIDE - White on Black.psd   |  Bin 0 -> 160698 bytes
 performance.html                                |    5 +
 styles.css                                      |  141 +
 469 files changed, 167487 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/.htaccess
----------------------------------------------------------------------
diff --git a/.htaccess b/.htaccess
new file mode 100644
index 0000000..e097711
--- /dev/null
+++ b/.htaccess
@@ -0,0 +1,6 @@
+Options +Includes
+AddType text/html .html
+AddHandler server-parsed .html
+Redirect 301 /design.html /documentation.html#design
+Redirect 301 /quickstart.html /documentation.html#quickstart
+Redirect 301 /uses.html /documentation.html#uses
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/configuration.html
----------------------------------------------------------------------
diff --git a/07/configuration.html b/07/configuration.html
new file mode 100644
index 0000000..4420560
--- /dev/null
+++ b/07/configuration.html
@@ -0,0 +1,355 @@
+<!--#include virtual="../includes/header.html" -->
+
+<h2> Configuration </h2>
+
+<h3> Important configuration properties for Kafka broker: </h3>
+
+<p>More details about server configuration can be found in the Scala class <code>kafka.server.KafkaConfig</code>. An illustrative <code>server.properties</code> fragment follows the table below.</p>
+
+<table class="data-table">
+<tr>
+        <th>name</th>
+        <th>default</th>
+        <th>description</th>
+</tr>
+<tr>
+    <td><code>brokerid</code></td>
+    <td>none</td>
+    <td>Each broker is uniquely identified by an id. This id serves as the broker's "name", and allows the broker to be moved to a different host/port without confusing consumers.</td>
+</tr>
+<tr>
+    <td><code>enable.zookeeper</code></td>
+    <td>true</td>
+    <td>enable zookeeper registration in the server</td>
+</tr>
+<tr>
+     <td><code>log.flush.interval</code></td>
+     <td>500</td>
+     <td>Controls the number of messages accumulated in each topic (partition) before the data is flushed to disk and made available to consumers.</td>  
+</tr>
+<tr>
+    <td><code>log.default.flush.scheduler.interval.ms</code></td>
+    <td>3000</td>
+    <td>Controls the interval at which logs are checked to see if they need to be flushed to disk. A background thread will run at a frequency specified by this parameter and will check each log to see if it has exceeded its flush.interval time, and if so it will flush it.</td>
+</tr>
+<tr>
+    <td><code>log.default.flush.interval.ms</code> </td>
+    <td>log.default.flush.scheduler.interval.ms</td>
+    <td>Controls the maximum time that a message in any topic is kept in memory before being flushed to disk. The value only makes sense if it's a multiple of <code>log.default.flush.scheduler.interval.ms</code></td>
+</tr>
+<tr>
+    <td><code>topic.flush.intervals.ms</code></td>
+    <td>none</td>
+    <td>Per-topic overrides for <code>log.default.flush.interval.ms</code>. Controls the maximum time that a message in selected topics is kept in memory before being flushed to disk. The per-topic value only makes sense if it's a multiple of <code>log.default.flush.scheduler.interval.ms</code>. E.g., topic1:1000,topic2:2000</td>
+</tr>
+<tr>
+    <td><code>log.retention.hours</code></td>
+    <td>168</td>
+    <td>Controls how long a log file is retained.</td>
+</tr>
+<tr>
+    <td><code>topic.log.retention.hours</code></td>
+    <td>none</td>
+    <td>Topic-specific retention time that overrides <code>log.retention.hours</code>, e.g., topic1:10,topic2:20</td>
+</tr>
+<tr>
+    <td><code>log.retention.size</code></td>
+    <td>-1</td>
+    <td>the maximum size of the log before deleting it. This controls how large a log is allowed to grow</td>
+</tr>
+<tr>
+    <td><code>log.cleanup.interval.mins</code></td>
+    <td>10</td>
+    <td>Controls how often the log cleaner checks logs eligible for deletion. A log file is eligible for deletion if it hasn't been modified for <code>log.retention.hours</code> hours.</td>
+</tr>
+<tr>
+    <td><code>log.dir</code></td>
+    <td>none</td>
+    <td>Specifies the root directory in which all log data is kept.</td>
+</tr>
+<tr>
+    <td><code>log.file.size</code></td>
+    <td>1*1024*1024*1024</td>
+    <td>Controls the maximum size of a single log file.</td>
+</tr>
+<tr>
+    <td><code>log.roll.hours</code></td>
+    <td>24 * 7</td>
+    <td>The maximum time before a new log segment is rolled out</td>
+</tr>
+<tr>
+    <td><code>max.socket.request.bytes</code></td>
+    <td>104857600</td>
+    <td>the maximum number of bytes in a socket request</td>
+</tr>
+<tr>
+    <td><code>monitoring.period.secs</code></td>
+    <td>600</td>
+    <td>the interval in which to measure performance statistics</td>
+</tr>
+<tr>
+    <td><code>num.threads</code></td>
+    <td>Runtime.getRuntime().availableProcessors</td>
+    <td>Controls the number of worker threads in the broker to serve requests.</td>
+</tr>
+<tr>
+    <td><code>num.partitions</code></td>
+    <td>1</td>
+    <td>Specifies the default number of partitions per topic.</td>
+</tr>
+<tr>
+    <td><code>socket.send.buffer</code></td>
+    <td>102400</td>
+    <td>the SO_SNDBUF buffer of the socket server's sockets</td>
+</tr>
+<tr>
+    <td><code>socket.receive.buffer</code></td>
+    <td>102400</td>
+    <td>the SO_RCVBUF buffer of the socket server's sockets</td>
+</tr>
+<tr>
+    <td><code>topic.partition.count.map</code></td>
+    <td>none</td>
+    <td>Override parameter to control the number of partitions for selected topics. E.g., topic1:10,topic2:20</td>
+</tr>
+<tr>
+    <td><code>zk.connect</code></td>
+    <td>localhost:2182/kafka</td>
+    <td>Specifies the zookeeper connection string in the form hostname:port/chroot. Here the chroot is a base directory which is prepended to all path operations (this effectively namespaces all kafka znodes to allow sharing with other applications on the same zookeeper cluster)</td>
+</tr>
+<tr>
+    <td><code>zk.connectiontimeout.ms</code> </td>
+    <td>6000</td>
+    <td>Specifies the max time that the client waits to establish a connection to zookeeper.</td>
+</tr>
+<tr>
+    <td><code>zk.sessiontimeout.ms</code> </td>
+    <td>6000</td>
+    <td>The zookeeper session timeout.</td>
+</tr>
+<tr>
+    <td><code>zk.synctime.ms</code></td>
+    <td>2000</td>
+    <td>The maximum time a ZK follower can lag behind a ZK leader</td>
+</tr>
+</table>
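+
+<p>As a rough, unofficial illustration of the settings above, a <code>server.properties</code> fragment might look like the following (the values shown are simply the documented defaults or arbitrary examples, not recommendations):</p>
+
+<pre>
+# Unique id of this broker
+brokerid=0
+# Root directory for all log data
+log.dir=/tmp/kafka-logs
+# Flush each log to disk after this many messages
+log.flush.interval=500
+# Keep log segments for one week
+log.retention.hours=168
+# Default number of partitions per topic
+num.partitions=1
+# ZooKeeper connection string (hostname:port/chroot)
+zk.connect=localhost:2182/kafka
+</pre>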
+
+
+<h3> Important configuration properties for the high-level consumer: </h3>
+
+<p>More details about consumer configuration can be found in the Scala class <code>kafka.consumer.ConsumerConfig</code>. An illustrative consumer properties fragment follows the table below.</p>
+
+<table class="data-table">
+<tr>
+        <th>property</th>
+        <th>default</th>
+        <th>description</th>
+</tr>
+<tr>
+    <td><code>groupid</code></td>
+    <td>groupid</td>
+    <td>is a string that uniquely identifies a set of consumers within the same consumer group. </td>
+</tr>
+<tr>
+    <td><code>socket.timeout.ms</code></td>
+    <td>30000</td>
+    <td>controls the socket timeout for network requests </td>
+</tr>
+<tr>
+    <td><code>socket.buffersize</code></td>
+    <td>64*1024</td>
+    <td>controls the socket receive buffer for network requests</td>
+</tr>
+<tr>
+    <td><code>fetch.size</code></td>
+    <td>300 * 1024</td>
+    <td>controls the number of bytes of messages to attempt to fetch in one request to the Kafka server</td>
+</tr>
+<tr>
+    <td><code>backoff.increment.ms</code></td>
+    <td>1000</td>
+    <td>This parameter avoids repeatedly polling a broker node that has no new data. We back off for this time period every time we get an empty set from the broker.</td>
+</tr>
+<tr>
+    <td><code>queuedchunks.max</code></td>
+    <td>100</td>
+    <td>The high-level consumer buffers the messages fetched from the server in internal blocking queues. This parameter controls the size of those queues.</td>
+</tr>
+<tr>
+    <td><code>autocommit.enable</code></td>
+    <td>true</td>
+    <td>if set to true, the consumer periodically commits to zookeeper the latest consumed offset of each partition. </td>
+</tr>
+<tr>
+    <td><code>autocommit.interval.ms</code> </td>
+    <td>10000</td>
+    <td>is the frequency that the consumed offsets are committed to zookeeper. </td>
+</tr>
+<tr>
+    <td><code>autooffset.reset</code></td>
+    <td>smallest</td>
+    <td><ul>
+ <li> <code>smallest</code>: automatically reset the offset to the smallest offset available on the broker.</li>
+ <li> <code>largest</code> : automatically reset the offset to the largest offset available on the broker.</li>
+ <li> <code>anything else</code>: throw an exception to the consumer.</li>
+</ul>
+</td>
+</tr>
+<tr>
+    <td><code>consumer.timeout.ms</code></td>
+    <td>-1</td>
+    <td>By default, this value is -1 and a consumer blocks indefinitely if no new message is available for consumption. By setting the value to a positive integer, a timeout exception is thrown to the consumer if no message is available for consumption after the specified timeout value.</td>
+</tr>
+<tr>
+    <td><code>rebalance.retries.max</code> </td>
+    <td>4</td>
+    <td>max number of retries during rebalance</td>
+</tr>
+<tr>
+    <td><code>mirror.topics.whitelist</code></td>
+    <td>""</td>
+    <td>Whitelist of topics for this mirror's embedded consumer to consume. At most one of whitelist/blacklist may be specified.</td>
+</tr>
+<tr>
+    <td><code>mirror.topics.blacklist</code></td>
+    <td>""</td>
+    <td>Topics to skip mirroring. At most one of whitelist/blacklist may be specified</td>
+</tr>
+<tr>
+    <td><code>mirror.consumer.numthreads</code></td>
+    <td>4</td>
+    <td>The default number of threads to use per topic for the mirroring consumer</td>
+</tr>
+</table>
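+
+<p>As a rough, unofficial illustration, a consumer properties fragment using the settings above might look like the following (the values are examples only; a real consumer also needs its ZooKeeper connection settings):</p>
+
+<pre>
+# Consumer group this process belongs to
+groupid=my-group
+# Bytes of messages to attempt to fetch per request (300 * 1024)
+fetch.size=307200
+# Periodically commit consumed offsets to ZooKeeper
+autocommit.enable=true
+autocommit.interval.ms=10000
+# Start from the smallest available offset when none is stored
+autooffset.reset=smallest
+</pre>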
+
+
+<h3> Important configuration properties for the producer: </h3>
+
+<p>More details about producer configuration can be found in the Scala class <code>kafka.producer.ProducerConfig</code>. An illustrative producer configuration follows the table below.</p>
+
+<table class="data-table">
+<tr>
+        <th>property</th>
+        <th>default</th>
+        <th>description</th>
+</tr>
+<tr>
+    <td><code>serializer.class</code></td>
+    <td>kafka.serializer.DefaultEncoder. This is a no-op encoder. The serialization of data to Message should be handled outside the Producer</td>
+    <td>class that implements the <code>kafka.serializer.Encoder&lt;T&gt;</code> interface, used to encode data of type T into a Kafka message </td>
+</tr>
+<tr>
+    <td><code>partitioner.class</code></td>
+    <td><code>kafka.producer.DefaultPartitioner&lt;T&gt;</code> - uses the partitioning strategy <code>hash(key)%num_partitions</code>. If key is null, then it picks a random partition. </td>
+    <td>class that implements the <code>kafka.producer.Partitioner&lt;K&gt;</code>, used to supply a custom partitioning strategy on the message key (of type K) that is specified through the <code>ProducerData&lt;K, T&gt;</code> object in the <code>kafka.producer.Producer&lt;T&gt;</code> send API</td>
+</tr>
+<tr>
+    <td><code>producer.type</code></td>
+    <td>sync</td>
+    <td>This parameter specifies whether the messages are sent asynchronously. Valid values are <ul><li><code>async</code> for asynchronous batching send through <code>kafka.producer.AsyncProducer</code></li><li><code>sync</code> for synchronous send through <code>kafka.producer.SyncProducer</code></li></ul></td>
+</tr>
+<tr>
+    <td><code>broker.list</code></td>
+    <td>null. Either this parameter or zk.connect needs to be specified by the user.</td>
+    <td>For bypassing zookeeper-based auto partition discovery, use this config to pass in static broker and per-broker partition information. Format: <code>brokerid1:host1:port1, brokerid2:host2:port2</code>.
+	If you use this option, the <code>partitioner.class</code> will be ignored and each producer request will be routed to a random broker partition.</td>
+</tr>
+<tr>
+    <td><code>zk.connect</code></td>
+    <td>null. Either this parameter or broker.list needs to be specified by the user</td>
+    <td>For using the zookeeper based automatic broker discovery, use this config to pass in the zookeeper connection url to the zookeeper cluster where the Kafka brokers are registered.</td>
+</tr>
+<tr>
+    <td><code>buffer.size</code></td>
+    <td>102400</td>
+    <td>the socket buffer size, in bytes</td>
+</tr>
+<tr>
+    <td><code>connect.timeout.ms</code></td>
+    <td>5000</td>
+    <td>the maximum time spent by <code>kafka.producer.SyncProducer</code> trying to connect to the kafka broker. Once it elapses, the producer throws an ERROR and stops.</td>
+</tr>
+<tr>
+    <td><code>socket.timeout.ms</code></td>
+    <td>30000</td>
+    <td>The socket timeout in milliseconds</td>
+</tr>
+<tr>
+    <td><code>reconnect.interval</code> </td>
+    <td>30000</td>
+    <td>the number of produce requests after which <code>kafka.producer.SyncProducer</code> tears down the socket connection to the broker and establishes it again; this and the following property are mainly used when the producer connects to the brokers through a VIP in a load balancer; they give the producer a chance to pick up the new broker periodically</td>
+</tr>
+<tr>
+    <td><code>reconnect.time.interval.ms</code> </td>
+    <td>10 * 1000 * 1000</td>
+    <td>the amount of time after which <code>kafka.producer.SyncProducer</code> tears down the socket connection to the broker and establishes it again; negative reconnect time interval means disabling this time-based reconnect feature</td>
+</tr>
+<tr>
+    <td><code>max.message.size</code> </td>
+    <td>1000000</td>
+    <td>the maximum number of bytes that the kafka.producer.SyncProducer can send as a single message payload</td>
+</tr>
+<tr>
+    <td><code>compression.codec</code></td>
+    <td>0 (No compression)</td>
+    <td>This parameter allows you to specify the compression codec for all data generated by this producer.</td>
+</tr>
+<tr>
+    <td><code>compressed.topics</code></td>
+    <td>null</td>
+    <td>This parameter allows you to set whether compression should be turned on for particular topics. If the compression codec is anything other than NoCompressionCodec, enable compression only for specified topics if any. If the list of compressed topics is empty, then enable the specified compression codec for all topics. If the compression codec is NoCompressionCodec, compression is disabled for all topics. </td>
+</tr>
+<tr>
+    <td><code>zk.read.num.retries</code></td>
+    <td>3</td>
+    <td>The producer using the zookeeper software load balancer maintains a ZK cache that gets updated by the zookeeper watcher listeners. During some events like a broker bounce, the producer ZK cache can get into an inconsistent state, for a small time period. In this time period, it could end up picking a broker partition that is unavailable. When this happens, the ZK cache needs to be updated. This parameter specifies the number of times the producer attempts to refresh this ZK cache.</td>
+</tr>
+<tr>
+	<td colspan="3" style="text-align: center">
+	Options for Asynchronous Producers (<code>producer.type=async</code>)
+	</td>
+</tr>
+<tr>
+    <td><code>queue.time</code></td>
+    <td>5000</td>
+    <td>maximum time, in milliseconds, for buffering data on the producer queue. After it elapses, the buffered data in the producer queue is dispatched to the <code>event.handler</code>.</td>
+</tr>
+<tr>
+    <td><code>queue.size</code></td>
+    <td>10000</td>
+    <td>the maximum size of the blocking queue for buffering on the <code> kafka.producer.AsyncProducer</code></td>
+</tr>
+<tr>
+    <td><code>batch.size</code> </td>
+    <td>200</td>
+    <td>the number of messages batched at the producer, before being dispatched to the <code>event.handler</code></td>
+</tr>
+<tr>
+    <td><code>event.handler</code></td>
+    <td><code>kafka.producer.async.EventHandler&lt;T&gt;</code></td>
+    <td>the class that implements <code>kafka.producer.async.EventHandler&lt;T&gt;</code> used to dispatch a batch of produce requests, using an instance of <code>kafka.producer.SyncProducer</code>. 
+</td>
+</tr>
+<tr>
+    <td><code>event.handler.props</code></td>
+    <td>null</td>
+    <td>the <code>java.util.Properties()</code> object used to initialize the custom <code>event.handler</code> through its <code>init()</code> API</td>
+</tr>
+<tr>
+    <td><code>callback.handler</code></td>
+    <td><code>null</code></td>
+    <td>the class that implements <code>kafka.producer.async.CallbackHandler&lt;T&gt;</code> used to inject callbacks at various stages of the <code>kafka.producer.AsyncProducer</code> pipeline.
+</td>
+</tr>
+<tr>
+    <td><code>callback.handler.props</code></td>
+    <td>null</td>
+    <td>the <code>java.util.Properties()</code> object used to initialize the custom <code>callback.handler</code> through its <code>init()</code> API</td>
+</tr>
+</table>
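+
+<p>As a rough, unofficial illustration, a producer configuration combining the settings above might look like the following properties (the values are examples only):</p>
+
+<pre>
+# Discover brokers through ZooKeeper
+zk.connect=127.0.0.1:2181
+# Encoder used to turn objects into Kafka messages
+serializer.class=kafka.serializer.StringEncoder
+# Send synchronously; set to async to batch through the AsyncProducer
+producer.type=sync
+# The two settings below apply only when producer.type=async
+queue.time=5000
+batch.size=200
+</pre>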
+
+
+<!--#include virtual="../includes/footer.html" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/documentation.html
----------------------------------------------------------------------
diff --git a/07/documentation.html b/07/documentation.html
new file mode 100644
index 0000000..da4a9e6
--- /dev/null
+++ b/07/documentation.html
@@ -0,0 +1,13 @@
+<!--#include virtual="../includes/header.html" -->
+
+<h1>Kafka 0.7 Documentation</h1>
+
+<ul>
+	<li><a href="/07/quickstart.html">Quickstart</a> &ndash; Get up and running quickly.
+	<li><a href="/07/configuration.html">Configuration</a> &ndash; All the knobs.
+	<li><a href="/07/performance.html">Performance</a> &ndash; Some performance results.
+	<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations">Operations</a> &ndash; Notes on running the system.
+	<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs">API Docs</a> &ndash; Scaladoc for the api.
+</ul>
+
+<!--#include virtual="../includes/footer.html" -->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/images/dataSize.jpg
----------------------------------------------------------------------
diff --git a/07/images/dataSize.jpg b/07/images/dataSize.jpg
new file mode 100644
index 0000000..8fa4ad5
Binary files /dev/null and b/07/images/dataSize.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/images/onlyBatchSize.jpg
----------------------------------------------------------------------
diff --git a/07/images/onlyBatchSize.jpg b/07/images/onlyBatchSize.jpg
new file mode 100644
index 0000000..171591c
Binary files /dev/null and b/07/images/onlyBatchSize.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/images/onlyConsumer.jpg
----------------------------------------------------------------------
diff --git a/07/images/onlyConsumer.jpg b/07/images/onlyConsumer.jpg
new file mode 100644
index 0000000..2ed4515
Binary files /dev/null and b/07/images/onlyConsumer.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/images/onlyProducer.jpg
----------------------------------------------------------------------
diff --git a/07/images/onlyProducer.jpg b/07/images/onlyProducer.jpg
new file mode 100644
index 0000000..3bd3a1e
Binary files /dev/null and b/07/images/onlyProducer.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/images/onlyTopic.jpg
----------------------------------------------------------------------
diff --git a/07/images/onlyTopic.jpg b/07/images/onlyTopic.jpg
new file mode 100644
index 0000000..542d886
Binary files /dev/null and b/07/images/onlyTopic.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/performance.html
----------------------------------------------------------------------
diff --git a/07/performance.html b/07/performance.html
new file mode 100644
index 0000000..78fc254
--- /dev/null
+++ b/07/performance.html
@@ -0,0 +1,83 @@
+<!--#include virtual="../includes/header.html" -->
+
+<h2>Performance Results</h2>
+<p>The following tests give some basic information on Kafka throughput as the number of topics, consumers, and producers and the overall data size vary. Since Kafka nodes are independent, these tests are run with a single producer, consumer, and broker machine. Results can be extrapolated for a larger cluster.
+</p>
+
+<p>
+We run producer and consumer tests separately to isolate their performance. For the consumer, these tests measure <i>cold</i> performance, that is, consuming a large uncached backlog of messages. Simultaneous production and consumption tends to help performance since the cache is hot.
+</p>
+
+<p>We used the following settings for some of the parameters:</p>
+
+<ul>
+<li>message size = 200 bytes</li>
+<li>batch size = 200 messages</li>
+<li>fetch size = 1MB</li>
+<li>flush interval = 600 messages</li>
+</ul>
+
+In our performance tests, we run experiments to answer the questions below.
+<h3>What is the producer throughput as a function of batch size?</h3>
+<p>We can push about 50MB/sec to the system. However, this number changes with the batch size. The graphs below show the relation between these two quantities.</p>
+<p><span style="" class="image-wrap"><img border="0" src="images/onlyBatchSize.jpg" width="500" height="300"/></span><br /></p>
+
+<h3>What is the consumer throughput?</h3>
+<p>According to our experiments, we can consume about 100MB/sec from a broker, and the total does not seem to change much as we increase
+the number of consumer threads.</p>
+<p><span style="" class="image-wrap"><img border="0" src="images/onlyConsumer.jpg" width="500" height="300"/></span> </p>
+
+<h3> Does data size affect our performance? </h3>
+<p><span style="" class="image-wrap"><img border="0" src="images/dataSize.jpg" width="500" height="300"/></span><br /></p>
+
+<h3>What is the effect of the number of producer threads on producer throughput?</h3>
+<p>We are able to max out production with only a few threads.</p>
+<p><span style="" class="image-wrap"><img border="0" src="images/onlyProducer.jpg" width="500" height="300"/></span><br /></p>
+
+<h3> What is the effect of number of topics on producer throughput?</h3>
+<p>Based on our experiments, the number of topics has a minimal effect on the total data produced.
+The graph below shows an experiment where we used 40 producers and varied the number of topics.</p>
+
+<p><span style="" class="image-wrap"><img border="0" src="images/onlyTopic.jpg" width="500" height="300"/></span><br /></p>
+
+<h2>How to Run a Performance Test</h2>
+
+<p>The performance-related code is under the perf folder. To run the simulator:</p>
+
+<p>&nbsp;../run-simulator.sh -kafkaServer=localhost -numTopic=10&nbsp;  -reportFile=report-html/data -time=15 -numConsumer=20 -numProducer=40  -xaxis=numTopic</p>
+
+<p>This runs a simulator with 40 producer and 20 consumer threads 
+          producing/consuming from a local Kafka server.&nbsp; The simulator runs for
+          15 minutes and the results are saved under 
+          report-html/data</p>
+
+<p>and they will be plotted from there. Basically, it records the MB of 
+          data consumed/produced and the number of messages consumed/produced for each 
+          number of topics, and report.html plots the charts.</p>
+
+
+      <p>Other parameters include numParts, fetchSize, messageSize.</p>
+
+      <p>In order to test how the number of topics affects performance, the script below can be used (it is under utl-bin); a plain-text version of the same loop follows it.</p>
+
+
+
+      <p>#!/bin/bash<br />
+      			 
+         for i in 1 10 20 30 40 50;<br />
+     
+         do<br />
+
+         &nbsp; ../kafka-server.sh server.properties 2>&amp;1 >kafka.out&amp;<br />
+         sleep 60<br />
+  &nbsp;../run-simulator.sh -kafkaServer=localhost -numTopic=$i&nbsp;  -reportFile=report-html/data -time=15 -numConsumer=20 -numProducer=40  -xaxis=numTopic<br />
+         &nbsp;../stop-server.sh<br />
+	  &nbsp;rm -rf /tmp/kafka-logs<br />
+     
+         &nbsp;sleep 300<br />
+    	   
+         done</p>
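+
+<p>For readability, the same loop is shown below as a plain shell script (the commands are unchanged from the listing above):</p>
+
+<pre>
+#!/bin/bash
+# For each topic count, start a broker, run the simulator, then stop and clean up
+for i in 1 10 20 30 40 50;
+do
+  ../kafka-server.sh server.properties 2>&amp;1 >kafka.out&amp;
+  sleep 60
+  ../run-simulator.sh -kafkaServer=localhost -numTopic=$i -reportFile=report-html/data -time=15 -numConsumer=20 -numProducer=40 -xaxis=numTopic
+  ../stop-server.sh
+  rm -rf /tmp/kafka-logs
+  sleep 300
+done
+</pre>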
+
+<p>Charts similar to the graphs above can be plotted automatically with report.html.</p>
+
+<!--#include virtual="../includes/footer.html" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/07/quickstart.html
----------------------------------------------------------------------
diff --git a/07/quickstart.html b/07/quickstart.html
new file mode 100644
index 0000000..256fcbd
--- /dev/null
+++ b/07/quickstart.html
@@ -0,0 +1,309 @@
+<!--#include virtual="../includes/header.html" -->
+
+<h2>Quick Start</h2>
+	
+<h3> Step 1: Download the code </h3>
+
+<a href="../downloads.html" title="Kafka downloads">Download</a> a recent stable release.
+
+<pre>
+<b>&gt; tar xzf kafka-&lt;VERSION&gt;.tgz</b>
+<b>&gt; cd kafka-&lt;VERSION&gt;</b>
+<b>&gt; ./sbt update</b>
+<b>&gt; ./sbt package</b>
+</pre>
+
+<h3>Step 2: Start the server</h3>
+
+Kafka uses ZooKeeper, and brokers and consumers rely on it for co-ordination. 
+<p>
+First start the zookeeper server. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node zookeeper instance.
+
+<pre>
+<b>&gt; bin/zookeeper-server-start.sh config/zookeeper.properties</b>
+[2010-11-21 23:45:02,335] INFO Reading configuration from: config/zookeeper.properties 
+...
+</pre>
+
+Now start the Kafka server:
+<pre>
+<b>&gt; bin/kafka-server-start.sh config/server.properties</b>
+jkreps-mn-2:kafka-trunk jkreps$ bin/kafka-server-start.sh config/server.properties 
+[2010-11-21 23:51:39,608] INFO starting log cleaner every 60000 ms (kafka.log.LogManager)
+[2010-11-21 23:51:39,628] INFO connecting to ZK: localhost:2181 (kafka.server.KafkaZooKeeper)
+...
+</pre>
+
+<h3>Step 3: Send some messages</h3>
+
+Kafka comes with a command line client that will take input from standard input and send it out as messages to the Kafka cluster. By default each line will be sent as a separate message. The topic <i>test</i> is created automatically when messages are sent to it. Omitting the logging output, you should see something like this:
+
+<pre>
+&gt; <b>bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test</b> 
+This is a message
+This is another message
+</pre>
+
+<h3>Step 4: Start a consumer</h3>
+
+Kafka also has a command line consumer that will dump out messages to standard out.
+
+<pre>
+<b>&gt; bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning</b>
+This is a message
+This is another message
+</pre>
+<p>
+If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
+</p>
+<p>
+Both of these command line tools have additional options. Running the command with no arguments will display usage information documenting them in more detail.	
+</p>
+
+<h3>Step 5: Write some code</h3>
+
+Below are some very simple examples of using Kafka for sending messages; more complete examples can be found in the Kafka source code in the examples/ directory.
+
+<h4>Producer Code</h4>
+
+<h5>Producer API </h5>
+
+Here are some examples of using the producer API, <code>kafka.producer.Producer&lt;T&gt;</code>:
+
+<ol>
+<li>First, start a local instance of the zookeeper server
+<pre>./bin/zookeeper-server-start.sh config/zookeeper.properties</pre>
+</li>
+<li>Next, start a kafka broker
+<pre>./bin/kafka-server-start.sh config/server.properties</pre>
+</li>
+<li>Now, create the producer with all configuration defaults, using zookeeper-based broker discovery.
+<pre>
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import kafka.javaapi.producer.Producer;
+import kafka.javaapi.producer.ProducerData;
+import kafka.producer.ProducerConfig;
+
+...
+
+Properties props = new Properties();
+props.put("zk.connect", "127.0.0.1:2181");
+props.put("serializer.class", "kafka.serializer.StringEncoder");
+ProducerConfig config = new ProducerConfig(props);
+Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
+</pre>
+</li>
+<li>Send a single message
+<pre>
+<small>// The message is sent to a randomly selected partition registered in ZK</small>
+ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic", "test-message");
+producer.send(data);	
+</pre>
+</li>
+<li>Send multiple messages to multiple topics in one request
+<pre>
+List&lt;String&gt; messages = new java.util.ArrayList&lt;String&gt;();
+messages.add("test-message1");
+messages.add("test-message2");
+ProducerData&lt;String, String&gt; data1 = new ProducerData&lt;String, String&gt;("test-topic1", messages);
+ProducerData&lt;String, String&gt; data2 = new ProducerData&lt;String, String&gt;("test-topic2", messages);
+List&lt;ProducerData&lt;String, String&gt;&gt; dataForMultipleTopics = new ArrayList&lt;ProducerData&lt;String, String&gt;&gt;();
+dataForMultipleTopics.add(data1);
+dataForMultipleTopics.add(data2);
+producer.send(dataForMultipleTopics);	
+</pre>
+</li>
+<li>Send a message with a partition key. Messages with the same key are sent to the same partition
+<pre>
+ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic", "test-key", "test-message");
+producer.send(data);
+</pre>
+</li>
+<li>Use your custom partitioner
+<p>If you are using zookeeper based broker discovery, <code>kafka.producer.Producer&lt;T&gt;</code> routes your data to a particular broker partition based on a <code>kafka.producer.Partitioner&lt;T&gt;</code>, specified through the <code>partitioner.class</code> config parameter. It defaults to <code>kafka.producer.DefaultPartitioner</code>. If you don't supply a partition key, then it sends each request to a random broker partition.</p>
+<pre>
+class MemberIdPartitioner extends Partitioner[MemberIdLocation] {
+  def partition(data: MemberIdLocation, numPartitions: Int): Int = {
+    (data.location.hashCode % numPartitions)
+  }
+}
+<small>// create the producer config to plug in the above partitioner</small>
+Properties props = new Properties();
+props.put("zk.connect", "127.0.0.1:2181");
+props.put("serializer.class", "kafka.serializer.StringEncoder");
+props.put("partitioner.class", "xyz.MemberIdPartitioner");
+ProducerConfig config = new ProducerConfig(props);
+Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
+</pre>
+</li>
+<li>Use custom Encoder 
+<p>The producer takes in a required config parameter <code>serializer.class</code> that specifies an <code>Encoder&lt;T&gt;</code> to convert T to a Kafka Message. Default is the no-op kafka.serializer.DefaultEncoder.
+Here is an example of a custom Encoder -</p>
+<pre>
+class TrackingDataSerializer extends Encoder&lt;TrackingData&gt; {
+  <small>// Say you want to use your own custom Avro encoding</small>
+  CustomAvroEncoder avroEncoder = new CustomAvroEncoder();
+  def toMessage(event: TrackingData):Message = {
+	new Message(avroEncoder.getBytes(event));
+  }
+}
+</pre>
+If you want to use the above Encoder, pass it in via the <code>serializer.class</code> config parameter:
+<pre>
+Properties props = new Properties();
+props.put("serializer.class", "xyz.TrackingDataSerializer");
+</pre>
+</li>
+<li>Using a static list of brokers, instead of zookeeper based broker discovery
+<p>Some applications would rather not depend on zookeeper. In that case, the config parameter <code>broker.list</code>
+can be used to specify the list of all brokers in your Kafka cluster, in the following format:
+<code>broker_id1:host1:port1, broker_id2:host2:port2...</code></p>
+<pre>
+<small>// you can stop the zookeeper instance as it is no longer required</small>
+./bin/zookeeper-server-stop.sh	
+<small>// create the producer config object </small>
+Properties props = new Properties();
+props.put("broker.list", "0:localhost:9092");
+props.put("serializer.class", "kafka.serializer.StringEncoder");
+ProducerConfig config = new ProducerConfig(props);
+<small>// send a message using default partitioner </small>
+Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);	
+List&lt;String&gt; messages = new java.util.ArrayList&lt;String&gt;();
+messages.add("test-message");
+ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic", messages);
+producer.send(data);	
+</pre>
+</li>
+<li>Use the asynchronous producer along with GZIP compression. This buffers writes in memory until either <code>batch.size</code> or <code>queue.time</code> is reached. After that, data is sent to the Kafka brokers
+<pre>
+Properties props = new Properties();
+props.put("zk.connect"‚ "127.0.0.1:2181");
+props.put("serializer.class", "kafka.serializer.StringEncoder");
+props.put("producer.type", "async");
+props.put("compression.codec", "1");
+ProducerConfig config = new ProducerConfig(props);
+Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
+ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic", "test-message");
+producer.send(data);
+</pre
+</li>
+<li>Finally, close the producer:
+<pre>producer.close();</pre>
+</li>
+</ol>
+
+<h5>Log4j appender </h5>
+
+Data can also be produced to a Kafka server via a log4j appender. This way, only minimal code needs to be written in order to send data to the Kafka server.
+Here is an example of how to use the Kafka log4j appender:
+
+Start by defining the Kafka appender in your log4j.properties file.
+<pre>
+<small># define the kafka log4j appender config parameters</small>
+log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
+<small># <b>REQUIRED</b>: set the hostname of the kafka server</small>
+log4j.appender.KAFKA.Host=localhost
+<small># <b>REQUIRED</b>: set the port on which the Kafka server is listening for connections</small>
+log4j.appender.KAFKA.Port=9092
+<small># <b>REQUIRED</b>: the topic under which the logger messages are to be posted</small>
+log4j.appender.KAFKA.Topic=test
+<small># the serializer to be used to turn an object into a Kafka message. Defaults to kafka.producer.DefaultStringEncoder</small>
+log4j.appender.KAFKA.Serializer=kafka.test.AppenderStringSerializer
+<small># do not set the above KAFKA appender as the root appender</small>
+log4j.rootLogger=INFO
+<small># set the logger for your package to be the KAFKA appender</small>
+log4j.logger.your.test.package=INFO, KAFKA
+</pre>
+
+Data can then be sent through the log4j appender as follows:
+
+<pre>
+Logger logger = Logger.getLogger([your.test.class]);
+logger.info("message from log4j appender");
+</pre>
+
+If your log4j appender fails to send messages, please verify that the correct 
+log4j properties file is being used. You can add 
+<code>-Dlog4j.debug=true</code> to your VM parameters to verify this.
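+<p>
+For example, assuming a hypothetical application class (the jar and class names below are placeholders), the flag can be added to the JVM command line:
+</p>
+<pre>
+java -Dlog4j.debug=true -cp your-app.jar your.test.package.YourApp
+</pre>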
+
+<h4>Consumer Code</h4>
+
+The consumer code is slightly more complex as it enables multithreaded consumption:
+
+<pre>
+// specify some consumer properties
+Properties props = new Properties();
+props.put("zk.connect", "localhost:2181");
+props.put("zk.connectiontimeout.ms", "1000000");
+props.put("groupid", "test_group");
+
+// Create the connection to the cluster
+ConsumerConfig consumerConfig = new ConsumerConfig(props);
+ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);
+
+// create 4 partitions of the stream for topic "test", to allow 4 threads to consume
+Map&lt;String, List&lt;KafkaStream&lt;Message&gt;&gt;&gt; topicMessageStreams = 
+    consumerConnector.createMessageStreams(ImmutableMap.of("test", 4));
+List&lt;KafkaStream&lt;Message&gt;&gt; streams = topicMessageStreams.get("test");
+
+// create list of 4 threads to consume from each of the partitions 
+ExecutorService executor = Executors.newFixedThreadPool(4);
+
+// consume the messages in the threads
+for(final KafkaStream&lt;Message&gt; stream: streams) {
+  executor.submit(new Runnable() {
+    public void run() {
+      for(MessageAndMetadata msgAndMetadata: stream) {
+        // process message (msgAndMetadata.message())
+      }	
+    }
+  });
+}
+</pre>
+
+<h4>Hadoop Consumer</h4>
+
+<p>
+Providing a horizontally scalable solution for aggregating and loading data into Hadoop was one of our basic use cases. To support this use case, we provide a Hadoop-based consumer which spawns off many map tasks to pull data from the Kafka cluster in parallel. This provides extremely fast pull-based Hadoop data load capabilities (we were able to fully saturate the network with only a handful of Kafka servers).
+</p>
+
+<p>
+Usage information on the hadoop consumer can be found <a href="https://github.com/kafka-dev/kafka/tree/master/contrib/hadoop-consumer">here</a>.
+</p>
+
+<h4>Simple Consumer</h4>
+
+Kafka has a lower-level consumer API for reading message chunks directly from servers. Under most circumstances this should not be needed. But just in case, its usage is as follows:
+
+<pre>
+import kafka.api.FetchRequest;
+import kafka.javaapi.consumer.SimpleConsumer;
+import kafka.javaapi.message.ByteBufferMessageSet;
+import kafka.message.Message;
+import kafka.message.MessageAndOffset;
+import kafka.message.MessageSet;
+import kafka.utils.Utils;
+
+...
+
+<small>// create a consumer to connect to the kafka server running on localhost, port 9092, socket timeout of 10 secs, socket receive buffer of ~1MB</small>
+SimpleConsumer consumer = new SimpleConsumer("127.0.0.1", 9092, 10000, 1024000);
+
+long offset = 0;
+while (true) {
+  <small>// create a fetch request for topic "test", partition 0, current offset, and fetch size of 1MB</small>
+  FetchRequest fetchRequest = new FetchRequest("test", 0, offset, 1000000);
+
+  <small>// get the message set from the consumer and print them out</small>
+  ByteBufferMessageSet messages = consumer.fetch(fetchRequest);
+  for(MessageAndOffset msg : messages) {
+    System.out.println("consumed: " + Utils.toString(msg.message.payload(), "UTF-8"));
+    <small>// advance the offset after consuming each message</small>
+    offset = msg.offset;
+  }
+}
+</pre>
+
+<!--#include virtual="../includes/footer.html" -->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/ba6c994c/08/api.html
----------------------------------------------------------------------
diff --git a/08/api.html b/08/api.html
new file mode 100644
index 0000000..77d5f7b
--- /dev/null
+++ b/08/api.html
@@ -0,0 +1,149 @@
+<h3><a id="producerapi">2.1 Producer API</a></h3>
+<pre>
+/**
+ *  V: type of the message
+ *  K: type of the optional key associated with the message
+ */
+class kafka.javaapi.producer.Producer&lt;K,V&gt;
+{
+  public Producer(ProducerConfig config);
+
+  /**
+   * Sends the data to a single topic, partitioned by key, using either the
+   * synchronous or the asynchronous producer
+   * @param message the producer data object that encapsulates the topic, key and message data
+   */
+  public void send(KeyedMessage&lt;K,V&gt; message);
+
+  /**
+   * Use this API to send data to multiple topics
+   * @param messages list of producer data objects that encapsulate the topic, key and message data
+   */
+  public void send(List&lt;KeyedMessage&lt;K,V&gt;&gt; messages);
+
+  /**
+   * Close API to close the producer pool connections to all Kafka brokers.
+   */
+  public void close();
+}
+
+</pre>
+You can follow 
+<a href="https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example" title="Kafka 0.8 producer example">this example</a> to learn how to use the producer api.
+
+<h3><a id="highlevelconsumerapi">2.2 High Level Consumer API</a></h3>
+<pre>
+class Consumer {
+  /**
+   *  Create a ConsumerConnector
+   *
+   *  @param config  at the minimum, need to specify the groupid of the consumer and the zookeeper
+   *                 connection string zookeeper.connect.
+   */
+  public static kafka.javaapi.consumer.ConsumerConnector createJavaConsumerConnector(config: ConsumerConfig);
+}
+
+/**
+ *  V: type of the message
+ *  K: type of the optional key associated with the message
+ */
+public interface kafka.javaapi.consumer.ConsumerConnector {
+  /**
+   *  Create a list of message streams of type T for each topic.
+   *
+   *  @param topicCountMap  a map of (topic, #streams) pairs
+   *  @param decoder a decoder that converts from Message to T
+   *  @return a map of (topic, list of  KafkaStream) pairs.
+   *          The number of items in the list is #streams. Each stream supports
+   *          an iterator over message/metadata pairs.
+   */
+  public &lt;K,V&gt; Map&lt;String, List&lt;KafkaStream&lt;K,V&gt;&gt;&gt; 
+    createMessageStreams(Map&lt;String, Integer&gt; topicCountMap, Decoder&lt;K&gt; keyDecoder, Decoder&lt;V&gt; valueDecoder);
+  
+  /**
+   *  Create a list of message streams of type T for each topic, using the default decoder.
+   */
+  public Map&lt;String, List&lt;KafkaStream&lt;byte[], byte[]&gt;&gt;&gt; createMessageStreams(Map&lt;String, Integer&gt; topicCountMap);
+
+  /**
+   *  Create a list of message streams for topics matching a wildcard.
+   *
+   *  @param topicFilter a TopicFilter that specifies which topics to
+   *                    subscribe to (encapsulates a whitelist or a blacklist).
+   *  @param numStreams the number of message streams to return.
+   *  @param keyDecoder a decoder that decodes the message key
+   *  @param valueDecoder a decoder that decodes the message itself
+   *  @return a list of KafkaStream. Each stream supports an
+   *          iterator over its MessageAndMetadata elements.
+   */
+  public &lt;K,V&gt; List&lt;KafkaStream&lt;K,V&gt;&gt; 
+    createMessageStreamsByFilter(TopicFilter topicFilter, int numStreams, Decoder&lt;K&gt; keyDecoder, Decoder&lt;V&gt; valueDecoder);
+  
+  /**
+   *  Create a list of message streams for topics matching a wildcard, using the default decoder.
+   */
+  public List&lt;KafkaStream&lt;byte[], byte[]&gt;&gt; createMessageStreamsByFilter(TopicFilter topicFilter, int numStreams);
+  
+  /**
+   *  Create a list of message streams for topics matching a wildcard, using the default decoder, with one stream.
+   */
+  public List&lt;KafkaStream&lt;byte[], byte[]&gt;&gt; createMessageStreamsByFilter(TopicFilter topicFilter);
+
+  /**
+   *  Commit the offsets of all topic/partitions connected by this connector.
+   */
+  public void commitOffsets();
+
+  /**
+   *  Shut down the connector
+   */
+  public void shutdown();
+}
+
+</pre>
+You can follow 
+<a href="https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example" title="Kafka 0.8 consumer example">this example</a> to learn how to use the high level consumer api.
+<h3><a id="simpleconsumerapi">2.3 Simple Consumer API</a></h3>
+<pre>
+class kafka.javaapi.consumer.SimpleConsumer {
+  /**
+   *  Fetch a set of messages from a topic.
+   *
+   *  @param request specifies the topic name, topic partition, starting byte offset, maximum bytes to be fetched.
+   *  @return a set of fetched messages
+   */
+  public FetchResponse fetch(request: kafka.javaapi.FetchRequest);
+
+  /**
+   *  Fetch metadata for a sequence of topics.
+   *  
+   *  @param request specifies the versionId, clientId, sequence of topics.
+   *  @return metadata for each topic in the request.
+   */
+  public kafka.javaapi.TopicMetadataResponse send(request: kafka.javaapi.TopicMetadataRequest);
+
+  /**
+   *  Get a list of valid offsets (up to maxSize) before the given time.
+   *
+   *  @param request a [[kafka.javaapi.OffsetRequest]] object.
+   *  @return a [[kafka.javaapi.OffsetResponse]] object.
+   */
+  public kafka.javaapi.OffsetResponse getOffsetsBefore(request: OffsetRequest);
+
+  /**
+   * Close the SimpleConsumer.
+   */
+  public void close();
+}
+</pre>
+For most applications, the high level consumer API is good enough. Some applications want features not yet exposed by the high level consumer (e.g., setting the initial offset when restarting the consumer). They can instead use the low level SimpleConsumer API. The logic will be a bit more complicated; you can follow the example
+<a href="https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example" title="Kafka 0.8 SimpleConsumer example">here</a>.
+
+<h3><a id="kafkahadoopconsumerapi">2.4 Kafka Hadoop Consumer API</a></h3>
+<p>
+Providing a horizontally scalable solution for aggregating and loading data into Hadoop was one of our basic use cases. To support this use case, we provide a Hadoop-based consumer which spawns off many map tasks to pull data from the Kafka cluster in parallel. This provides extremely fast pull-based Hadoop data load capabilities (we were able to fully saturate the network with only a handful of Kafka servers).
+</p>
+
+<p>
+Usage information on the Hadoop consumer can be found <a href="https://github.com/apache/kafka/tree/trunk/contrib/hadoop-consumer">here</a>. In addition to the Apache Kafka contrib Hadoop Consumer, there is also an open source project that integrates Hadoop/HDFS using MapReduce to get messages out of Kafka using Avro <a href="https://github.com/linkedin/camus/tree/camus-kafka-0.8/">here</a> that was open sourced by LinkedIn.
+</p>
\ No newline at end of file

