kafka-commits mailing list archives

From: guozh...@apache.org
Subject: [kafka-site] branch asf-site updated: Kafka nav and homepage redesign (#269)
Date: Tue, 04 Aug 2020 20:47:47 GMT
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 15e7c39  Kafka nav and homepage redesign (#269)
15e7c39 is described below

commit 15e7c39a43383b74acfd6c4e28e8ddd4df672e0a
Author: scott-confluent <66280178+scott-confluent@users.noreply.github.com>
AuthorDate: Tue Aug 4 13:47:36 2020 -0700

    Kafka nav and homepage redesign (#269)
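
For reference, the diffstat below can be reproduced locally with standard git commands (a sketch, assuming git is installed and that gitbox serves anonymous HTTPS clones):

    git clone https://gitbox.apache.org/repos/asf/kafka-site.git
    cd kafka-site
    git show --stat 15e7c39a43383b74acfd6c4e28e8ddd4df672e0a
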
---
 .gitignore                                         |    1 +
 0100/generated/protocol_messages.html              |  240 +--
 0100/introduction.html                             |    2 +-
 .../org/apache/kafka/common/MetricName.html        |    2 +-
 0100/protocol.html                                 |    8 +-
 0101/documentation.html                            |    2 +-
 0101/generated/protocol_messages.html              |  288 +--
 0101/introduction.html                             |    2 +-
 .../org/apache/kafka/common/MetricName.html        |    2 +-
 0101/protocol.html                                 |    6 +-
 0102/documentation.html                            |    2 +-
 0102/generated/protocol_messages.html              |  312 ++--
 0102/introduction.html                             |    2 +-
 .../org/apache/kafka/common/MetricName.html        |    2 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    2 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |   36 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   24 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   22 +-
 .../kafka/streams/kstream/KStreamBuilder.html      |   12 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |    6 +-
 0102/protocol.html                                 |    6 +-
 0110/api.html                                      |   20 +-
 0110/configuration.html                            |   20 +-
 0110/connect.html                                  |   90 +-
 0110/design.html                                   |   17 +-
 0110/documentation.html                            |    2 +-
 0110/generated/protocol_messages.html              |  560 +++---
 0110/implementation.html                           |   60 +-
 0110/introduction.html                             |    6 +-
 .../org/apache/kafka/common/MetricName.html        |    2 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    2 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |   36 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   30 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   22 +-
 .../kafka/streams/kstream/KStreamBuilder.html      |   32 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   18 +-
 0110/ops.html                                      |  236 +--
 0110/protocol.html                                 |    6 +-
 0110/quickstart.html                               |  120 +-
 0110/security.html                                 |  218 +--
 0110/streams/core-concepts.html                    |    2 +-
 0110/streams/developer-guide.html                  |  154 +-
 0110/streams/index.html                            |   15 +-
 0110/streams/quickstart.html                       |   85 +-
 0110/streams/tutorial.html                         |  130 +-
 .../kafka/clients/producer/KafkaProducer.html      |    6 +-
 .../org/apache/kafka/common/MetricName.html        |    2 +-
 090/generated/protocol_messages.html               |  192 +-
 .../org/apache/kafka/common/MetricName.html        |    2 +-
 090/protocol.html                                  |    6 +-
 10/api.html                                        |   20 +-
 10/configuration.html                              |   20 +-
 10/connect.html                                    |   90 +-
 10/core-concepts.html                              |    2 +-
 10/design.html                                     |   17 +-
 10/documentation.html                              |    2 +-
 10/generated/protocol_messages.html                |  656 +++----
 10/implementation.html                             |   60 +-
 10/index.html                                      |   23 +-
 10/introduction.html                               |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 10/javadoc/org/apache/kafka/common/MetricName.html |    2 +-
 10/javadoc/org/apache/kafka/streams/Consumed.html  |    4 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |    6 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    2 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |   56 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   40 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   38 +-
 .../kafka/streams/kstream/KStreamBuilder.html      |   32 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   24 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../streams/kstream/SessionWindowedKStream.html    |    8 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |    6 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 10/ops.html                                        |  239 +--
 10/protocol.html                                   |    6 +-
 10/quickstart.html                                 |  125 +-
 10/security.html                                   |  218 +--
 10/streams/core-concepts.html                      |    2 +-
 10/streams/developer-guide/config-streams.html     |   21 +-
 10/streams/developer-guide/manage-topics.html      |    5 +-
 10/streams/developer-guide/memory-mgmt.html        |    2 +-
 10/streams/developer-guide/processor-api.html      |    2 +-
 10/streams/index.html                              |   23 +-
 10/streams/quickstart.html                         |   90 +-
 10/streams/tutorial.html                           |  155 +-
 10/tutorial.html                                   |  155 +-
 11/api.html                                        |   20 +-
 11/configuration.html                              |   50 +-
 11/connect.html                                    |   95 +-
 11/design.html                                     |   17 +-
 11/documentation.html                              |    2 +-
 11/generated/protocol_messages.html                |  712 +++----
 11/implementation.html                             |   60 +-
 11/introduction.html                               |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 11/javadoc/org/apache/kafka/common/MetricName.html |    2 +-
 11/javadoc/org/apache/kafka/streams/Consumed.html  |    4 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |    6 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    4 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |   56 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   40 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   46 +-
 .../kafka/streams/kstream/KStreamBuilder.html      |   32 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   28 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../streams/kstream/SessionWindowedKStream.html    |    8 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |    6 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 11/ops.html                                        |  259 ++-
 11/protocol.html                                   |    6 +-
 11/quickstart.html                                 |  125 +-
 11/security.html                                   |  252 ++-
 11/streams/core-concepts.html                      |    2 +-
 11/streams/developer-guide/config-streams.html     |   21 +-
 11/streams/developer-guide/manage-topics.html      |    5 +-
 11/streams/developer-guide/memory-mgmt.html        |    2 +-
 11/streams/developer-guide/processor-api.html      |    2 +-
 11/streams/index.html                              |   23 +-
 11/streams/quickstart.html                         |   90 +-
 11/streams/tutorial.html                           |  155 +-
 20/api.html                                        |   25 +-
 20/configuration.html                              |   55 +-
 20/connect.html                                    |   95 +-
 20/design.html                                     |   17 +-
 20/documentation.html                              |    2 +-
 20/generated/protocol_messages.html                | 1000 +++++-----
 20/implementation.html                             |   70 +-
 20/introduction.html                               |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 20/javadoc/org/apache/kafka/common/MetricName.html |    2 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |    6 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    4 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |    4 +-
 .../org/apache/kafka/streams/kstream/Consumed.html |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |    8 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   14 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   30 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   24 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../streams/kstream/SessionWindowedKStream.html    |    8 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |    6 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 20/ops.html                                        |  259 ++-
 20/protocol.html                                   |    6 +-
 20/quickstart.html                                 |  125 +-
 20/security.html                                   |  280 ++-
 20/streams/core-concepts.html                      |    2 +-
 20/streams/developer-guide/config-streams.html     |   30 +-
 20/streams/developer-guide/dsl-api.html            |  127 +-
 20/streams/developer-guide/manage-topics.html      |    5 +-
 20/streams/developer-guide/memory-mgmt.html        |    2 +-
 20/streams/developer-guide/processor-api.html      |    2 +-
 20/streams/developer-guide/write-streams.html      |    5 +-
 20/streams/index.html                              |   23 +-
 20/streams/quickstart.html                         |   90 +-
 20/streams/tutorial.html                           |  155 +-
 21/api.html                                        |   25 +-
 21/configuration.html                              |   55 +-
 21/connect.html                                    |   95 +-
 21/design.html                                     |   17 +-
 21/documentation.html                              |    2 +-
 21/generated/protocol_messages.html                | 1088 +++++------
 21/implementation.html                             |   50 +-
 21/introduction.html                               |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 21/javadoc/org/apache/kafka/common/MetricName.html |    2 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |    6 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    4 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |    4 +-
 .../org/apache/kafka/streams/kstream/Consumed.html |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |    8 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   14 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   30 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   24 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../streams/kstream/SessionWindowedKStream.html    |    8 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |    6 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 21/ops.html                                        |  259 ++-
 21/protocol.html                                   |    6 +-
 21/quickstart.html                                 |  125 +-
 21/security.html                                   |  285 ++-
 21/streams/core-concepts.html                      |    2 +-
 21/streams/developer-guide/config-streams.html     |   30 +-
 21/streams/developer-guide/dsl-api.html            |  129 +-
 21/streams/developer-guide/manage-topics.html      |    5 +-
 21/streams/developer-guide/memory-mgmt.html        |    2 +-
 21/streams/developer-guide/processor-api.html      |    2 +-
 21/streams/developer-guide/write-streams.html      |    4 +-
 21/streams/index.html                              |   23 +-
 21/streams/quickstart.html                         |   90 +-
 21/streams/tutorial.html                           |  155 +-
 22/api.html                                        |   25 +-
 22/configuration.html                              |   55 +-
 22/connect.html                                    |   95 +-
 22/design.html                                     |   17 +-
 22/documentation.html                              |    2 +-
 22/generated/protocol_messages.html                | 1152 ++++++------
 22/implementation.html                             |   50 +-
 22/introduction.html                               |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 22/javadoc/org/apache/kafka/common/MetricName.html |    2 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |    6 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    4 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |    4 +-
 .../org/apache/kafka/streams/kstream/Consumed.html |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |    8 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   14 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   34 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   24 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../streams/kstream/SessionWindowedKStream.html    |    8 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |    6 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 22/ops.html                                        |  259 ++-
 22/protocol.html                                   |    6 +-
 22/quickstart.html                                 |  125 +-
 22/security.html                                   |  289 ++-
 22/streams/core-concepts.html                      |    2 +-
 22/streams/developer-guide/config-streams.html     |   30 +-
 22/streams/developer-guide/dsl-api.html            |  127 +-
 22/streams/developer-guide/memory-mgmt.html        |    2 +-
 22/streams/developer-guide/processor-api.html      |    2 +-
 22/streams/developer-guide/write-streams.html      |    5 +-
 22/streams/index.html                              |   23 +-
 22/streams/quickstart.html                         |   90 +-
 22/streams/tutorial.html                           |  155 +-
 23/api.html                                        |   25 +-
 23/configuration.html                              |   55 +-
 23/connect.html                                    |   95 +-
 23/design.html                                     |   19 +-
 23/documentation.html                              |    2 +-
 23/generated/protocol_messages.html                | 1224 ++++++------
 23/implementation.html                             |   50 +-
 23/introduction.html                               |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 23/javadoc/org/apache/kafka/common/MetricName.html |    2 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |    6 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    4 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |    4 +-
 .../org/apache/kafka/streams/kstream/Consumed.html |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |    8 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   14 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   42 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   24 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../streams/kstream/SessionWindowedKStream.html    |    8 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |    6 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 23/ops.html                                        |  259 ++-
 23/protocol.html                                   |    6 +-
 23/quickstart.html                                 |  125 +-
 23/security.html                                   |  289 ++-
 23/streams/core-concepts.html                      |    2 +-
 23/streams/developer-guide/config-streams.html     |   30 +-
 23/streams/developer-guide/dsl-api.html            |  127 +-
 23/streams/developer-guide/memory-mgmt.html        |    2 +-
 23/streams/developer-guide/processor-api.html      |    2 +-
 23/streams/developer-guide/write-streams.html      |    5 +-
 23/streams/index.html                              |   23 +-
 23/streams/quickstart.html                         |   90 +-
 23/streams/tutorial.html                           |  155 +-
 24/api.html                                        |   25 +-
 24/configuration.html                              |   55 +-
 24/connect.html                                    |   95 +-
 24/design.html                                     |   19 +-
 24/documentation.html                              |    2 +-
 24/generated/protocol_messages.html                | 1484 +++++++--------
 24/implementation.html                             |   50 +-
 24/introduction.html                               |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 .../consumer/CooperativeStickyAssignor.html        |    2 +-
 24/javadoc/org/apache/kafka/common/MetricName.html |    2 +-
 .../authorizer/AuthorizableRequestContext.html     |    2 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |    6 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    4 +-
 .../org/apache/kafka/streams/TestInputTopic.html   |    2 +-
 .../org/apache/kafka/streams/TestOutputTopic.html  |    2 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |    6 +-
 .../org/apache/kafka/streams/kstream/Consumed.html |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |   16 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   26 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   84 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   46 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../streams/kstream/SessionWindowedKStream.html    |   16 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |   12 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 24/ops.html                                        |  259 ++-
 24/protocol.html                                   |    6 +-
 24/quickstart.html                                 |  125 +-
 24/security.html                                   |  289 ++-
 24/streams/core-concepts.html                      |    2 +-
 24/streams/developer-guide/config-streams.html     |   30 +-
 24/streams/developer-guide/dsl-api.html            |  135 +-
 24/streams/developer-guide/memory-mgmt.html        |    2 +-
 24/streams/developer-guide/processor-api.html      |    2 +-
 24/streams/developer-guide/write-streams.html      |    5 +-
 24/streams/index.html                              |   23 +-
 24/streams/quickstart.html                         |   90 +-
 24/streams/tutorial.html                           |  155 +-
 24/upgrade.html                                    |    4 +-
 25/api.html                                        |   85 +-
 25/configuration.html                              |  104 +-
 25/connect.html                                    |  208 ++-
 25/design.html                                     |  180 +-
 25/documentation.html                              |  112 +-
 25/generated/admin_client_config.html              |  339 +++-
 25/generated/connect_config.html                   |  633 +++++--
 25/generated/connect_transforms.html               |  168 +-
 25/generated/consumer_config.html                  |  483 +++--
 25/generated/kafka_config.html                     | 1449 ++++++++++----
 25/generated/producer_config.html                  |  451 +++--
 25/generated/protocol_messages.html                | 1971 ++++++++------------
 25/generated/sink_connector_config.html            |  108 +-
 25/generated/source_connector_config.html          |   78 +-
 25/generated/streams_config.html                   |  280 ++-
 25/generated/topic_config.html                     |  182 +-
 25/implementation.html                             |  161 +-
 25/introduction.html                               |  339 ++--
 .../kafka/clients/admin/AlterConfigOp.OpType.html  |    3 +-
 .../clients/admin/ConfigEntry.ConfigSource.html    |    3 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |    2 +-
 ...onsumerPartitionAssignor.RebalanceProtocol.html |    3 +-
 .../consumer/ConsumerRebalanceListener.html        |    3 +-
 .../consumer/CooperativeStickyAssignor.html        |    2 +-
 .../kafka/clients/consumer/KafkaConsumer.html      |   18 +-
 .../clients/consumer/OffsetResetStrategy.html      |    3 +-
 .../kafka/clients/consumer/StickyAssignor.html     |    6 +-
 .../kafka/clients/producer/KafkaProducer.html      |    9 +-
 .../apache/kafka/common/ConsumerGroupState.html    |    3 +-
 .../org/apache/kafka/common/ElectionType.html      |    3 +-
 .../org/apache/kafka/common/IsolationLevel.html    |    3 +-
 25/javadoc/org/apache/kafka/common/MetricName.html |    5 +-
 .../org/apache/kafka/common/acl/AclOperation.html  |    3 +-
 .../apache/kafka/common/acl/AclPermissionType.html |    3 +-
 .../kafka/common/config/ConfigDef.Importance.html  |    3 +-
 .../apache/kafka/common/config/ConfigDef.Type.html |    3 +-
 .../kafka/common/config/ConfigDef.Width.html       |    3 +-
 .../org/apache/kafka/common/config/ConfigDef.html  |    3 +-
 .../kafka/common/config/ConfigResource.Type.html   |    3 +-
 .../kafka/common/config/ConfigTransformer.html     |    3 +-
 .../apache/kafka/common/config/SslClientAuth.html  |    3 +-
 .../apache/kafka/common/resource/PatternType.html  |    3 +-
 .../apache/kafka/common/resource/ResourceType.html |    3 +-
 .../common/security/auth/SecurityProtocol.html     |    3 +-
 .../oauthbearer/OAuthBearerLoginModule.html        |   12 +-
 .../ConnectorClientConfigRequest.ClientType.html   |    3 +-
 .../org/apache/kafka/connect/data/Schema.Type.html |    3 +-
 .../apache/kafka/connect/data/SchemaBuilder.html   |    6 +-
 .../org/apache/kafka/connect/data/Struct.html      |    3 +-
 .../org/apache/kafka/connect/data/Values.html      |    3 +-
 .../apache/kafka/connect/health/ConnectorType.html |    3 +-
 .../apache/kafka/connect/mirror/MirrorClient.html  |    3 +-
 .../kafka/connect/mirror/MirrorClientConfig.html   |    6 +-
 .../kafka/connect/mirror/RemoteClusterUtils.html   |    3 +-
 .../kafka/connect/storage/ConverterType.html       |    3 +-
 .../authorizer/AuthorizableRequestContext.html     |    2 +-
 .../server/authorizer/AuthorizationResult.html     |    3 +-
 .../quota/ClientQuotaEntity.ConfigEntityType.html  |    3 +-
 .../apache/kafka/server/quota/ClientQuotaType.html |    3 +-
 .../apache/kafka/streams/KafkaStreams.State.html   |    8 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |    2 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |   12 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |    7 +-
 .../org/apache/kafka/streams/TestInputTopic.html   |    2 +-
 .../org/apache/kafka/streams/TestOutputTopic.html  |    2 +-
 .../kafka/streams/Topology.AutoOffsetReset.html    |    3 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |    6 +-
 ...tionHandler.DeserializationHandlerResponse.html |    3 +-
 ...Handler.ProductionExceptionHandlerResponse.html |    3 +-
 .../kafka/streams/kstream/CogroupedKStream.html    |    4 +-
 .../org/apache/kafka/streams/kstream/Consumed.html |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |    2 +-
 .../kafka/streams/kstream/KGroupedStream.html      |   16 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   26 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |   84 +-
 .../org/apache/kafka/streams/kstream/KTable.html   |   46 +-
 .../apache/kafka/streams/kstream/Materialized.html |    2 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |    2 +-
 .../kstream/SessionWindowedCogroupedKStream.html   |    4 +-
 .../streams/kstream/SessionWindowedKStream.html    |   20 +-
 .../kafka/streams/kstream/SessionWindows.html      |    6 +-
 .../kstream/TimeWindowedCogroupedKStream.html      |    4 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |   20 +-
 .../kafka/streams/processor/PunctuationType.html   |    3 +-
 .../kafka/streams/state/ReadOnlyWindowStore.html   |    6 +-
 .../org/apache/kafka/streams/state/Stores.html     |    4 +-
 .../apache/kafka/streams/state/WindowStore.html    |    6 +-
 25/migration.html                                  |   10 +-
 25/ops.html                                        |  548 +++---
 25/protocol.html                                   |  100 +-
 25/quickstart-docker.html                          |  204 ++
 25/quickstart-zookeeper.html                       |  277 +++
 25/quickstart.html                                 |  300 ---
 25/security.html                                   |  649 ++++---
 25/streams/architecture.html                       |   25 +-
 25/streams/core-concepts.html                      |   47 +-
 25/streams/developer-guide/app-reset-tool.html     |   11 +-
 25/streams/developer-guide/config-streams.html     |   44 +-
 25/streams/developer-guide/datatypes.html          |   17 +-
 25/streams/developer-guide/dsl-api.html            |  211 +--
 .../developer-guide/dsl-topology-naming.html       |   37 +-
 25/streams/developer-guide/index.html              |    7 +-
 .../developer-guide/interactive-queries.html       |   37 +-
 25/streams/developer-guide/manage-topics.html      |    5 +-
 25/streams/developer-guide/memory-mgmt.html        |   19 +-
 25/streams/developer-guide/processor-api.html      |   27 +-
 25/streams/developer-guide/running-app.html        |    9 +-
 25/streams/developer-guide/security.html           |   13 +-
 25/streams/developer-guide/testing.html            |   53 +-
 25/streams/developer-guide/write-streams.html      |   31 +-
 25/streams/index.html                              |   34 +-
 25/streams/quickstart.html                         |  137 +-
 25/streams/tutorial.html                           |  193 +-
 25/streams/upgrade-guide.html                      |   70 +-
 25/toc.html                                        |    5 +-
 25/upgrade.html                                    |  309 ++-
 25/uses.html                                       |   35 +-
 26/api.html                                        |   40 +-
 26/configuration.html                              |   74 +-
 26/connect.html                                    |  182 +-
 26/design.html                                     |   70 +-
 26/documentation.html                              |   34 +-
 26/generated/admin_client_config.html              |    2 +-
 26/generated/connect_config.html                   |    2 +-
 26/generated/kafka_config.html                     |    4 +-
 26/generated/producer_config.html                  |    4 +-
 26/generated/protocol_messages.html                | 1233 ++++--------
 26/generated/sink_connector_config.html            |    8 +-
 26/generated/source_connector_config.html          |    6 +-
 26/generated/streams_config.html                   |    2 +-
 26/generated/topic_config.html                     |    2 +-
 26/implementation.html                             |   98 +-
 26/introduction.html                               |   16 +-
 .../kafka/clients/admin/AbstractOptions.html       |   10 +-
 .../org/apache/kafka/clients/admin/Admin.html      |  154 +-
 .../apache/kafka/clients/admin/AdminClient.html    |    8 +-
 .../kafka/clients/admin/AdminClientConfig.html     |   54 +-
 .../clients/admin/AlterClientQuotasOptions.html    |    8 +-
 .../clients/admin/AlterClientQuotasResult.html     |    8 +-
 .../kafka/clients/admin/AlterConfigOp.OpType.html  |   24 +-
 .../apache/kafka/clients/admin/AlterConfigOp.html  |   14 +-
 .../kafka/clients/admin/AlterConfigsOptions.html   |   10 +-
 .../kafka/clients/admin/AlterConfigsResult.html    |    6 +-
 .../admin/AlterConsumerGroupOffsetsOptions.html    |    4 +-
 .../admin/AlterConsumerGroupOffsetsResult.html     |    6 +-
 .../admin/AlterPartitionReassignmentsOptions.html  |    4 +-
 .../admin/AlterPartitionReassignmentsResult.html   |    6 +-
 .../clients/admin/AlterReplicaLogDirsOptions.html  |    4 +-
 .../clients/admin/AlterReplicaLogDirsResult.html   |    6 +-
 .../org/apache/kafka/clients/admin/Config.html     |   14 +-
 .../clients/admin/ConfigEntry.ConfigSource.html    |   26 +-
 .../clients/admin/ConfigEntry.ConfigSynonym.html   |   14 +-
 .../clients/admin/ConfigEntry.ConfigType.html      |   32 +-
 .../apache/kafka/clients/admin/ConfigEntry.html    |   30 +-
 .../clients/admin/ConsumerGroupDescription.html    |   24 +-
 .../kafka/clients/admin/ConsumerGroupListing.html  |   18 +-
 .../kafka/clients/admin/CreateAclsOptions.html     |    6 +-
 .../kafka/clients/admin/CreateAclsResult.html      |    6 +-
 .../admin/CreateDelegationTokenOptions.html        |   12 +-
 .../clients/admin/CreateDelegationTokenResult.html |    4 +-
 .../clients/admin/CreatePartitionsOptions.html     |    8 +-
 .../clients/admin/CreatePartitionsResult.html      |    6 +-
 .../kafka/clients/admin/CreateTopicsOptions.html   |   10 +-
 .../CreateTopicsResult.TopicMetadataAndConfig.html |    8 +-
 .../kafka/clients/admin/CreateTopicsResult.html    |   14 +-
 .../kafka/clients/admin/DeleteAclsOptions.html     |    6 +-
 .../admin/DeleteAclsResult.FilterResult.html       |    6 +-
 .../admin/DeleteAclsResult.FilterResults.html      |    4 +-
 .../kafka/clients/admin/DeleteAclsResult.html      |    6 +-
 .../admin/DeleteConsumerGroupOffsetsOptions.html   |    4 +-
 .../admin/DeleteConsumerGroupOffsetsResult.html    |    6 +-
 .../clients/admin/DeleteConsumerGroupsOptions.html |    4 +-
 .../clients/admin/DeleteConsumerGroupsResult.html  |    6 +-
 .../kafka/clients/admin/DeleteRecordsOptions.html  |    4 +-
 .../kafka/clients/admin/DeleteRecordsResult.html   |    8 +-
 .../kafka/clients/admin/DeleteTopicsOptions.html   |    6 +-
 .../kafka/clients/admin/DeleteTopicsResult.html    |    6 +-
 .../apache/kafka/clients/admin/DeletedRecords.html |    6 +-
 .../kafka/clients/admin/DescribeAclsOptions.html   |    6 +-
 .../kafka/clients/admin/DescribeAclsResult.html    |    4 +-
 .../clients/admin/DescribeClientQuotasOptions.html |    4 +-
 .../clients/admin/DescribeClientQuotasResult.html  |    6 +-
 .../clients/admin/DescribeClusterOptions.html      |   10 +-
 .../kafka/clients/admin/DescribeClusterResult.html |   10 +-
 .../clients/admin/DescribeConfigsOptions.html      |   14 +-
 .../kafka/clients/admin/DescribeConfigsResult.html |    6 +-
 .../admin/DescribeConsumerGroupsOptions.html       |    8 +-
 .../admin/DescribeConsumerGroupsResult.html        |    8 +-
 .../admin/DescribeDelegationTokenOptions.html      |    8 +-
 .../admin/DescribeDelegationTokenResult.html       |    4 +-
 .../clients/admin/DescribeLogDirsOptions.html      |    4 +-
 .../kafka/clients/admin/DescribeLogDirsResult.html |    6 +-
 .../admin/DescribeReplicaLogDirsOptions.html       |    4 +-
 ...ribeReplicaLogDirsResult.ReplicaLogDirInfo.html |   12 +-
 .../admin/DescribeReplicaLogDirsResult.html        |    6 +-
 .../kafka/clients/admin/DescribeTopicsOptions.html |   10 +-
 .../kafka/clients/admin/DescribeTopicsResult.html  |    8 +-
 .../kafka/clients/admin/ElectLeadersOptions.html   |    4 +-
 .../kafka/clients/admin/ElectLeadersResult.html    |    6 +-
 .../admin/ElectPreferredLeadersOptions.html        |    4 +-
 .../clients/admin/ElectPreferredLeadersResult.html |    8 +-
 .../admin/ExpireDelegationTokenOptions.html        |    8 +-
 .../clients/admin/ExpireDelegationTokenResult.html |    4 +-
 .../kafka/clients/admin/KafkaAdminClient.html      |   72 +-
 .../admin/ListConsumerGroupOffsetsOptions.html     |    8 +-
 .../admin/ListConsumerGroupOffsetsResult.html      |    4 +-
 .../clients/admin/ListConsumerGroupsOptions.html   |    8 +-
 .../clients/admin/ListConsumerGroupsResult.html    |    8 +-
 .../kafka/clients/admin/ListOffsetsOptions.html    |    8 +-
 .../ListOffsetsResult.ListOffsetsResultInfo.html   |   12 +-
 .../kafka/clients/admin/ListOffsetsResult.html     |    8 +-
 .../admin/ListPartitionReassignmentsOptions.html   |    4 +-
 .../admin/ListPartitionReassignmentsResult.html    |    4 +-
 .../kafka/clients/admin/ListTopicsOptions.html     |   10 +-
 .../kafka/clients/admin/ListTopicsResult.html      |    8 +-
 .../kafka/clients/admin/MemberAssignment.html      |   12 +-
 .../kafka/clients/admin/MemberDescription.html     |   22 +-
 .../apache/kafka/clients/admin/MemberToRemove.html |   10 +-
 .../clients/admin/NewPartitionReassignment.html    |    6 +-
 .../apache/kafka/clients/admin/NewPartitions.html  |   14 +-
 .../org/apache/kafka/clients/admin/NewTopic.html   |   26 +-
 .../clients/admin/OffsetSpec.EarliestSpec.html     |    4 +-
 .../kafka/clients/admin/OffsetSpec.LatestSpec.html |    4 +-
 .../clients/admin/OffsetSpec.TimestampSpec.html    |    2 +-
 .../org/apache/kafka/clients/admin/OffsetSpec.html |   10 +-
 .../kafka/clients/admin/PartitionReassignment.html |   12 +-
 .../kafka/clients/admin/RecordsToDelete.html       |   12 +-
 .../RemoveMembersFromConsumerGroupOptions.html     |   10 +-
 .../RemoveMembersFromConsumerGroupResult.html      |    6 +-
 .../clients/admin/RenewDelegationTokenOptions.html |    8 +-
 .../clients/admin/RenewDelegationTokenResult.html  |    4 +-
 .../kafka/clients/admin/TopicDescription.html      |   20 +-
 .../apache/kafka/clients/admin/TopicListing.html   |   10 +-
 .../clients/consumer/CommitFailedException.html    |    6 +-
 .../apache/kafka/clients/consumer/Consumer.html    |   96 +-
 .../kafka/clients/consumer/ConsumerConfig.html     |  118 +-
 .../clients/consumer/ConsumerGroupMetadata.html    |   20 +-
 .../clients/consumer/ConsumerInterceptor.html      |    8 +-
 .../ConsumerPartitionAssignor.Assignment.html      |   12 +-
 .../ConsumerPartitionAssignor.GroupAssignment.html |    6 +-
 ...onsumerPartitionAssignor.GroupSubscription.html |    6 +-
 ...onsumerPartitionAssignor.RebalanceProtocol.html |   20 +-
 .../ConsumerPartitionAssignor.Subscription.html    |   18 +-
 .../consumer/ConsumerPartitionAssignor.html        |   14 +-
 .../consumer/ConsumerRebalanceListener.html        |   14 +-
 .../kafka/clients/consumer/ConsumerRecord.html     |   42 +-
 .../kafka/clients/consumer/ConsumerRecords.html    |   20 +-
 .../consumer/CooperativeStickyAssignor.html        |   12 +-
 .../clients/consumer/InvalidOffsetException.html   |    6 +-
 .../kafka/clients/consumer/KafkaConsumer.html      |  140 +-
 .../clients/consumer/LogTruncationException.html   |    6 +-
 .../kafka/clients/consumer/MockConsumer.html       |  126 +-
 .../consumer/NoOffsetForPartitionException.html    |   10 +-
 .../kafka/clients/consumer/OffsetAndMetadata.html  |   20 +-
 .../kafka/clients/consumer/OffsetAndTimestamp.html |   18 +-
 .../clients/consumer/OffsetCommitCallback.html     |    4 +-
 .../consumer/OffsetOutOfRangeException.html        |   10 +-
 .../clients/consumer/OffsetResetStrategy.html      |   18 +-
 .../kafka/clients/consumer/RangeAssignor.html      |    8 +-
 .../consumer/RetriableCommitFailedException.html   |    8 +-
 .../kafka/clients/consumer/RoundRobinAssignor.html |    8 +-
 .../kafka/clients/consumer/StickyAssignor.html     |   24 +-
 .../clients/producer/BufferExhaustedException.html |    4 +-
 .../apache/kafka/clients/producer/Callback.html    |    4 +-
 .../kafka/clients/producer/KafkaProducer.html      |   73 +-
 .../kafka/clients/producer/MockProducer.html       |   88 +-
 .../apache/kafka/clients/producer/Partitioner.html |    8 +-
 .../apache/kafka/clients/producer/Producer.html    |   30 +-
 .../kafka/clients/producer/ProducerConfig.html     |   98 +-
 .../clients/producer/ProducerInterceptor.html      |    8 +-
 .../kafka/clients/producer/ProducerRecord.html     |   32 +-
 .../kafka/clients/producer/RecordMetadata.html     |   26 +-
 .../clients/producer/RoundRobinPartitioner.html    |   10 +-
 .../clients/producer/UniformStickyPartitioner.html |   12 +-
 26/javadoc/org/apache/kafka/common/Cluster.html    |   52 +-
 .../org/apache/kafka/common/ClusterResource.html   |   12 +-
 .../kafka/common/ClusterResourceListener.html      |    4 +-
 .../org/apache/kafka/common/Configurable.html      |    4 +-
 .../apache/kafka/common/ConsumerGroupState.html    |   28 +-
 .../org/apache/kafka/common/ElectionType.html      |   20 +-
 26/javadoc/org/apache/kafka/common/Endpoint.html   |   18 +-
 .../kafka/common/InvalidRecordException.html       |    6 +-
 .../org/apache/kafka/common/IsolationLevel.html    |   20 +-
 .../org/apache/kafka/common/KafkaException.html    |   10 +-
 .../kafka/common/KafkaFuture.BaseFunction.html     |    4 +-
 .../kafka/common/KafkaFuture.BiConsumer.html       |    4 +-
 .../apache/kafka/common/KafkaFuture.Function.html  |    4 +-
 .../org/apache/kafka/common/KafkaFuture.html       |   34 +-
 26/javadoc/org/apache/kafka/common/Metric.html     |    8 +-
 26/javadoc/org/apache/kafka/common/MetricName.html |   26 +-
 .../apache/kafka/common/MetricNameTemplate.html    |   20 +-
 26/javadoc/org/apache/kafka/common/Node.html       |   28 +-
 .../org/apache/kafka/common/PartitionInfo.html     |   20 +-
 .../org/apache/kafka/common/Reconfigurable.html    |    8 +-
 .../org/apache/kafka/common/TopicPartition.html    |   14 +-
 .../apache/kafka/common/TopicPartitionInfo.html    |   18 +-
 .../apache/kafka/common/TopicPartitionReplica.html |   16 +-
 .../kafka/common/acl/AccessControlEntry.html       |   22 +-
 .../kafka/common/acl/AccessControlEntryFilter.html |   28 +-
 .../org/apache/kafka/common/acl/AclBinding.html    |   20 +-
 .../apache/kafka/common/acl/AclBindingFilter.html  |   26 +-
 .../org/apache/kafka/common/acl/AclOperation.html  |   46 +-
 .../apache/kafka/common/acl/AclPermissionType.html |   28 +-
 .../annotation/InterfaceStability.Evolving.html    |    2 +-
 .../annotation/InterfaceStability.Stable.html      |    2 +-
 .../annotation/InterfaceStability.Unstable.html    |    2 +-
 .../common/annotation/InterfaceStability.html      |    4 +-
 .../apache/kafka/common/config/AbstractConfig.html |   68 +-
 .../org/apache/kafka/common/config/Config.html     |    6 +-
 .../kafka/common/config/ConfigChangeCallback.html  |    4 +-
 .../org/apache/kafka/common/config/ConfigData.html |   10 +-
 .../ConfigDef.CaseInsensitiveValidString.html      |    8 +-
 .../config/ConfigDef.CompositeValidator.html       |    8 +-
 .../kafka/common/config/ConfigDef.ConfigKey.html   |   34 +-
 .../kafka/common/config/ConfigDef.Importance.html  |   18 +-
 .../common/config/ConfigDef.LambdaValidator.html   |    8 +-
 .../common/config/ConfigDef.NonEmptyString.html    |    8 +-
 ...onfigDef.NonEmptyStringWithoutControlChars.html |   10 +-
 .../common/config/ConfigDef.NonNullValidator.html  |    8 +-
 .../kafka/common/config/ConfigDef.Range.html       |   10 +-
 .../kafka/common/config/ConfigDef.Recommender.html |    6 +-
 .../apache/kafka/common/config/ConfigDef.Type.html |   30 +-
 .../kafka/common/config/ConfigDef.ValidList.html   |    8 +-
 .../kafka/common/config/ConfigDef.ValidString.html |    8 +-
 .../kafka/common/config/ConfigDef.Validator.html   |    4 +-
 .../kafka/common/config/ConfigDef.Width.html       |   20 +-
 .../org/apache/kafka/common/config/ConfigDef.html  |   94 +-
 .../kafka/common/config/ConfigException.html       |    8 +-
 .../kafka/common/config/ConfigResource.Type.html   |   24 +-
 .../apache/kafka/common/config/ConfigResource.html |   16 +-
 .../kafka/common/config/ConfigTransformer.html     |   12 +-
 .../common/config/ConfigTransformerResult.html     |    8 +-
 .../apache/kafka/common/config/ConfigValue.html    |   30 +-
 .../apache/kafka/common/config/LogLevelConfig.html |   18 +-
 .../apache/kafka/common/config/SaslConfigs.html    |   94 +-
 .../apache/kafka/common/config/SecurityConfig.html |    8 +-
 .../apache/kafka/common/config/SslClientAuth.html  |   22 +-
 .../org/apache/kafka/common/config/SslConfigs.html |   98 +-
 .../apache/kafka/common/config/TopicConfig.html    |  104 +-
 .../common/config/provider/ConfigProvider.html     |   12 +-
 .../common/config/provider/FileConfigProvider.html |   14 +-
 .../apache/kafka/common/errors/ApiException.html   |   12 +-
 .../common/errors/AuthenticationException.html     |    8 +-
 .../common/errors/AuthorizationException.html      |    6 +-
 .../common/errors/BrokerNotAvailableException.html |    6 +-
 .../errors/ClusterAuthorizationException.html      |    6 +-
 .../errors/ConcurrentTransactionsException.html    |    4 +-
 .../common/errors/ControllerMovedException.html    |    6 +-
 .../errors/CoordinatorLoadInProgressException.html |    6 +-
 .../errors/CoordinatorNotAvailableException.html   |    8 +-
 .../common/errors/CorruptRecordException.html      |   10 +-
 .../DelegationTokenAuthorizationException.html     |    6 +-
 .../errors/DelegationTokenDisabledException.html   |    6 +-
 .../errors/DelegationTokenExpiredException.html    |    6 +-
 .../errors/DelegationTokenNotFoundException.html   |    6 +-
 .../DelegationTokenOwnerMismatchException.html     |    6 +-
 .../kafka/common/errors/DisconnectException.html   |   12 +-
 .../common/errors/DuplicateSequenceException.html  |    4 +-
 .../common/errors/ElectionNotNeededException.html  |    6 +-
 .../EligibleLeadersNotAvailableException.html      |    6 +-
 .../common/errors/FencedInstanceIdException.html   |    6 +-
 .../common/errors/FencedLeaderEpochException.html  |    6 +-
 .../errors/FetchSessionIdNotFoundException.html    |    6 +-
 .../common/errors/GroupAuthorizationException.html |   10 +-
 .../common/errors/GroupIdNotFoundException.html    |    4 +-
 .../errors/GroupMaxSizeReachedException.html       |    4 +-
 .../common/errors/GroupNotEmptyException.html      |    4 +-
 .../errors/GroupSubscribedToTopicException.html    |    4 +-
 .../common/errors/IllegalGenerationException.html  |   10 +-
 .../common/errors/IllegalSaslStateException.html   |    6 +-
 .../errors/InconsistentGroupProtocolException.html |    6 +-
 .../kafka/common/errors/InterruptException.html    |    8 +-
 .../errors/InvalidCommitOffsetSizeException.html   |    6 +-
 .../errors/InvalidConfigurationException.html      |    6 +-
 .../errors/InvalidFetchSessionEpochException.html  |    6 +-
 .../common/errors/InvalidFetchSizeException.html   |    6 +-
 .../common/errors/InvalidGroupIdException.html     |    6 +-
 .../common/errors/InvalidMetadataException.html    |   10 +-
 .../common/errors/InvalidOffsetException.html      |    6 +-
 .../common/errors/InvalidPartitionsException.html  |    6 +-
 .../common/errors/InvalidPidMappingException.html  |    4 +-
 .../errors/InvalidPrincipalTypeException.html      |    6 +-
 .../errors/InvalidReplicaAssignmentException.html  |    6 +-
 .../errors/InvalidReplicationFactorException.html  |    6 +-
 .../common/errors/InvalidRequestException.html     |    6 +-
 .../errors/InvalidRequiredAcksException.html       |    4 +-
 .../errors/InvalidSessionTimeoutException.html     |    6 +-
 .../common/errors/InvalidTimestampException.html   |    6 +-
 .../kafka/common/errors/InvalidTopicException.html |   14 +-
 .../common/errors/InvalidTxnStateException.html    |    4 +-
 .../common/errors/InvalidTxnTimeoutException.html  |    6 +-
 .../kafka/common/errors/KafkaStorageException.html |   10 +-
 .../common/errors/LeaderNotAvailableException.html |    6 +-
 .../common/errors/ListenerNotFoundException.html   |    6 +-
 .../common/errors/LogDirNotFoundException.html     |    8 +-
 .../common/errors/MemberIdRequiredException.html   |    6 +-
 .../kafka/common/errors/NetworkException.html      |   10 +-
 .../errors/NoReassignmentInProgressException.html  |    6 +-
 .../common/errors/NotControllerException.html      |    6 +-
 .../common/errors/NotCoordinatorException.html     |    6 +-
 .../NotEnoughReplicasAfterAppendException.html     |    4 +-
 .../common/errors/NotEnoughReplicasException.html  |   10 +-
 .../errors/NotLeaderForPartitionException.html     |   10 +-
 .../errors/NotLeaderOrFollowerException.html       |   10 +-
 .../common/errors/OffsetMetadataTooLarge.html      |   10 +-
 .../common/errors/OffsetNotAvailableException.html |    4 +-
 .../common/errors/OffsetOutOfRangeException.html   |    6 +-
 .../errors/OperationNotAttemptedException.html     |    4 +-
 .../common/errors/OutOfOrderSequenceException.html |    4 +-
 .../common/errors/PolicyViolationException.html    |    6 +-
 .../PreferredLeaderNotAvailableException.html      |    6 +-
 .../common/errors/ProducerFencedException.html     |    4 +-
 .../errors/ReassignmentInProgressException.html    |    6 +-
 .../errors/RebalanceInProgressException.html       |   10 +-
 .../errors/RecordBatchTooLargeException.html       |   10 +-
 .../common/errors/RecordTooLargeException.html     |   14 +-
 .../errors/ReplicaNotAvailableException.html       |    8 +-
 .../kafka/common/errors/RetriableException.html    |   10 +-
 .../common/errors/SaslAuthenticationException.html |    6 +-
 .../common/errors/SecurityDisabledException.html   |    6 +-
 .../common/errors/SerializationException.html      |   12 +-
 .../common/errors/SslAuthenticationException.html  |    6 +-
 .../common/errors/StaleBrokerEpochException.html   |    6 +-
 .../kafka/common/errors/TimeoutException.html      |   10 +-
 .../common/errors/TopicAuthorizationException.html |   10 +-
 .../errors/TopicDeletionDisabledException.html     |    6 +-
 .../kafka/common/errors/TopicExistsException.html  |    6 +-
 .../TransactionCoordinatorFencedException.html     |    6 +-
 .../TransactionalIdAuthorizationException.html     |    4 +-
 .../common/errors/UnknownLeaderEpochException.html |    6 +-
 .../common/errors/UnknownMemberIdException.html    |   10 +-
 .../common/errors/UnknownProducerIdException.html  |    4 +-
 .../common/errors/UnknownServerException.html      |   10 +-
 .../errors/UnknownTopicOrPartitionException.html   |   10 +-
 .../errors/UnstableOffsetCommitException.html      |    4 +-
 .../UnsupportedByAuthenticationException.html      |    6 +-
 .../UnsupportedCompressionTypeException.html       |    6 +-
 .../UnsupportedForMessageFormatException.html      |    6 +-
 .../errors/UnsupportedSaslMechanismException.html  |    6 +-
 .../common/errors/UnsupportedVersionException.html |    6 +-
 .../kafka/common/errors/WakeupException.html       |    4 +-
 .../org/apache/kafka/common/header/Header.html     |    6 +-
 .../org/apache/kafka/common/header/Headers.html    |   14 +-
 .../apache/kafka/common/resource/PatternType.html  |   32 +-
 .../org/apache/kafka/common/resource/Resource.html |   22 +-
 .../kafka/common/resource/ResourceFilter.html      |   24 +-
 .../kafka/common/resource/ResourcePattern.html     |   22 +-
 .../common/resource/ResourcePatternFilter.html     |   26 +-
 .../apache/kafka/common/resource/ResourceType.html |   34 +-
 .../security/auth/AuthenticateCallbackHandler.html |    6 +-
 .../security/auth/AuthenticationContext.html       |    8 +-
 .../security/auth/DefaultPrincipalBuilder.html     |   10 +-
 .../kafka/common/security/auth/KafkaPrincipal.html |   24 +-
 .../security/auth/KafkaPrincipalBuilder.html       |    4 +-
 .../apache/kafka/common/security/auth/Login.html   |   12 +-
 .../auth/PlaintextAuthenticationContext.html       |   10 +-
 .../common/security/auth/PrincipalBuilder.html     |    8 +-
 .../security/auth/SaslAuthenticationContext.html   |   12 +-
 .../kafka/common/security/auth/SaslExtensions.html |   14 +-
 .../security/auth/SaslExtensionsCallback.html      |    8 +-
 .../common/security/auth/SecurityProtocol.html     |   30 +-
 .../security/auth/SecurityProviderCreator.html     |    6 +-
 .../security/auth/SslAuthenticationContext.html    |   12 +-
 .../common/security/auth/SslEngineFactory.html     |   20 +-
 .../OAuthBearerExtensionsValidatorCallback.html    |   18 +-
 .../oauthbearer/OAuthBearerLoginModule.html        |   28 +-
 .../security/oauthbearer/OAuthBearerToken.html     |   12 +-
 .../oauthbearer/OAuthBearerTokenCallback.html      |   16 +-
 .../oauthbearer/OAuthBearerValidatorCallback.html  |   18 +-
 .../security/plain/PlainAuthenticateCallback.html  |   10 +-
 .../common/security/plain/PlainLoginModule.html    |   14 +-
 .../common/security/scram/ScramCredential.html     |   12 +-
 .../security/scram/ScramCredentialCallback.html    |    8 +-
 .../security/scram/ScramExtensionsCallback.html    |    8 +-
 .../common/security/scram/ScramLoginModule.html    |   16 +-
 .../security/token/delegation/DelegationToken.html |   16 +-
 .../token/delegation/TokenInformation.html         |   30 +-
 .../serialization/ByteArrayDeserializer.html       |    6 +-
 .../common/serialization/ByteArraySerializer.html  |    6 +-
 .../serialization/ByteBufferDeserializer.html      |    6 +-
 .../common/serialization/ByteBufferSerializer.html |    6 +-
 .../common/serialization/BytesDeserializer.html    |    6 +-
 .../common/serialization/BytesSerializer.html      |    6 +-
 .../kafka/common/serialization/Deserializer.html   |   10 +-
 .../common/serialization/DoubleDeserializer.html   |    6 +-
 .../common/serialization/DoubleSerializer.html     |    6 +-
 .../ExtendedDeserializer.Wrapper.html              |   14 +-
 .../common/serialization/ExtendedDeserializer.html |    4 +-
 .../serialization/ExtendedSerializer.Wrapper.html  |   14 +-
 .../common/serialization/ExtendedSerializer.html   |    4 +-
 .../common/serialization/FloatDeserializer.html    |    6 +-
 .../common/serialization/FloatSerializer.html      |    6 +-
 .../common/serialization/IntegerDeserializer.html  |    6 +-
 .../common/serialization/IntegerSerializer.html    |    6 +-
 .../common/serialization/LongDeserializer.html     |    6 +-
 .../kafka/common/serialization/LongSerializer.html |    6 +-
 .../apache/kafka/common/serialization/Serde.html   |   10 +-
 .../serialization/Serdes.ByteArraySerde.html       |    4 +-
 .../serialization/Serdes.ByteBufferSerde.html      |    4 +-
 .../common/serialization/Serdes.BytesSerde.html    |    4 +-
 .../common/serialization/Serdes.DoubleSerde.html   |    4 +-
 .../common/serialization/Serdes.FloatSerde.html    |    4 +-
 .../common/serialization/Serdes.IntegerSerde.html  |    4 +-
 .../common/serialization/Serdes.LongSerde.html     |    4 +-
 .../common/serialization/Serdes.ShortSerde.html    |    4 +-
 .../common/serialization/Serdes.StringSerde.html   |    4 +-
 .../common/serialization/Serdes.UUIDSerde.html     |    4 +-
 .../common/serialization/Serdes.VoidSerde.html     |    4 +-
 .../common/serialization/Serdes.WrapperSerde.html  |   12 +-
 .../apache/kafka/common/serialization/Serdes.html  |   30 +-
 .../kafka/common/serialization/Serializer.html     |   10 +-
 .../common/serialization/ShortDeserializer.html    |    6 +-
 .../common/serialization/ShortSerializer.html      |    6 +-
 .../common/serialization/StringDeserializer.html   |    8 +-
 .../common/serialization/StringSerializer.html     |    8 +-
 .../common/serialization/UUIDDeserializer.html     |    8 +-
 .../kafka/common/serialization/UUIDSerializer.html |    8 +-
 .../common/serialization/VoidDeserializer.html     |    6 +-
 .../kafka/common/serialization/VoidSerializer.html |    6 +-
 .../apache/kafka/connect/components/Versioned.html |    4 +-
 .../kafka/connect/connector/ConnectRecord.html     |   32 +-
 .../apache/kafka/connect/connector/Connector.html  |   26 +-
 .../kafka/connect/connector/ConnectorContext.html  |    6 +-
 .../org/apache/kafka/connect/connector/Task.html   |    8 +-
 .../ConnectorClientConfigOverridePolicy.html       |    4 +-
 .../ConnectorClientConfigRequest.ClientType.html   |   18 +-
 .../policy/ConnectorClientConfigRequest.html       |   14 +-
 .../apache/kafka/connect/data/ConnectSchema.html   |   46 +-
 26/javadoc/org/apache/kafka/connect/data/Date.html |   14 +-
 .../org/apache/kafka/connect/data/Decimal.html     |   16 +-
 .../org/apache/kafka/connect/data/Field.html       |   16 +-
 .../org/apache/kafka/connect/data/Schema.Type.html |   40 +-
 .../org/apache/kafka/connect/data/Schema.html      |   62 +-
 .../apache/kafka/connect/data/SchemaAndValue.html  |   16 +-
 .../apache/kafka/connect/data/SchemaBuilder.html   |   86 +-
 .../apache/kafka/connect/data/SchemaProjector.html |    6 +-
 .../org/apache/kafka/connect/data/Struct.html      |   54 +-
 26/javadoc/org/apache/kafka/connect/data/Time.html |   14 +-
 .../org/apache/kafka/connect/data/Timestamp.html   |   14 +-
 .../apache/kafka/connect/data/Values.Parser.html   |   28 +-
 .../kafka/connect/data/Values.SchemaDetector.html  |    8 +-
 .../org/apache/kafka/connect/data/Values.html      |   70 +-
 .../connect/errors/AlreadyExistsException.html     |    8 +-
 .../kafka/connect/errors/ConnectException.html     |    8 +-
 .../apache/kafka/connect/errors/DataException.html |    8 +-
 .../errors/IllegalWorkerStateException.html        |    8 +-
 .../kafka/connect/errors/NotFoundException.html    |    8 +-
 .../kafka/connect/errors/RetriableException.html   |    8 +-
 .../connect/errors/SchemaBuilderException.html     |    8 +-
 .../connect/errors/SchemaProjectorException.html   |    8 +-
 .../kafka/connect/header/ConnectHeaders.html       |   76 +-
 .../org/apache/kafka/connect/header/Header.html    |   12 +-
 .../connect/header/Headers.HeaderTransform.html    |    4 +-
 .../org/apache/kafka/connect/header/Headers.html   |   62 +-
 .../apache/kafka/connect/health/AbstractState.html |   14 +-
 .../connect/health/ConnectClusterDetails.html      |    4 +-
 .../kafka/connect/health/ConnectClusterState.html  |   10 +-
 .../kafka/connect/health/ConnectorHealth.html      |   18 +-
 .../kafka/connect/health/ConnectorState.html       |    6 +-
 .../apache/kafka/connect/health/ConnectorType.html |   20 +-
 .../org/apache/kafka/connect/health/TaskState.html |   12 +-
 .../apache/kafka/connect/mirror/Checkpoint.html    |   42 +-
 .../connect/mirror/DefaultReplicationPolicy.html   |   16 +-
 .../org/apache/kafka/connect/mirror/Heartbeat.html |   30 +-
 .../apache/kafka/connect/mirror/MirrorClient.html  |   30 +-
 .../kafka/connect/mirror/MirrorClientConfig.html   |   36 +-
 .../kafka/connect/mirror/RemoteClusterUtils.html   |   18 +-
 .../kafka/connect/mirror/ReplicationPolicy.html    |   12 +-
 .../kafka/connect/mirror/SourceAndTarget.html      |   14 +-
 .../kafka/connect/rest/ConnectRestExtension.html   |    4 +-
 .../connect/rest/ConnectRestExtensionContext.html  |    6 +-
 .../kafka/connect/sink/ErrantRecordReporter.html   |    4 +-
 .../apache/kafka/connect/sink/SinkConnector.html   |    8 +-
 .../kafka/connect/sink/SinkConnectorContext.html   |    2 +-
 .../org/apache/kafka/connect/sink/SinkRecord.html  |   22 +-
 .../org/apache/kafka/connect/sink/SinkTask.html    |   30 +-
 .../apache/kafka/connect/sink/SinkTaskContext.html |   26 +-
 .../kafka/connect/source/SourceConnector.html      |    6 +-
 .../connect/source/SourceConnectorContext.html     |    4 +-
 .../apache/kafka/connect/source/SourceRecord.html  |   28 +-
 .../apache/kafka/connect/source/SourceTask.html    |   20 +-
 .../kafka/connect/source/SourceTaskContext.html    |    6 +-
 .../apache/kafka/connect/storage/Converter.html    |   12 +-
 .../kafka/connect/storage/ConverterConfig.html     |   10 +-
 .../kafka/connect/storage/ConverterType.html       |   22 +-
 .../kafka/connect/storage/HeaderConverter.html     |    8 +-
 .../kafka/connect/storage/OffsetStorageReader.html |    6 +-
 .../connect/storage/SimpleHeaderConverter.html     |   14 +-
 .../kafka/connect/storage/StringConverter.html     |   20 +-
 .../connect/storage/StringConverterConfig.html     |   12 +-
 .../kafka/connect/transforms/Transformation.html   |    8 +-
 .../connect/transforms/predicates/Predicate.html   |    8 +-
 .../apache/kafka/connect/util/ConnectorUtils.html  |    6 +-
 .../kafka/server/authorizer/AclCreateResult.html   |    8 +-
 .../AclDeleteResult.AclBindingDeleteResult.html    |   10 +-
 .../kafka/server/authorizer/AclDeleteResult.html   |   10 +-
 .../org/apache/kafka/server/authorizer/Action.html |   20 +-
 .../authorizer/AuthorizableRequestContext.html     |   18 +-
 .../server/authorizer/AuthorizationResult.html     |   16 +-
 .../apache/kafka/server/authorizer/Authorizer.html |   12 +-
 .../server/authorizer/AuthorizerServerInfo.html    |   10 +-
 .../policy/AlterConfigPolicy.RequestMetadata.html  |   10 +-
 .../kafka/server/policy/AlterConfigPolicy.html     |    4 +-
 .../policy/CreateTopicPolicy.RequestMetadata.html  |   16 +-
 .../kafka/server/policy/CreateTopicPolicy.html     |    4 +-
 .../kafka/server/quota/ClientQuotaCallback.html    |   16 +-
 .../quota/ClientQuotaEntity.ConfigEntity.html      |    6 +-
 .../quota/ClientQuotaEntity.ConfigEntityType.html  |   20 +-
 .../kafka/server/quota/ClientQuotaEntity.html      |    4 +-
 .../apache/kafka/server/quota/ClientQuotaType.html |   18 +-
 .../apache/kafka/streams/KafkaClientSupplier.html  |   14 +-
 .../apache/kafka/streams/KafkaStreams.State.html   |   36 +-
 .../kafka/streams/KafkaStreams.StateListener.html  |    4 +-
 .../org/apache/kafka/streams/KafkaStreams.html     |   62 +-
 .../org/apache/kafka/streams/KeyQueryMetadata.html |   18 +-
 26/javadoc/org/apache/kafka/streams/KeyValue.html  |   16 +-
 26/javadoc/org/apache/kafka/streams/LagInfo.html   |   14 +-
 .../apache/kafka/streams/StoreQueryParameters.html |   22 +-
 .../org/apache/kafka/streams/StreamsBuilder.html   |   54 +-
 .../streams/StreamsConfig.InternalConfig.html      |   20 +-
 .../org/apache/kafka/streams/StreamsConfig.html    |  196 +-
 .../org/apache/kafka/streams/StreamsMetrics.html   |   22 +-
 .../org/apache/kafka/streams/TestInputTopic.html   |   30 +-
 .../org/apache/kafka/streams/TestOutputTopic.html  |   24 +-
 .../kafka/streams/Topology.AutoOffsetReset.html    |   16 +-
 26/javadoc/org/apache/kafka/streams/Topology.html  |   60 +-
 .../streams/TopologyDescription.GlobalStore.html   |    8 +-
 .../kafka/streams/TopologyDescription.Node.html    |    8 +-
 .../streams/TopologyDescription.Processor.html     |    4 +-
 .../kafka/streams/TopologyDescription.Sink.html    |    6 +-
 .../kafka/streams/TopologyDescription.Source.html  |    8 +-
 .../streams/TopologyDescription.Subtopology.html   |    6 +-
 .../apache/kafka/streams/TopologyDescription.html  |    6 +-
 .../apache/kafka/streams/TopologyTestDriver.html   |   52 +-
 .../streams/errors/BrokerNotFoundException.html    |    8 +-
 .../errors/DefaultProductionExceptionHandler.html  |    8 +-
 ...tionHandler.DeserializationHandlerResponse.html |   20 +-
 .../errors/DeserializationExceptionHandler.html    |    4 +-
 .../streams/errors/InvalidStateStoreException.html |    8 +-
 .../apache/kafka/streams/errors/LockException.html |    8 +-
 .../errors/LogAndContinueExceptionHandler.html     |    8 +-
 .../streams/errors/LogAndFailExceptionHandler.html |    8 +-
 .../streams/errors/ProcessorStateException.html    |    8 +-
 ...Handler.ProductionExceptionHandlerResponse.html |   20 +-
 .../streams/errors/ProductionExceptionHandler.html |    4 +-
 .../kafka/streams/errors/StreamsException.html     |    8 +-
 .../streams/errors/TaskAssignmentException.html    |    8 +-
 .../streams/errors/TaskCorruptedException.html     |    8 +-
 .../streams/errors/TaskIdFormatException.html      |    8 +-
 .../streams/errors/TaskMigratedException.html      |    6 +-
 .../kafka/streams/errors/TopologyException.html    |    8 +-
 .../apache/kafka/streams/kstream/Aggregator.html   |    4 +-
 .../kafka/streams/kstream/CogroupedKStream.html    |   30 +-
 .../org/apache/kafka/streams/kstream/Consumed.html |   42 +-
 .../kafka/streams/kstream/ForeachAction.html       |    4 +-
 .../apache/kafka/streams/kstream/GlobalKTable.html |    8 +-
 .../org/apache/kafka/streams/kstream/Grouped.html  |   26 +-
 .../apache/kafka/streams/kstream/Initializer.html  |    4 +-
 .../apache/kafka/streams/kstream/JoinWindows.html  |   38 +-
 .../org/apache/kafka/streams/kstream/Joined.html   |   42 +-
 .../kafka/streams/kstream/KGroupedStream.html      |   44 +-
 .../kafka/streams/kstream/KGroupedTable.html       |   50 +-
 .../org/apache/kafka/streams/kstream/KStream.html  |  266 +--
 .../org/apache/kafka/streams/kstream/KTable.html   |  146 +-
 .../kafka/streams/kstream/KeyValueMapper.html      |    4 +-
 .../apache/kafka/streams/kstream/Materialized.html |   46 +-
 .../org/apache/kafka/streams/kstream/Merger.html   |    4 +-
 .../org/apache/kafka/streams/kstream/Named.html    |   14 +-
 .../apache/kafka/streams/kstream/Predicate.html    |    4 +-
 .../org/apache/kafka/streams/kstream/Printed.html  |   24 +-
 .../org/apache/kafka/streams/kstream/Produced.html |   36 +-
 .../org/apache/kafka/streams/kstream/Reducer.html  |    4 +-
 .../kafka/streams/kstream/Repartitioned.html       |   32 +-
 .../apache/kafka/streams/kstream/Serialized.html   |   14 +-
 .../kstream/SessionWindowedCogroupedKStream.html   |   14 +-
 .../kstream/SessionWindowedDeserializer.html       |   12 +-
 .../streams/kstream/SessionWindowedKStream.html    |   46 +-
 .../streams/kstream/SessionWindowedSerializer.html |   14 +-
 .../kafka/streams/kstream/SessionWindows.html      |   34 +-
 .../apache/kafka/streams/kstream/StreamJoined.html |   40 +-
 .../streams/kstream/Suppressed.BufferConfig.html   |   22 +-
 .../kstream/Suppressed.EagerBufferConfig.html      |    2 +-
 .../kstream/Suppressed.StrictBufferConfig.html     |    2 +-
 .../apache/kafka/streams/kstream/Suppressed.html   |    8 +-
 .../kstream/TimeWindowedCogroupedKStream.html      |   14 +-
 .../streams/kstream/TimeWindowedDeserializer.html  |   18 +-
 .../kafka/streams/kstream/TimeWindowedKStream.html |   46 +-
 .../streams/kstream/TimeWindowedSerializer.html    |   14 +-
 .../apache/kafka/streams/kstream/TimeWindows.html  |   32 +-
 .../apache/kafka/streams/kstream/Transformer.html  |    8 +-
 .../kafka/streams/kstream/TransformerSupplier.html |    4 +-
 .../kafka/streams/kstream/UnlimitedWindows.html    |   26 +-
 .../apache/kafka/streams/kstream/ValueJoiner.html  |    4 +-
 .../apache/kafka/streams/kstream/ValueMapper.html  |    4 +-
 .../kafka/streams/kstream/ValueMapperWithKey.html  |    4 +-
 .../kafka/streams/kstream/ValueTransformer.html    |    8 +-
 .../streams/kstream/ValueTransformerSupplier.html  |    4 +-
 .../streams/kstream/ValueTransformerWithKey.html   |    8 +-
 .../kstream/ValueTransformerWithKeySupplier.html   |    4 +-
 .../org/apache/kafka/streams/kstream/Window.html   |   24 +-
 .../org/apache/kafka/streams/kstream/Windowed.html |   14 +-
 .../WindowedSerdes.SessionWindowedSerde.html       |    6 +-
 .../kstream/WindowedSerdes.TimeWindowedSerde.html  |   10 +-
 .../kafka/streams/kstream/WindowedSerdes.html      |   10 +-
 .../org/apache/kafka/streams/kstream/Windows.html  |   18 +-
 .../kafka/streams/processor/AbstractProcessor.html |   10 +-
 .../processor/BatchingStateRestoreCallback.html    |    6 +-
 .../kafka/streams/processor/Cancellable.html       |    4 +-
 .../streams/processor/ConnectedStoreProvider.html  |    6 +-
 .../streams/processor/DefaultPartitionGrouper.html |    8 +-
 .../streams/processor/FailOnInvalidTimestamp.html  |    8 +-
 .../processor/LogAndSkipOnInvalidTimestamp.html    |    8 +-
 .../MockProcessorContext.CapturedForward.html      |   10 +-
 .../MockProcessorContext.CapturedPunctuator.html   |   12 +-
 .../streams/processor/MockProcessorContext.html    |   78 +-
 .../kafka/streams/processor/PartitionGrouper.html  |    4 +-
 .../apache/kafka/streams/processor/Processor.html  |    8 +-
 .../kafka/streams/processor/ProcessorContext.html  |   46 +-
 .../kafka/streams/processor/ProcessorSupplier.html |    4 +-
 .../kafka/streams/processor/PunctuationType.html   |   16 +-
 .../apache/kafka/streams/processor/Punctuator.html |    4 +-
 .../kafka/streams/processor/RecordContext.html     |   12 +-
 .../streams/processor/StateRestoreCallback.html    |    4 +-
 .../streams/processor/StateRestoreListener.html    |    8 +-
 .../apache/kafka/streams/processor/StateStore.html |   14 +-
 .../kafka/streams/processor/StreamPartitioner.html |    4 +-
 .../org/apache/kafka/streams/processor/TaskId.html |   26 +-
 .../kafka/streams/processor/TaskMetadata.html      |   14 +-
 .../kafka/streams/processor/ThreadMetadata.html    |   26 +-
 .../streams/processor/TimestampExtractor.html      |    4 +-
 .../org/apache/kafka/streams/processor/To.html     |   20 +-
 .../streams/processor/TopicNameExtractor.html      |    4 +-
 .../UsePartitionTimeOnInvalidTimestamp.html        |    8 +-
 .../UsePreviousTimeOnInvalidTimestamp.html         |    8 +-
 .../processor/WallclockTimestampExtractor.html     |    6 +-
 .../org/apache/kafka/streams/state/HostInfo.html   |   18 +-
 .../streams/state/KeyValueBytesStoreSupplier.html  |    2 +-
 .../kafka/streams/state/KeyValueIterator.html      |    6 +-
 .../apache/kafka/streams/state/KeyValueStore.html  |   10 +-
 .../kafka/streams/state/QueryableStoreType.html    |    6 +-
 .../QueryableStoreTypes.KeyValueStoreType.html     |    4 +-
 .../QueryableStoreTypes.SessionStoreType.html      |    4 +-
 .../state/QueryableStoreTypes.WindowStoreType.html |    4 +-
 .../kafka/streams/state/QueryableStoreTypes.html   |   14 +-
 .../kafka/streams/state/ReadOnlyKeyValueStore.html |   10 +-
 .../kafka/streams/state/ReadOnlySessionStore.html  |    6 +-
 .../kafka/streams/state/ReadOnlyWindowStore.html   |   30 +-
 .../kafka/streams/state/RocksDBConfigSetter.html   |    8 +-
 .../streams/state/SessionBytesStoreSupplier.html   |    6 +-
 .../apache/kafka/streams/state/SessionStore.html   |   12 +-
 .../apache/kafka/streams/state/StateSerdes.html    |   28 +-
 .../apache/kafka/streams/state/StoreBuilder.html   |   18 +-
 .../apache/kafka/streams/state/StoreSupplier.html  |    8 +-
 .../org/apache/kafka/streams/state/Stores.html     |   40 +-
 .../kafka/streams/state/StreamsMetadata.html       |   26 +-
 .../kafka/streams/state/TimestampedBytesStore.html |    4 +-
 .../streams/state/TimestampedKeyValueStore.html    |    2 +-
 .../streams/state/TimestampedWindowStore.html      |    2 +-
 .../kafka/streams/state/ValueAndTimestamp.html     |   16 +-
 .../streams/state/WindowBytesStoreSupplier.html    |   12 +-
 .../apache/kafka/streams/state/WindowStore.html    |   30 +-
 .../kafka/streams/state/WindowStoreIterator.html   |    4 +-
 .../kafka/streams/test/ConsumerRecordFactory.html  |   60 +-
 .../apache/kafka/streams/test/OutputVerifier.html  |   32 +-
 .../org/apache/kafka/streams/test/TestRecord.html  |   40 +-
 26/javadoc/serialized-form.html                    |   36 +-
 26/migration.html                                  |    4 +-
 26/ops.html                                        |  392 ++--
 26/protocol.html                                   |   42 +-
 26/quickstart.html                                 |  140 +-
 26/security.html                                   |  481 ++---
 26/streams/architecture.html                       |   10 +-
 26/streams/core-concepts.html                      |   18 +-
 26/streams/developer-guide/app-reset-tool.html     |    8 +-
 26/streams/developer-guide/config-streams.html     |   18 +-
 26/streams/developer-guide/datatypes.html          |   14 +-
 26/streams/developer-guide/dsl-api.html            |  213 +--
 .../developer-guide/dsl-topology-naming.html       |   62 +-
 26/streams/developer-guide/index.html              |    2 +-
 .../developer-guide/interactive-queries.html       |   35 +-
 26/streams/developer-guide/manage-topics.html      |    2 +-
 26/streams/developer-guide/memory-mgmt.html        |   14 +-
 26/streams/developer-guide/processor-api.html      |   32 +-
 26/streams/developer-guide/running-app.html        |    5 +-
 26/streams/developer-guide/security.html           |   11 +-
 26/streams/developer-guide/testing.html            |   90 +-
 26/streams/developer-guide/write-streams.html      |   26 +-
 26/streams/index.html                              |   30 +-
 26/streams/quickstart.html                         |  104 +-
 26/streams/tutorial.html                           |  172 +-
 26/streams/upgrade-guide.html                      |   30 +-
 26/upgrade.html                                    |  100 +-
 26/uses.html                                       |   14 +-
 books-and-papers.html                              |  210 +++
 code.html                                          |   11 +-
 coding-guide.html                                  |    3 +-
 committers.html                                    |    3 +-
 contact.html                                       |    3 +-
 contributing.html                                  |    3 +-
 css/highlightjs-a11y-light.css                     |   99 +
 css/styles.css                                     | 1346 ++++++++++++-
 cve-list.html                                      |    1 +
 downloads.html                                     |    3 +-
 events.html                                        |   22 +-
 get-started.html                                   |  168 ++
 images/books/book-ddia.png                         |  Bin 0 -> 69222 bytes
 images/books/book-designing-eds.jpg                |  Bin 0 -> 61530 bytes
 images/books/book-effective-kafka.png              |  Bin 0 -> 102812 bytes
 images/books/book-event-streams-action.png         |  Bin 0 -> 74792 bytes
 images/books/book-heart-logs.jpg                   |  Bin 0 -> 56831 bytes
 images/books/book-kafka-action.png                 |  Bin 0 -> 75947 bytes
 images/books/book-kafka-guide.jpg                  |  Bin 0 -> 62124 bytes
 images/books/book-kafka-streams-in-action.jpg      |  Bin 0 -> 18270 bytes
 images/books/book-kafka-streams-journal.jpg        |  Bin 0 -> 86182 bytes
 images/books/book-kafka-streams-ksqldb.png         |  Bin 0 -> 52023 bytes
 images/books/book-making-sense.png                 |  Bin 0 -> 84560 bytes
 images/cluster-diagram.svg                         |   48 +
 images/hp-icons/icon-code.svg                      |    5 +
 images/hp-icons/icon-community.svg                 |    4 +
 images/hp-icons/icon-connect.svg                   |    5 +
 images/hp-icons/icon-database.svg                  |    8 +
 images/hp-icons/icon-high-availability.svg         |    8 +
 images/hp-icons/icon-high-throughput.svg           |    3 +
 images/hp-icons/icon-library.svg                   |   10 +
 images/hp-icons/icon-online.svg                    |    3 +
 images/hp-icons/icon-scalable.svg                  |    3 +
 images/hp-icons/icon-stream.svg                    |    3 +
 images/hp-icons/icon-trusted.svg                   |    4 +
 images/hp-icons/icon-vital.svg                     |    3 +
 images/icons/airplane.svg                          |    5 +
 images/icons/bank.svg                              |    3 +
 images/icons/manufacturing.svg                     |    3 +
 images/icons/shield.svg                            |    3 +
 images/icons/telecom-tower.svg                     |    7 +
 .../data-engineering-podcast-logo.jpeg             |  Bin 0 -> 15500 bytes
 .../podcast-logos/software-engineering-daily.png   |  Bin 0 -> 18995 bytes
 .../podcast-logos/streaming-audio-confluent.jpeg   |  Bin 0 -> 49044 bytes
 images/streams-and-tables-p1_p4.png                |  Bin 0 -> 747793 bytes
 images/waves.svg                                   |   71 +
 includes/_footer.htm                               |  446 ++++-
 includes/_header.htm                               |    9 +-
 includes/_nav.htm                                  |    3 +
 includes/_top.htm                                  |  159 +-
 index.html                                         |  337 +++-
 intro.html                                         |   25 +-
 logos/kafka-white.svg                              |    9 +
 performance.html                                   |    3 +-
 podcasts.html                                      |   61 +
 powered-by.html                                    |    5 +-
 project-security.html                              |    3 +-
 project.html                                       |    9 +-
 quickstart-docker.html                             |   35 +
 quickstart.html                                    |   20 +-
 uses.html                                          |    3 +-
 videos.html                                        | 1796 ++++++++++++++++++
 1194 files changed, 27401 insertions(+), 23705 deletions(-)
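
Editor's note: the bulk of this diff is one mechanical rewrite — bare <pre> blocks across the generated protocol and Javadoc pages become <pre class="line-numbers"><code class="language-java"> ... </code></pre>, the markup the new highlightjs-a11y-light.css stylesheet targets. A minimal Python sketch of that kind of bulk substitution follows; the glob pattern, and the assumption that every bare <pre> on a page should be wrapped, are illustrative only and not taken from the actual build tooling.

from pathlib import Path

OLD_OPEN = "<pre>"
NEW_OPEN = '<pre class="line-numbers"><code class="language-java">'

def rewrite(path: Path) -> None:
    # Simplification: assumes every bare <pre> should be wrapped and that
    # no <pre> already carries attributes; the real pages are messier.
    html = path.read_text(encoding="utf-8")
    if OLD_OPEN not in html:
        return
    html = html.replace(OLD_OPEN, NEW_OPEN)
    html = html.replace("</pre>", "</code></pre>")
    path.write_text(html, encoding="utf-8")

# Illustrative glob; the commit touches many more page types than this.
for page in Path(".").glob("0*/generated/protocol_messages.html"):
    rewrite(page)
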

diff --git a/.gitignore b/.gitignore
index e43b0f9..71d1449 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,2 @@
 .DS_Store
+*.code-workspace
\ No newline at end of file
diff --git a/0100/generated/protocol_messages.html b/0100/generated/protocol_messages.html
index 166fa2e..76d5f72 100644
--- a/0100/generated/protocol_messages.html
+++ b/0100/generated/protocol_messages.html
@@ -1,10 +1,10 @@
 <h5>Headers:</h5>
-<pre>Request Header => api_key api_version correlation_id client_id 
+<pre class="line-numbers"><code class="language-java">Request Header => api_key api_version correlation_id client_id 
   api_key => INT16
   api_version => INT16
   correlation_id => INT32
   client_id => NULLABLE_STRING
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -17,9 +17,9 @@
 <tr>
 <td>client_id</td><td>A user-specified identifier for the client making the request.</td></tr>
 </table>
-<pre>Response Header => correlation_id 
+<pre class="line-numbers"><code class="language-java">Response Header => correlation_id 
   correlation_id => INT32
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -29,7 +29,7 @@
 <h5>Produce API (Key: 0):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Produce Request (Version: 0) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 0) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -37,7 +37,7 @@
     data => partition record_set 
       partition => INT32
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -56,7 +56,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 1) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 1) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -64,7 +64,7 @@
     data => partition record_set 
       partition => INT32
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -83,7 +83,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 2) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 2) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -91,7 +91,7 @@
     data => partition record_set 
       partition => INT32
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -111,14 +111,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Produce Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
       partition => INT32
       error_code => INT16
       base_offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -135,7 +135,7 @@
 <td>base_offset</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 1) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 1) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
@@ -143,7 +143,7 @@
       error_code => INT16
       base_offset => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -162,7 +162,7 @@
 <td>throttle_time_ms</td><td>Duration in milliseconds for which the request was throttled due to quota violation. (Zero if the request did not violate any quota.)</td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 2) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 2) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset timestamp 
@@ -171,7 +171,7 @@
       base_offset => INT64
       timestamp => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -195,7 +195,7 @@
 <h5>Fetch API (Key: 1):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -205,7 +205,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -228,7 +228,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -238,7 +238,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -261,7 +261,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -271,7 +271,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -295,7 +295,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Fetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code high_watermark record_set 
@@ -303,7 +303,7 @@
       error_code => INT16
       high_watermark => INT64
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -322,7 +322,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 1) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 1) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -331,7 +331,7 @@
       error_code => INT16
       high_watermark => INT64
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -352,7 +352,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 2) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 2) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -361,7 +361,7 @@
       error_code => INT16
       high_watermark => INT64
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -385,7 +385,7 @@
 <h5>Offsets API (Key: 2):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Offsets Request (Version: 0) => replica_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 0) => replica_id [topics] 
   replica_id => INT32
   topics => topic [partitions] 
     topic => STRING
@@ -393,7 +393,7 @@
       partition => INT32
       timestamp => INT64
       max_num_offsets => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -413,14 +413,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Offsets Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code [offsets] 
       partition => INT32
       error_code => INT16
       offsets => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -440,18 +440,18 @@
 <h5>Metadata API (Key: 3):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Metadata Request (Version: 0) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 0) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If no topics are specified, fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 1) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 1) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -459,7 +459,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Metadata Response (Version: 0) => [brokers] [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 0) => [brokers] [topic_metadata] 
   brokers => node_id host port 
     node_id => INT32
     host => STRING
@@ -473,7 +473,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -504,7 +504,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
   brokers => node_id host port rack 
     node_id => INT32
     host => STRING
@@ -521,7 +521,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -561,7 +561,7 @@
 <h5>LeaderAndIsr API (Key: 4):</h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -577,7 +577,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -613,13 +613,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaderAndIsr Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -637,14 +637,14 @@
 <h5>StopReplica API (Key: 5):</h5>
 
 <b>Requests:</b><br>
-<p><pre>StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
   controller_id => INT32
   controller_epoch => INT32
   delete_partitions => INT8
   partitions => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -662,13 +662,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>StopReplica Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -686,7 +686,7 @@
 <h5>UpdateMetadata API (Key: 6):</h5>
 
 <b>Requests:</b><br>
-<p><pre>UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -702,7 +702,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -737,7 +737,7 @@
 <td>port</td><td>The port on which the broker accepts requests.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -755,7 +755,7 @@
       port => INT32
       host => STRING
       security_protocol_type => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -794,7 +794,7 @@
 <td>security_protocol_type</td><td>The security protocol type.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -813,7 +813,7 @@
       host => STRING
       security_protocol_type => INT16
     rack => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -855,27 +855,27 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>UpdateMetadata Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 1) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 1) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 2) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 2) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -886,9 +886,9 @@
 
 <b>Requests:</b><br>
 </p>
-<p><pre>ControlledShutdown Request (Version: 1) => broker_id 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Request (Version: 1) => broker_id 
   broker_id => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -897,12 +897,12 @@
 </p>
 <b>Responses:</b><br>
 </p>
-<p><pre>ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
   error_code => INT16
   partitions_remaining => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -918,7 +918,7 @@
 <h5>OffsetCommit API (Key: 8):</h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetCommit Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
@@ -926,7 +926,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -945,7 +945,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -956,7 +956,7 @@
       offset => INT64
       timestamp => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -981,7 +981,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -992,7 +992,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1018,13 +1018,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>OffsetCommit Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1039,13 +1039,13 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1060,13 +1060,13 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 2) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 2) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1084,13 +1084,13 @@
 <h5>OffsetFetch API (Key: 9):</h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetFetch Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1105,13 +1105,13 @@
 <td>partition</td><td>Topic partition id.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Request (Version: 1) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 1) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1127,7 +1127,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>OffsetFetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1135,7 +1135,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1154,7 +1154,7 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1162,7 +1162,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1184,9 +1184,9 @@
 <h5>GroupCoordinator API (Key: 10):</h5>
 
 <b>Requests:</b><br>
-<p><pre>GroupCoordinator Request (Version: 0) => group_id 
+<p><pre class="line-numbers"><code class="language-java">GroupCoordinator Request (Version: 0) => group_id 
   group_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1194,13 +1194,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>GroupCoordinator Response (Version: 0) => error_code coordinator 
+<p><pre class="line-numbers"><code class="language-java">GroupCoordinator Response (Version: 0) => error_code coordinator 
   error_code => INT16
   coordinator => node_id host port 
     node_id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1218,7 +1218,7 @@
 <h5>JoinGroup API (Key: 11):</h5>
 
 <b>Requests:</b><br>
-<p><pre>JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   member_id => STRING
@@ -1226,7 +1226,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1246,7 +1246,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
   error_code => INT16
   generation_id => INT32
   group_protocol => STRING
@@ -1255,7 +1255,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1279,11 +1279,11 @@
 <h5>Heartbeat API (Key: 12):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1295,9 +1295,9 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Heartbeat Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1307,10 +1307,10 @@
 <h5>LeaveGroup API (Key: 13):</h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaveGroup Request (Version: 0) => group_id member_id 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Request (Version: 0) => group_id member_id 
   group_id => STRING
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1320,9 +1320,9 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaveGroup Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1332,14 +1332,14 @@
 <h5>SyncGroup API (Key: 14):</h5>
 
 <b>Requests:</b><br>
-<p><pre>SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
   group_id => STRING
   generation_id => INT32
   member_id => STRING
   group_assignment => member_id member_assignment 
     member_id => STRING
     member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1357,10 +1357,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SyncGroup Response (Version: 0) => error_code member_assignment 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Response (Version: 0) => error_code member_assignment 
   error_code => INT16
   member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1372,9 +1372,9 @@
 <h5>DescribeGroups API (Key: 15):</h5>
 
 <b>Requests:</b><br>
-<p><pre>DescribeGroups Request (Version: 0) => [group_ids] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Request (Version: 0) => [group_ids] 
   group_ids => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1382,7 +1382,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DescribeGroups Response (Version: 0) => [groups] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Response (Version: 0) => [groups] 
   groups => error_code group_id state protocol_type protocol [members] 
     error_code => INT16
     group_id => STRING
@@ -1395,7 +1395,7 @@
       client_host => STRING
       member_metadata => BYTES
       member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1427,19 +1427,19 @@
 <h5>ListGroups API (Key: 16):</h5>
 
 <b>Requests:</b><br>
-<p><pre>ListGroups Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ListGroups Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
 <b>Responses:</b><br>
-<p><pre>ListGroups Response (Version: 0) => error_code [groups] 
+<p><pre class="line-numbers"><code class="language-java">ListGroups Response (Version: 0) => error_code [groups] 
   error_code => INT16
   groups => group_id protocol_type 
     group_id => STRING
     protocol_type => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1455,9 +1455,9 @@
 <h5>SaslHandshake API (Key: 17):</h5>
 
 <b>Requests:</b><br>
-<p><pre>SaslHandshake Request (Version: 0) => mechanism 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Request (Version: 0) => mechanism 
   mechanism => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1465,10 +1465,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
   error_code => INT16
   enabled_mechanisms => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1480,20 +1480,20 @@
 <h5>ApiVersions API (Key: 18):</h5>
 
 <b>Requests:</b><br>
-<p><pre>ApiVersions Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
 <b>Responses:</b><br>
-<p><pre>ApiVersions Response (Version: 0) => error_code [api_versions] 
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Response (Version: 0) => error_code [api_versions] 
   error_code => INT16
   api_versions => api_key min_version max_version 
     api_key => INT16
     min_version => INT16
     max_version => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
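
Editor's note: the request-header grammar at the top of this generated page (api_key, api_version and correlation_id as fixed-width big-endian integers, client_id as a NULLABLE_STRING) packs directly with Python's struct module. A small sketch, assuming the usual Kafka encoding of NULLABLE_STRING as an INT16 length prefix with -1 for null — that convention is defined elsewhere in the protocol guide, not in this diff.

import struct

def encode_nullable_string(s):
    # Assumed convention: INT16 length prefix, -1 for a null string.
    if s is None:
        return struct.pack(">h", -1)
    data = s.encode("utf-8")
    return struct.pack(">h", len(data)) + data

def encode_request_header(api_key, api_version, correlation_id, client_id):
    # Request Header => api_key api_version correlation_id client_id
    #   api_key        => INT16
    #   api_version    => INT16
    #   correlation_id => INT32
    #   client_id      => NULLABLE_STRING
    return (struct.pack(">hhi", api_key, api_version, correlation_id)
            + encode_nullable_string(client_id))

# Example: header for a Metadata request (api_key 3, version 0) from "demo".
header = encode_request_header(3, 0, 42, "demo")
assert header[:2] == struct.pack(">h", 3)
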
diff --git a/0100/introduction.html b/0100/introduction.html
index 8668361..6f14645 100644
--- a/0100/introduction.html
+++ b/0100/introduction.html
@@ -48,7 +48,7 @@
 </div>
 
 <p>
-	In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.
+	In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.
 </p>
 
 <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
diff --git a/0100/javadoc/org/apache/kafka/common/MetricName.html b/0100/javadoc/org/apache/kafka/common/MetricName.html
index 34a93ef..80ef609 100644
--- a/0100/javadoc/org/apache/kafka/common/MetricName.html
+++ b/0100/javadoc/org/apache/kafka/common/MetricName.html
@@ -112,7 +112,7 @@ extends <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html?
  <p>
 
  Usage looks something like this:
- <pre><code>// set up metrics:
+ <pre class="line-numbers"><code>// set up metrics:
 
  Map&lt;String, String&gt; metricTags = new LinkedHashMap&lt;String, String&gt;();
  metricTags.put("client-id", "producer-1");
diff --git a/0100/protocol.html b/0100/protocol.html
index 642e566..5f34b62 100644
--- a/0100/protocol.html
+++ b/0100/protocol.html
@@ -20,8 +20,8 @@
 <div class="content">
 	<!--#include virtual="../includes/_nav.htm" -->
 	<div class="right">
-		<h1>Kafka protocol guide</h1>
-    <p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="https://kafka.apache.org/documentation.html#design">here</a></p>
+		<h1 class="content-title">Kafka protocol guide</h1>
+    <p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="/documentation.html#design">here</a>.</p>
 
     <ul class="toc">
         <li><a href="#protocol_preliminaries">Preliminaries</a>
@@ -183,10 +183,10 @@
 
     <p>All requests and responses originate from the following grammar, which will be incrementally described through the rest of this document:</p>
 
-    <pre>
+    <pre class="line-numbers"><code class="language-java">
     RequestOrResponse => Size (RequestMessage | ResponseMessage)
     Size => int32
-    </pre>
+    </code></pre>
 
     <table class="data-table"><tbody>
     <tr><th>Field</th><th>Description</th></tr>
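
Editor's note: the RequestOrResponse grammar above is the framing rule for the whole protocol — every message on the wire is preceded by an int32 size. Below is a minimal sketch that frames and sends the empty-bodied ApiVersions v0 request from the generated tables earlier in this diff (api_key 18); the broker address is a placeholder, the client_id encoding is the assumed INT16 length-prefixed form, and the single recv is a simplification (a real client loops until size bytes have arrived).

import socket
import struct

def frame(message: bytes) -> bytes:
    # RequestOrResponse => Size (RequestMessage | ResponseMessage); Size => int32
    return struct.pack(">i", len(message)) + message

# ApiVersions (api_key 18) v0 has no fields beyond the request header:
# api_key, api_version, correlation_id, then the client_id string.
client_id = b"demo"
request = struct.pack(">hhih", 18, 0, 1, len(client_id)) + client_id

with socket.create_connection(("localhost", 9092)) as sock:  # placeholder broker
    sock.sendall(frame(request))
    (size,) = struct.unpack(">i", sock.recv(4))
    payload = sock.recv(size)  # correlation_id, then error_code [api_versions]
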
diff --git a/0101/documentation.html b/0101/documentation.html
index a48d0c6..81ddc90 100644
--- a/0101/documentation.html
+++ b/0101/documentation.html
@@ -20,7 +20,7 @@
 <!--#include virtual="../includes/_header.htm" -->
 <!--#include virtual="../includes/_top.htm" -->
 
-<div class="content documentation">
+<div class="content">
 	<!--#include virtual="../includes/_nav.htm" -->
 	<div class="right">
 		<!--#include virtual="../includes/_docs_banner.htm" -->
diff --git a/0101/generated/protocol_messages.html b/0101/generated/protocol_messages.html
index 11dad58..e3aab03 100644
--- a/0101/generated/protocol_messages.html
+++ b/0101/generated/protocol_messages.html
@@ -1,10 +1,10 @@
 <h5>Headers:</h5>
-<pre>Request Header => api_key api_version correlation_id client_id 
+<pre class="line-numbers"><code class="language-java">Request Header => api_key api_version correlation_id client_id 
   api_key => INT16
   api_version => INT16
   correlation_id => INT32
   client_id => NULLABLE_STRING
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -17,9 +17,9 @@
 <tr>
 <td>client_id</td><td>A user-specified identifier for the client making the request.</td></tr>
 </table>
-<pre>Response Header => correlation_id 
+<pre class="line-numbers"><code class="language-java">Response Header => correlation_id 
   correlation_id => INT32
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -29,7 +29,7 @@
 <h5>Produce API (Key: 0):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Produce Request (Version: 0) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 0) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -37,7 +37,7 @@
     data => partition record_set 
       partition => INT32
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -56,7 +56,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 1) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 1) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -64,7 +64,7 @@
     data => partition record_set 
       partition => INT32
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -83,7 +83,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 2) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 2) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -91,7 +91,7 @@
     data => partition record_set 
       partition => INT32
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -111,14 +111,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Produce Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
       partition => INT32
       error_code => INT16
       base_offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -135,7 +135,7 @@
 <td>base_offset</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 1) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 1) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
@@ -143,7 +143,7 @@
       error_code => INT16
       base_offset => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -162,7 +162,7 @@
 <td>throttle_time_ms</td><td>Duration in milliseconds for which the request was throttled due to quota violation. (Zero if the request did not violate any quota.)</td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 2) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 2) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset timestamp 
@@ -171,7 +171,7 @@
       base_offset => INT64
       timestamp => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -195,7 +195,7 @@
 <h5>Fetch API (Key: 1):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -205,7 +205,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -228,7 +228,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -238,7 +238,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -261,7 +261,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -271,7 +271,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -294,7 +294,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 3) => replica_id max_wait_time min_bytes max_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 3) => replica_id max_wait_time min_bytes max_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -305,7 +305,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -331,7 +331,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Fetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code high_watermark record_set 
@@ -339,7 +339,7 @@
       error_code => INT16
       high_watermark => INT64
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -358,7 +358,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 1) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 1) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -367,7 +367,7 @@
       error_code => INT16
       high_watermark => INT64
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -388,7 +388,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 2) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 2) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -397,7 +397,7 @@
       error_code => INT16
       high_watermark => INT64
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -418,7 +418,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 3) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 3) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -427,7 +427,7 @@
       error_code => INT16
       high_watermark => INT64
       record_set => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -451,7 +451,7 @@
 <h5>Offsets API (Key: 2):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Offsets Request (Version: 0) => replica_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 0) => replica_id [topics] 
   replica_id => INT32
   topics => topic [partitions] 
     topic => STRING
@@ -459,7 +459,7 @@
       partition => INT32
       timestamp => INT64
       max_num_offsets => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -478,14 +478,14 @@
 <td>max_num_offsets</td><td>Maximum offsets to return.</td></tr>
 </table>
 </p>
-<p><pre>Offsets Request (Version: 1) => replica_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 1) => replica_id [topics] 
   replica_id => INT32
   topics => topic [partitions] 
     topic => STRING
     partitions => partition timestamp 
       partition => INT32
       timestamp => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -503,14 +503,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Offsets Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code [offsets] 
       partition => INT32
       error_code => INT16
       offsets => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -527,7 +527,7 @@
 <td>offsets</td><td>A list of offsets.</td></tr>
 </table>
 </p>
-<p><pre>Offsets Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code timestamp offset 
@@ -535,7 +535,7 @@
       error_code => INT16
       timestamp => INT64
       offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -557,27 +557,27 @@
 <h5>Metadata API (Key: 3):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Metadata Request (Version: 0) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 0) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If no topics are specified fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 1) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 1) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If the topics array is null fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 2) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 2) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -585,7 +585,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Metadata Response (Version: 0) => [brokers] [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 0) => [brokers] [topic_metadata] 
   brokers => node_id host port 
     node_id => INT32
     host => STRING
@@ -599,7 +599,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -630,7 +630,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
   brokers => node_id host port rack 
     node_id => INT32
     host => STRING
@@ -647,7 +647,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -684,7 +684,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 2) => [brokers] cluster_id controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 2) => [brokers] cluster_id controller_id [topic_metadata] 
   brokers => node_id host port rack 
     node_id => INT32
     host => STRING
@@ -702,7 +702,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -744,7 +744,7 @@
 <h5>LeaderAndIsr API (Key: 4):</h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -760,7 +760,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -796,13 +796,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaderAndIsr Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -820,14 +820,14 @@
 <h5>StopReplica API (Key: 5):</h5>
 
 <b>Requests:</b><br>
-<p><pre>StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
   controller_id => INT32
   controller_epoch => INT32
   delete_partitions => BOOLEAN
   partitions => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -845,13 +845,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>StopReplica Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -869,7 +869,7 @@
 <h5>UpdateMetadata API (Key: 6):</h5>
 
 <b>Requests:</b><br>
-<p><pre>UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -885,7 +885,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -920,7 +920,7 @@
 <td>port</td><td>The port on which the broker accepts requests.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -938,7 +938,7 @@
       port => INT32
       host => STRING
       security_protocol_type => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -977,7 +977,7 @@
 <td>security_protocol_type</td><td>The security protocol type.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -996,7 +996,7 @@
       host => STRING
       security_protocol_type => INT16
     rack => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1038,27 +1038,27 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>UpdateMetadata Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 1) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 1) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 2) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 2) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1069,9 +1069,9 @@
 
 <b>Requests:</b><br>
 </p>
-<p><pre>ControlledShutdown Request (Version: 1) => broker_id 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Request (Version: 1) => broker_id 
   broker_id => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1080,12 +1080,12 @@
 </p>
 <b>Responses:</b><br>
 </p>
-<p><pre>ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
   error_code => INT16
   partitions_remaining => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1101,7 +1101,7 @@
 <h5>OffsetCommit API (Key: 8):</h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetCommit Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
@@ -1109,7 +1109,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1128,7 +1128,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -1139,7 +1139,7 @@
       offset => INT64
       timestamp => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1164,7 +1164,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -1175,7 +1175,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1201,13 +1201,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>OffsetCommit Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1222,13 +1222,13 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1243,13 +1243,13 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 2) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 2) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1267,13 +1267,13 @@
 <h5>OffsetFetch API (Key: 9):</h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetFetch Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1288,13 +1288,13 @@
 <td>partition</td><td>Topic partition id.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Request (Version: 1) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 1) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1310,7 +1310,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>OffsetFetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1318,7 +1318,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1337,7 +1337,7 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1345,7 +1345,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1367,9 +1367,9 @@
 <h5>GroupCoordinator API (Key: 10):</h5>
 
 <b>Requests:</b><br>
-<p><pre>GroupCoordinator Request (Version: 0) => group_id 
+<p><pre class="line-numbers"><code class="language-java">GroupCoordinator Request (Version: 0) => group_id 
   group_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1377,13 +1377,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>GroupCoordinator Response (Version: 0) => error_code coordinator 
+<p><pre class="line-numbers"><code class="language-java">GroupCoordinator Response (Version: 0) => error_code coordinator 
   error_code => INT16
   coordinator => node_id host port 
     node_id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1401,7 +1401,7 @@
 <h5>JoinGroup API (Key: 11):</h5>
 
 <b>Requests:</b><br>
-<p><pre>JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   member_id => STRING
@@ -1409,7 +1409,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1428,7 +1428,7 @@
 <td>protocol_metadata</td><td></td></tr>
 </table>
 </p>
-<p><pre>JoinGroup Request (Version: 1) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 1) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   rebalance_timeout => INT32
@@ -1437,7 +1437,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1459,7 +1459,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
   error_code => INT16
   generation_id => INT32
   group_protocol => STRING
@@ -1468,7 +1468,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1489,7 +1489,7 @@
 <td>member_metadata</td><td></td></tr>
 </table>
 </p>
-<p><pre>JoinGroup Response (Version: 1) => error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 1) => error_code generation_id group_protocol leader_id member_id [members] 
   error_code => INT16
   generation_id => INT32
   group_protocol => STRING
@@ -1498,7 +1498,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1522,11 +1522,11 @@
 <h5>Heartbeat API (Key: 12):</h5>
 
 <b>Requests:</b><br>
-<p><pre>Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1538,9 +1538,9 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Heartbeat Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1550,10 +1550,10 @@
 <h5>LeaveGroup API (Key: 13):</h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaveGroup Request (Version: 0) => group_id member_id 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Request (Version: 0) => group_id member_id 
   group_id => STRING
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1563,9 +1563,9 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaveGroup Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1575,14 +1575,14 @@
 <h5>SyncGroup API (Key: 14):</h5>
 
 <b>Requests:</b><br>
-<p><pre>SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
   group_id => STRING
   generation_id => INT32
   member_id => STRING
   group_assignment => member_id member_assignment 
     member_id => STRING
     member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1600,10 +1600,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SyncGroup Response (Version: 0) => error_code member_assignment 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Response (Version: 0) => error_code member_assignment 
   error_code => INT16
   member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1615,9 +1615,9 @@
 <h5>DescribeGroups API (Key: 15):</h5>
 
 <b>Requests:</b><br>
-<p><pre>DescribeGroups Request (Version: 0) => [group_ids] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Request (Version: 0) => [group_ids] 
   group_ids => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1625,7 +1625,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DescribeGroups Response (Version: 0) => [groups] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Response (Version: 0) => [groups] 
   groups => error_code group_id state protocol_type protocol [members] 
     error_code => INT16
     group_id => STRING
@@ -1638,7 +1638,7 @@
       client_host => STRING
       member_metadata => BYTES
       member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1670,19 +1670,19 @@
 <h5>ListGroups API (Key: 16):</h5>
 
 <b>Requests:</b><br>
-<p><pre>ListGroups Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ListGroups Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
 <b>Responses:</b><br>
-<p><pre>ListGroups Response (Version: 0) => error_code [groups] 
+<p><pre class="line-numbers"><code class="language-java">ListGroups Response (Version: 0) => error_code [groups] 
   error_code => INT16
   groups => group_id protocol_type 
     group_id => STRING
     protocol_type => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1698,9 +1698,9 @@
 <h5>SaslHandshake API (Key: 17):</h5>
 
 <b>Requests:</b><br>
-<p><pre>SaslHandshake Request (Version: 0) => mechanism 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Request (Version: 0) => mechanism 
   mechanism => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1708,10 +1708,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
   error_code => INT16
   enabled_mechanisms => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1723,20 +1723,20 @@
 <h5>ApiVersions API (Key: 18):</h5>
 
 <b>Requests:</b><br>
-<p><pre>ApiVersions Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
 <b>Responses:</b><br>
-<p><pre>ApiVersions Response (Version: 0) => error_code [api_versions] 
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Response (Version: 0) => error_code [api_versions] 
   error_code => INT16
   api_versions => api_key min_version max_version 
     api_key => INT16
     min_version => INT16
     max_version => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1754,7 +1754,7 @@
 <h5>CreateTopics API (Key: 19):</h5>
 
 <b>Requests:</b><br>
-<p><pre>CreateTopics Request (Version: 0) => [create_topic_requests] timeout 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Request (Version: 0) => [create_topic_requests] timeout 
   create_topic_requests => topic num_partitions replication_factor [replica_assignment] [configs] 
     topic => STRING
     num_partitions => INT32
@@ -1766,7 +1766,7 @@
       config_key => STRING
       config_value => STRING
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1794,11 +1794,11 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>CreateTopics Response (Version: 0) => [topic_error_codes] 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Response (Version: 0) => [topic_error_codes] 
   topic_error_codes => topic error_code 
     topic => STRING
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1812,10 +1812,10 @@
 <h5>DeleteTopics API (Key: 20):</h5>
 
 <b>Requests:</b><br>
-<p><pre>DeleteTopics Request (Version: 0) => [topics] timeout 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Request (Version: 0) => [topics] timeout 
   topics => STRING
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1825,11 +1825,11 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DeleteTopics Response (Version: 0) => [topic_error_codes] 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Response (Version: 0) => [topic_error_codes] 
   topic_error_codes => topic error_code 
     topic => STRING
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
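To make the generated header format in this file concrete: api_key, api_version, and correlation_id are fixed-width big-endian integers, and client_id is a NULLABLE_STRING (an INT16 length, -1 for null, then UTF-8 bytes). A hedged encoding sketch; RequestHeaderSketch is an illustrative name, not a Kafka class.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative only: builds the request header shown above
// (api_key, api_version, correlation_id, client_id).
public final class RequestHeaderSketch {
    static ByteBuffer encode(short apiKey, short apiVersion, int correlationId, String clientId) {
        byte[] id = clientId == null ? null : clientId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + 4 + 2 + (id == null ? 0 : id.length));
        buf.putShort(apiKey);        // api_key => INT16
        buf.putShort(apiVersion);    // api_version => INT16
        buf.putInt(correlationId);   // correlation_id => INT32
        if (id == null) {
            buf.putShort((short) -1);        // NULLABLE_STRING: length -1 encodes null
        } else {
            buf.putShort((short) id.length); // length, then UTF-8 bytes
            buf.put(id);
        }
        buf.flip();
        return buf;
    }
}

java.nio.ByteBuffer defaults to big-endian, which matches the protocol's network byte order, so no explicit order() call is needed.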
diff --git a/0101/introduction.html b/0101/introduction.html
index a46482f..16e05c4 100644
--- a/0101/introduction.html
+++ b/0101/introduction.html
@@ -49,7 +49,7 @@
       <img src="/{{version}}/images/kafka-apis.png" style="float: right; width: 50%;">
       </div>
   <p>
-  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
+  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
 
   <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
   <p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.</p>
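Because the protocol described above really is plain TCP, a whole client round trip reduces to: connect, write one size-framed request, read one size-framed response. A purely illustrative sketch, assuming request already holds a fully encoded header plus body (for example, an ApiVersions request):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Illustrative one-shot exchange with a broker over the versioned TCP protocol.
public final class TcpRoundTrip {
    static byte[] exchange(String host, int port, byte[] request) throws IOException {
        try (Socket socket = new Socket(host, port)) {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            out.writeInt(request.length); // size prefix
            out.write(request);           // encoded request header + body
            out.flush();

            DataInputStream in = new DataInputStream(socket.getInputStream());
            int size = in.readInt();      // size prefix of the response
            byte[] response = new byte[size];
            in.readFully(response);       // response header + body
            return response;
        }
    }
}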
diff --git a/0101/javadoc/org/apache/kafka/common/MetricName.html b/0101/javadoc/org/apache/kafka/common/MetricName.html
index 1b415d7..098f11e 100644
--- a/0101/javadoc/org/apache/kafka/common/MetricName.html
+++ b/0101/javadoc/org/apache/kafka/common/MetricName.html
@@ -112,7 +112,7 @@ extends <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html?
  <p>
 
  Usage looks something like this:
- <pre><code>// set up metrics:
+ <pre class="line-numbers"><code>// set up metrics:
 
  Map&lt;String, String&gt; metricTags = new LinkedHashMap&lt;String, String&gt;();
  metricTags.put("client-id", "producer-1");
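The MetricName javadoc snippet is cut off by the diff context above; for orientation, the usage it illustrates continues roughly along these lines. This is a hedged reconstruction of the javadoc example, and messageSize is a stand-in value.

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.MetricConfig;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.apache.kafka.common.metrics.stats.Max;

// Hedged reconstruction of the MetricName javadoc usage example.
public final class MetricNameUsage {
    public static void main(String[] args) {
        // set up metrics:
        Map<String, String> metricTags = new LinkedHashMap<>();
        metricTags.put("client-id", "producer-1");
        metricTags.put("topic", "topic");

        MetricConfig metricConfig = new MetricConfig().tags(metricTags);
        Metrics metrics = new Metrics(metricConfig); // global repository of metrics and sensors

        Sensor sensor = metrics.sensor("message-sizes");
        sensor.add(new MetricName("message-size-avg", "producer-metrics",
                "average message size", metricTags), new Avg());
        sensor.add(new MetricName("message-size-max", "producer-metrics",
                "maximum message size", metricTags), new Max());

        // as messages are sent we record the sizes
        long messageSize = 1024; // stand-in value
        sensor.record(messageSize);
    }
}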
diff --git a/0101/protocol.html b/0101/protocol.html
index 4637a3e..29717eb 100644
--- a/0101/protocol.html
+++ b/0101/protocol.html
@@ -22,7 +22,7 @@
     <div class="right">
         <h1>Kafka protocol guide</h1>
 
-<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="https://kafka.apache.org/documentation.html#design">here</a>.</p>
+<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="/documentation.html#design">here</a>.</p>
 
 <ul class="toc">
     <li><a href="#protocol_preliminaries">Preliminaries</a>
@@ -184,10 +184,10 @@ Kafka request. SASL/GSSAPI authentication is performed starting with this packet
 
 <p>All requests and responses originate from the following grammar, which will be incrementally described through the rest of this document:</p>
 
-<pre>
+<pre class="line-numbers"><code class="language-java">
 RequestOrResponse => Size (RequestMessage | ResponseMessage)
 Size => int32
-</pre>
+</code></pre>
 
 <table class="data-table"><tbody>
 <tr><th>Field</th><th>Description</th></tr>
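On the read side of the same grammar, a client peels off the 4-byte size and then the correlation_id, which the response header (per the generated tables) places at the front of every response. A hedged sketch; ResponseReader is an illustrative name.

import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Illustrative reader for RequestOrResponse => Size (ResponseMessage):
// reads one framed response and checks the leading correlation_id.
public final class ResponseReader {
    static ByteBuffer readResponse(DataInputStream in, int expectedCorrelationId) throws IOException {
        int size = in.readInt();            // Size => int32
        byte[] body = new byte[size];
        in.readFully(body);                 // ResponseMessage
        ByteBuffer buf = ByteBuffer.wrap(body);
        int correlationId = buf.getInt();   // Response Header => correlation_id (INT32)
        if (correlationId != expectedCorrelationId) {
            throw new IOException("correlation_id mismatch: " + correlationId);
        }
        return buf; // remaining bytes are the version-specific response body
    }
}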
diff --git a/0102/documentation.html b/0102/documentation.html
index f9ab673..6b101cf 100644
--- a/0102/documentation.html
+++ b/0102/documentation.html
@@ -21,7 +21,7 @@
 <!--#include virtual="../includes/_top.htm" -->
 
 
-<div class="content documentation documentation--current">
+<div class="content documentation">
 	<!--#include virtual="../includes/_nav.htm" -->
 	<div class="right">
 		<!--#include virtual="../includes/_docs_banner.htm" -->
diff --git a/0102/generated/protocol_messages.html b/0102/generated/protocol_messages.html
index 707fa8e..0aad59b 100644
--- a/0102/generated/protocol_messages.html
+++ b/0102/generated/protocol_messages.html
@@ -1,10 +1,10 @@
 <h5>Headers:</h5>
-<pre>Request Header => api_key api_version correlation_id client_id 
+<pre class="line-numbers"><code class="language-java">Request Header => api_key api_version correlation_id client_id 
   api_key => INT16
   api_version => INT16
   correlation_id => INT32
   client_id => NULLABLE_STRING
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -17,9 +17,9 @@
 <tr>
 <td>client_id</td><td>A user specified identifier for the client making the request.</td></tr>
 </table>
-<pre>Response Header => correlation_id 
+<pre class="line-numbers"><code class="language-java">Response Header => correlation_id 
   correlation_id => INT32
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -29,7 +29,7 @@
 <h5><a name="The_Messages_Produce">Produce API (Key: 0):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Produce Request (Version: 0) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 0) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -37,7 +37,7 @@
     data => partition record_set 
       partition => INT32
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -56,7 +56,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 1) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 1) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -64,7 +64,7 @@
     data => partition record_set 
       partition => INT32
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -83,7 +83,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 2) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 2) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -91,7 +91,7 @@
     data => partition record_set 
       partition => INT32
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -111,14 +111,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Produce Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
       partition => INT32
       error_code => INT16
       base_offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -135,7 +135,7 @@
 <td>base_offset</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 1) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 1) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
@@ -143,7 +143,7 @@
       error_code => INT16
       base_offset => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -162,7 +162,7 @@
 <td>throttle_time_ms</td><td>Duration in milliseconds for which the request was throttled due to quota violation. (Zero if the request did not violate any quota.)</td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 2) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 2) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset log_append_time 
@@ -171,7 +171,7 @@
       base_offset => INT64
       log_append_time => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -195,7 +195,7 @@
 <h5><a name="The_Messages_Fetch">Fetch API (Key: 1):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -205,7 +205,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -228,7 +228,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -238,7 +238,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -261,7 +261,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -271,7 +271,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -294,7 +294,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 3) => replica_id max_wait_time min_bytes max_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 3) => replica_id max_wait_time min_bytes max_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -305,7 +305,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -331,7 +331,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Fetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition_header record_set 
@@ -340,7 +340,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -361,7 +361,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 1) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 1) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -371,7 +371,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -394,7 +394,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 2) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 2) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -404,7 +404,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -427,7 +427,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 3) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 3) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -437,7 +437,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -463,7 +463,7 @@
 <h5><a name="The_Messages_Offsets">Offsets API (Key: 2):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Offsets Request (Version: 0) => replica_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 0) => replica_id [topics] 
   replica_id => INT32
   topics => topic [partitions] 
     topic => STRING
@@ -471,7 +471,7 @@
       partition => INT32
       timestamp => INT64
       max_num_offsets => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -490,14 +490,14 @@
 <td>max_num_offsets</td><td>Maximum offsets to return.</td></tr>
 </table>
 </p>
-<p><pre>Offsets Request (Version: 1) => replica_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 1) => replica_id [topics] 
   replica_id => INT32
   topics => topic [partitions] 
     topic => STRING
     partitions => partition timestamp 
       partition => INT32
       timestamp => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -515,14 +515,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Offsets Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code [offsets] 
       partition => INT32
       error_code => INT16
       offsets => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -539,7 +539,7 @@
 <td>offsets</td><td>A list of offsets.</td></tr>
 </table>
 </p>
-<p><pre>Offsets Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code timestamp offset 
@@ -547,7 +547,7 @@
       error_code => INT16
       timestamp => INT64
       offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -569,27 +569,27 @@
 <h5><a name="The_Messages_Metadata">Metadata API (Key: 3):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Metadata Request (Version: 0) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 0) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If no topics are specified fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 1) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 1) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If the topics array is null fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 2) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 2) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -597,7 +597,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Metadata Response (Version: 0) => [brokers] [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 0) => [brokers] [topic_metadata] 
   brokers => node_id host port 
     node_id => INT32
     host => STRING
@@ -611,7 +611,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -642,7 +642,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
   brokers => node_id host port rack 
     node_id => INT32
     host => STRING
@@ -659,7 +659,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -696,7 +696,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 2) => [brokers] cluster_id controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 2) => [brokers] cluster_id controller_id [topic_metadata] 
   brokers => node_id host port rack 
     node_id => INT32
     host => STRING
@@ -714,7 +714,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -756,7 +756,7 @@
 <h5><a name="The_Messages_LeaderAndIsr">LeaderAndIsr API (Key: 4):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -772,7 +772,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -808,13 +808,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaderAndIsr Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -832,14 +832,14 @@
 <h5><a name="The_Messages_StopReplica">StopReplica API (Key: 5):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
   controller_id => INT32
   controller_epoch => INT32
   delete_partitions => BOOLEAN
   partitions => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -857,13 +857,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>StopReplica Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -881,7 +881,7 @@
 <h5><a name="The_Messages_UpdateMetadata">UpdateMetadata API (Key: 6):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -897,7 +897,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -932,7 +932,7 @@
 <td>port</td><td>The port on which the broker accepts requests.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -950,7 +950,7 @@
       port => INT32
       host => STRING
       security_protocol_type => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -989,7 +989,7 @@
 <td>security_protocol_type</td><td>The security protocol type.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -1008,7 +1008,7 @@
       host => STRING
       security_protocol_type => INT16
     rack => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1049,7 +1049,7 @@
<td>rack</td><td>The rack of the broker.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 3) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 3) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -1069,7 +1069,7 @@
       listener_name => STRING
       security_protocol_type => INT16
     rack => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1113,36 +1113,36 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>UpdateMetadata Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 1) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 1) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 2) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 2) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 3) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 3) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1153,9 +1153,9 @@
 
 <b>Requests:</b><br>
 </p>
-<p><pre>ControlledShutdown Request (Version: 1) => broker_id 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Request (Version: 1) => broker_id 
   broker_id => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1164,12 +1164,12 @@
 </p>
 <b>Responses:</b><br>
 </p>
-<p><pre>ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
   error_code => INT16
   partitions_remaining => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1185,7 +1185,7 @@
 <h5><a name="The_Messages_OffsetCommit">OffsetCommit API (Key: 8):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetCommit Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
@@ -1193,7 +1193,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1212,7 +1212,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -1223,7 +1223,7 @@
       offset => INT64
       timestamp => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1248,7 +1248,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -1259,7 +1259,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1285,13 +1285,13 @@
 </table>
 </p>
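<p>On the client side, the Java consumer sends this request when committing; a hedged sketch (not part of this commit) showing where the <code>offset</code> and <code>metadata</code> fields come from, with names illustrative:</p>
<p><pre class="line-numbers"><code class="language-java">// Assumes an existing KafkaConsumer named "consumer".
consumer.commitSync(Collections.singletonMap(
    new TopicPartition("my-topic", 0),
    new OffsetAndMetadata(42L, "processed through batch 7"))); // offset + metadata
</code></pre></p>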
 <b>Responses:</b><br>
-<p><pre>OffsetCommit Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1306,13 +1306,13 @@
<td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1327,13 +1327,13 @@
<td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 2) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 2) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1351,13 +1351,13 @@
 <h5><a name="The_Messages_OffsetFetch">OffsetFetch API (Key: 9):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetFetch Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1372,13 +1372,13 @@
 <td>partition</td><td>Topic partition id.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Request (Version: 1) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 1) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1393,13 +1393,13 @@
 <td>partition</td><td>Topic partition id.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Request (Version: 2) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 2) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1415,7 +1415,7 @@
 </table>
 </p>
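<p>The committed offset and metadata returned by this API surface in the Java consumer as follows; a hedged sketch (not part of this commit):</p>
<p><pre class="line-numbers"><code class="language-java">// Assumes an existing KafkaConsumer named "consumer"; returns null if
// nothing has been committed for the partition.
OffsetAndMetadata committed = consumer.committed(new TopicPartition("my-topic", 0));
if (committed != null)
    System.out.println(committed.offset() + " / " + committed.metadata());
</code></pre></p>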
 <b>Responses:</b><br>
-<p><pre>OffsetFetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1423,7 +1423,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1442,7 +1442,7 @@
<td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1450,7 +1450,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1469,7 +1469,7 @@
<td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Response (Version: 2) => [responses] error_code 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 2) => [responses] error_code 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1478,7 +1478,7 @@
       metadata => NULLABLE_STRING
       error_code => INT16
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1502,9 +1502,9 @@
 <h5><a name="The_Messages_GroupCoordinator">GroupCoordinator API (Key: 10):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>GroupCoordinator Request (Version: 0) => group_id 
+<p><pre class="line-numbers"><code class="language-java">GroupCoordinator Request (Version: 0) => group_id 
   group_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1512,13 +1512,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>GroupCoordinator Response (Version: 0) => error_code coordinator 
+<p><pre class="line-numbers"><code class="language-java">GroupCoordinator Response (Version: 0) => error_code coordinator 
   error_code => INT16
   coordinator => node_id host port 
     node_id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1536,7 +1536,7 @@
 <h5><a name="The_Messages_JoinGroup">JoinGroup API (Key: 11):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   member_id => STRING
@@ -1544,7 +1544,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1563,7 +1563,7 @@
 <td>protocol_metadata</td><td></td></tr>
 </table>
 </p>
-<p><pre>JoinGroup Request (Version: 1) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 1) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   rebalance_timeout => INT32
@@ -1572,7 +1572,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1594,7 +1594,7 @@
 </table>
 </p>
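<p>Applications rarely issue JoinGroup directly; together with the SyncGroup, Heartbeat, and LeaveGroup requests below, it is driven by the Java consumer. A hedged sketch (not part of this commit) of the client-side view, with connection settings assumed in <code>props</code> and <code>running</code> an application-controlled flag:</p>
<p><pre class="line-numbers"><code class="language-java">props.put("group.id", "sketch-group");              // becomes group_id above
try (KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
    consumer.subscribe(Collections.singletonList("my-topic")); // JoinGroup + SyncGroup
    while (running)
        consumer.poll(1000);                        // heartbeats flow while polling
}                                                   // close() issues LeaveGroup
</code></pre></p>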
 <b>Responses:</b><br>
-<p><pre>JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
   error_code => INT16
   generation_id => INT32
   group_protocol => STRING
@@ -1603,7 +1603,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1624,7 +1624,7 @@
 <td>member_metadata</td><td></td></tr>
 </table>
 </p>
-<p><pre>JoinGroup Response (Version: 1) => error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 1) => error_code generation_id group_protocol leader_id member_id [members] 
   error_code => INT16
   generation_id => INT32
   group_protocol => STRING
@@ -1633,7 +1633,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1657,11 +1657,11 @@
 <h5><a name="The_Messages_Heartbeat">Heartbeat API (Key: 12):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1673,9 +1673,9 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Heartbeat Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1685,10 +1685,10 @@
 <h5><a name="The_Messages_LeaveGroup">LeaveGroup API (Key: 13):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaveGroup Request (Version: 0) => group_id member_id 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Request (Version: 0) => group_id member_id 
   group_id => STRING
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1698,9 +1698,9 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaveGroup Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1710,14 +1710,14 @@
 <h5><a name="The_Messages_SyncGroup">SyncGroup API (Key: 14):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
   group_id => STRING
   generation_id => INT32
   member_id => STRING
   group_assignment => member_id member_assignment 
     member_id => STRING
     member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1735,10 +1735,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SyncGroup Response (Version: 0) => error_code member_assignment 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Response (Version: 0) => error_code member_assignment 
   error_code => INT16
   member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1750,9 +1750,9 @@
 <h5><a name="The_Messages_DescribeGroups">DescribeGroups API (Key: 15):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DescribeGroups Request (Version: 0) => [group_ids] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Request (Version: 0) => [group_ids] 
   group_ids => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1760,7 +1760,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DescribeGroups Response (Version: 0) => [groups] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Response (Version: 0) => [groups] 
   groups => error_code group_id state protocol_type protocol [members] 
     error_code => INT16
     group_id => STRING
@@ -1773,7 +1773,7 @@
       client_host => STRING
       member_metadata => BYTES
       member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1805,19 +1805,19 @@
 <h5><a name="The_Messages_ListGroups">ListGroups API (Key: 16):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>ListGroups Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ListGroups Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
 <b>Responses:</b><br>
-<p><pre>ListGroups Response (Version: 0) => error_code [groups] 
+<p><pre class="line-numbers"><code class="language-java">ListGroups Response (Version: 0) => error_code [groups] 
   error_code => INT16
   groups => group_id protocol_type 
     group_id => STRING
     protocol_type => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1833,9 +1833,9 @@
 <h5><a name="The_Messages_SaslHandshake">SaslHandshake API (Key: 17):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>SaslHandshake Request (Version: 0) => mechanism 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Request (Version: 0) => mechanism 
   mechanism => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1843,10 +1843,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
   error_code => INT16
   enabled_mechanisms => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1858,20 +1858,20 @@
 <h5><a name="The_Messages_ApiVersions">ApiVersions API (Key: 18):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>ApiVersions Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
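<p>A client typically sends this request first and then, per <code>api_key</code>, intersects its own supported range with the broker's. A hedged sketch (not part of this commit) of that selection for one (api_key, min_version, max_version) row of the response below:</p>
<p><pre class="line-numbers"><code class="language-java">static short chooseVersion(short clientMax, short brokerMin, short brokerMax) {
    short candidate = (short) Math.min(clientMax, brokerMax);
    if (candidate &lt; brokerMin)
        throw new IllegalStateException("no protocol version in common");
    return candidate;                   // highest version both sides support
}
</code></pre></p>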
 <b>Responses:</b><br>
-<p><pre>ApiVersions Response (Version: 0) => error_code [api_versions] 
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Response (Version: 0) => error_code [api_versions] 
   error_code => INT16
   api_versions => api_key min_version max_version 
     api_key => INT16
     min_version => INT16
     max_version => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1889,7 +1889,7 @@
 <h5><a name="The_Messages_CreateTopics">CreateTopics API (Key: 19):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>CreateTopics Request (Version: 0) => [create_topic_requests] timeout 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Request (Version: 0) => [create_topic_requests] timeout 
   create_topic_requests => topic num_partitions replication_factor [replica_assignment] [configs] 
     topic => STRING
     num_partitions => INT32
@@ -1901,7 +1901,7 @@
       config_key => STRING
       config_value => STRING
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1928,7 +1928,7 @@
<td>timeout</td><td>The time in ms to wait for a topic to be completely created on the controller node. Values &lt;= 0 will trigger topic creation and return immediately.</td></tr>
 </table>
 </p>
-<p><pre>CreateTopics Request (Version: 1) => [create_topic_requests] timeout validate_only 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Request (Version: 1) => [create_topic_requests] timeout validate_only 
   create_topic_requests => topic num_partitions replication_factor [replica_assignment] [configs] 
     topic => STRING
     num_partitions => INT32
@@ -1941,7 +1941,7 @@
       config_value => STRING
   timeout => INT32
   validate_only => BOOLEAN
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1971,11 +1971,11 @@
 </table>
 </p>
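<p>For reference, Java client releases newer than those documented here expose this API through the AdminClient; a hedged sketch (not part of this commit), with names illustrative and checked exceptions elided:</p>
<p><pre class="line-numbers"><code class="language-java">try (AdminClient admin = AdminClient.create(props)) {
    NewTopic topic = new NewTopic("my-topic", 3, (short) 1); // num_partitions, replication_factor
    admin.createTopics(Collections.singleton(topic)).all().get(); // blocks, cf. the timeout field
}
</code></pre></p>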
 <b>Responses:</b><br>
-<p><pre>CreateTopics Response (Version: 0) => [topic_errors] 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Response (Version: 0) => [topic_errors] 
   topic_errors => topic error_code 
     topic => STRING
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1986,12 +1986,12 @@
<td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>CreateTopics Response (Version: 1) => [topic_errors] 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Response (Version: 1) => [topic_errors] 
   topic_errors => topic error_code error_message 
     topic => STRING
     error_code => INT16
     error_message => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2007,10 +2007,10 @@
 <h5><a name="The_Messages_DeleteTopics">DeleteTopics API (Key: 20):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DeleteTopics Request (Version: 0) => [topics] timeout 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Request (Version: 0) => [topics] timeout 
   topics => STRING
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2020,11 +2020,11 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DeleteTopics Response (Version: 0) => [topic_error_codes] 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Response (Version: 0) => [topic_error_codes] 
   topic_error_codes => topic error_code 
     topic => STRING
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
diff --git a/0102/introduction.html b/0102/introduction.html
index 556aa02..f2740df 100644
--- a/0102/introduction.html
+++ b/0102/introduction.html
@@ -49,7 +49,7 @@
       <img src="/{{version}}/images/kafka-apis.png" style="float: right; width: 50%;">
       </div>
   <p>
-  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
+  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
 
   <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
   <p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.</p>
diff --git a/0102/javadoc/org/apache/kafka/common/MetricName.html b/0102/javadoc/org/apache/kafka/common/MetricName.html
index b8ac012..04fb268 100644
--- a/0102/javadoc/org/apache/kafka/common/MetricName.html
+++ b/0102/javadoc/org/apache/kafka/common/MetricName.html
@@ -112,7 +112,7 @@ extends <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html?
  <p>
 
  Usage looks something like this:
- <pre><code>// set up metrics:
+ <pre class="line-numbers"><code>// set up metrics:
 
  Map&lt;String, String&gt; metricTags = new LinkedHashMap&lt;String, String&gt;();
  metricTags.put("client-id", "producer-1");
diff --git a/0102/javadoc/org/apache/kafka/streams/KafkaStreams.html b/0102/javadoc/org/apache/kafka/streams/KafkaStreams.html
index 48f61be..8df25b7 100644
--- a/0102/javadoc/org/apache/kafka/streams/KafkaStreams.html
+++ b/0102/javadoc/org/apache/kafka/streams/KafkaStreams.html
@@ -118,7 +118,7 @@ extends <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html?
  that is used for reading input and writing output.
  <p>
  A simple example might look like this:
- <pre><code>Map&lt;String, Object&gt; props = new HashMap&lt;&gt;();
+ <pre class="line-numbers"><code>Map&lt;String, Object&gt; props = new HashMap&lt;&gt;();
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stream-processing-application");
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
  props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
diff --git a/0102/javadoc/org/apache/kafka/streams/StreamsConfig.html b/0102/javadoc/org/apache/kafka/streams/StreamsConfig.html
index 491ec23..cd5590d 100644
--- a/0102/javadoc/org/apache/kafka/streams/StreamsConfig.html
+++ b/0102/javadoc/org/apache/kafka/streams/StreamsConfig.html
@@ -108,7 +108,7 @@ extends <a href="../../../../org/apache/kafka/common/config/AbstractConfig.html"
  <a href="../../../../org/apache/kafka/streams/StreamsConfig.html#consumerPrefix(java.lang.String)"><code>consumerPrefix(String)</code></a> and <a href="../../../../org/apache/kafka/streams/StreamsConfig.html#producerPrefix(java.lang.String)"><code>producerPrefix(String)</code></a>, respectively.
  <p>
  Example:
- <pre><code>// potentially wrong: sets "metadata.max.age.ms" to 1 minute for producer AND consumer
+ <pre class="line-numbers"><code>// potentially wrong: sets "metadata.max.age.ms" to 1 minute for producer AND consumer
  Properties streamsProperties = new Properties();
  streamsProperties.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, 60000);
  // or
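 The hunk above cuts the example off at the alternative; for readability, the prefixed form it contrasts with (a sketch consistent with the <code>consumerPrefix(String)</code> helper referenced above) is:
 <pre class="line-numbers"><code>// better: restrict "metadata.max.age.ms" to the consumer only
 streamsProperties.put(StreamsConfig.consumerPrefix(ConsumerConfig.METADATA_MAX_AGE_CONFIG), 60000);
 </code></pre>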
diff --git a/0102/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html b/0102/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html
index bf9ea73..a0d1ca2 100644
--- a/0102/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html
+++ b/0102/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html
@@ -104,12 +104,12 @@ public interface <span class="strong">GlobalKTable&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/KeyValue.html" title="class in org.apache.kafka.streams"><code>KeyValue</code></a> of the left hand side <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html" title="interface in org.apache.kafka.streams.kstream"><code>KStream</code></a> to the key of the right hand side <code>GlobalKTable</code>.
  <p>
  A <code>GlobalKTable</code> is created via a <a href="../../../../../org/apache/kafka/streams/kstream/KStreamBuilder.html" title="class in org.apache.kafka.streams.kstream"><code>KStreamBuilder</code></a>. For example:
- <pre><code>builder.globalTable("topic-name", "queryable-store-name");
+ <pre class="line-numbers"><code>builder.globalTable("topic-name", "queryable-store-name");
  </code></pre>
  all <code>GlobalKTable</code>s are backed by a <a href="../../../../../org/apache/kafka/streams/state/ReadOnlyKeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>ReadOnlyKeyValueStore</code></a> and are therefore queryable via the
  interactive queries API.
  For example:
- <pre><code>final GlobalKTable globalOne = builder.globalTable("g1", "g1-store");
+ <pre class="line-numbers"><code>final GlobalKTable globalOne = builder.globalTable("g1", "g1-store");
  final GlobalKTable globalTwo = builder.globalTable("g2", "g2-store");
  ...
  final KafkaStreams streams = ...;
diff --git a/0102/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html b/0102/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html
index fd2935d..7c3ff1e 100644
--- a/0102/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html
+++ b/0102/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html
@@ -107,7 +107,7 @@ extends <a href="../../../../../org/apache/kafka/streams/kstream/Windows.html" t
  <p>
  A <code>JoinWindows</code> instance defines a maximum time difference for a <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html#join(org.apache.kafka.streams.kstream.KStream,%20org.apache.kafka.streams.kstream.ValueJoiner,%20org.apache.kafka.streams.kstream.JoinWindows)"><code>join over two streams</code></a> on the same key.
 In SQL style, you would express this join as
- <pre><code>SELECT * FROM stream1, stream2
+ <pre class="line-numbers"><code>SELECT * FROM stream1, stream2
      WHERE
        stream1.key = stream2.key
        AND
diff --git a/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html b/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html
index 6678dae..850680f 100644
--- a/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html
+++ b/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html
@@ -299,7 +299,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -341,7 +341,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String storeName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
@@ -381,7 +381,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-word";
  long fromTime = ...;
@@ -430,7 +430,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String storeName = storeSupplier.name();
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-word";
@@ -471,7 +471,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String storeName = storeSupplier.name();
  ReadOnlySessionStore&lt;String,Long&gt; sessionStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-word";
@@ -510,7 +510,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String storeName = storeSupplier.name();
  ReadOnlySessionStore&lt;String,Long&gt; sessionStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-word";
@@ -554,7 +554,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long sumForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -606,7 +606,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
  String storeName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
@@ -655,7 +655,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
  long fromTime = ...;
@@ -714,7 +714,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
 String storeName = storeSupplier.name();
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
@@ -764,7 +764,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
  ReadOnlySessionStore&lt;String,Long&gt; sessionStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
 KeyValueIterator&lt;Windowed&lt;String&gt;, Long&gt; sumForKeyForSession = sessionStore.fetch(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -819,7 +819,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
 String storeName = storeSupplier.name();
  ReadOnlySessionStore&lt;String,Long&gt; sessionStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
@@ -874,7 +874,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // some aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some aggregation on value type double
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long aggForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -929,7 +929,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some aggregation on value type double
 String storeName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
@@ -982,7 +982,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type double
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
  long fromTime = ...;
@@ -1044,7 +1044,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type Long
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type Long
 String storeName = storeSupplier.name();
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
@@ -1099,7 +1099,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type double
 String storeName = storeSupplier.name();
  ReadOnlySessionStore&lt;String, Long&gt; sessionStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
@@ -1153,7 +1153,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type double
 String storeName = storeSupplier.name();
  ReadOnlySessionStore&lt;String, Long&gt; sessionStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
diff --git a/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html b/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html
index 68183b4..717c7f2 100644
--- a/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html
+++ b/0102/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html
@@ -212,7 +212,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -254,7 +254,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String storeName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
@@ -301,7 +301,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  value as-is.
  Thus, <code>reduce(Reducer, Reducer, String)</code> can be used to compute aggregate functions like sum.
  For sum, the adder and subtractor would work as follows:
- <pre><code>public class SumAdder implements Reducer&lt;Integer&gt; {
+ <pre class="line-numbers"><code>public class SumAdder implements Reducer&lt;Integer&gt; {
    public Integer apply(Integer currentAgg, Integer newValue) {
      return currentAgg + newValue;
    }
@@ -322,7 +322,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -368,7 +368,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  value as-is.
  Thus, <code>reduce(Reducer, Reducer, String)</code> can be used to compute aggregate functions like sum.
  For sum, the adder and subtractor would work as follows:
- <pre><code>public class SumAdder implements Reducer&lt;Integer&gt; {
+ <pre class="line-numbers"><code>public class SumAdder implements Reducer&lt;Integer&gt; {
    public Integer apply(Integer currentAgg, Integer newValue) {
      return currentAgg + newValue;
    }
@@ -389,7 +389,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String storeName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
@@ -441,7 +441,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>// in this example, LongSerde.class must be set as default value serde in StreamsConfig
+ <pre class="line-numbers"><code>// in this example, LongSerde.class must be set as default value serde in StreamsConfig
  public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
@@ -469,7 +469,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -519,7 +519,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>public class SumInitializer implements Initializer&lt;Long&gt; {
+ <pre class="line-numbers"><code>public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
    }
@@ -546,7 +546,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -596,7 +596,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>public class SumInitializer implements Initializer&lt;Long&gt; {
+ <pre class="line-numbers"><code>public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
    }
@@ -623,7 +623,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String storeName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
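
The reduce(Reducer, Reducer, String) variants above depend on an adder/subtractor pair so that the aggregate stays correct when upstream table records are updated or deleted. A minimal sketch of such a pair for integer sums (class names are illustrative):

    import org.apache.kafka.streams.kstream.Reducer;

    // Adder: applied when a new or updated value arrives for a key.
    public class SumAdder implements Reducer<Integer> {
        public Integer apply(Integer currentAgg, Integer newValue) {
            return currentAgg + newValue;
        }
    }

    // Subtractor: applied with the old value on update or delete, which is what
    // keeps the running sum consistent with the table's changelog semantics.
    public class SumSubtractor implements Reducer<Integer> {
        public Integer apply(Integer currentAgg, Integer oldValue) {
            return currentAgg - oldValue;
        }
    }
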
diff --git a/0102/javadoc/org/apache/kafka/streams/kstream/KStream.html b/0102/javadoc/org/apache/kafka/streams/kstream/KStream.html
index 00e8afc..5274688 100644
--- a/0102/javadoc/org/apache/kafka/streams/kstream/KStream.html
+++ b/0102/javadoc/org/apache/kafka/streams/kstream/KStream.html
@@ -529,7 +529,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  For example, you can use this transformation to set a key for a key-less input record <code>&lt;null,V&gt;</code> by
  extracting a key from the value within your <a href="../../../../../org/apache/kafka/streams/kstream/KeyValueMapper.html" title="interface in org.apache.kafka.streams.kstream"><code>KeyValueMapper</code></a>. The example below computes the new key as the
  length of the value string.
- <pre><code>KStream&lt;Byte[], String&gt; keyLessStream = builder.stream("key-less-topic");
+ <pre class="line-numbers"><code>KStream&lt;Byte[], String&gt; keyLessStream = builder.stream("key-less-topic");
  KStream&lt;Integer, String&gt; keyedStream = keyLessStream.selectKey(new KeyValueMapper&lt;Byte[], String, Integer&gt; {
      Integer apply(Byte[] key, String value) {
          return value.length();
@@ -561,7 +561,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  stateful record transformation).
  <p>
  The example below normalizes the String key to upper-case letters and counts the number of tokens in the value string.
- <pre><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
 KStream&lt;String, Integer&gt; outputStream = inputStream.map(new KeyValueMapper&lt;String, String, KeyValue&lt;String, Integer&gt;&gt; {
      KeyValue&lt;String, Integer&gt; apply(String key, String value) {
          return new KeyValue&lt;&gt;(key.toUpperCase(), value.split(" ").length);
@@ -596,7 +596,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html#transformValues(org.apache.kafka.streams.kstream.ValueTransformerSupplier,%20java.lang.String...)"><code>transformValues(ValueTransformerSupplier, String...)</code></a> for stateful value transformation).
  <p>
  The example below counts the number of tokens in the value string.
- <pre><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
  KStream&lt;String, Integer&gt; outputStream = inputStream.mapValues(new ValueMapper&lt;String, Integer&gt; {
      Integer apply(String value) {
          return value.split(" ").length;
@@ -632,7 +632,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <p>
  The example below splits input records <code>&lt;null:String&gt;</code> containing sentences as values into their words
  and emits a record <code>&lt;word:1&gt;</code> for each word.
- <pre><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
 KStream&lt;String, Integer&gt; outputStream = inputStream.flatMap(new KeyValueMapper&lt;byte[], String, Iterable&lt;KeyValue&lt;String, Integer&gt;&gt;&gt; {
      Iterable&lt;KeyValue&lt;String, Integer&gt;&gt; apply(byte[] key, String value) {
          String[] tokens = value.split(" ");
@@ -678,7 +678,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  for stateful value transformation).
  <p>
  The example below splits input records <code>&lt;null:String&gt;</code> containing sentences as values into their words.
- <pre><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
 KStream&lt;byte[], String&gt; outputStream = inputStream.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt; {
      Iterable&lt;String&gt; apply(String value) {
          return Arrays.asList(value.split(" "));
@@ -1064,7 +1064,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  periodic actions can be performed.
  <p>
  In order to assign a state, the state must be created and registered beforehand:
- <pre><code>// create store
+ <pre class="line-numbers"><code>// create store
  StateStoreSupplier myStore = Stores.create("myTransformState")
      .withKeys(...)
      .withValues(...)
@@ -1081,7 +1081,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/processor/ProcessorContext.html" title="interface in org.apache.kafka.streams.processor"><code>ProcessorContext</code></a>.
  To trigger periodic actions via <a href="../../../../../org/apache/kafka/streams/kstream/Transformer.html#punctuate(long)"><code>punctuate()</code></a>, a schedule must be registered.
  The <a href="../../../../../org/apache/kafka/streams/kstream/Transformer.html" title="interface in org.apache.kafka.streams.kstream"><code>Transformer</code></a> must return a <a href="../../../../../org/apache/kafka/streams/KeyValue.html" title="class in org.apache.kafka.streams"><code>KeyValue</code></a> type in <a href="../../../../../org/apache/kafka/streams/kstream/Transformer.html#transform(K,%20V)"><code>transform()</code></a> and <a href="../../../../../org/apache/kafka/streams/ [...]
- <pre><code>new TransformerSupplier() {
+ <pre class="line-numbers"><code>new TransformerSupplier() {
      Transformer get() {
          return new Transformer() {
              private ProcessorContext context;
@@ -1140,7 +1140,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  periodic actions can be performed.
  <p>
  In order to assign a state, the state must be created and registered beforehand:
- <pre><code>// create store
+ <pre class="line-numbers"><code>// create store
  StateStoreSupplier myStore = Stores.create("myValueTransformState")
      .withKeys(...)
      .withValues(...)
@@ -1159,7 +1159,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  registered.
  In contrast to <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html#transform(org.apache.kafka.streams.kstream.TransformerSupplier,%20java.lang.String...)"><code>transform()</code></a>, no additional <a href="../../../../../org/apache/kafka/streams/KeyValue.html" title="class in org.apache.kafka.streams"><code>KeyValue</code></a>
  pairs should be emitted via <a href="../../../../../org/apache/kafka/streams/processor/ProcessorContext.html#forward(K,%20V)"><code>ProcessorContext.forward()</code></a>.
- <pre><code>new ValueTransformerSupplier() {
+ <pre class="line-numbers"><code>new ValueTransformerSupplier() {
      ValueTransformer get() {
          return new ValueTransformer() {
              private StateStore state;
@@ -1212,7 +1212,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  Note that this is a terminal operation that returns void.
  <p>
  In order to assign a state, the state must be created and registered beforehand:
- <pre><code>// create store
+ <pre class="line-numbers"><code>// create store
  StateStoreSupplier myStore = Stores.create("myProcessorState")
      .withKeys(...)
      .withValues(...)
@@ -1229,7 +1229,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/processor/ProcessorContext.html" title="interface in org.apache.kafka.streams.processor"><code>ProcessorContext</code></a>.
  To trigger periodic actions via <a href="../../../../../org/apache/kafka/streams/processor/Processor.html#punctuate(long)"><code>punctuate()</code></a>,
  a schedule must be registered.
- <pre><code>new ProcessorSupplier() {
+ <pre class="line-numbers"><code>new ProcessorSupplier() {
      Processor get() {
          return new Processor() {
              private StateStore state;
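
The Stores.create(...) fragments above leave the key and value serdes as placeholders. One possible, hypothetical completion for a persistent String-to-Long store:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.processor.StateStoreSupplier;
    import org.apache.kafka.streams.state.Stores;

    StateStoreSupplier myStore = Stores.create("myTransformState")
        .withKeys(Serdes.String())   // serde for the store's keys
        .withValues(Serdes.Long())   // serde for the store's values
        .persistent()                // on-disk store, restored from its changelog on failure
        .build();

The supplier is then registered with the topology and referenced by name in the transform(), transformValues(), or process() call, as the hunks above show.
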
diff --git a/0102/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html b/0102/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html
index 66d7f87..3e678e4 100644
--- a/0102/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html
+++ b/0102/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html
@@ -528,7 +528,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -562,7 +562,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -598,7 +598,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -635,7 +635,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -668,7 +668,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key);
@@ -700,7 +700,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key);
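
One practical note on the streams.store(...) calls shown above: while a rebalance is migrating state between instances, the store is not queryable and the lookup throws InvalidStateStoreException. A common retry sketch (the helper name is hypothetical):

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.errors.InvalidStateStoreException;
    import org.apache.kafka.streams.state.QueryableStoreType;

    public static <T> T waitForStore(KafkaStreams streams, String storeName,
                                     QueryableStoreType<T> type) throws InterruptedException {
        while (true) {
            try {
                return streams.store(storeName, type);
            } catch (InvalidStateStoreException notReadyYet) {
                Thread.sleep(100); // state is still migrating; retry shortly
            }
        }
    }
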
diff --git a/0102/javadoc/org/apache/kafka/streams/kstream/KTable.html b/0102/javadoc/org/apache/kafka/streams/kstream/KTable.html
index e194230..520ab90 100644
--- a/0102/javadoc/org/apache/kafka/streams/kstream/KTable.html
+++ b/0102/javadoc/org/apache/kafka/streams/kstream/KTable.html
@@ -103,7 +103,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  Some <code>KTable</code>s have an internal state (a <a href="../../../../../org/apache/kafka/streams/state/ReadOnlyKeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>ReadOnlyKeyValueStore</code></a>) and are therefore queryable via the
  interactive queries API.
  For example:
- <pre><code>final KTable table = ...
+ <pre class="line-numbers"><code>final KTable table = ...
      ...
      final KafkaStreams streams = ...;
      streams.start()
@@ -423,7 +423,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  This is a stateless record-by-record operation.
  <p>
  The example below counts the number of tokens in the value string.
- <pre><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
+ <pre class="line-numbers"><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
 KTable&lt;String, Integer&gt; outputTable = inputTable.mapValues(new ValueMapper&lt;String, Integer&gt; {
      Integer apply(String value) {
          return value.split(" ").length;
@@ -662,7 +662,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
 <div class="block">Convert this changelog stream to a <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html" title="interface in org.apache.kafka.streams.kstream"><code>KStream</code></a> using the given <a href="../../../../../org/apache/kafka/streams/kstream/KeyValueMapper.html" title="interface in org.apache.kafka.streams.kstream"><code>KeyValueMapper</code></a> to select the new key.
  <p>
  For example, you can compute the new key as the length of the value string.
- <pre><code>KTable&lt;String, String&gt; table = builder.table("topic");
+ <pre class="line-numbers"><code>KTable&lt;String, String&gt; table = builder.table("topic");
 KStream&lt;Integer, String&gt; keyedStream = table.toStream(new KeyValueMapper&lt;String, String, Integer&gt; {
      Integer apply(String key, String value) {
          return value.length();
diff --git a/0102/protocol.html b/0102/protocol.html
index 4042223..7a69e8f 100644
--- a/0102/protocol.html
+++ b/0102/protocol.html
@@ -22,7 +22,7 @@
     <div class="right">
         <h1>Kafka protocol guide</h1>
 
-<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="https://kafka.apache.org/documentation.html#design">here</a></p>
+<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="/documentation.html#design">here</a></p>
 
 <ul class="toc">
     <li><a href="#protocol_preliminaries">Preliminaries</a>
@@ -183,10 +183,10 @@ Kafka request. SASL/GSSAPI authentication is performed starting with this packet
 
 <p>All requests and responses originate from the following grammar which will be incrementally described through the rest of this document:</p>
 
-<pre>
+<pre class="line-numbers"><code class="language-java">
 RequestOrResponse => Size (RequestMessage | ResponseMessage)
 Size => int32
-</pre>
+</code></pre>
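
Since every message is framed as an int32 size followed by that many payload bytes, reading one frame off a socket reduces to the following sketch (plain Java; the protocol's integers are big-endian, which matches DataInputStream):

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Reads one size-delimited frame: Size => int32, then the message bytes.
    static byte[] readFrame(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int size = data.readInt();   // Size => int32
        byte[] payload = new byte[size];
        data.readFully(payload);     // RequestMessage | ResponseMessage
        return payload;
    }
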
 
 <table class="data-table"><tbody>
 <tr><th>Field</th><th>Description</th></tr>
diff --git a/0110/api.html b/0110/api.html
index 9777186..b333675 100644
--- a/0110/api.html
+++ b/0110/api.html
@@ -35,13 +35,12 @@
 	<p>
 	To use the producer, you can use the following Maven dependency:
 
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-clients&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
 		&lt;/dependency&gt;
-	</pre>
+	</code></pre>
 
 	<h3><a id="consumerapi" href="#consumerapi">2.2 Consumer API</a></h3>
 
@@ -51,13 +50,12 @@
 	<a href="/{{version}}/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html" title="Kafka {{dotVersion}} Javadoc">javadocs</a>.
 	<p>
 	To use the consumer, you can use the following Maven dependency:
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-clients&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
 		&lt;/dependency&gt;
-	</pre>
+	</code></pre>
 
 	<h3><a id="streamsapi" href="#streamsapi">2.3 Streams API</a></h3>
 
@@ -70,13 +68,12 @@
 	<p>
 	To use Kafka Streams, you can use the following Maven dependency:
 
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-streams&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
 		&lt;/dependency&gt;
-	</pre>
+	</code></pre>
 
 	<h3><a id="connectapi" href="#connectapi">2.4 Connect API</a></h3>
 
@@ -92,13 +89,12 @@
 	The AdminClient API supports managing and inspecting topics, brokers, ACLs, and other Kafka objects.
 	<p>
 	To use the AdminClient API, add the following Maven dependency:
-	<pre class="brush: xml;">
-		&lt;dependency&gt;
+	<pre class="line-numbers"><code class="language-xml">		&lt;dependency&gt;
 			&lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
 			&lt;artifactId&gt;kafka-clients&lt;/artifactId&gt;
 			&lt;version&gt;{{fullDotVersion}}&lt;/version&gt;
 		&lt;/dependency&gt;
-	</pre>
+	</code></pre>
 	For more information about the AdminClient APIs, see the <a href="/{{version}}/javadoc/index.html?org/apache/kafka/clients/admin/AdminClient.html" title="Kafka {{dotVersion}} Javadoc">javadoc</a>.
 	<p>
 
diff --git a/0110/configuration.html b/0110/configuration.html
index 3f962e9..d907a0f 100644
--- a/0110/configuration.html
+++ b/0110/configuration.html
@@ -36,25 +36,21 @@
   <h3><a id="topicconfigs" href="#topicconfigs">3.2 Topic-Level Configs</a></h3>
 
  Configurations pertinent to topics have both a server default as well as an optional per-topic override. If no per-topic configuration is given, the server default is used. The override can be set at topic creation time by giving one or more <code>--config</code> options. This example creates a topic named <i>my-topic</i> with a custom max message size and flush rate:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1
       --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
-  </pre>
+  </code></pre>
   Overrides can also be changed or set later using the alter configs command. This example updates the max message size for <i>my-topic</i>:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic
       --alter --add-config max.message.bytes=128000
-  </pre>
+  </code></pre>
 
   To check overrides set on the topic you can do
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic --describe
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic --describe
+  </code></pre>
 
   To remove an override you can do
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --zookeeper localhost:2181  --entity-type topics --entity-name my-topic --alter --delete-config max.message.bytes
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --zookeeper localhost:2181  --entity-type topics --entity-name my-topic --alter --delete-config max.message.bytes
+  </code></pre>
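
The same tool covers any topic-level key. For instance, shortening retention on a single topic while leaving the broker default untouched (the value is illustrative):

    > bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic
        --alter --add-config retention.ms=86400000
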
 
   The following are the topic-level configurations. The server's default configuration for this property is given under the Server Default Property heading. A given server default config value only applies to a topic if it does not have an explicit topic config override.
 
diff --git a/0110/connect.html b/0110/connect.html
index 8c79cc1..233c547 100644
--- a/0110/connect.html
+++ b/0110/connect.html
@@ -40,9 +40,8 @@
 
    In standalone mode all work is performed in a single process. This configuration is simpler to set up and get started with, and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:
 
-    <pre class="brush: bash;">
-    &gt; bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]
-    </pre>
+    <pre class="line-numbers"><code class="language-bash">    &gt; bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]
+    </code></pre>
 
     The first parameter is the configuration for the worker. This includes settings such as the Kafka connection parameters, serialization format, and how frequently to commit offsets. The provided example should work well with a local cluster running with the default configuration provided by <code>config/server.properties</code>. It will require tweaking to use with a different configuration or production deployment. All workers (both standalone and distributed) require a few configs:
     <ul>
@@ -62,9 +61,8 @@
 
     Distributed mode handles automatic balancing of work, allows you to scale up (or down) dynamically, and offers fault tolerance both in the active tasks and for configuration and offset commit data. Execution is very similar to standalone mode:
 
-    <pre class="brush: bash;">
-    &gt; bin/connect-distributed.sh config/connect-distributed.properties
-    </pre>
+    <pre class="line-numbers"><code class="language-bash">    &gt; bin/connect-distributed.sh config/connect-distributed.properties
+    </code></pre>
 
    The difference is in the class which is started and the configuration parameters which change how the Kafka Connect process decides where to store configurations, how to assign work, and where to store offsets and task statuses. In the distributed mode, Kafka Connect stores the offsets, configs and task statuses in Kafka topics. It is recommended to manually create the topics for offset, configs and statuses in order to achieve the desired number of partitions and replication fact [...]
 
@@ -118,10 +116,9 @@
 
    <p>Throughout the example we'll use the schemaless JSON data format. To use the schemaless format, we changed the following two lines in <code>connect-standalone.properties</code> from true to false:</p>
 
-    <pre class="brush: text;">
-        key.converter.schemas.enable
+    <pre class="line-numbers"><code class="language-text">        key.converter.schemas.enable
         value.converter.schemas.enable
-    </pre>
+    </code></pre>
 
     The file source connector reads each line as a String. We will wrap each line in a Map and then add a second field to identify the origin of the event. To do this, we use two transformations:
     <ul>
@@ -131,8 +128,7 @@
 
    After adding the transformations, the <code>connect-file-source.properties</code> file looks as follows:
 
-    <pre class="brush: text;">
-        name=local-file-source
+    <pre class="line-numbers"><code class="language-text">        name=local-file-source
         connector.class=FileStreamSource
         tasks.max=1
         file=test.txt
@@ -143,25 +139,23 @@
         transforms.InsertSource.type=org.apache.kafka.connect.transforms.InsertField$Value
         transforms.InsertSource.static.field=data_source
         transforms.InsertSource.static.value=test-file-source
-    </pre>
+    </code></pre>
 
     <p>All the lines starting with <code>transforms</code> were added for the transformations. You can see the two transformations we created: "InsertSource" and "MakeMap" are aliases that we chose to give the transformations. The transformation types are based on the list of built-in transformations you can see below. Each transformation type has additional configuration: HoistField requires a configuration called "field", which is the name of the field in the map that will include the  [...]
 
    When we ran the file source connector on our sample file without the transformations, and then read them using <code>kafka-console-consumer.sh</code>, the results were:
 
-    <pre class="brush: text;">
-        "foo"
+    <pre class="line-numbers"><code class="language-text">        "foo"
         "bar"
         "hello world"
-   </pre>
+   </code></pre>
 
    We then create a new file connector, this time with the transformations added to the configuration file. This time, the results will be:
 
-    <pre class="brush: json;">
-        {"line":"foo","data_source":"test-file-source"}
+    <pre class="line-numbers"><code class="language-json">        {"line":"foo","data_source":"test-file-source"}
         {"line":"bar","data_source":"test-file-source"}
         {"line":"hello world","data_source":"test-file-source"}
-    </pre>
+    </code></pre>
 
     You can see that the lines we've read are now part of a JSON map, and there is an extra field with the static value we specified. This is just one example of what you can do with transformations.
 
@@ -247,25 +241,22 @@
 
     We'll cover the <code>SourceConnector</code> as a simple example. <code>SinkConnector</code> implementations are very similar. Start by creating the class that inherits from <code>SourceConnector</code> and add a couple of fields that will store parsed configuration information (the filename to read from and the topic to send data to):
 
-    <pre class="brush: java;">
-    public class FileStreamSourceConnector extends SourceConnector {
+    <pre class="line-numbers"><code class="language-java">    public class FileStreamSourceConnector extends SourceConnector {
         private String filename;
         private String topic;
-    </pre>
+    </code></pre>
 
     The easiest method to fill in is <code>taskClass()</code>, which defines the class that should be instantiated in worker processes to actually read the data:
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public Class&lt;? extends Task&gt; taskClass() {
         return FileStreamSourceTask.class;
     }
-    </pre>
+    </code></pre>
 
     We will define the <code>FileStreamSourceTask</code> class below. Next, we add some standard lifecycle methods, <code>start()</code> and <code>stop()</code>:
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public void start(Map&lt;String, String&gt; props) {
         // The complete version includes error handling as well.
         filename = props.get(FILE_CONFIG);
@@ -276,14 +267,13 @@
     public void stop() {
         // Nothing to do since no background monitoring is required.
     }
-    </pre>
+    </code></pre>
 
     Finally, the real core of the implementation is in <code>taskConfigs()</code>. In this case we are only
     handling a single file, so even though we may be permitted to generate more tasks as per the
     <code>maxTasks</code> argument, we return a list with only one entry:
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public List&lt;Map&lt;String, String&gt;&gt; taskConfigs(int maxTasks) {
         ArrayList&lt;Map&lt;String, String&gt;&gt; configs = new ArrayList&lt;&gt;();
         // Only one input stream makes sense.
@@ -294,7 +284,7 @@
         configs.add(config);
         return configs;
     }
-    </pre>
+    </code></pre>
 
     Although not used in the example, <code>SourceTask</code> also provides two APIs to commit offsets in the source system: <code>commit</code> and <code>commitRecord</code>. The APIs are provided for source systems which have an acknowledgement mechanism for messages. Overriding these methods allows the source connector to acknowledge messages in the source system, either in bulk or individually, once they have been written to Kafka.
     The <code>commit</code> API stores the offsets in the source system, up to the offsets that have been returned by <code>poll</code>. The implementation of this API should block until the commit is complete. The <code>commitRecord</code> API saves the offset in the source system for each <code>SourceRecord</code> after it is written to Kafka. As Kafka Connect will record offsets automatically, <code>SourceTask</code>s are not required to implement them. In cases where a connector does [...]
@@ -310,8 +300,7 @@
     Just as with the connector, we need to create a class inheriting from the appropriate base <code>Task</code> class. It also has some standard lifecycle methods:
 
 
-    <pre class="brush: java;">
-    public class FileStreamSourceTask extends SourceTask {
+    <pre class="line-numbers"><code class="language-java">    public class FileStreamSourceTask extends SourceTask {
         String filename;
         InputStream stream;
         String topic;
@@ -327,14 +316,13 @@
         public synchronized void stop() {
             stream.close();
         }
-    </pre>
+    </code></pre>
 
    These are slightly simplified versions, but show that these methods should be relatively simple and the only work they should perform is allocating or freeing resources. There are two points to note about this implementation. First, the <code>start()</code> method does not yet handle resuming from a previous offset, which will be addressed in a later section. Second, the <code>stop()</code> method is synchronized. This will be necessary because <code>SourceTasks</code> are given [...]
 
     Next, we implement the main functionality of the task, the <code>poll()</code> method which gets events from the input system and returns a <code>List&lt;SourceRecord&gt;</code>:
 
-    <pre class="brush: java;">
-    @Override
+    <pre class="line-numbers"><code class="language-java">    @Override
     public List&lt;SourceRecord&gt; poll() throws InterruptedException {
         try {
             ArrayList&lt;SourceRecord&gt; records = new ArrayList&lt;&gt;();
@@ -355,7 +343,7 @@
         }
         return null;
     }
-    </pre>
+    </code></pre>
 
     Again, we've omitted some details, but we can see the important steps: the <code>poll()</code> method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output <code>SourceRecord</code> with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic [...]
 
@@ -365,8 +353,7 @@
 
     The previous section described how to implement a simple <code>SourceTask</code>. Unlike <code>SourceConnector</code> and <code>SinkConnector</code>, <code>SourceTask</code> and <code>SinkTask</code> have very different interfaces because <code>SourceTask</code> uses a pull interface and <code>SinkTask</code> uses a push interface. Both share the common lifecycle methods, but the <code>SinkTask</code> interface is quite different:
 
-    <pre class="brush: java;">
-    public abstract class SinkTask implements Task {
+    <pre class="line-numbers"><code class="language-java">    public abstract class SinkTask implements Task {
         public void initialize(SinkTaskContext context) {
             this.context = context;
         }
@@ -375,7 +362,7 @@
 
         public void flush(Map&lt;TopicPartition, OffsetAndMetadata&gt; currentOffsets) {
         }
-    </pre>
+    </code></pre>
 
     The <code>SinkTask</code> documentation contains full details, but this interface is nearly as simple as the <code>SourceTask</code>. The <code>put()</code> method should contain most of the implementation, accepting sets of <code>SinkRecords</code>, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering wi [...]
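
A trivial SinkTask along these lines might look like the following sketch (the class name is illustrative, and the "destination system" is just stdout):

    import java.util.Collection;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;

    public class LoggingSinkTask extends SinkTask {
        @Override
        public String version() { return "0.0.1"; }

        @Override
        public void start(Map<String, String> props) { }

        @Override
        public void put(Collection<SinkRecord> records) {
            // A real connector would buffer, translate, and write to the sink system here.
            for (SinkRecord record : records)
                System.out.println(record.topic() + "-" + record.kafkaPartition() + ": " + record.value());
        }

        @Override
        public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
            // Nothing is buffered in this sketch, so there is nothing to flush.
        }

        @Override
        public void stop() { }
    }
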
 
@@ -389,15 +376,14 @@
 
     To correctly resume upon startup, the task can use the <code>SourceContext</code> passed into its <code>initialize()</code> method to access the offset data. In <code>initialize()</code>, we would add a bit more code to read the offset (if it exists) and seek to that position:
 
-    <pre class="brush: java;">
-        stream = new FileInputStream(filename);
+    <pre class="line-numbers"><code class="language-java">        stream = new FileInputStream(filename);
         Map&lt;String, Object&gt; offset = context.offsetStorageReader().offset(Collections.singletonMap(FILENAME_FIELD, filename));
         if (offset != null) {
             Long lastRecordedOffset = (Long) offset.get("position");
             if (lastRecordedOffset != null)
                 seekToOffset(stream, lastRecordedOffset);
         }
-    </pre>
+    </code></pre>
 
     Of course, you might need to read many keys for each of the input streams. The <code>OffsetStorageReader</code> interface also allows you to issue bulk reads to efficiently load all offsets, then apply them by seeking each input stream to the appropriate position.
 
@@ -407,10 +393,9 @@
 
     Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the <code>ConnectorContext</code> object that reconfiguration is necessary. For example, in a <code>SourceConnector</code>:
 
-    <pre class="brush: java;">
-        if (inputsChanged())
+    <pre class="line-numbers"><code class="language-java">        if (inputsChanged())
             this.context.requestTaskReconfiguration();
-    </pre>
+    </code></pre>
 
     The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the <code>SourceConnector</code> this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.
 
@@ -424,15 +409,14 @@
 
     The following code in <code>FileStreamSourceConnector</code> defines the configuration and exposes it to the framework.
 
-    <pre class="brush: java;">
-        private static final ConfigDef CONFIG_DEF = new ConfigDef()
+    <pre class="line-numbers"><code class="language-java">        private static final ConfigDef CONFIG_DEF = new ConfigDef()
             .define(FILE_CONFIG, Type.STRING, Importance.HIGH, "Source filename.")
             .define(TOPIC_CONFIG, Type.STRING, Importance.HIGH, "The topic to publish data to");
 
         public ConfigDef config() {
             return CONFIG_DEF;
         }
-    </pre>
+    </code></pre>
 
    The <code>ConfigDef</code> class is used for specifying the set of expected configurations. For each configuration, you can specify the name, the type, the default value, the documentation, the group information, the order in the group, the width of the configuration value and the name suitable for display in the UI. Plus, you can provide special validation logic used for single configuration validation by overriding the <code>Validator</code> class. Moreover, as there may be dependencie [...]
 
@@ -446,8 +430,7 @@
 
     The API documentation provides a complete reference, but here is a simple example creating a <code>Schema</code> and <code>Struct</code>:
 
-    <pre class="brush: java;">
-    Schema schema = SchemaBuilder.struct().name(NAME)
+    <pre class="line-numbers"><code class="language-java">    Schema schema = SchemaBuilder.struct().name(NAME)
         .field("name", Schema.STRING_SCHEMA)
         .field("age", Schema.INT_SCHEMA)
        .field("admin", SchemaBuilder.bool().defaultValue(false).build())
@@ -456,7 +439,7 @@
     Struct struct = new Struct(schema)
         .put("name", "Barbara Liskov")
         .put("age", 75);
-    </pre>
+    </code></pre>
 
    If you are implementing a source connector, you'll need to decide when and how to create schemas. Where possible, you should avoid recomputing them. For example, if your connector is guaranteed to have a fixed schema, create it statically and reuse a single instance.
 
@@ -474,8 +457,7 @@
     When a connector is first submitted to the cluster, the workers rebalance the full set of connectors in the cluster and their tasks so that each worker has approximately the same amount of work. This same rebalancing procedure is also used when connectors increase or decrease the number of tasks they require, or when a connector's configuration is changed. You can use the REST API to view the current status of a connector and its tasks, including the id of the worker to which each wa [...]
     </p>
 
-    <pre class="brush: json;">
-    {
+    <pre class="line-numbers"><code class="language-json">    {
     "name": "file-source",
     "connector": {
         "state": "RUNNING",
@@ -489,7 +471,7 @@
         }
     ]
     }
-    </pre>
+    </code></pre>
 
     <p>
     Connectors and their tasks publish status updates to a shared topic (configured with <code>status.storage.topic</code>) which all workers in the cluster monitor. Because the workers consume this topic asynchronously, there is typically a (short) delay before a state change is visible through the status API. The following states are possible for a connector or one of its tasks:
diff --git a/0110/design.html b/0110/design.html
index fee48c2..021bf1c 100644
--- a/0110/design.html
+++ b/0110/design.html
@@ -264,7 +264,7 @@
     messages have a primary key and so the updates are idempotent (receiving the same message twice just overwrites a record with another copy of itself).
     </ol>
     <p>
-    So what about exactly once semantics (i.e. the thing you actually want)? When consuming from a Kafka topic and producing to another topic (as in a <a href="https://kafka.apache.org/documentation/streams">Kafka Streams</a>
+    So what about exactly once semantics (i.e. the thing you actually want)? When consuming from a Kafka topic and producing to another topic (as in a <a href="/documentation/streams">Kafka Streams</a>
     application), we can leverage the new transactional producer capabilities in 0.11.0.0 that were mentioned above. The consumer's position is stored as a message in a topic, so we can write the offset to Kafka in the
     same transaction as the output topics receiving the processed data. If the transaction is aborted, the consumer's position will revert to its old value and the produced data on the output topics will not be visible
     to other consumers, depending on their "isolation level." In the default "read_uncommitted" isolation level, all messages are visible to consumers even if they were part of an aborted transaction,
@@ -273,12 +273,12 @@
     When writing to an external system, the limitation is in the need to coordinate the consumer's position with what is actually stored as output. The classic way of achieving this would be to introduce a two-phase
    commit between the storage of the consumer's position and the storage of the consumer's output. But this can be handled more simply and generally by letting the consumer store its offset in the same place as
     its output. This is better because many of the output systems a consumer might want to write to will not support a two-phase commit. As an example of this, consider a
-    <a href="https://kafka.apache.org/documentation/#connect">Kafka Connect</a> connector which populates data in HDFS along with the offsets of the data it reads so that it is guaranteed that either data and
+    <a href="/documentation/#connect">Kafka Connect</a> connector which populates data in HDFS along with the offsets of the data it reads so that it is guaranteed that either data and
     offsets are both updated or neither is. We follow similar patterns for many other data systems which require these stronger semantics and for which the messages do not have a primary key to allow for deduplication.
     <p>
-    So effectively Kafka supports exactly-once delivery in <a href="https://kafka.apache.org/documentation/streams">Kafka Streams</a>, and the transactional producer/consumer can be used generally to provide
+    So effectively Kafka supports exactly-once delivery in <a href="/documentation/streams">Kafka Streams</a>, and the transactional producer/consumer can be used generally to provide
    exactly-once delivery when transferring and processing data between Kafka topics. Exactly-once delivery for other destination systems generally requires cooperation with such systems, but Kafka provides the
-    offset which makes implementing this feasible (see also <a href="https://kafka.apache.org/documentation/#connect">Kafka Connect</a>). Otherwise, Kafka guarantees at-least-once delivery by default, and allows
+    offset which makes implementing this feasible (see also <a href="/documentation/#connect">Kafka Connect</a>). Otherwise, Kafka guarantees at-least-once delivery by default, and allows
     the user to implement at-most-once delivery by disabling retries on the producer and committing offsets in the consumer prior to processing a batch of messages.
 
     <h3><a id="replication" href="#replication">4.7 Replication</a></h3>
@@ -431,8 +431,7 @@
     <p>
     Let's discuss a concrete example of such a stream. Say we have a topic containing user email addresses; every time a user updates their email address we send a message to this topic using their user id as the
     primary key. Now say we send the following messages over some time period for a user with id 123, each message corresponding to a change in email address (messages for other ids are omitted):
-    <pre class="brush: text;">
-        123 => bill@microsoft.com
+    <pre class="line-numbers"><code class="language-text">        123 => bill@microsoft.com
                 .
                 .
                 .
@@ -441,7 +440,7 @@
                 .
                 .
         123 => bill@gmail.com
-    </pre>
+    </code></pre>
     Log compaction gives us a more granular retention mechanism so that we are guaranteed to retain at least the last update for each primary key (e.g. <code>bill@gmail.com</code>). By doing this we guarantee that the
     log contains a full snapshot of the final value for every key not just keys that changed recently. This means downstream consumers can restore their own state off this topic without us having to retain a complete
     log of all changes.
@@ -524,11 +523,11 @@
 
     The log cleaner is enabled by default. This will start the pool of cleaner threads. 
     To enable log cleaning on a particular topic you can add the log-specific property
-    <pre class="brush: text;">  log.cleanup.policy=compact</pre>
+    <pre class="line-numbers"><code class="language-text">  log.cleanup.policy=compact</code></pre>
     This can be done either at topic creation time or using the alter topic command.
     <p>
     The log cleaner can be configured to retain a minimum amount of the uncompacted "head" of the log. This is enabled by setting the compaction time lag.
-    <pre class="brush: text;">  log.cleaner.min.compaction.lag.ms</pre>
+    <pre class="line-numbers"><code class="language-text">  log.cleaner.min.compaction.lag.ms</code></pre>
 
     This can be used to prevent messages newer than a minimum message age from being subject to compaction. If not set, all log segments are eligible for compaction except for the last segment, i.e. the one currently
     being written to. The active segment will not be compacted even if all of its messages are older than the minimum compaction time lag.
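
Note that at the topic level the corresponding override key is cleanup.policy (log.cleanup.policy is the broker-level default), so an existing topic can be switched to compaction with, for example:

    > bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic
        --alter --add-config cleanup.policy=compact
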
diff --git a/0110/documentation.html b/0110/documentation.html
index 7f297cc..fc2e205 100644
--- a/0110/documentation.html
+++ b/0110/documentation.html
@@ -21,7 +21,7 @@
 <!--#include virtual="../includes/_top.htm" -->
 
 
-<div class="content documentation documentation--current">
+<div class="content documentation">
 	<!--#include virtual="../includes/_nav.htm" -->
 	<div class="right">
 		<!--#include virtual="../includes/_docs_banner.htm" -->
diff --git a/0110/generated/protocol_messages.html b/0110/generated/protocol_messages.html
index 688fb3d..073ecab 100644
--- a/0110/generated/protocol_messages.html
+++ b/0110/generated/protocol_messages.html
@@ -1,10 +1,10 @@
 <h5>Headers:</h5>
-<pre>Request Header => api_key api_version correlation_id client_id 
+<pre class="line-numbers"><code class="language-java">Request Header => api_key api_version correlation_id client_id 
   api_key => INT16
   api_version => INT16
   correlation_id => INT32
   client_id => NULLABLE_STRING
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -17,9 +17,9 @@
 <tr>
 <td>client_id</td><td>A user specified identifier for the client making the request.</td></tr>
 </table>
-<pre>Response Header => correlation_id 
+<pre class="line-numbers"><code class="language-java">Response Header => correlation_id 
   correlation_id => INT32
-</pre>
+</code></pre>
 <table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -29,7 +29,7 @@
 <h5><a name="The_Messages_Produce">Produce API (Key: 0):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Produce Request (Version: 0) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 0) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -37,7 +37,7 @@
     data => partition record_set 
       partition => INT32
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -56,7 +56,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 1) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 1) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -64,7 +64,7 @@
     data => partition record_set 
       partition => INT32
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -83,7 +83,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 2) => acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 2) => acks timeout [topic_data] 
   acks => INT16
   timeout => INT32
   topic_data => topic [data] 
@@ -91,7 +91,7 @@
     data => partition record_set 
       partition => INT32
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -110,7 +110,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Request (Version: 3) => transactional_id acks timeout [topic_data] 
+<p><pre class="line-numbers"><code class="language-java">Produce Request (Version: 3) => transactional_id acks timeout [topic_data] 
   transactional_id => NULLABLE_STRING
   acks => INT16
   timeout => INT32
@@ -119,7 +119,7 @@
     data => partition record_set 
       partition => INT32
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -141,14 +141,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Produce Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
       partition => INT32
       error_code => INT16
       base_offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -165,7 +165,7 @@
 <td>base_offset</td><td></td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 1) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 1) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset 
@@ -173,7 +173,7 @@
       error_code => INT16
       base_offset => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -192,7 +192,7 @@
 <td>throttle_time_ms</td><td>Duration in milliseconds for which the request was throttled due to quota violation. (Zero if the request did not violate any quota.)</td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 2) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 2) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset log_append_time 
@@ -201,7 +201,7 @@
       base_offset => INT64
       log_append_time => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -222,7 +222,7 @@
 <td>throttle_time_ms</td><td>Duration in milliseconds for which the request was throttled due to quota violation. (Zero if the request did not violate any quota.)</td></tr>
 </table>
 </p>
-<p><pre>Produce Response (Version: 3) => [responses] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">Produce Response (Version: 3) => [responses] throttle_time_ms 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code base_offset log_append_time 
@@ -231,7 +231,7 @@
       base_offset => INT64
       log_append_time => INT64
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -255,7 +255,7 @@
 <h5><a name="The_Messages_Fetch">Fetch API (Key: 1):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 0) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -265,7 +265,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -288,7 +288,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 1) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -298,7 +298,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -321,7 +321,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 2) => replica_id max_wait_time min_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -331,7 +331,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -354,7 +354,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 3) => replica_id max_wait_time min_bytes max_bytes [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 3) => replica_id max_wait_time min_bytes max_bytes [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -365,7 +365,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -390,7 +390,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 4) => replica_id max_wait_time min_bytes max_bytes isolation_level [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 4) => replica_id max_wait_time min_bytes max_bytes isolation_level [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -402,7 +402,7 @@
       partition => INT32
       fetch_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -429,7 +429,7 @@
 <td>max_bytes</td><td>Maximum bytes to fetch.</td></tr>
 </table>
 </p>
-<p><pre>Fetch Request (Version: 5) => replica_id max_wait_time min_bytes max_bytes isolation_level [topics] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Request (Version: 5) => replica_id max_wait_time min_bytes max_bytes isolation_level [topics] 
   replica_id => INT32
   max_wait_time => INT32
   min_bytes => INT32
@@ -442,7 +442,7 @@
       fetch_offset => INT64
       log_start_offset => INT64
       max_bytes => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -472,7 +472,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Fetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition_header record_set 
@@ -481,7 +481,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -502,7 +502,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 1) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 1) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -512,7 +512,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -535,7 +535,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 2) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 2) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -545,7 +545,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -568,7 +568,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 3) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 3) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -578,7 +578,7 @@
         error_code => INT16
         high_watermark => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -601,7 +601,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 4) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 4) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -615,7 +615,7 @@
           producer_id => INT64
           first_offset => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -646,7 +646,7 @@
 <td>record_set</td><td></td></tr>
 </table>
 </p>
-<p><pre>Fetch Response (Version: 5) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Fetch Response (Version: 5) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -661,7 +661,7 @@
           producer_id => INT64
           first_offset => INT64
       record_set => RECORDS
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -697,7 +697,7 @@
 <h5><a name="The_Messages_Offsets">Offsets API (Key: 2):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Offsets Request (Version: 0) => replica_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 0) => replica_id [topics] 
   replica_id => INT32
   topics => topic [partitions] 
     topic => STRING
@@ -705,7 +705,7 @@
       partition => INT32
       timestamp => INT64
       max_num_offsets => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -724,14 +724,14 @@
 <td>max_num_offsets</td><td>Maximum offsets to return.</td></tr>
 </table>
 </p>
-<p><pre>Offsets Request (Version: 1) => replica_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 1) => replica_id [topics] 
   replica_id => INT32
   topics => topic [partitions] 
     topic => STRING
     partitions => partition timestamp 
       partition => INT32
       timestamp => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -748,7 +748,7 @@
 <td>timestamp</td><td>The target timestamp for the partition.</td></tr>
 </table>
 </p>
-<p><pre>Offsets Request (Version: 2) => replica_id isolation_level [topics] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Request (Version: 2) => replica_id isolation_level [topics] 
   replica_id => INT32
   isolation_level => INT8
   topics => topic [partitions] 
@@ -756,7 +756,7 @@
     partitions => partition timestamp 
       partition => INT32
       timestamp => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -776,14 +776,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Offsets Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code [offsets] 
       partition => INT32
       error_code => INT16
       offsets => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -800,7 +800,7 @@
 <td>offsets</td><td>A list of offsets.</td></tr>
 </table>
 </p>
-<p><pre>Offsets Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code timestamp offset 
@@ -808,7 +808,7 @@
       error_code => INT16
       timestamp => INT64
       offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -827,7 +827,7 @@
 <td>offset</td><td>offset found</td></tr>
 </table>
 </p>
-<p><pre>Offsets Response (Version: 2) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">Offsets Response (Version: 2) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -836,7 +836,7 @@
       error_code => INT16
       timestamp => INT64
       offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -860,46 +860,46 @@
 <h5><a name="The_Messages_Metadata">Metadata API (Key: 3):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Metadata Request (Version: 0) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 0) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If no topics are specified fetch metadata for all topics.</td></tr>
 </table>
 </p>
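As a hedged illustration of how such a request goes on the wire, the sketch below frames a Metadata v0 request: a 4-byte size prefix, the request header, then the [topics] ARRAY as an INT32 element count followed by each STRING as an INT16 length plus UTF-8 bytes. Host, port and the topic name are placeholders, and reading the response is omitted.
<pre class="line-numbers"><code class="language-java">import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hedged sketch: hand-framing a Metadata (v0) request.
public class MetadataRequestSketch {
    public static void main(String[] args) throws Exception {
        byte[] clientId = "sketch".getBytes(StandardCharsets.UTF_8);
        byte[] topic = "user-profiles".getBytes(StandardCharsets.UTF_8); // placeholder topic
        ByteBuffer body = ByteBuffer.allocate(2 + 2 + 4 + 2 + clientId.length + 4 + 2 + topic.length);
        body.putShort((short) 3); // api_key => INT16 (Metadata)
        body.putShort((short) 0); // api_version => INT16
        body.putInt(1);           // correlation_id => INT32
        body.putShort((short) clientId.length).put(clientId); // client_id => NULLABLE_STRING
        body.putInt(1);                                       // [topics] element count => INT32
        body.putShort((short) topic.length).put(topic);       // topic => STRING
        body.flip();
        try (Socket socket = new Socket("localhost", 9092);   // placeholder broker
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeInt(body.remaining());                   // 4-byte size prefix
            out.write(body.array(), 0, body.remaining());
        }
    }
}
</code></pre>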
-<p><pre>Metadata Request (Version: 1) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 1) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If the topics array is null fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 2) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 2) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If the topics array is null fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 3) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 3) => [topics] 
   topics => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>topics</td><td>An array of topics to fetch metadata for. If the topics array is null fetch metadata for all topics.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Request (Version: 4) => [topics] allow_auto_topic_creation 
+<p><pre class="line-numbers"><code class="language-java">Metadata Request (Version: 4) => [topics] allow_auto_topic_creation 
   topics => STRING
   allow_auto_topic_creation => BOOLEAN
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -909,7 +909,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Metadata Response (Version: 0) => [brokers] [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 0) => [brokers] [topic_metadata] 
   brokers => node_id host port 
     node_id => INT32
     host => STRING
@@ -923,7 +923,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -954,7 +954,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
   brokers => node_id host port rack 
     node_id => INT32
     host => STRING
@@ -971,7 +971,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1008,7 +1008,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 2) => [brokers] cluster_id controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 2) => [brokers] cluster_id controller_id [topic_metadata] 
   brokers => node_id host port rack 
     node_id => INT32
     host => STRING
@@ -1026,7 +1026,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1065,7 +1065,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 3) => throttle_time_ms [brokers] cluster_id controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 3) => throttle_time_ms [brokers] cluster_id controller_id [topic_metadata] 
   throttle_time_ms => INT32
   brokers => node_id host port rack 
     node_id => INT32
@@ -1084,7 +1084,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1125,7 +1125,7 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
-<p><pre>Metadata Response (Version: 4) => throttle_time_ms [brokers] cluster_id controller_id [topic_metadata] 
+<p><pre class="line-numbers"><code class="language-java">Metadata Response (Version: 4) => throttle_time_ms [brokers] cluster_id controller_id [topic_metadata] 
   throttle_time_ms => INT32
   brokers => node_id host port rack 
     node_id => INT32
@@ -1144,7 +1144,7 @@
       leader => INT32
       replicas => INT32
       isr => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1188,7 +1188,7 @@
 <h5><a name="The_Messages_LeaderAndIsr">LeaderAndIsr API (Key: 4):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Request (Version: 0) => controller_id controller_epoch [partition_states] [live_leaders] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -1204,7 +1204,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1240,13 +1240,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaderAndIsr Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">LeaderAndIsr Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1264,14 +1264,14 @@
 <h5><a name="The_Messages_StopReplica">StopReplica API (Key: 5):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Request (Version: 0) => controller_id controller_epoch delete_partitions [partitions] 
   controller_id => INT32
   controller_epoch => INT32
   delete_partitions => BOOLEAN
   partitions => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1289,13 +1289,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>StopReplica Response (Version: 0) => error_code [partitions] 
+<p><pre class="line-numbers"><code class="language-java">StopReplica Response (Version: 0) => error_code [partitions] 
   error_code => INT16
   partitions => topic partition error_code 
     topic => STRING
     partition => INT32
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1313,7 +1313,7 @@
 <h5><a name="The_Messages_UpdateMetadata">UpdateMetadata API (Key: 6):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 0) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -1329,7 +1329,7 @@
     id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1364,7 +1364,7 @@
 <td>port</td><td>The port on which the broker accepts requests.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 1) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -1382,7 +1382,7 @@
       port => INT32
       host => STRING
       security_protocol_type => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1421,7 +1421,7 @@
 <td>security_protocol_type</td><td>The security protocol type.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 2) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -1440,7 +1440,7 @@
       host => STRING
       security_protocol_type => INT16
     rack => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1481,7 +1481,7 @@
 <td>rack</td><td>The rack</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Request (Version: 3) => controller_id controller_epoch [partition_states] [live_brokers] 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Request (Version: 3) => controller_id controller_epoch [partition_states] [live_brokers] 
   controller_id => INT32
   controller_epoch => INT32
   partition_states => topic partition controller_epoch leader leader_epoch [isr] zk_version [replicas] 
@@ -1501,7 +1501,7 @@
       listener_name => STRING
       security_protocol_type => INT16
     rack => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1545,36 +1545,36 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>UpdateMetadata Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 1) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 1) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 2) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 2) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td>Error code.</td></tr>
 </table>
 </p>
-<p><pre>UpdateMetadata Response (Version: 3) => error_code 
+<p><pre class="line-numbers"><code class="language-java">UpdateMetadata Response (Version: 3) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1585,9 +1585,9 @@
 
 <b>Requests:</b><br>
 </p>
-<p><pre>ControlledShutdown Request (Version: 1) => broker_id 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Request (Version: 1) => broker_id 
   broker_id => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1596,12 +1596,12 @@
 </p>
 <b>Responses:</b><br>
 </p>
-<p><pre>ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
+<p><pre class="line-numbers"><code class="language-java">ControlledShutdown Response (Version: 1) => error_code [partitions_remaining] 
   error_code => INT16
   partitions_remaining => topic partition 
     topic => STRING
     partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1617,7 +1617,7 @@
 <h5><a name="The_Messages_OffsetCommit">OffsetCommit API (Key: 8):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetCommit Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
@@ -1625,7 +1625,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1644,7 +1644,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 1) => group_id group_generation_id member_id [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -1655,7 +1655,7 @@
       offset => INT64
       timestamp => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1680,7 +1680,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 2) => group_id group_generation_id member_id retention_time [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -1691,7 +1691,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1716,7 +1716,7 @@
 <td>metadata</td><td>Any associated metadata the client wants to keep.</td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Request (Version: 3) => group_id group_generation_id member_id retention_time [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Request (Version: 3) => group_id group_generation_id member_id retention_time [topics] 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
@@ -1727,7 +1727,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1753,13 +1753,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>OffsetCommit Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1774,13 +1774,13 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1795,13 +1795,13 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 2) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 2) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1816,14 +1816,14 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetCommit Response (Version: 3) => throttle_time_ms [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetCommit Response (Version: 3) => throttle_time_ms [responses] 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1843,13 +1843,13 @@
 <h5><a name="The_Messages_OffsetFetch">OffsetFetch API (Key: 9):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetFetch Request (Version: 0) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 0) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1864,13 +1864,13 @@
 <td>partition</td><td>Topic partition id.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Request (Version: 1) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 1) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1885,13 +1885,13 @@
 <td>partition</td><td>Topic partition id.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Request (Version: 2) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 2) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1906,13 +1906,13 @@
 <td>partition</td><td>Topic partition id.</td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Request (Version: 3) => group_id [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Request (Version: 3) => group_id [topics] 
   group_id => STRING
   topics => topic [partitions] 
     topic => STRING
     partitions => partition 
       partition => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1928,7 +1928,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>OffsetFetch Response (Version: 0) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 0) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1936,7 +1936,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1955,7 +1955,7 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Response (Version: 1) => [responses] 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 1) => [responses] 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1963,7 +1963,7 @@
       offset => INT64
       metadata => NULLABLE_STRING
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -1982,7 +1982,7 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Response (Version: 2) => [responses] error_code 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 2) => [responses] error_code 
   responses => topic [partition_responses] 
     topic => STRING
     partition_responses => partition offset metadata error_code 
@@ -1991,7 +1991,7 @@
       metadata => NULLABLE_STRING
       error_code => INT16
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2012,7 +2012,7 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>OffsetFetch Response (Version: 3) => throttle_time_ms [responses] error_code 
+<p><pre class="line-numbers"><code class="language-java">OffsetFetch Response (Version: 3) => throttle_time_ms [responses] error_code 
   throttle_time_ms => INT32
   responses => topic [partition_responses] 
     topic => STRING
@@ -2022,7 +2022,7 @@
       metadata => NULLABLE_STRING
       error_code => INT16
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2048,19 +2048,19 @@
 <h5><a name="The_Messages_FindCoordinator">FindCoordinator API (Key: 10):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>FindCoordinator Request (Version: 0) => group_id 
+<p><pre class="line-numbers"><code class="language-java">FindCoordinator Request (Version: 0) => group_id 
   group_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>group_id</td><td>The unique group id.</td></tr>
 </table>
 </p>
-<p><pre>FindCoordinator Request (Version: 1) => coordinator_key coordinator_type 
+<p><pre class="line-numbers"><code class="language-java">FindCoordinator Request (Version: 1) => coordinator_key coordinator_type 
   coordinator_key => STRING
   coordinator_type => INT8
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2070,13 +2070,13 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>FindCoordinator Response (Version: 0) => error_code coordinator 
+<p><pre class="line-numbers"><code class="language-java">FindCoordinator Response (Version: 0) => error_code coordinator 
   error_code => INT16
   coordinator => node_id host port 
     node_id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2091,7 +2091,7 @@
 <td>port</td><td>The port on which the broker accepts requests.</td></tr>
 </table>
 </p>
-<p><pre>FindCoordinator Response (Version: 1) => throttle_time_ms error_code error_message coordinator 
+<p><pre class="line-numbers"><code class="language-java">FindCoordinator Response (Version: 1) => throttle_time_ms error_code error_message coordinator 
   throttle_time_ms => INT32
   error_code => INT16
   error_message => NULLABLE_STRING
@@ -2099,7 +2099,7 @@
     node_id => INT32
     host => STRING
     port => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2121,7 +2121,7 @@
 <h5><a name="The_Messages_JoinGroup">JoinGroup API (Key: 11):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 0) => group_id session_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   member_id => STRING
@@ -2129,7 +2129,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2148,7 +2148,7 @@
 <td>protocol_metadata</td><td></td></tr>
 </table>
 </p>
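In practice these group-membership messages are rarely hand-built: subscribing a 0.11 consumer with a group.id drives FindCoordinator, JoinGroup, SyncGroup and periodic Heartbeat requests internally. A minimal sketch with placeholder broker, group and topic names:
<pre class="line-numbers"><code class="language-java">import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hedged sketch: the consumer performs the group protocol handshake itself.
public class GroupProtocolSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "sketch-group");            // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("user-profiles")); // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(1000); // triggers the group handshake
            for (ConsumerRecord<String, String> record : records)
                System.out.println(record.key() + " -> " + record.value());
        }
    }
}
</code></pre>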
-<p><pre>JoinGroup Request (Version: 1) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 1) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   rebalance_timeout => INT32
@@ -2157,7 +2157,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2178,7 +2178,7 @@
 <td>protocol_metadata</td><td></td></tr>
 </table>
 </p>
-<p><pre>JoinGroup Request (Version: 2) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Request (Version: 2) => group_id session_timeout rebalance_timeout member_id protocol_type [group_protocols] 
   group_id => STRING
   session_timeout => INT32
   rebalance_timeout => INT32
@@ -2187,7 +2187,7 @@
   group_protocols => protocol_name protocol_metadata 
     protocol_name => STRING
     protocol_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2209,7 +2209,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 0) => error_code generation_id group_protocol leader_id member_id [members] 
   error_code => INT16
   generation_id => INT32
   group_protocol => STRING
@@ -2218,7 +2218,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2239,7 +2239,7 @@
 <td>member_metadata</td><td></td></tr>
 </table>
 </p>
-<p><pre>JoinGroup Response (Version: 1) => error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 1) => error_code generation_id group_protocol leader_id member_id [members] 
   error_code => INT16
   generation_id => INT32
   group_protocol => STRING
@@ -2248,7 +2248,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2269,7 +2269,7 @@
 <td>member_metadata</td><td></td></tr>
 </table>
 </p>
-<p><pre>JoinGroup Response (Version: 2) => throttle_time_ms error_code generation_id group_protocol leader_id member_id [members] 
+<p><pre class="line-numbers"><code class="language-java">JoinGroup Response (Version: 2) => throttle_time_ms error_code generation_id group_protocol leader_id member_id [members] 
   throttle_time_ms => INT32
   error_code => INT16
   generation_id => INT32
@@ -2279,7 +2279,7 @@
   members => member_id member_metadata 
     member_id => STRING
     member_metadata => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2305,11 +2305,11 @@
 <h5><a name="The_Messages_Heartbeat">Heartbeat API (Key: 12):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Request (Version: 0) => group_id group_generation_id member_id 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2320,11 +2320,11 @@
 <td>member_id</td><td>The member id assigned by the group coordinator.</td></tr>
 </table>
 </p>
-<p><pre>Heartbeat Request (Version: 1) => group_id group_generation_id member_id 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Request (Version: 1) => group_id group_generation_id member_id 
   group_id => STRING
   group_generation_id => INT32
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2336,19 +2336,19 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>Heartbeat Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>Heartbeat Response (Version: 1) => throttle_time_ms error_code 
+<p><pre class="line-numbers"><code class="language-java">Heartbeat Response (Version: 1) => throttle_time_ms error_code 
   throttle_time_ms => INT32
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2360,10 +2360,10 @@
 <h5><a name="The_Messages_LeaveGroup">LeaveGroup API (Key: 13):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>LeaveGroup Request (Version: 0) => group_id member_id 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Request (Version: 0) => group_id member_id 
   group_id => STRING
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2372,10 +2372,10 @@
 <td>member_id</td><td>The member id assigned by the group coordinator.</td></tr>
 </table>
 </p>
-<p><pre>LeaveGroup Request (Version: 1) => group_id member_id 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Request (Version: 1) => group_id member_id 
   group_id => STRING
   member_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2385,19 +2385,19 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>LeaveGroup Response (Version: 0) => error_code 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Response (Version: 0) => error_code 
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>LeaveGroup Response (Version: 1) => throttle_time_ms error_code 
+<p><pre class="line-numbers"><code class="language-java">LeaveGroup Response (Version: 1) => throttle_time_ms error_code 
   throttle_time_ms => INT32
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2409,14 +2409,14 @@
 <h5><a name="The_Messages_SyncGroup">SyncGroup API (Key: 14):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Request (Version: 0) => group_id generation_id member_id [group_assignment] 
   group_id => STRING
   generation_id => INT32
   member_id => STRING
   group_assignment => member_id member_assignment 
     member_id => STRING
     member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2433,14 +2433,14 @@
 <td>member_assignment</td><td></td></tr>
 </table>
 </p>
-<p><pre>SyncGroup Request (Version: 1) => group_id generation_id member_id [group_assignment] 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Request (Version: 1) => group_id generation_id member_id [group_assignment] 
   group_id => STRING
   generation_id => INT32
   member_id => STRING
   group_assignment => member_id member_assignment 
     member_id => STRING
     member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2458,10 +2458,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SyncGroup Response (Version: 0) => error_code member_assignment 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Response (Version: 0) => error_code member_assignment 
   error_code => INT16
   member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2470,11 +2470,11 @@
 <td>member_assignment</td><td></td></tr>
 </table>
 </p>
-<p><pre>SyncGroup Response (Version: 1) => throttle_time_ms error_code member_assignment 
+<p><pre class="line-numbers"><code class="language-java">SyncGroup Response (Version: 1) => throttle_time_ms error_code member_assignment 
   throttle_time_ms => INT32
   error_code => INT16
   member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2488,18 +2488,18 @@
 <h5><a name="The_Messages_DescribeGroups">DescribeGroups API (Key: 15):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DescribeGroups Request (Version: 0) => [group_ids] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Request (Version: 0) => [group_ids] 
   group_ids => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>group_ids</td><td>List of groupIds to request metadata for (an empty groupId array will return empty group metadata).</td></tr>
 </table>
 </p>
-<p><pre>DescribeGroups Request (Version: 1) => [group_ids] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Request (Version: 1) => [group_ids] 
   group_ids => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2507,7 +2507,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DescribeGroups Response (Version: 0) => [groups] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Response (Version: 0) => [groups] 
   groups => error_code group_id state protocol_type protocol [members] 
     error_code => INT16
     group_id => STRING
@@ -2520,7 +2520,7 @@
       client_host => STRING
       member_metadata => BYTES
       member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2549,7 +2549,7 @@
 <td>member_assignment</td><td>The current assignment provided by the group leader (will only be present if the group is stable).</td></tr>
 </table>
 </p>
-<p><pre>DescribeGroups Response (Version: 1) => throttle_time_ms [groups] 
+<p><pre class="line-numbers"><code class="language-java">DescribeGroups Response (Version: 1) => throttle_time_ms [groups] 
   throttle_time_ms => INT32
   groups => error_code group_id state protocol_type protocol [members] 
     error_code => INT16
@@ -2563,7 +2563,7 @@
       client_host => STRING
       member_metadata => BYTES
       member_assignment => BYTES
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2597,25 +2597,25 @@
 <h5><a name="The_Messages_ListGroups">ListGroups API (Key: 16):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>ListGroups Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ListGroups Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
-<p><pre>ListGroups Request (Version: 1) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ListGroups Request (Version: 1) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
 <b>Responses:</b><br>
-<p><pre>ListGroups Response (Version: 0) => error_code [groups] 
+<p><pre class="line-numbers"><code class="language-java">ListGroups Response (Version: 0) => error_code [groups] 
   error_code => INT16
   groups => group_id protocol_type 
     group_id => STRING
     protocol_type => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2628,13 +2628,13 @@
 <td>protocol_type</td><td></td></tr>
 </table>
 </p>
-<p><pre>ListGroups Response (Version: 1) => throttle_time_ms error_code [groups] 
+<p><pre class="line-numbers"><code class="language-java">ListGroups Response (Version: 1) => throttle_time_ms error_code [groups] 
   throttle_time_ms => INT32
   error_code => INT16
   groups => group_id protocol_type 
     group_id => STRING
     protocol_type => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2652,9 +2652,9 @@
 <h5><a name="The_Messages_SaslHandshake">SaslHandshake API (Key: 17):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>SaslHandshake Request (Version: 0) => mechanism 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Request (Version: 0) => mechanism 
   mechanism => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2662,10 +2662,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
+<p><pre class="line-numbers"><code class="language-java">SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
   error_code => INT16
   enabled_mechanisms => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2677,26 +2677,26 @@
 <h5><a name="The_Messages_ApiVersions">ApiVersions API (Key: 18):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>ApiVersions Request (Version: 0) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Request (Version: 0) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
-<p><pre>ApiVersions Request (Version: 1) => 
-</pre><table class="data-table"><tbody>
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Request (Version: 1) => 
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr></table>
 </p>
 <b>Responses:</b><br>
-<p><pre>ApiVersions Response (Version: 0) => error_code [api_versions] 
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Response (Version: 0) => error_code [api_versions] 
   error_code => INT16
   api_versions => api_key min_version max_version 
     api_key => INT16
     min_version => INT16
     max_version => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2711,14 +2711,14 @@
 <td>max_version</td><td>Maximum supported version.</td></tr>
 </table>
 </p>
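To make the layout concrete, here is a minimal sketch (illustrative only, not code from the Kafka clients) of decoding the version 0 response body above with java.nio.ByteBuffer, assuming the buffer sits just past the common response header and that arrays carry the usual INT32 count prefix:

    import java.nio.ByteBuffer;

    static void parseApiVersionsV0(ByteBuffer buf) {
        short errorCode = buf.getShort();       // error_code => INT16
        int numApis = buf.getInt();             // INT32 count prefix of [api_versions]
        for (int i = 0; i < numApis; i++) {
            short apiKey = buf.getShort();      // api_key => INT16
            short minVersion = buf.getShort();  // min_version => INT16
            short maxVersion = buf.getShort();  // max_version => INT16
            System.out.printf("api %d: versions %d..%d%n", apiKey, minVersion, maxVersion);
        }
    }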
-<p><pre>ApiVersions Response (Version: 1) => error_code [api_versions] throttle_time_ms 
+<p><pre class="line-numbers"><code class="language-java">ApiVersions Response (Version: 1) => error_code [api_versions] throttle_time_ms 
   error_code => INT16
   api_versions => api_key min_version max_version 
     api_key => INT16
     min_version => INT16
     max_version => INT16
   throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2738,7 +2738,7 @@
 <h5><a name="The_Messages_CreateTopics">CreateTopics API (Key: 19):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>CreateTopics Request (Version: 0) => [create_topic_requests] timeout 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Request (Version: 0) => [create_topic_requests] timeout 
   create_topic_requests => topic num_partitions replication_factor [replica_assignment] [config_entries] 
     topic => STRING
     num_partitions => INT32
@@ -2750,7 +2750,7 @@
       config_name => STRING
       config_value => NULLABLE_STRING
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2777,7 +2777,7 @@
 <td>timeout</td><td>The time in ms to wait for a topic to be completely created on the controller node. Values <= 0 will trigger topic creation and return immediately</td></tr>
 </table>
 </p>
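In practice this request rarely needs to be hand-built; the AdminClient that ships with 0.11 issues it for you. A minimal sketch, with the broker address, topic name, and sizing as placeholder assumptions:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    try (AdminClient admin = AdminClient.create(props)) {
        // name, num_partitions, replication_factor map to the request fields above
        NewTopic topic = new NewTopic("my-topic", 3, (short) 1);
        admin.createTopics(Collections.singleton(topic)).all().get(); // blocks until the controller responds
    }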
-<p><pre>CreateTopics Request (Version: 1) => [create_topic_requests] timeout validate_only 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Request (Version: 1) => [create_topic_requests] timeout validate_only 
   create_topic_requests => topic num_partitions replication_factor [replica_assignment] [config_entries] 
     topic => STRING
     num_partitions => INT32
@@ -2790,7 +2790,7 @@
       config_value => NULLABLE_STRING
   timeout => INT32
   validate_only => BOOLEAN
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2819,7 +2819,7 @@
 <td>validate_only</td><td>If this is true, the request will be validated, but the topic won't be created.</td></tr>
 </table>
 </p>
-<p><pre>CreateTopics Request (Version: 2) => [create_topic_requests] timeout validate_only 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Request (Version: 2) => [create_topic_requests] timeout validate_only 
   create_topic_requests => topic num_partitions replication_factor [replica_assignment] [config_entries] 
     topic => STRING
     num_partitions => INT32
@@ -2832,7 +2832,7 @@
       config_value => NULLABLE_STRING
   timeout => INT32
   validate_only => BOOLEAN
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2862,11 +2862,11 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>CreateTopics Response (Version: 0) => [topic_errors] 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Response (Version: 0) => [topic_errors] 
   topic_errors => topic error_code 
     topic => STRING
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2877,12 +2877,12 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>CreateTopics Response (Version: 1) => [topic_errors] 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Response (Version: 1) => [topic_errors] 
   topic_errors => topic error_code error_message 
     topic => STRING
     error_code => INT16
     error_message => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2895,13 +2895,13 @@
 <td>error_message</td><td></td></tr>
 </table>
 </p>
-<p><pre>CreateTopics Response (Version: 2) => throttle_time_ms [topic_errors] 
+<p><pre class="line-numbers"><code class="language-java">CreateTopics Response (Version: 2) => throttle_time_ms [topic_errors] 
   throttle_time_ms => INT32
   topic_errors => topic error_code error_message 
     topic => STRING
     error_code => INT16
     error_message => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2919,10 +2919,10 @@
 <h5><a name="The_Messages_DeleteTopics">DeleteTopics API (Key: 20):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DeleteTopics Request (Version: 0) => [topics] timeout 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Request (Version: 0) => [topics] timeout 
   topics => STRING
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2931,10 +2931,10 @@
 <td>timeout</td><td>The time in ms to wait for a topic to be completely deleted on the controller node. Values <= 0 will trigger topic deletion and return immediately</td></tr>
 </table>
 </p>
-<p><pre>DeleteTopics Request (Version: 1) => [topics] timeout 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Request (Version: 1) => [topics] timeout 
   topics => STRING
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2944,11 +2944,11 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DeleteTopics Response (Version: 0) => [topic_error_codes] 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Response (Version: 0) => [topic_error_codes] 
   topic_error_codes => topic error_code 
     topic => STRING
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2959,12 +2959,12 @@
 <td>error_code</td><td></td></tr>
 </table>
 </p>
-<p><pre>DeleteTopics Response (Version: 1) => throttle_time_ms [topic_error_codes] 
+<p><pre class="line-numbers"><code class="language-java">DeleteTopics Response (Version: 1) => throttle_time_ms [topic_error_codes] 
   throttle_time_ms => INT32
   topic_error_codes => topic error_code 
     topic => STRING
     error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -2980,14 +2980,14 @@
 <h5><a name="The_Messages_DeleteRecords">DeleteRecords API (Key: 21):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DeleteRecords Request (Version: 0) => [topics] timeout 
+<p><pre class="line-numbers"><code class="language-java">DeleteRecords Request (Version: 0) => [topics] timeout 
   topics => topic [partitions] 
     topic => STRING
     partitions => partition offset 
       partition => INT32
       offset => INT64
   timeout => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3005,7 +3005,7 @@
 </table>
 </p>
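As an illustration of the field listing above, the body of a single-topic, single-partition version 0 request could be hand-encoded like this (a hypothetical sketch; STRING is an INT16 length plus UTF-8 bytes, and arrays carry an INT32 count prefix):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    byte[] topic = "my-topic".getBytes(StandardCharsets.UTF_8);
    ByteBuffer buf = ByteBuffer.allocate(26 + topic.length);
    buf.putInt(1);                       // [topics]: one element
    buf.putShort((short) topic.length);  //   topic => STRING (length-prefixed)
    buf.put(topic);
    buf.putInt(1);                       //   [partitions]: one element
    buf.putInt(0);                       //     partition => INT32
    buf.putLong(42L);                    //     offset => INT64 (delete records below this)
    buf.putInt(30000);                   // timeout => INT32, in ms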
 <b>Responses:</b><br>
-<p><pre>DeleteRecords Response (Version: 0) => throttle_time_ms [topics] 
+<p><pre class="line-numbers"><code class="language-java">DeleteRecords Response (Version: 0) => throttle_time_ms [topics] 
   throttle_time_ms => INT32
   topics => topic [partitions] 
     topic => STRING
@@ -3013,7 +3013,7 @@
       partition => INT32
       low_watermark => INT64
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3035,10 +3035,10 @@
 <h5><a name="The_Messages_InitProducerId">InitProducerId API (Key: 22):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>InitProducerId Request (Version: 0) => transactional_id transaction_timeout_ms 
+<p><pre class="line-numbers"><code class="language-java">InitProducerId Request (Version: 0) => transactional_id transaction_timeout_ms 
   transactional_id => NULLABLE_STRING
   transaction_timeout_ms => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3048,12 +3048,12 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>InitProducerId Response (Version: 0) => throttle_time_ms error_code producer_id producer_epoch 
+<p><pre class="line-numbers"><code class="language-java">InitProducerId Response (Version: 0) => throttle_time_ms error_code producer_id producer_epoch 
   throttle_time_ms => INT32
   error_code => INT16
   producer_id => INT64
   producer_epoch => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3069,13 +3069,13 @@
 <h5><a name="The_Messages_OffsetForLeaderEpoch">OffsetForLeaderEpoch API (Key: 23):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>OffsetForLeaderEpoch Request (Version: 0) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetForLeaderEpoch Request (Version: 0) => [topics] 
   topics => topic [partitions] 
     topic => STRING
     partitions => partition_id leader_epoch 
       partition_id => INT32
       leader_epoch => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3091,14 +3091,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>OffsetForLeaderEpoch Response (Version: 0) => [topics] 
+<p><pre class="line-numbers"><code class="language-java">OffsetForLeaderEpoch Response (Version: 0) => [topics] 
   topics => topic [partitions] 
     topic => STRING
     partitions => error_code partition_id end_offset 
       error_code => INT16
       partition_id => INT32
       end_offset => INT64
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3118,14 +3118,14 @@
 <h5><a name="The_Messages_AddPartitionsToTxn">AddPartitionsToTxn API (Key: 24):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>AddPartitionsToTxn Request (Version: 0) => transactional_id producer_id producer_epoch [topics] 
+<p><pre class="line-numbers"><code class="language-java">AddPartitionsToTxn Request (Version: 0) => transactional_id producer_id producer_epoch [topics] 
   transactional_id => STRING
   producer_id => INT64
   producer_epoch => INT16
   topics => topic [partitions] 
     topic => STRING
     partitions => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3143,14 +3143,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>AddPartitionsToTxn Response (Version: 0) => throttle_time_ms [errors] 
+<p><pre class="line-numbers"><code class="language-java">AddPartitionsToTxn Response (Version: 0) => throttle_time_ms [errors] 
   throttle_time_ms => INT32
   errors => topic [partition_errors] 
     topic => STRING
     partition_errors => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3170,12 +3170,12 @@
 <h5><a name="The_Messages_AddOffsetsToTxn">AddOffsetsToTxn API (Key: 25):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>AddOffsetsToTxn Request (Version: 0) => transactional_id producer_id producer_epoch consumer_group_id 
+<p><pre class="line-numbers"><code class="language-java">AddOffsetsToTxn Request (Version: 0) => transactional_id producer_id producer_epoch consumer_group_id 
   transactional_id => STRING
   producer_id => INT64
   producer_epoch => INT16
   consumer_group_id => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3189,10 +3189,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>AddOffsetsToTxn Response (Version: 0) => throttle_time_ms error_code 
+<p><pre class="line-numbers"><code class="language-java">AddOffsetsToTxn Response (Version: 0) => throttle_time_ms error_code 
   throttle_time_ms => INT32
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3204,12 +3204,12 @@
 <h5><a name="The_Messages_EndTxn">EndTxn API (Key: 26):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>EndTxn Request (Version: 0) => transactional_id producer_id producer_epoch transaction_result 
+<p><pre class="line-numbers"><code class="language-java">EndTxn Request (Version: 0) => transactional_id producer_id producer_epoch transaction_result 
   transactional_id => STRING
   producer_id => INT64
   producer_epoch => INT16
   transaction_result => BOOLEAN
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3223,10 +3223,10 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>EndTxn Response (Version: 0) => throttle_time_ms error_code 
+<p><pre class="line-numbers"><code class="language-java">EndTxn Response (Version: 0) => throttle_time_ms error_code 
   throttle_time_ms => INT32
   error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3238,7 +3238,7 @@
 <h5><a name="The_Messages_WriteTxnMarkers">WriteTxnMarkers API (Key: 27):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>WriteTxnMarkers Request (Version: 0) => [transaction_markers] 
+<p><pre class="line-numbers"><code class="language-java">WriteTxnMarkers Request (Version: 0) => [transaction_markers] 
   transaction_markers => producer_id producer_epoch transaction_result [topics] coordinator_epoch 
     producer_id => INT64
     producer_epoch => INT16
@@ -3247,7 +3247,7 @@
       topic => STRING
       partitions => INT32
     coordinator_epoch => INT32
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3269,7 +3269,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>WriteTxnMarkers Response (Version: 0) => [transaction_markers] 
+<p><pre class="line-numbers"><code class="language-java">WriteTxnMarkers Response (Version: 0) => [transaction_markers] 
   transaction_markers => producer_id [topics] 
     producer_id => INT64
     topics => topic [partitions] 
@@ -3277,7 +3277,7 @@
       partitions => partition error_code 
         partition => INT32
         error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3299,7 +3299,7 @@
 <h5><a name="The_Messages_TxnOffsetCommit">TxnOffsetCommit API (Key: 28):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>TxnOffsetCommit Request (Version: 0) => transactional_id consumer_group_id producer_id producer_epoch [topics] 
+<p><pre class="line-numbers"><code class="language-java">TxnOffsetCommit Request (Version: 0) => transactional_id consumer_group_id producer_id producer_epoch [topics] 
   transactional_id => STRING
   consumer_group_id => STRING
   producer_id => INT64
@@ -3310,7 +3310,7 @@
       partition => INT32
       offset => INT64
       metadata => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3336,14 +3336,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>TxnOffsetCommit Response (Version: 0) => throttle_time_ms [topics] 
+<p><pre class="line-numbers"><code class="language-java">TxnOffsetCommit Response (Version: 0) => throttle_time_ms [topics] 
   throttle_time_ms => INT32
   topics => topic [partitions] 
     topic => STRING
     partitions => partition error_code 
       partition => INT32
       error_code => INT16
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3363,14 +3363,14 @@
 <h5><a name="The_Messages_DescribeAcls">DescribeAcls API (Key: 29):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DescribeAcls Request (Version: 0) => resource_type resource_name principal host operation permission_type 
+<p><pre class="line-numbers"><code class="language-java">DescribeAcls Request (Version: 0) => resource_type resource_name principal host operation permission_type 
   resource_type => INT8
   resource_name => NULLABLE_STRING
   principal => NULLABLE_STRING
   host => NULLABLE_STRING
   operation => INT8
   permission_type => INT8
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3388,7 +3388,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DescribeAcls Response (Version: 0) => throttle_time_ms error_code error_message [resources] 
+<p><pre class="line-numbers"><code class="language-java">DescribeAcls Response (Version: 0) => throttle_time_ms error_code error_message [resources] 
   throttle_time_ms => INT32
   error_code => INT16
   error_message => NULLABLE_STRING
@@ -3400,7 +3400,7 @@
       host => STRING
       operation => INT8
       permission_type => INT8
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3430,7 +3430,7 @@
 <h5><a name="The_Messages_CreateAcls">CreateAcls API (Key: 30):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>CreateAcls Request (Version: 0) => [creations] 
+<p><pre class="line-numbers"><code class="language-java">CreateAcls Request (Version: 0) => [creations] 
   creations => resource_type resource_name principal host operation permission_type 
     resource_type => INT8
     resource_name => STRING
@@ -3438,7 +3438,7 @@
     host => STRING
     operation => INT8
     permission_type => INT8
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3458,12 +3458,12 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>CreateAcls Response (Version: 0) => throttle_time_ms [creation_responses] 
+<p><pre class="line-numbers"><code class="language-java">CreateAcls Response (Version: 0) => throttle_time_ms [creation_responses] 
   throttle_time_ms => INT32
   creation_responses => error_code error_message 
     error_code => INT16
     error_message => NULLABLE_STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3479,7 +3479,7 @@
 <h5><a name="The_Messages_DeleteAcls">DeleteAcls API (Key: 31):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DeleteAcls Request (Version: 0) => [filters] 
+<p><pre class="line-numbers"><code class="language-java">DeleteAcls Request (Version: 0) => [filters] 
   filters => resource_type resource_name principal host operation permission_type 
     resource_type => INT8
     resource_name => NULLABLE_STRING
@@ -3487,7 +3487,7 @@
     host => NULLABLE_STRING
     operation => INT8
     permission_type => INT8
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3507,7 +3507,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DeleteAcls Response (Version: 0) => throttle_time_ms [filter_responses] 
+<p><pre class="line-numbers"><code class="language-java">DeleteAcls Response (Version: 0) => throttle_time_ms [filter_responses] 
   throttle_time_ms => INT32
   filter_responses => error_code error_message [matching_acls] 
     error_code => INT16
@@ -3521,7 +3521,7 @@
       host => STRING
       operation => INT8
       permission_type => INT8
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3555,12 +3555,12 @@
 <h5><a name="The_Messages_DescribeConfigs">DescribeConfigs API (Key: 32):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>DescribeConfigs Request (Version: 0) => [resources] 
+<p><pre class="line-numbers"><code class="language-java">DescribeConfigs Request (Version: 0) => [resources] 
   resources => resource_type resource_name [config_names] 
     resource_type => INT8
     resource_name => STRING
     config_names => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3574,7 +3574,7 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>DescribeConfigs Response (Version: 0) => throttle_time_ms [resources] 
+<p><pre class="line-numbers"><code class="language-java">DescribeConfigs Response (Version: 0) => throttle_time_ms [resources] 
   throttle_time_ms => INT32
   resources => error_code error_message resource_type resource_name [config_entries] 
     error_code => INT16
@@ -3587,7 +3587,7 @@
       read_only => BOOLEAN
       is_default => BOOLEAN
       is_sensitive => BOOLEAN
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3619,7 +3619,7 @@
 <h5><a name="The_Messages_AlterConfigs">AlterConfigs API (Key: 33):</a></h5>
 
 <b>Requests:</b><br>
-<p><pre>AlterConfigs Request (Version: 0) => [resources] validate_only 
+<p><pre class="line-numbers"><code class="language-java">AlterConfigs Request (Version: 0) => [resources] validate_only 
   resources => resource_type resource_name [config_entries] 
     resource_type => INT8
     resource_name => STRING
@@ -3627,7 +3627,7 @@
       config_name => STRING
       config_value => NULLABLE_STRING
   validate_only => BOOLEAN
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
@@ -3647,14 +3647,14 @@
 </table>
 </p>
 <b>Responses:</b><br>
-<p><pre>AlterConfigs Response (Version: 0) => throttle_time_ms [resources] 
+<p><pre class="line-numbers"><code class="language-java">AlterConfigs Response (Version: 0) => throttle_time_ms [resources] 
   throttle_time_ms => INT32
   resources => error_code error_message resource_type resource_name 
     error_code => INT16
     error_message => NULLABLE_STRING
     resource_type => INT8
     resource_name => STRING
-</pre><table class="data-table"><tbody>
+</code></pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
diff --git a/0110/implementation.html b/0110/implementation.html
index af234ea..787cadb 100644
--- a/0110/implementation.html
+++ b/0110/implementation.html
@@ -32,8 +32,7 @@
 
     <h4><a id="recordbatch" href="#recordbatch">5.3.1 Record Batch</a></h4>
 	<p> The following is the on-disk format of a RecordBatch. </p>
-	<p><pre class="brush: java;">
-		baseOffset: int64
+	<p><pre class="line-numbers"><code class="language-java">		baseOffset: int64
 		batchLength: int32
 		partitionLeaderEpoch: int32
 		magic: int8 (current magic value is 2)
@@ -55,7 +54,7 @@
 		producerEpoch: int16
 		baseSequence: int32
 		records: [Record]
-	</pre></p>
+	</code></pre></p>
     <p> Note that when compression is enabled, the compressed record data is serialized directly following the record count. </p>
 
     <p>The CRC covers the data from the attributes to the end of the batch (i.e. all the bytes that follow the CRC). It is located after the magic byte, which
@@ -72,16 +71,14 @@
     <h5><a id="controlbatch" href="#controlbatch">5.3.1.1 Control Batches</a></h5>
     <p>A control batch contains a single record called the control record. Control records should not be passed on to applications. Instead, they are used by consumers to filter out aborted transactional messages.</p>
     <p> The key of a control record conforms to the following schema: </p>
-    <p><pre class="brush: java">
-       version: int16 (current version is 0)
+    <p><pre class="line-numbers"><code class="language-java">       version: int16 (current version is 0)
        type: int16 (0 indicates an abort marker, 1 indicates a commit)
-    </pre></p>
+    </code></pre></p>
     <p>The schema for the value of a control record is dependent on the type. The value is opaque to clients.</p>
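A minimal sketch of reading that key with java.nio.ByteBuffer (the helper name is hypothetical):

    static boolean isAbortMarker(java.nio.ByteBuffer key) {
        short version = key.getShort(); // version: int16 (current version is 0)
        short type = key.getShort();    // type: int16 (0 = abort, 1 = commit)
        return type == 0;
    }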
 
 	<h4><a id="record" href="#record">5.3.2 Record</a></h4>
 	<p>Record-level headers were introduced in Kafka 0.11.0. The on-disk format of a record with Headers is delineated below. </p>
-	<p><pre class="brush: java;">
-		length: varint
+	<p><pre class="line-numbers"><code class="language-java">		length: varint
 		attributes: int8
 			bit 0~7: unused
 		timestampDelta: varint
@@ -91,14 +88,13 @@
 		valueLen: varint
 		value: byte[]
 		Headers => [Header]
-	</pre></p>
+	</code></pre></p>
 	<h5><a id="recordheader" href="#recordheader">5.4.2.1 Record Header</a></h5>
-	<p><pre class="brush: java;">
-		headerKeyLength: varint
+	<p><pre class="line-numbers"><code class="language-java">		headerKeyLength: varint
 		headerKey: String
 		headerValueLength: varint
 		Value: byte[]
-	</pre></p>
+	</code></pre></p>
     <p>We use the same varint encoding as Protobuf. More information on the latter can be found <a href="https://developers.google.com/protocol-buffers/docs/encoding#varints">here</a>. The count of headers in a record
     is also encoded as a varint.</p>
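For reference, a minimal sketch of that encoding, zigzag followed by 7-bit groups as in Protobuf's signed varints (the method name is illustrative):

    static void writeVarint(int value, java.io.ByteArrayOutputStream out) {
        int v = (value << 1) ^ (value >> 31);  // zigzag: small negative values become small unsigned ones
        while ((v & 0xFFFFFF80) != 0) {
            out.write((v & 0x7F) | 0x80);      // low 7 bits, high bit set = more bytes follow
            v >>>= 7;
        }
        out.write(v);                          // final byte, high bit clear
    }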
 
@@ -130,25 +126,23 @@
 
     <p> The following is the format of the results sent to the consumer.
 
-    <pre class="brush: text;">
-    MessageSetSend (fetch result)
+    <pre class="line-numbers"><code class="language-text">    MessageSetSend (fetch result)
 
     total length     : 4 bytes
     error code       : 2 bytes
     message 1        : x bytes
     ...
     message n        : x bytes
-    </pre>
+    </code></pre>
 
-    <pre class="brush: text;">
-    MultiMessageSetSend (multiFetch result)
+    <pre class="line-numbers"><code class="language-text">    MultiMessageSetSend (multiFetch result)
 
     total length       : 4 bytes
     error code         : 2 bytes
     messageSetSend 1
     ...
     messageSetSend n
-    </pre>
+    </code></pre>
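A sketch of consuming that framing, assuming (as the layout suggests) that the length field counts the bytes that follow it:

    static byte[] readMessageSetSend(java.io.DataInputStream in) throws java.io.IOException {
        int totalLength = in.readInt();               // total length : 4 bytes
        short errorCode = in.readShort();             // error code   : 2 bytes
        byte[] messages = new byte[totalLength - 2];  // the concatenated messages
        in.readFully(messages);
        return messages;
    }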
     <h4><a id="impl_deletes" href="#impl_deletes">Deletes</a></h4>
     <p>
     Data is deleted one log segment at a time. The log manager allows pluggable delete policies to choose which files are eligible for deletion. The current policy deletes any log with a modification time of more than <i>N</i> days ago, though a policy which retained the last <i>N</i> GB could also be useful. To avoid locking reads while still allowing deletes that modify the segment list we use a copy-on-write style segment list implementation that provides consistent views to allow a b [...]
@@ -202,9 +196,8 @@
     </p>
 
     <h4><a id="impl_zkbroker" href="#impl_zkbroker">Broker Node Registry</a></h4>
-    <pre class="brush: json;">
-    /brokers/ids/[0...N] --> {"jmx_port":...,"timestamp":...,"endpoints":[...],"host":...,"version":...,"port":...} (ephemeral node)
-    </pre>
+    <pre class="line-numbers"><code class="language-json">    /brokers/ids/[0...N] --> {"jmx_port":...,"timestamp":...,"endpoints":[...],"host":...,"version":...,"port":...} (ephemeral node)
+    </code></pre>
     <p>
     This is a list of all present broker nodes, each of which provides a unique logical broker id which identifies it to consumers (which must be given as part of its configuration). On startup, a broker node registers itself by creating a znode with the logical broker id under /brokers/ids. The purpose of the logical broker id is to allow a broker to be moved to a different physical machine without affecting consumers. An attempt to register a broker id that is already in use (say becau [...]
     </p>
@@ -212,9 +205,8 @@
     Since the broker registers itself in ZooKeeper using ephemeral znodes, this registration is dynamic and will disappear if the broker is shut down or dies (thus notifying consumers it is no longer available).
     </p>
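The ephemeral behavior comes directly from the ZooKeeper client API; a sketch (connection string and JSON payload are placeholders):

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
    byte[] data = "{\"host\":\"broker1\",\"port\":9092}".getBytes(StandardCharsets.UTF_8);
    // CreateMode.EPHEMERAL: the znode is removed automatically when this session ends
    zk.create("/brokers/ids/0", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);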
     <h4><a id="impl_zktopic" href="#impl_zktopic">Broker Topic Registry</a></h4>
-    <pre class="brush: json;">
-    /brokers/topics/[topic]/partitions/[0...N]/state --> {"controller_epoch":...,"leader":...,"version":...,"leader_epoch":...,"isr":[...]} (ephemeral node)
-    </pre>
+    <pre class="line-numbers"><code class="language-json">    /brokers/topics/[topic]/partitions/[0...N]/state --> {"controller_epoch":...,"leader":...,"version":...,"leader_epoch":...,"isr":[...]} (ephemeral node)
+    </code></pre>
 
     <p>
     Each broker registers itself under the topics it maintains and stores the number of partitions for that topic.
@@ -237,9 +229,8 @@
     <h4><a id="impl_zkconsumerid" href="#impl_zkconsumerid">Consumer Id Registry</a></h4>
     <p>
     In addition to the group_id which is shared by all consumers in a group, each consumer is given a transient, unique consumer_id (of the form hostname:uuid) for identification purposes. Consumer ids are registered in the following directory.
-    <pre class="brush: json;">
-    /consumers/[group_id]/ids/[consumer_id] --> {"version":...,"subscription":{...:...},"pattern":...,"timestamp":...} (ephemeral node)
-    </pre>
+    <pre class="line-numbers"><code class="language-json">    /consumers/[group_id]/ids/[consumer_id] --> {"version":...,"subscription":{...:...},"pattern":...,"timestamp":...} (ephemeral node)
+    </code></pre>
     Each of the consumers in the group registers under its group and creates a znode with its consumer_id. The value of the znode contains a map of &lt;topic, #streams&gt;. This id is simply used to identify each of the consumers which is currently active within a group. This is an ephemeral node so it will disappear if the consumer process dies.
     </p>
 
@@ -247,9 +238,8 @@
     <p>
     Consumers track the maximum offset they have consumed in each partition. This value is stored in a ZooKeeper directory if <code>offsets.storage=zookeeper</code>.
     </p>
-    <pre class="brush: json;">
-    /consumers/[group_id]/offsets/[topic]/[partition_id] --> offset_counter_value (persistent node)
-    </pre>
+    <pre class="line-numbers"><code class="language-json">    /consumers/[group_id]/offsets/[topic]/[partition_id] --> offset_counter_value (persistent node)
+    </code></pre>
 
     <h4><a id="impl_zkowner" href="#impl_zkowner">Partition Owner registry</a></h4>
 
@@ -257,9 +247,8 @@
     Each broker partition is consumed by a single consumer within a given consumer group. The consumer must establish its ownership of a given partition before any consumption can begin. To establish its ownership, a consumer writes its own id in an ephemeral node under the particular broker partition it is claiming.
     </p>
 
-    <pre class="brush: json;">
-    /consumers/[group_id]/owners/[topic]/[partition_id] --> consumer_node_id (ephemeral node)
-    </pre>
+    <pre class="line-numbers"><code class="language-json">    /consumers/[group_id]/owners/[topic]/[partition_id] --> consumer_node_id (ephemeral node)
+    </code></pre>
 
     <h4><a id="impl_clusterid" href="#impl_clusterid">Cluster Id</a></h4>
 
@@ -299,8 +288,7 @@
     <p>
     Each consumer does the following during rebalancing:
     </p>
-    <pre class="brush: text;">
-    1. For each topic T that C<sub>i</sub> subscribes to
+    <pre class="line-numbers"><code class="language-text">    1. For each topic T that C<sub>i</sub> subscribes to
     2.   let P<sub>T</sub> be all partitions producing topic T
     3.   let C<sub>G</sub> be all consumers in the same group as C<sub>i</sub> that consume topic T
     4.   sort P<sub>T</sub> (so partitions on the same broker are clustered together)
@@ -310,7 +298,7 @@
     8.   remove current entries owned by C<sub>i</sub> from the partition owner registry
     9.   add newly assigned partitions to the partition owner registry
             (we may need to re-try this until the original partition owner releases its ownership)
-    </pre>
+    </code></pre>
     <p>
     When rebalancing is triggered at one consumer, rebalancing should be triggered in other consumers within the same group at about the same time.
     </p>
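The elided middle steps give each consumer a contiguous block of the sorted partition list; one deterministic split consistent with that, as a hypothetical sketch:

    static java.util.List<Integer> rangeFor(int consumerIndex, java.util.List<Integer> sortedPartitions, int numConsumers) {
        int n = (int) Math.ceil(sortedPartitions.size() / (double) numConsumers); // partitions per consumer
        int from = consumerIndex * n;                                             // this consumer's block
        int to = Math.min(from + n, sortedPartitions.size());
        return from >= to ? java.util.Collections.<Integer>emptyList()
                          : sortedPartitions.subList(from, to);
    }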
diff --git a/0110/introduction.html b/0110/introduction.html
index e20ae71..8e68147 100644
--- a/0110/introduction.html
+++ b/0110/introduction.html
@@ -48,7 +48,7 @@
       <img src="/{{version}}/images/kafka-apis.png" style="float: right; width: 50%;">
       </div>
   <p>
-  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older version. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
+  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language-agnostic <a href="/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
 
   <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
   <p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.</p>
@@ -115,7 +115,7 @@
   Kafka only provides a total order over records <i>within</i> a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
   </p>
   <h4><a id="intro_multi-tenancy" href="#intro_multi-tenancy">Multi-tenancy</a></h4>
-  <p>You can deploy Kafka as a multi-tenant solution. Multi-tenancy is enabled by configuring which topics can produce or consume data. There is also operations support for quotas.  Administrators can define and enforce quotas on requests to control the broker resources that are used by clients.  For more information, see the <a href="https://kafka.apache.org/documentation/#security">security documentation</a>. </p>
+  <p>You can deploy Kafka as a multi-tenant solution. Multi-tenancy is enabled by configuring which topics can produce or consume data. There is also operations support for quotas.  Administrators can define and enforce quotas on requests to control the broker resources that are used by clients.  For more information, see the <a href="/documentation/#security">security documentation</a>. </p>
   <h4><a id="intro_guarantees" href="#intro_guarantees">Guarantees</a></h4>
   <p>
   At a high-level Kafka gives the following guarantees:
@@ -166,7 +166,7 @@
   As a result of taking storage seriously and allowing the clients to control their read position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance, low-latency commit log storage, replication, and propagation.
   </p>
   <p>
-  For details about the Kafka's commit log storage and replication design, please read <a href="https://kafka.apache.org/documentation/#design">this</a> page.
+  For details about Kafka's commit log storage and replication design, please read <a href="/documentation/#design">this</a> page.
   </p>
   <h4>Kafka for Stream Processing</h4>
   <p>
diff --git a/0110/javadoc/org/apache/kafka/common/MetricName.html b/0110/javadoc/org/apache/kafka/common/MetricName.html
index 935205b..303b81f 100644
--- a/0110/javadoc/org/apache/kafka/common/MetricName.html
+++ b/0110/javadoc/org/apache/kafka/common/MetricName.html
@@ -112,7 +112,7 @@ extends java.lang.Object</pre>
  <p>
 
  Usage looks something like this:
- <pre><code>// set up metrics:
+ <pre class="line-numbers"><code>// set up metrics:
 
  Map&lt;String, String&gt; metricTags = new LinkedHashMap&lt;String, String&gt;();
  metricTags.put("client-id", "producer-1");
diff --git a/0110/javadoc/org/apache/kafka/streams/KafkaStreams.html b/0110/javadoc/org/apache/kafka/streams/KafkaStreams.html
index 52b9c17..f04c4f0 100644
--- a/0110/javadoc/org/apache/kafka/streams/KafkaStreams.html
+++ b/0110/javadoc/org/apache/kafka/streams/KafkaStreams.html
@@ -118,7 +118,7 @@ extends java.lang.Object</pre>
  that is used for reading input and writing output.
  <p>
  A simple example might look like this:
- <pre><code>Map&lt;String, Object&gt; props = new HashMap&lt;&gt;();
+ <pre class="line-numbers"><code>Map&lt;String, Object&gt; props = new HashMap&lt;&gt;();
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stream-processing-application");
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
  props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
diff --git a/0110/javadoc/org/apache/kafka/streams/StreamsConfig.html b/0110/javadoc/org/apache/kafka/streams/StreamsConfig.html
index f937be3..28abfa2 100644
--- a/0110/javadoc/org/apache/kafka/streams/StreamsConfig.html
+++ b/0110/javadoc/org/apache/kafka/streams/StreamsConfig.html
@@ -108,7 +108,7 @@ extends <a href="../../../../org/apache/kafka/common/config/AbstractConfig.html"
  <a href="../../../../org/apache/kafka/streams/StreamsConfig.html#consumerPrefix(java.lang.String)"><code>consumerPrefix(String)</code></a> and <a href="../../../../org/apache/kafka/streams/StreamsConfig.html#producerPrefix(java.lang.String)"><code>producerPrefix(String)</code></a>, respectively.
  <p>
  Example:
- <pre><code>// potentially wrong: sets "metadata.max.age.ms" to 1 minute for producer AND consumer
+ <pre class="line-numbers"><code>// potentially wrong: sets "metadata.max.age.ms" to 1 minute for producer AND consumer
  Properties streamsProperties = new Properties();
  streamsProperties.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, 60000);
  // or
diff --git a/0110/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html b/0110/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html
index 6400e9d..faedeb2 100644
--- a/0110/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html
+++ b/0110/javadoc/org/apache/kafka/streams/kstream/GlobalKTable.html
@@ -104,12 +104,12 @@ public interface <span class="strong">GlobalKTable&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/KeyValue.html" title="class in org.apache.kafka.streams"><code>KeyValue</code></a> of the left hand side <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html" title="interface in org.apache.kafka.streams.kstream"><code>KStream</code></a> to the key of the right hand side <code>GlobalKTable</code>.
  <p>
  A <code>GlobalKTable</code> is created via a <a href="../../../../../org/apache/kafka/streams/kstream/KStreamBuilder.html" title="class in org.apache.kafka.streams.kstream"><code>KStreamBuilder</code></a>. For example:
- <pre><code>builder.globalTable("topic-name", "queryable-store-name");
+ <pre class="line-numbers"><code>builder.globalTable("topic-name", "queryable-store-name");
  </code></pre>
  all <code>GlobalKTable</code>s are backed by a <a href="../../../../../org/apache/kafka/streams/state/ReadOnlyKeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>ReadOnlyKeyValueStore</code></a> and are therefore queryable via the
  interactive queries API.
  For example:
- <pre><code>final GlobalKTable globalOne = builder.globalTable("g1", "g1-store");
+ <pre class="line-numbers"><code>final GlobalKTable globalOne = builder.globalTable("g1", "g1-store");
  final GlobalKTable globalTwo = builder.globalTable("g2", "g2-store");
  ...
  final KafkaStreams streams = ...;
diff --git a/0110/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html b/0110/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html
index 28e2a96..2ff1200 100644
--- a/0110/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html
+++ b/0110/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html
@@ -106,7 +106,7 @@ extends <a href="../../../../../org/apache/kafka/streams/kstream/Windows.html" t
  <p>
  A <code>JoinWindows</code> instance defines a maximum time difference for a <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html#join(org.apache.kafka.streams.kstream.KStream,%20org.apache.kafka.streams.kstream.ValueJoiner,%20org.apache.kafka.streams.kstream.JoinWindows)"><code>join over two streams</code></a> on the same key.
  In SQL-style you would express this join as
- <pre><code>SELECT * FROM stream1, stream2
+ <pre class="line-numbers"><code>SELECT * FROM stream1, stream2
      WHERE
        stream1.key = stream2.key
        AND
diff --git a/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html b/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html
index 5ae6886..6174d25 100644
--- a/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html
+++ b/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html
@@ -364,7 +364,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -439,7 +439,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String queryableStoreName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
@@ -479,7 +479,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-word";
  long fromTime = ...;
@@ -567,7 +567,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String queryableStoreName = storeSupplier.name();
 ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-word";
@@ -608,7 +608,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
 String queryableStoreName = storeSupplier.name();
 ReadOnlySessionStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
@@ -674,7 +674,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
 String queryableStoreName = storeSupplier.name();
 ReadOnlySessionStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
@@ -757,7 +757,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long sumForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -812,7 +812,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
  String queryableStoreName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
@@ -861,7 +861,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
  long fromTime = ...;
@@ -968,7 +968,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
 String queryableStoreName = storeSupplier.name();
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
@@ -1019,7 +1019,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
 String queryableStoreName = storeSupplier.name();
 ReadOnlySessionStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
@@ -1114,7 +1114,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // compute sum
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // compute sum
 String queryableStoreName = storeSupplier.name();
 ReadOnlySessionStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
@@ -1171,7 +1171,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // some aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some aggregation on value type double
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long aggForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1274,7 +1274,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some aggregation on value type double
 String queryableStoreName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
@@ -1327,7 +1327,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  <p>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type double
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
  long fromTime = ...;
@@ -1442,7 +1442,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local windowed <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type Long
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type Long
  String queryableStoreName = storeSupplier.name();
  ReadOnlyWindowStore&lt;String,Long&gt; localWindowStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;windowStore());
  String key = "some-key";
@@ -1497,7 +1497,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type double
  String queryableStoreName = storeSupplier.name();
  ReadOnlySessionStore&lt;String, Long&gt; sessionStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
@@ -1594,7 +1594,7 @@ public interface <span class="strong">KGroupedStream&lt;K,V&gt;</span></pre>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/SessionStore.html" title="interface in org.apache.kafka.streams.state"><code>SessionStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>.
  Use <a href="../../../../../org/apache/kafka/streams/processor/StateStoreSupplier.html#name()"><code>StateStoreSupplier.name()</code></a> to get the store name:
- <pre><code>KafkaStreams streams = ... // some windowed aggregation on value type double
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // some windowed aggregation on value type double
  String queryableStoreName = storeSupplier.name();
  ReadOnlySessionStore&lt;String, Long&gt; sessionStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;sessionStore());
  String key = "some-key";
diff --git a/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html b/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html
index b3bbcfd..f317729 100644
--- a/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html
+++ b/0110/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html
@@ -243,7 +243,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -320,7 +320,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String queryableStoreName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
@@ -367,7 +367,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  value as-is.
  Thus, <code>reduce(Reducer, Reducer, String)</code> can be used to compute aggregate functions like sum.
  For sum, the adder and subtractor would work as follows:
- <pre><code>public class SumAdder implements Reducer&lt;Integer&gt; {
+ <pre class="line-numbers"><code>public class SumAdder implements Reducer&lt;Integer&gt; {
    public Integer apply(Integer currentAgg, Integer newValue) {
      return currentAgg + newValue;
    }
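
  The hunk context above ends before the second half of the example. A minimal sketch of the matching subtractor, mirroring the adder shown (the class name follows the javadoc's naming pattern and is illustrative only):

  public class SumSubtractor implements Reducer<Integer> {
    public Integer apply(Integer currentAgg, Integer oldValue) {
      // on update or removal, subtract the old value from the running sum
      return currentAgg - oldValue;
    }
  }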
@@ -388,7 +388,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -436,7 +436,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  value as-is.
  Thus, <code>reduce(Reducer, Reducer, String)</code> can be used to compute aggregate functions like sum.
  For sum, the adder and subtractor would work as follows:
- <pre><code>public class SumAdder implements Reducer&lt;Integer&gt; {
+ <pre class="line-numbers"><code>public class SumAdder implements Reducer&lt;Integer&gt; {
    public Integer apply(Integer currentAgg, Integer newValue) {
      return currentAgg + newValue;
    }
@@ -494,7 +494,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  value as-is.
  Thus, <code>reduce(Reducer, Reducer, String)</code> can be used to compute aggregate functions like sum.
  For sum, the adder and subtractor would work as follows:
- <pre><code>public class SumAdder implements Reducer&lt;Integer&gt; {
+ <pre class="line-numbers"><code>public class SumAdder implements Reducer&lt;Integer&gt; {
    public Integer apply(Integer currentAgg, Integer newValue) {
      return currentAgg + newValue;
    }
@@ -515,7 +515,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String queryableStoreName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
@@ -567,7 +567,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>// in this example, LongSerde.class must be set as default value serde in StreamsConfig
+ <pre class="line-numbers"><code>// in this example, LongSerde.class must be set as default value serde in StreamsConfig
  public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
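
  The hunk context ends before the adder and subtractor that complete this aggregate example. A sketch of what they would look like, assuming String keys and Long values as implied by the LongSerde comment above (names and types are assumptions for illustration):

  public class SumAdder implements Aggregator<String, Long, Long> {
    public Long apply(String key, Long newValue, Long aggregate) {
      return aggregate + newValue;
    }
  }

  public class SumSubtractor implements Aggregator<String, Long, Long> {
    public Long apply(String key, Long oldValue, Long aggregate) {
      return aggregate - oldValue;
    }
  }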
@@ -595,7 +595,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -646,7 +646,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>// in this example, LongSerde.class must be set as default value serde in StreamsConfig
+ <pre class="line-numbers"><code>// in this example, LongSerde.class must be set as default value serde in StreamsConfig
  public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
@@ -714,7 +714,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>public class SumInitializer implements Initializer&lt;Long&gt; {
+ <pre class="line-numbers"><code>public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
    }
@@ -741,7 +741,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
  Long countForWord = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -794,7 +794,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>public class SumInitializer implements Initializer&lt;Long&gt; {
+ <pre class="line-numbers"><code>public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
    }
@@ -862,7 +862,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  Thus, <code>aggregate(Initializer, Aggregator, Aggregator, String)</code> can be used to compute aggregate functions
  like sum.
  For sum, the initializer, adder, and subtractor would work as follows:
- <pre><code>public class SumInitializer implements Initializer&lt;Long&gt; {
+ <pre class="line-numbers"><code>public class SumInitializer implements Initializer&lt;Long&gt; {
    public Long apply() {
      return 0L;
    }
@@ -889,7 +889,7 @@ public interface <span class="strong">KGroupedTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // counting words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // counting words
  String queryableStoreName = storeSupplier.name();
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-word";
diff --git a/0110/javadoc/org/apache/kafka/streams/kstream/KStream.html b/0110/javadoc/org/apache/kafka/streams/kstream/KStream.html
index 6cf5a0c..f12dcda 100644
--- a/0110/javadoc/org/apache/kafka/streams/kstream/KStream.html
+++ b/0110/javadoc/org/apache/kafka/streams/kstream/KStream.html
@@ -535,7 +535,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  For example, you can use this transformation to set a key for a key-less input record <code>&lt;null,V&gt;</code> by
  extracting a key from the value within your <a href="../../../../../org/apache/kafka/streams/kstream/KeyValueMapper.html" title="interface in org.apache.kafka.streams.kstream"><code>KeyValueMapper</code></a>. The example below computes the new key as the
  length of the value string.
- <pre><code>KStream&lt;Byte[], String&gt; keyLessStream = builder.stream("key-less-topic");
+ <pre class="line-numbers"><code>KStream&lt;Byte[], String&gt; keyLessStream = builder.stream("key-less-topic");
  KStream&lt;Integer, String&gt; keyedStream = keyLessStream.selectKey(new KeyValueMapper&lt;Byte[], String, Integer&gt;() {
      public Integer apply(Byte[] key, String value) {
          return value.length();
@@ -567,7 +567,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  stateful record transformation).
  <p>
  The example below normalizes the String key to upper-case letters and counts the number of tokens in the value string.
- <pre><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
  KStream&lt;Integer, String&gt; outputStream = inputStream.map(new KeyValueMapper&lt;String, String, KeyValue&lt;String, Integer&gt;&gt;() {
      public KeyValue&lt;String, Integer&gt; apply(String key, String value) {
          return new KeyValue&lt;&gt;(key.toUpperCase(), value.split(" ").length);
@@ -602,7 +602,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html#transformValues(org.apache.kafka.streams.kstream.ValueTransformerSupplier,%20java.lang.String...)"><code>transformValues(ValueTransformerSupplier, String...)</code></a> for stateful value transformation).
  <p>
  The example below counts the number of tokens in the value string.
- <pre><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;String, String&gt; inputStream = builder.stream("topic");
  KStream&lt;String, Integer&gt; outputStream = inputStream.mapValues(new ValueMapper&lt;String, Integer&gt;() {
      public Integer apply(String value) {
          return value.split(" ").length;
@@ -638,7 +638,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <p>
  The example below splits input records <code>&lt;null:String&gt;</code> containing sentences as values into their words
  and emits a record <code>&lt;word:1&gt;</code> for each word.
- <pre><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
  KStream&lt;String, Integer&gt; outputStream = inputStream.flatMap(new KeyValueMapper&lt;byte[], String, Iterable&lt;KeyValue&lt;String, Integer&gt;&gt;&gt;() {
      public Iterable&lt;KeyValue&lt;String, Integer&gt;&gt; apply(byte[] key, String value) {
          String[] tokens = value.split(" ");
@@ -684,7 +684,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  for stateful value transformation).
  <p>
  The example below splits input records <code>&lt;null:String&gt;</code> containing sentences as values into their words.
- <pre><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
+ <pre class="line-numbers"><code>KStream&lt;byte[], String&gt; inputStream = builder.stream("topic");
  KStream&lt;byte[], String&gt; outputStream = inputStream.flatMapValues(new ValueMapper&lt;String, Iterable&lt;String&gt;&gt;() {
      public Iterable&lt;String&gt; apply(String value) {
          return Arrays.asList(value.split(" "));
@@ -1087,7 +1087,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  periodic actions can be performed.
  <p>
  In order to assign a state, the state must be created and registered beforehand:
- <pre><code>// create store
+ <pre class="line-numbers"><code>// create store
  StateStoreSupplier myStore = Stores.create("myTransformState")
      .withKeys(...)
      .withValues(...)
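
  The chained calls above elide the serdes and the terminal build step. A minimal sketch of a complete store definition against the 0.11 Stores API (the serdes, persistence choice, and the trailing addStateStore call are assumptions for illustration):

  StateStoreSupplier myStore = Stores.create("myTransformState")
      .withKeys(Serdes.String())   // assumed key serde
      .withValues(Serdes.Long())   // assumed value serde
      .persistent()
      .build();
  builder.addStateStore(myStore);  // assumes a KStreamBuilder named 'builder' in scope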
@@ -1104,7 +1104,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/processor/ProcessorContext.html" title="interface in org.apache.kafka.streams.processor"><code>ProcessorContext</code></a>.
  To trigger periodic actions via <a href="../../../../../org/apache/kafka/streams/kstream/Transformer.html#punctuate(long)"><code>punctuate()</code></a>, a schedule must be registered.
  The <a href="../../../../../org/apache/kafka/streams/kstream/Transformer.html" title="interface in org.apache.kafka.streams.kstream"><code>Transformer</code></a> must return a <a href="../../../../../org/apache/kafka/streams/KeyValue.html" title="class in org.apache.kafka.streams"><code>KeyValue</code></a> type in <a href="../../../../../org/apache/kafka/streams/kstream/Transformer.html#transform(K,%20V)"><code>transform()</code></a> and <a href="../../../../../org/apache/kafka/streams/ [...]
- <pre><code>new TransformerSupplier() {
+ <pre class="line-numbers"><code>new TransformerSupplier() {
      Transformer get() {
          return new Transformer() {
              private ProcessorContext context;
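
  The supplier above is cut off by the hunk boundary. A sketch of how such a Transformer might be completed against the 0.11 API (the key/value types, store name, and schedule interval are assumptions for illustration):

  new TransformerSupplier<String, String, KeyValue<String, Integer>>() {
      public Transformer<String, String, KeyValue<String, Integer>> get() {
          return new Transformer<String, String, KeyValue<String, Integer>>() {
              private ProcessorContext context;
              private KeyValueStore<String, Integer> state;
              @SuppressWarnings("unchecked")
              public void init(ProcessorContext context) {
                  this.context = context;
                  this.state = (KeyValueStore<String, Integer>) context.getStateStore("myTransformState");
                  context.schedule(1000); // have punctuate() invoked every 1000 ms of stream time
              }
              public KeyValue<String, Integer> transform(String key, String value) {
                  state.put(key, value.length()); // use the attached state store
                  return new KeyValue<>(key, value.length());
              }
              public KeyValue<String, Integer> punctuate(long timestamp) {
                  return null; // returning null emits no record on punctuation
              }
              public void close() {}
          };
      }
  }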
@@ -1163,7 +1163,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  periodic actions can be performed.
  <p>
  In order to assign a state, the state must be created and registered beforehand:
- <pre><code>// create store
+ <pre class="line-numbers"><code>// create store
  StateStoreSupplier myStore = Stores.create("myValueTransformState")
      .withKeys(...)
      .withValues(...)
@@ -1182,7 +1182,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  registered.
  In contrast to <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html#transform(org.apache.kafka.streams.kstream.TransformerSupplier,%20java.lang.String...)"><code>transform()</code></a>, no additional <a href="../../../../../org/apache/kafka/streams/KeyValue.html" title="class in org.apache.kafka.streams"><code>KeyValue</code></a>
  pairs should be emitted via <a href="../../../../../org/apache/kafka/streams/processor/ProcessorContext.html#forward(K,%20V)"><code>ProcessorContext.forward()</code></a>.
- <pre><code>new ValueTransformerSupplier() {
+ <pre class="line-numbers"><code>new ValueTransformerSupplier() {
      ValueTransformer get() {
          return new ValueTransformer() {
              private StateStore state;
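
  As above, the hunk cuts the supplier off. A sketch of a completed ValueTransformer against the 0.11 API (types and store name are assumptions); note that, per the paragraph above, it returns the new value directly instead of forwarding:

  new ValueTransformerSupplier<String, Integer>() {
      public ValueTransformer<String, Integer> get() {
          return new ValueTransformer<String, Integer>() {
              private KeyValueStore<String, Integer> state;
              @SuppressWarnings("unchecked")
              public void init(ProcessorContext context) {
                  state = (KeyValueStore<String, Integer>) context.getStateStore("myValueTransformState");
              }
              public Integer transform(String value) {
                  return value.length(); // new value is returned, nothing forwarded
              }
              public Integer punctuate(long timestamp) {
                  return null; // no periodic output
              }
              public void close() {}
          };
      }
  }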
@@ -1235,7 +1235,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  Note that this is a terminal operation that returns void.
  <p>
  In order to assign a state, the state must be created and registered beforehand:
- <pre><code>// create store
+ <pre class="line-numbers"><code>// create store
  StateStoreSupplier myStore = Stores.create("myProcessorState")
      .withKeys(...)
      .withValues(...)
@@ -1252,7 +1252,7 @@ public interface <span class="strong">KStream&lt;K,V&gt;</span></pre>
  <a href="../../../../../org/apache/kafka/streams/processor/ProcessorContext.html" title="interface in org.apache.kafka.streams.processor"><code>ProcessorContext</code></a>.
  To trigger periodic actions via <a href="../../../../../org/apache/kafka/streams/processor/Processor.html#punctuate(long)"><code>punctuate()</code></a>,
  a schedule must be registered.
- <pre><code>new ProcessorSupplier() {
+ <pre class="line-numbers"><code>new ProcessorSupplier() {
      Processor get() {
          return new Processor() {
              private StateStore state;
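
  A sketch of a completed Processor against the 0.11 API (types and store name are assumptions); as the paragraph above notes, process() is terminal and returns nothing:

  new ProcessorSupplier<String, String>() {
      public Processor<String, String> get() {
          return new Processor<String, String>() {
              private KeyValueStore<String, Integer> state;
              @SuppressWarnings("unchecked")
              public void init(ProcessorContext context) {
                  state = (KeyValueStore<String, Integer>) context.getStateStore("myProcessorState");
                  context.schedule(1000); // have punctuate() invoked every 1000 ms of stream time
              }
              public void process(String key, String value) {
                  state.put(key, value.length()); // side effects only; no return value
              }
              public void punctuate(long timestamp) {}
              public void close() {}
          };
      }
  }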
diff --git a/0110/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html b/0110/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html
index 10c46a2..2ff5648 100644
--- a/0110/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html
+++ b/0110/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html
@@ -819,7 +819,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -853,7 +853,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -910,7 +910,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -946,7 +946,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1007,7 +1007,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1043,7 +1043,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1080,7 +1080,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1118,7 +1118,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1182,7 +1182,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1221,7 +1221,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1260,7 +1260,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(storeName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1329,7 +1329,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -1362,7 +1362,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key);
@@ -1418,7 +1418,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key);
@@ -1452,7 +1452,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key);
@@ -1486,7 +1486,7 @@ extends <a href="../../../../../org/apache/kafka/streams/processor/TopologyBuild
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ...
+ <pre class="line-numbers"><code>KafkaStreams streams = ...
  ReadOnlyKeyValueStore&lt;String,Long&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;String, Long&gt;keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key);
diff --git a/0110/javadoc/org/apache/kafka/streams/kstream/KTable.html b/0110/javadoc/org/apache/kafka/streams/kstream/KTable.html
index 4fb7e6e..5f49afa 100644
--- a/0110/javadoc/org/apache/kafka/streams/kstream/KTable.html
+++ b/0110/javadoc/org/apache/kafka/streams/kstream/KTable.html
@@ -103,7 +103,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  Some <code>KTable</code>s have an internal state (a <a href="../../../../../org/apache/kafka/streams/state/ReadOnlyKeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>ReadOnlyKeyValueStore</code></a>) and are therefore queryable via the
  interactive queries API.
  For example:
- <pre><code>final KTable table = ...
+ <pre class="line-numbers"><code>final KTable table = ...
      ...
      final KafkaStreams streams = ...;
    streams.start();
@@ -623,7 +623,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // filtering words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore&lt;K,V&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;K, V&gt;keyValueStore());
  K key = "some-word";
  V valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -662,7 +662,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // filtering words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore&lt;K,V&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;K, V&gt;keyValueStore());
  K key = "some-word";
  V valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -722,7 +722,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // filtering words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore&lt;K,V&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;K, V&gt;keyValueStore());
  K key = "some-word";
  V valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -758,7 +758,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  <p>
  To query the local <a href="../../../../../org/apache/kafka/streams/state/KeyValueStore.html" title="interface in org.apache.kafka.streams.state"><code>KeyValueStore</code></a> it must be obtained via
  <a href="../../../../../org/apache/kafka/streams/KafkaStreams.html#store(java.lang.String,%20org.apache.kafka.streams.state.QueryableStoreType)"><code>KafkaStreams#store(...)</code></a>:
- <pre><code>KafkaStreams streams = ... // filtering words
+ <pre class="line-numbers"><code>KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore&lt;K,V&gt; localStore = streams.store(queryableStoreName, QueryableStoreTypes.&lt;K, V&gt;keyValueStore());
  K key = "some-word";
  V valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
@@ -788,7 +788,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  This is a stateless record-by-record operation.
  <p>
  The example below counts the number of tokens in the value string.
- <pre><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
+ <pre class="line-numbers"><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
  KTable&lt;String, Integer&gt; outputTable = inputTable.mapValues(new ValueMapper&lt;String, Integer&gt;() {
      public Integer apply(String value) {
          return value.split(" ").length;
@@ -825,7 +825,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  This is a stateless record-by-record operation.
  <p>
  The example below counts the number of tokens in the value string.
- <pre><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
+ <pre class="line-numbers"><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
  KTable&lt;String, Integer&gt; outputTable = inputTable.mapValues(new ValueMapper&lt;String, Integer&gt;() {
      public Integer apply(String value) {
          return value.split(" ").length;
@@ -871,7 +871,7 @@ public interface <span class="strong">KTable&lt;K,V&gt;</span></pre>
  This is a stateless record-by-record operation.
  <p>
  The example below counts the number of tokens in the value string.
- <pre><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
+ <pre class="line-numbers"><code>KTable&lt;String, String&gt; inputTable = builder.table("topic");
  KTable&lt;String, Integer&gt; outputTable = inputTable.mapValues(new ValueMapper&lt;String, Integer&gt;() {
      public Integer apply(String value) {
          return value.split(" ").length;
@@ -1152,7 +1152,7 @@ void&nbsp;foreach(<a href="../../../../../org/apache/kafka/streams/kstream/Forea
 <div class="block">Convert this changelog stream to a <a href="../../../../../org/apache/kafka/streams/kstream/KStream.html" title="interface in org.apache.kafka.streams.kstream"><code>KStream</code></a> using the given <a href="../../../../../org/apache/kafka/streams/kstream/KeyValueMapper.html" title="interface in org.apache.kafka.streams.kstream"><code>KeyValueMapper</code></a> to select the new key.
  <p>
  For example, you can compute the new key as the length of the value string.
- <pre><code>KTable&lt;String, String&gt; table = builder.table("topic");
+ <pre class="line-numbers"><code>KTable&lt;String, String&gt; table = builder.table("topic");
  KStream&lt;Integer, String&gt; keyedStream = table.toStream(new KeyValueMapper&lt;String, String, Integer&gt;() {
      public Integer apply(String key, String value) {
          return value.length();
diff --git a/0110/ops.html b/0110/ops.html
index 32d4bbd..9d98959 100644
--- a/0110/ops.html
+++ b/0110/ops.html
@@ -27,10 +27,9 @@
   You have the option of either adding topics manually or having them be created automatically when data is first published to a non-existent topic. If topics are auto-created then you may want to tune the default <a href="#topicconfigs">topic configurations</a> used for auto-created topics.
   <p>
   Topics are added and modified using the topic tool:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --create --topic my_topic_name
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --create --topic my_topic_name
         --partitions 20 --replication-factor 3 --config x=y
-  </pre>
+  </code></pre>
   The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3 then up to 2 servers can fail before you will lose access to your data. We recommend you use a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption.
   <p>
  The partition count controls how many logs the topic will be sharded into. There are several impacts of the partition count. First, each partition must fit entirely on a single server, so if you have 20 partitions the full data set (and read and write load) will be handled by no more than 20 servers (not counting replicas). Finally, the partition count impacts the maximum parallelism of your consumers. This is discussed in greater detail in the <a href="#intro_consumers">concepts section</a>.
@@ -44,26 +43,22 @@
   You can change the configuration or partitioning of a topic using the same topic tool.
   <p>
   To add partitions you can do
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter --topic my_topic_name
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter --topic my_topic_name
         --partitions 40
-  </pre>
+  </code></pre>
  Be aware that one use case for partitions is to semantically partition data, and adding partitions doesn't change the partitioning of existing data, so this may disturb consumers if they rely on that partitioning. That is, if data is partitioned by <code>hash(key) % number_of_partitions</code> then this partitioning will potentially be shuffled by adding partitions, but Kafka will not attempt to automatically redistribute data in any way.
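
  As a sketch of why this matters (illustrative only: Kafka's default partitioner actually applies murmur2 to the serialized key bytes rather than Java's hashCode):

  // simplified stand-in for the default key-to-partition placement rule
  static int partitionFor(String key, int numPartitions) {
      return (key.hashCode() & 0x7fffffff) % numPartitions; // mask keeps the result non-negative
  }
  // partitionFor("user-42", 20) and partitionFor("user-42", 40) can differ, so after
  // adding partitions, new records for an existing key may land in a different
  // partition than that key's earlier records.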
   <p>
   To add configs:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --zookeeper zk_host:port/chroot --entity-type topics --entity-name my_topic_name --alter --add-config x=y
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --zookeeper zk_host:port/chroot --entity-type topics --entity-name my_topic_name --alter --add-config x=y
+  </code></pre>
   To remove a config:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-configs.sh --zookeeper zk_host:port/chroot --entity-type topics --entity-name my_topic_name --alter --delete-config x
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-configs.sh --zookeeper zk_host:port/chroot --entity-type topics --entity-name my_topic_name --alter --delete-config x
+  </code></pre>
   And finally deleting a topic:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --delete --topic my_topic_name
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --delete --topic my_topic_name
+  </code></pre>
  The topic deletion option is disabled by default. To enable it, set the server config
-    <pre class="brush: text;">delete.topic.enable=true</pre>
+    <pre class="line-numbers"><code class="language-text">delete.topic.enable=true</code></pre>
   <p>
   Kafka does not currently support reducing the number of partitions for a topic.
   <p>
@@ -80,9 +75,8 @@
   </ol>
 
   Syncing the logs will happen automatically whenever the server is stopped other than by a hard kill, but the controlled leadership migration requires using a special setting:
-  <pre class="brush: text;">
-      controlled.shutdown.enable=true
-  </pre>
+  <pre class="line-numbers"><code class="language-text">      controlled.shutdown.enable=true
+  </code></pre>
   Note that controlled shutdown will only succeed if <i>all</i> the partitions hosted on the broker have replicas (i.e. the replication factor is greater than 1 <i>and</i> at least one of these replicas is alive). This is generally what you want since shutting down the last replica would make that topic partition unavailable.
 
   <h4><a id="basic_ops_leader_balancing" href="#basic_ops_leader_balancing">Balancing leadership</a></h4>
@@ -90,20 +84,18 @@
  Whenever a broker stops or crashes, leadership for that broker's partitions transfers to other replicas. This means that by default, when the broker is restarted, it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.
   <p>
   To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to either node 5 or 9 because it is earlier in the replica list. You can have the Kafka cluster try to restore leadership to the restored replicas by running the command:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
+  </code></pre>
 
  Since running this command can be tedious, you can also configure Kafka to do this automatically by setting the following configuration:
-  <pre class="brush: text;">
-      auto.leader.rebalance.enable=true
-  </pre>
+  <pre class="line-numbers"><code class="language-text">      auto.leader.rebalance.enable=true
+  </code></pre>
 
   <h4><a id="basic_ops_racks" href="#basic_ops_racks">Balancing Replicas Across Racks</a></h4>
   The rack awareness feature spreads replicas of the same partition across different racks. This extends the guarantees Kafka provides for broker-failure to cover rack-failure, limiting the risk of data loss should all the brokers on a rack fail at once. The feature can also be applied to other broker groupings such as availability zones in EC2.
   <p></p>
   You can specify that a broker belongs to a particular rack by adding a property to the broker config:
-  <pre class="brush: text;">   broker.rack=my-rack-id</pre>
+  <pre class="line-numbers"><code class="language-text">   broker.rack=my-rack-id</code></pre>
   When a topic is <a href="#basic_ops_add_topic">created</a>, <a href="#basic_ops_modify_topic">modified</a> or replicas are <a href="#basic_ops_cluster_expansion">redistributed</a>, the rack constraint will be honoured, ensuring replicas span as many racks as they can (a partition will span min(#racks, replication-factor) different racks).
   <p></p>
   The algorithm used to assign replicas to brokers ensures that the number of leaders per broker will be constant, regardless of how brokers are distributed across racks. This ensures balanced throughput.
@@ -123,11 +115,10 @@
   The source and destination clusters are completely independent entities: they can have different numbers of partitions and the offsets will not be the same. For this reason the mirror cluster is not really intended as a fault-tolerance mechanism (as the consumer position will be different); for that we recommend using normal in-cluster replication. The mirror maker process will, however, retain and use the message key for partitioning so order is preserved on a per-key basis.
   <p>
   Here is an example showing how to mirror a single topic (named <i>my-topic</i>) from an input cluster:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-mirror-maker.sh
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-mirror-maker.sh
         --consumer.config consumer.properties
         --producer.config producer.properties --whitelist my-topic
-  </pre>
+  </code></pre>
   Note that we specify the list of topics with the <code>--whitelist</code> option. This option allows any regular expression using <a href="http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html">Java-style regular expressions</a>. So you could mirror two topics named <i>A</i> and <i>B</i> using <code>--whitelist 'A|B'</code>. Or you could mirror <i>all</i> topics using <code>--whitelist '*'</code>. Make sure to quote any regular expression to ensure the shell doesn't try [...]
   <p>
   Sometimes it is easier to say what it is that you <i>don't</i> want. Instead of using <code>--whitelist</code> to say what you want
@@ -139,12 +130,11 @@
 
   <h4><a id="basic_ops_consumer_lag" href="#basic_ops_consumer_lag">Checking consumer position</a></h4>
  Sometimes it's useful to see the position of your consumers. We have a tool that will show the position of all consumers in a consumer group as well as how far behind the end of the log they are. Running this tool on a consumer group named <i>my-group</i> consuming a topic named <i>my-topic</i> looks like this:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper localhost:2181 --group test
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper localhost:2181 --group test
   Group           Topic                          Pid Offset          logSize         Lag             Owner
   my-group        my-topic                       0   0               0               0               test_jkreps-mn-1394154511599-60744496-0
   my-group        my-topic                       1   0               0               0               test_jkreps-mn-1394154521217-1a0be913-0
-  </pre>
+  </code></pre>
 
 
   NOTE: Since 0.9.0.0, the kafka.tools.ConsumerOffsetChecker tool has been deprecated. You should use the kafka.admin.ConsumerGroupCommand (or the bin/kafka-consumer-groups.sh script) to manage consumer groups, including consumers created with the <a href="http://kafka.apache.org/documentation.html#newconsumerapi">new consumer API</a>.
@@ -157,27 +147,24 @@
 
   For example, to list all consumer groups across all topics:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --list
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --list
 
   test-consumer-group
-  </pre>
+  </code></pre>
 
   To view offsets as in the previous example with the ConsumerOffsetChecker, we "describe" the consumer group like this:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --describe --group test-consumer-group
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --describe --group test-consumer-group
 
   TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG        CONSUMER-ID                                       HOST                           CLIENT-ID
   test-foo                       0          1               3               2          consumer-1-a5d61779-4d04-4c50-a6d6-fb35d942642d   /127.0.0.1                     consumer-1
-  </pre>
+  </code></pre>
 
   If you are using the old high-level consumer and storing the group metadata in ZooKeeper (i.e. <code>offsets.storage=zookeeper</code>), pass
  <code>--zookeeper</code> instead of <code>--bootstrap-server</code>:
 
-  <pre class="brush: bash;">
-  &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
-  </pre>
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
+  </code></pre>
 
   <h4><a id="basic_ops_cluster_expansion" href="#basic_ops_cluster_expansion">Expanding your cluster</a></h4>
 
@@ -199,16 +186,14 @@
   For instance, the following example will move all partitions for topics foo1,foo2 to the new set of brokers 5,6. At the end of this move, all partitions for topics foo1 and foo2 will <i>only</i> exist on brokers 5,6.
   <p>
   Since the tool accepts the input list of topics as a json file, you first need to identify the topics you want to move and create the json file as follows:
-  <pre class="brush: bash;">
-  > cat topics-to-move.json
+  <pre class="line-numbers"><code class="language-bash">  > cat topics-to-move.json
   {"topics": [{"topic": "foo1"},
               {"topic": "foo2"}],
   "version":1
   }
-  </pre>
+  </code></pre>
   Once the json file is ready, use the partition reassignment tool to generate a candidate assignment:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
   Current partition replica assignment
 
   {"version":1,
@@ -230,11 +215,10 @@
                 {"topic":"foo1","partition":1,"replicas":[5,6]},
                 {"topic":"foo2","partition":1,"replicas":[5,6]}]
   }
-  </pre>
+  </code></pre>
   <p>
   The tool generates a candidate assignment that will move all partitions from topics foo1,foo2 to brokers 5,6. Note, however, that at this point the partition movement has not started; it merely tells you the current assignment and the proposed new assignment. The current assignment should be saved in case you want to roll back to it. The new assignment should be saved in a json file (e.g. expand-cluster-reassignment.json) to be input to the tool with the --execute option as follows:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --execute
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -256,11 +240,10 @@
                 {"topic":"foo1","partition":1,"replicas":[5,6]},
                 {"topic":"foo2","partition":1,"replicas":[5,6]}]
   }
-  </pre>
+  </code></pre>
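   <p>
   If you later need to roll back, the saved current assignment can be fed back to the tool in the same way (a sketch, assuming the original assignment was saved to a file named rollback.json):
   <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file rollback.json --execute
   </code></pre>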
   <p>
   Finally, the --verify option can be used with the tool to check the status of the partition reassignment. Note that the same expand-cluster-reassignment.json (used with the --execute option) should be used with the --verify option:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --verify
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo1,0] completed successfully
   Reassignment of partition [foo1,1] is in progress
@@ -268,7 +251,7 @@
   Reassignment of partition [foo2,0] completed successfully
   Reassignment of partition [foo2,1] completed successfully
   Reassignment of partition [foo2,2] completed successfully
-  </pre>
+  </code></pre>
 
   <h5><a id="basic_ops_partitionassignment" href="#basic_ops_partitionassignment">Custom partition assignment and migration</a></h5>
   The partition reassignment tool can also be used to selectively move replicas of a partition to a specific set of brokers. When used in this manner, it is assumed that the user knows the reassignment plan and does not require the tool to generate a candidate reassignment, effectively skipping the --generate step and moving straight to the --execute step.
@@ -276,13 +259,11 @@
   For instance, the following example moves partition 0 of topic foo1 to brokers 5,6 and partition 1 of topic foo2 to brokers 2,3:
   <p>
   The first step is to hand craft the custom reassignment plan in a json file:
-  <pre class="brush: bash;">
-  > cat custom-reassignment.json
+  <pre class="line-numbers"><code class="language-bash">  > cat custom-reassignment.json
   {"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}
-  </pre>
+  </code></pre>
   Then, use the json file with the --execute option to start the reassignment process:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -296,15 +277,14 @@
   "partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
                 {"topic":"foo2","partition":1,"replicas":[2,3]}]
   }
-  </pre>
+  </code></pre>
   <p>
   The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same custom-reassignment.json (used with the --execute option) should be used with the --verify option:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --verify
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo1,0] completed successfully
   Reassignment of partition [foo2,1] completed successfully
-  </pre>
+  </code></pre>
 
   <h4><a id="basic_ops_decommissioning_brokers" href="#basic_ops_decommissioning_brokers">Decommissioning brokers</a></h4>
   The partition reassignment tool does not have the ability to automatically generate a reassignment plan for decommissioning brokers yet. As such, the admin has to come up with a reassignment plan to move the replica for all partitions hosted on the broker to be decommissioned, to the rest of the brokers. This can be relatively tedious as the reassignment needs to ensure that all the replicas are not moved from the decommissioned broker to only one other broker. To make this process eff [...]
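   <p>
   As a sketch of what such a hand-crafted plan might look like (assuming broker 7 is being decommissioned and its replicas are spread across brokers 5 and 6; the topics and partitions are illustrative), the plan is then fed to the tool with the --execute option just as in the examples above:
   <pre class="line-numbers"><code class="language-bash">  > cat decommission-broker-7.json
   {"version":1,
   "partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
                 {"topic":"foo2","partition":0,"replicas":[6,5]}]}
   > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file decommission-broker-7.json --execute
   </code></pre>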
@@ -315,14 +295,12 @@
   For instance, the following example increases the replication factor of partition 0 of topic foo from 1 to 3. Before increasing the replication factor, the partition's only replica existed on broker 5. As part of increasing the replication factor, we will add more replicas on brokers 6 and 7.
   <p>
   The first step is to hand craft the custom reassignment plan in a json file:
-  <pre class="brush: bash;">
-  > cat increase-replication-factor.json
+  <pre class="line-numbers"><code class="language-bash">  > cat increase-replication-factor.json
   {"version":1,
   "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
-  </pre>
+  </code></pre>
   Then, use the json file with the --execute option to start the reassignment process:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute
   Current partition replica assignment
 
   {"version":1,
@@ -332,20 +310,18 @@
   Successfully started reassignment of partitions
   {"version":1,
   "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
-  </pre>
+  </code></pre>
   <p>
   The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same increase-replication-factor.json (used with the --execute option) should be used with the --verify option:
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --verify
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --verify
   Status of partition reassignment:
   Reassignment of partition [foo,0] completed successfully
-  </pre>
+  </code></pre>
   You can also verify the increase in replication factor with the kafka-topics tool:
-  <pre class="brush: bash;">
-  > bin/kafka-topics.sh --zookeeper localhost:2181 --topic foo --describe
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-topics.sh --zookeeper localhost:2181 --topic foo --describe
   Topic:foo	PartitionCount:1	ReplicationFactor:3	Configs:
     Topic: foo	Partition: 0	Leader: 5	Replicas: 5,6,7	Isr: 5,6,7
-  </pre>
+  </code></pre>
 
   <h4><a id="rep-throttle" href="#rep-throttle">Limiting Bandwidth Usage during Data Migration</a></h4>
   Kafka lets you apply a throttle to replication traffic, setting an upper bound on the bandwidth used to move replicas from machine to machine. This is useful when rebalancing a cluster, bootstrapping a new broker or adding or removing brokers, as it limits the impact these data-intensive operations will have on users.
@@ -353,15 +329,14 @@
   There are two interfaces that can be used to engage a throttle. The simplest, and safest, is to apply a throttle when invoking kafka-reassign-partitions.sh, but kafka-configs.sh can also be used to view and alter the throttle values directly.
   <p></p>
   So, for example, if you were to execute a rebalance with the below command, it would move partitions at no more than 50MB/s.
-  <pre class="brush: bash;">$ bin/kafka-reassign-partitions.sh --zookeeper myhost:2181--execute --reassignment-json-file bigger-cluster.json —throttle 50000000</pre>
+  <pre class="line-numbers"><code class="language-bash">$ bin/kafka-reassign-partitions.sh --zookeeper myhost:2181 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000</code></pre>
   When you execute this script you will see the throttle engage:
-  <pre class="brush: bash;">
-  The throttle limit was set to 50000000 B/s
-  Successfully started reassignment of partitions.</pre>
+  <pre class="line-numbers"><code class="language-bash">  The throttle limit was set to 50000000 B/s
+  Successfully started reassignment of partitions.</code></pre>
   <p>Should you wish to alter the throttle during a rebalance, say to increase the throughput so that it completes more quickly, you can do so by re-running the execute command, passing the same reassignment-json-file:</p>
-  <pre class="brush: bash;">$ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181  --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
+  <pre class="line-numbers"><code class="language-bash">$ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181  --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
   There is an existing assignment running.
-  The throttle limit was set to 700000000 B/s</pre>
+  The throttle limit was set to 700000000 B/s</code></pre>
 
   <p>Once the rebalance completes, the administrator can check the status of the rebalance using the --verify option.
       If the rebalance has completed, the throttle will be removed via the --verify command. It is important that
@@ -369,43 +344,40 @@
       the --verify option. Failure to do so could cause regular replication traffic to be throttled. </p>
   <p>When the --verify option is executed, and the reassignment has completed, the script will confirm that the throttle was removed:</p>
 
-  <pre class="brush: bash;">
-  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181  --verify --reassignment-json-file bigger-cluster.json
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181  --verify --reassignment-json-file bigger-cluster.json
   Status of partition reassignment:
   Reassignment of partition [my-topic,1] completed successfully
   Reassignment of partition [my-topic,0] completed successfully
-  Throttle was removed.</pre>
+  Throttle was removed.</code></pre>
 
   <p>The administrator can also validate the assigned configs using kafka-configs.sh. There are two pairs of throttle
       configurations used to manage the throttling process. The first pair is the throttle value itself, which is configured
       at the broker level using the dynamic properties: </p>
 
-  <pre class="brush: text;">leader.replication.throttled.rate
-  follower.replication.throttled.rate</pre>
+  <pre class="line-numbers"><code class="language-text">leader.replication.throttled.rate
+  follower.replication.throttled.rate</code></pre>
 
   <p>There is also an enumerated set of throttled replicas: </p>
 
-  <pre class="brush: text;">leader.replication.throttled.replicas
-  follower.replication.throttled.replicas</pre>
+  <pre class="line-numbers"><code class="language-text">leader.replication.throttled.replicas
+  follower.replication.throttled.replicas</code></pre>
 
   <p>These are configured per topic. All four config values are automatically assigned by kafka-reassign-partitions.sh
       (discussed below). </p>
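   <p>To alter the throttle rate directly on a given broker, for example broker 101, the same dynamic properties can be set with kafka-configs.sh (a sketch following the conventions of the commands below):</p>
   <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'leader.replication.throttled.rate=10000000,follower.replication.throttled.rate=10000000' --entity-type brokers --entity-name 101
   </code></pre>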
   <p>To view the throttle limit configuration:</p>
 
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type brokers
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type brokers
   Configs for brokers '2' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000
-  Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000</pre>
+  Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000</code></pre>
 
   <p>This shows the throttle applied to both the leader and follower sides of the replication protocol. By default both sides
       are assigned the same throttled throughput value. </p>
 
   <p>To view the list of throttled replicas:</p>
 
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type topics
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh --describe --zookeeper localhost:2181 --entity-type topics
   Configs for topic 'my-topic' are leader.replication.throttled.replicas=1:102,0:101,
-      follower.replication.throttled.replicas=1:101,0:102</pre>
+      follower.replication.throttled.replicas=1:101,0:102</code></pre>
 
   <p>Here we see the leader throttle is applied to partition 1 on broker 102 and partition 0 on broker 101. Likewise the
       follower throttle is applied to partition 1 on
@@ -432,12 +404,12 @@
   <p><i>(2) Ensuring Progress:</i></p>
   <p>If the throttle is set too low, in comparison to the incoming write rate, it is possible for replication to not
       make progress. This occurs when:</p>
-  <pre>max(BytesInPerSec) > throttle</pre>
+  <pre class="line-numbers"><code class="language-text">max(BytesInPerSec) > throttle</code></pre>
   <p>
       Where BytesInPerSec is the metric that monitors the write throughput of producers into each broker. </p>
   <p>The administrator can monitor whether replication is making progress, during the rebalance, using the metric:</p>
 
-  <pre>kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)</pre>
+  <pre class="line-numbers"><code class="language-text">kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)</code></pre>
 
   <p>The lag should constantly decrease during replication. If the metric does not decrease, the administrator should
       increase the
@@ -451,76 +423,64 @@
   It is possible to set custom quotas for each (user, client-id), user or client-id group.
   <p>
   Configure custom quota for (user=user1, client-id=clientA):
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
   Updated config for entity: user-principal 'user1', client-id 'clientA'.
-  </pre>
+  </code></pre>
 
   Configure custom quota for user=user1:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
   Updated config for entity: user-principal 'user1'.
-  </pre>
+  </code></pre>
 
   Configure custom quota for client-id=clientA:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
   Updated config for entity: client-id 'clientA'.
-  </pre>
+  </code></pre>
 
   It is possible to set default quotas for each (user, client-id), user or client-id group by specifying the <i>--entity-default</i> option instead of <i>--entity-name</i>.
   <p>
   Configure default client-id quota for user=user1:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
   Updated config for entity: user-principal 'user1', default client-id.
-  </pre>
+  </code></pre>
 
   Configure default quota for user:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
   Updated config for entity: default user-principal.
-  </pre>
+  </code></pre>
 
   Configure default quota for client-id:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
   Updated config for entity: default client-id.
-  </pre>
+  </code></pre>
 
   Here's how to describe the quota for a given (user, client-id):
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
   Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  </code></pre>
   Describe quota for a given user:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-name user1
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-name user1
   Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  </code></pre>
   Describe quota for a given client-id:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type clients --entity-name clientA
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type clients --entity-name clientA
   Configs for client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  </code></pre>
   If the entity name is not specified, all entities of the specified type are described. For example, describe all users:
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users
   Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   Configs for default user-principal are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  </code></pre>
   Similarly for (user, client):
-  <pre class="brush: bash;">
-  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-type clients
+  <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --describe --entity-type users --entity-type clients
   Configs for user-principal 'user1', default client-id are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
   Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
-  </pre>
+  </code></pre>
   <p>
   It is possible to set default quotas that apply to all client-ids by setting these configs on the brokers. These properties are applied only if quota overrides or defaults are not configured in ZooKeeper. By default, each client-id receives an unlimited quota. The following sets the default quota per producer and consumer client-id to 10MB/sec.
-  <pre class="brush: text;">
-    quota.producer.default=10485760
+  <pre class="line-numbers"><code class="language-text">    quota.producer.default=10485760
     quota.consumer.default=10485760
-  </pre>
+  </code></pre>
   Note that these properties are being deprecated and may be removed in a future release. Defaults configured using kafka-configs.sh take precedence over these properties.
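   For example, an equivalent default client-id quota of 10MB/sec could instead be configured dynamically (a sketch mirroring the kafka-configs.sh commands above):
   <pre class="line-numbers"><code class="language-bash">  > bin/kafka-configs.sh  --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=10485760,consumer_byte_rate=10485760' --entity-type clients --entity-default
   </code></pre>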
 
   <h3><a id="datacenters" href="#datacenters">6.2 Datacenters</a></h3>
@@ -559,8 +519,7 @@
   <p>
   <h4><a id="prodconfig" href="#prodconfig">A Production Server Config</a></h4>
   Here is an example production server configuration:
-  <pre class="brush: text;">
-  # ZooKeeper
+  <pre class="line-numbers"><code class="language-text">  # ZooKeeper
   zookeeper.connect=[list of ZooKeeper servers]
 
   # Log configuration
@@ -574,7 +533,7 @@
   auto.create.topics.enable=false
   min.insync.replicas=2
   queued.max.requests=[number of concurrent requests]
-  </pre>
+  </code></pre>
 
   Our client configuration varies a fair amount between different use cases.
 
@@ -585,11 +544,10 @@
   LinkedIn is currently running JDK 1.8 u5 (looking to upgrade to a newer version) with the G1 collector. If you decide to use the G1 collector (the current default) and you are still on JDK 1.7, make sure you are on u51 or newer. LinkedIn tried out u21 in testing, but they had a number of problems with the GC implementation in that version.
 
   LinkedIn's tuning looks like this:
-  <pre class="brush: text;">
-  -Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
+  <pre class="line-numbers"><code class="language-text">  -Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
   -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
   -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80
-  </pre>
+  </code></pre>
 
   For reference, here are the stats on one of LinkedIn's busiest clusters (at peak):
   <ul>
@@ -651,7 +609,7 @@
   When Pdflush cannot keep up with the rate of data being written, it will eventually cause the writing process to block, incurring latency in the writes and slowing the accumulation of data.
   <p>
   You can see the current state of OS memory usage by doing
-  <pre class="brush: bash;"> &gt; cat /proc/meminfo </pre>
+  <pre class="line-numbers"><code class="language-bash"> &gt; cat /proc/meminfo </code></pre>
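   For illustration, the first few lines of the output might look something like this (the values are machine-specific):
   <pre class="line-numbers"><code class="language-text">  MemTotal:       16164244 kB
   MemFree:          120184 kB
   Buffers:          417748 kB
   Cached:         12921828 kB
   </code></pre>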
   The meanings of these values are described in the link above.
   <p>
   Using pagecache has several advantages over an in-process cache for storing data that will be written out to disk:
@@ -1242,7 +1200,7 @@
   Use the following configuration option to specify which metrics
   you want collected:
 
-<pre>metrics.recording.level="info"</pre>
+<pre class="line-numbers"><code class="language-text">metrics.recording.level="info"</code></pre>
 
 <h5><a id="kafka_streams_thread_monitoring" href="#kafka_streams_thread_monitoring">Thread Metrics</a></h5>
 All the following metrics have a recording level of <code>info</code>:
diff --git a/0110/protocol.html b/0110/protocol.html
index 4042223..7a69e8f 100644
--- a/0110/protocol.html
+++ b/0110/protocol.html
@@ -22,7 +22,7 @@
     <div class="right">
         <h1>Kafka protocol guide</h1>
 
-<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="https://kafka.apache.org/documentation.html#design">here</a></p>
+<p>This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described <a href="/documentation.html#design">here</a>.</p>
 
 <ul class="toc">
     <li><a href="#protocol_preliminaries">Preliminaries</a>
@@ -183,10 +183,10 @@ Kafka request. SASL/GSSAPI authentication is performed starting with this packet
 
<p>All requests and responses originate from the following grammar, which will be incrementally described through the rest of this document:</p>
 
-<pre>
+<pre class="line-numbers"><code class="language-java">
 RequestOrResponse => Size (RequestMessage | ResponseMessage)
 Size => int32
-</pre>
+</code></pre>
 
 <table class="data-table"><tbody>
 <tr><th>Field</th><th>Description</th></tr>
diff --git a/0110/quickstart.html b/0110/quickstart.html
index c50df29..d3fee15 100644
--- a/0110/quickstart.html
+++ b/0110/quickstart.html
@@ -27,10 +27,9 @@ Since Kafka console scripts are different for Unix-based and Windows platforms,
 
 <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_2.11-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
 
-<pre class="brush: bash;">
-&gt; tar -xzf kafka_2.11-{{fullDotVersion}}.tgz
+<pre class="line-numbers"><code class="language-bash">&gt; tar -xzf kafka_2.11-{{fullDotVersion}}.tgz
 &gt; cd kafka_2.11-{{fullDotVersion}}
-</pre>
+</code></pre>
 
 <h4><a id="quickstart_startserver" href="#quickstart_startserver">Step 2: Start the server</a></h4>
 
@@ -38,32 +37,28 @@ Since Kafka console scripts are different for Unix-based and Windows platforms,
Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a> so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.
 </p>
 
-<pre class="brush: bash;">
-&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
+<pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
 [2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
 ...
-</pre>
+</code></pre>
 
 <p>Now start the Kafka server:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-server-start.sh config/server.properties
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-server-start.sh config/server.properties
 [2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
 [2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
 ...
-</pre>
+</code></pre>
 
 <h4><a id="quickstart_createtopic" href="#quickstart_createtopic">Step 3: Create a topic</a></h4>
 
 <p>Let's create a topic named "test" with a single partition and only one replica:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
+</code></pre>
 
 <p>We can now see that topic if we run the list topic command:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --list --zookeeper localhost:2181
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --list --zookeeper localhost:2181
 test
-</pre>
+</code></pre>
<p>Alternatively, instead of manually creating topics, you can also configure your brokers to auto-create topics when a non-existent topic is published to.</p>
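<p>For instance, topic auto-creation is controlled by the following broker setting in server.properties (a sketch; the property defaults to true):</p>
<pre class="line-numbers"><code class="language-text">auto.create.topics.enable=true
</code></pre>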
 
 <h4><a id="quickstart_send" href="#quickstart_send">Step 4: Send some messages</a></h4>
@@ -72,21 +67,19 @@ test
 <p>
 Run the producer and then type a few messages into the console to send to the server.</p>
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
 This is a message
 This is another message
-</pre>
+</code></pre>
 
 <h4><a id="quickstart_consume" href="#quickstart_consume">Step 5: Start a consumer</a></h4>
 
 <p>Kafka also has a command line consumer that will dump out messages to standard output.</p>
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
 This is a message
 This is another message
-</pre>
+</code></pre>
 <p>
 If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
 </p>
@@ -100,16 +93,14 @@ All of the command line tools have additional options; running the command with
 <p>
First we make a config file for each of the brokers (on Windows use the <code>copy</code> command instead, as shown below):
 </p>
-<pre class="brush: bash;">
-&gt; cp config/server.properties config/server-1.properties
+<pre class="line-numbers"><code class="language-bash">&gt; cp config/server.properties config/server-1.properties
 &gt; cp config/server.properties config/server-2.properties
-</pre>
+</code></pre>
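<p>On Windows, for instance:</p>
<pre class="line-numbers"><code class="language-bash">&gt; copy config\server.properties config\server-1.properties
&gt; copy config\server.properties config\server-2.properties
</code></pre>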
 
 <p>
 Now edit these new files and set the following properties:
 </p>
-<pre class="brush: text;">
-
+<pre class="line-numbers"><code class="language-text">
 config/server-1.properties:
     broker.id=1
     listeners=PLAINTEXT://:9093
@@ -119,29 +110,26 @@ config/server-2.properties:
     broker.id=2
     listeners=PLAINTEXT://:9094
     log.dir=/tmp/kafka-logs-2
-</pre>
+</code></pre>
 <p>The <code>broker.id</code> property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.</p>
 <p>
We already have ZooKeeper and our single node started, so we just need to start the two new nodes:
 </p>
-<pre class="brush: bash;">
-&gt; bin/kafka-server-start.sh config/server-1.properties &amp;
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-server-start.sh config/server-1.properties &amp;
 ...
 &gt; bin/kafka-server-start.sh config/server-2.properties &amp;
 ...
-</pre>
+</code></pre>
 
 <p>Now create a new topic with a replication factor of three:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
+</code></pre>
 
<p>Okay, but now that we have a cluster, how can we know which broker is doing what? To see that, run the "describe topics" command:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
 Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
 	Topic: my-replicated-topic	Partition: 0	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
-</pre>
+</code></pre>
<p>Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic, there is only one line.</p>
 <ul>
   <li>"leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
@@ -152,60 +140,53 @@ Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
 <p>
 We can run the same command on the original topic we created to see where it is:
 </p>
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
 Topic:test	PartitionCount:1	ReplicationFactor:1	Configs:
 	Topic: test	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
-</pre>
+</code></pre>
 <p>So there is no surprise there&mdash;the original topic has no replicas and is on server 0, the only server in our cluster when we created it.</p>
 <p>
 Let's publish a few messages to our new topic:
 </p>
-<pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
 ...
 my test message 1
 my test message 2
 ^C
-</pre>
+</code></pre>
 <p>Now let's consume these messages:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
 ...
 my test message 1
 my test message 2
 ^C
-</pre>
+</code></pre>
 
<p>Now let's test out fault tolerance. Broker 1 was acting as the leader, so let's kill it:</p>
-<pre class="brush: bash;">
-&gt; ps aux | grep server-1.properties
+<pre class="line-numbers"><code class="language-bash">&gt; ps aux | grep server-1.properties
 7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
 &gt; kill -9 7564
-</pre>
+</code></pre>
 
 On Windows use:
-<pre class="brush: bash;">
-&gt; wmic process get processid,caption,commandline | find "java.exe" | find "server-1.properties"
+<pre class="line-numbers"><code class="language-bash">&gt; wmic process get processid,caption,commandline | find "java.exe" | find "server-1.properties"
 java.exe    java  -Xmx1G -Xms1G -server -XX:+UseG1GC ... build\libs\kafka_2.11-{{fullDotVersion}}.jar"  kafka.Kafka config\server-1.properties    644
 &gt; taskkill /pid 644 /f
-</pre>
+</code></pre>
 
<p>Leadership has switched to one of the followers and node 1 is no longer in the in-sync replica set:</p>
 
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
 Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
 	Topic: my-replicated-topic	Partition: 0	Leader: 2	Replicas: 1,2,0	Isr: 2,0
-</pre>
+</code></pre>
 <p>But the messages are still available for consumption even though the leader that took the writes originally is down:</p>
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
 ...
 my test message 1
 my test message 2
 ^C
-</pre>
+</code></pre>
 
 
 <h4><a id="quickstart_kafkaconnect" href="#quickstart_kafkaconnect">Step 7: Use Kafka Connect to import/export data</a></h4>
@@ -221,9 +202,8 @@ Kafka topic to a file.</p>
 
 <p>First, we'll start by creating some seed data to test with:</p>
 
-<pre class="brush: bash;">
-&gt; echo -e "foo\nbar" > test.txt
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; echo -e "foo\nbar" > test.txt
+</code></pre>
 
 <p>Next, we'll start two connectors running in <i>standalone</i> mode, which means they run in a single, local, dedicated
 process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect
@@ -231,9 +211,8 @@ process, containing common configuration such as the Kafka brokers to connect to
 The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector
 class to instantiate, and any other configuration required by the connector.</p>
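<p>For example, the packaged file source connector configuration might look like the following sketch:</p>
<pre class="line-numbers"><code class="language-text">name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
</code></pre>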
 
-<pre class="brush: bash;">
-&gt; bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
+</code></pre>
 
 <p>
 These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier
@@ -250,11 +229,10 @@ by examining the contents of the output file:
 </p>
 
 
-<pre class="brush: bash;">
-&gt; cat test.sink.txt
+<pre class="line-numbers"><code class="language-bash">&gt; cat test.sink.txt
 foo
 bar
-</pre>
+</code></pre>
 
 <p>
 Note that the data is being stored in the Kafka topic <code>connect-test</code>, so we can also run a console consumer to see the
@@ -262,18 +240,16 @@ data in the topic (or use custom consumer code to process it):
 </p>
 
 
-<pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
 {"schema":{"type":"string","optional":false},"payload":"foo"}
 {"schema":{"type":"string","optional":false},"payload":"bar"}
 ...
-</pre>
+</code></pre>
 
 <p>The connectors continue to process data, so we can add data to the file and see it move through the pipeline:</p>
 
-<pre class="brush: bash;">
-&gt; echo "Another line" >> test.txt
-</pre>
+<pre class="line-numbers"><code class="language-bash">&gt; echo "Another line" >> test.txt
+</code></pre>
 
 <p>You should see the line appear in the console consumer output and in the sink file.</p>
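<p>For instance (a sketch; your output may differ):</p>
<pre class="line-numbers"><code class="language-bash">&gt; cat test.sink.txt
foo
bar
Another line
</code></pre>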
 
diff --git a/0110/security.html b/0110/security.html
index 7d92ec3..8cbaec6 100644
--- a/0110/security.html
+++ b/0110/security.html
@@ -42,8 +42,7 @@
         <li><h4><a id="security_ssl_key" href="#security_ssl_key">Generate SSL key and certificate for each Kafka broker</a></h4>
            The first step of deploying one or more brokers with SSL support is to generate the key and the certificate for each machine in the cluster. You can use Java's keytool utility to accomplish this task.
            We will generate the key into a temporary keystore initially so that we can export and sign it later with the CA.
-            <pre class="brush: bash;">
-            keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA</pre>
+            <pre class="line-numbers"><code class="language-bash">            keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA</code></pre>
 
             You need to specify two parameters in the above command:
             <ol>
@@ -53,7 +52,7 @@
             <br>
         Note: By default the property <code>ssl.endpoint.identification.algorithm</code> is not defined, so hostname verification is not performed. In order to enable hostname verification, set the following property:
 
-        <pre class="brush: text;">	ssl.endpoint.identification.algorithm=HTTPS </pre>
+        <pre class="line-numbers"><code class="language-text">	ssl.endpoint.identification.algorithm=HTTPS </code></pre>
 
         Once enabled, clients will verify the server's fully qualified domain name (FQDN) against one of the following two fields:
         <ol>
@@ -62,45 +61,37 @@
         </ol>
         <br>
        Both fields are valid; however, RFC-2818 recommends the use of SAN. SAN is also more flexible, allowing for multiple DNS entries to be declared. Another advantage is that the CN can be set to a more meaningful value for authorization purposes. To add a SAN field, append the following argument <code> -ext SAN=DNS:{FQDN} </code> to the keytool command:
-        <pre class="brush: bash;">
-        keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -ext SAN=DNS:{FQDN}
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">        keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -ext SAN=DNS:{FQDN}
+        </code></pre>
         The following command can be run afterwards to verify the contents of the generated certificate:
-        <pre class="brush: bash;">
-        keytool -list -v -keystore server.keystore.jks
-        </pre>
+        <pre class="line-numbers"><code class="language-bash">        keytool -list -v -keystore server.keystore.jks
+        </code></pre>
         </li>
         <li><h4><a id="security_ssl_ca" href="#security_ssl_ca">Creating your own CA</a></h4>
             After the first step, each machine in the cluster has a public-private key pair, and a certificate to identify the machine. The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.<p>
            Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster. A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports: the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed [...]
-            <pre class="brush: bash;">
-            openssl req -new -x509 -keyout ca-key -out ca-cert -days 365</pre>
+            <pre class="line-numbers"><code class="language-bash">            openssl req -new -x509 -keyout ca-key -out ca-cert -days 365</code></pre>
 
             The generated CA is simply a public-private key pair and certificate, and it is intended to sign other certificates.<br>
 
            The next step is to add the generated CA to the <b>clients' truststore</b> so that the clients can trust this CA:
-            <pre class="brush: bash;">
-            keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert</pre>
+            <pre class="line-numbers"><code class="language-bash">            keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert</code></pre>
 
            <b>Note:</b> If you configure the Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" on the <a href="#config_broker">Kafka brokers config</a>, then you must provide a truststore for the Kafka brokers as well, and it should contain all the CA certificates that clients' keys were signed by.
-            <pre class="brush: bash;">
-            keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert</pre>
+            <pre class="line-numbers"><code class="language-bash">            keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert</code></pre>
 
             In contrast to the keystore in step 1 that stores each machine's own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed by that certificate. As the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful w [...]
 
         <li><h4><a id="security_ssl_signing" href="#security_ssl_signing">Signing the certificate</a></h4>
             The next step is to sign all certificates generated by step 1 with the CA generated in step 2. First, you need to export the certificate from the keystore:
-            <pre class="brush: bash;">
-            keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file</pre>
+            <pre class="line-numbers"><code class="language-bash">            keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file</code></pre>
 
             Then sign it with the CA:
-            <pre class="brush: bash;">
-            openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}</pre>
+            <pre class="line-numbers"><code class="language-bash">            openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}</code></pre>
 
             Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:
-            <pre class="brush: bash;">
-            keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
-            keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed</pre>
+            <pre class="line-numbers"><code class="language-bash">            keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
+            keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed</code></pre>
 
             The definitions of the parameters are the following:
             <ol>
@@ -125,23 +116,21 @@
             keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
             openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
             keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
-            keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed</pre></li>
+            keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed</pre></li>
         <li><h4><a id="security_configbroker" href="#security_configbroker">Configuring Kafka Brokers</a></h4>
             Kafka Brokers support listening for connections on multiple ports.
             We need to configure the following property in server.properties, which must have one or more comma-separated values:
-            <pre>listeners</pre>
+            <pre class="line-numbers"><code class="language-text">listeners</code></pre>
 
             If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports will be necessary.
-            <pre class="brush: text;">
-            listeners=PLAINTEXT://host.name:port,SSL://host.name:port</pre>
+            <pre class="line-numbers"><code class="language-text">            listeners=PLAINTEXT://host.name:port,SSL://host.name:port</code></pre>
 
            The following SSL configs are needed on the broker side:
-            <pre class="brush: text;">
-            ssl.keystore.location=/var/private/ssl/server.keystore.jks
+            <pre class="line-numbers"><code class="language-text">            ssl.keystore.location=/var/private/ssl/server.keystore.jks
             ssl.keystore.password=test1234
             ssl.key.password=test1234
             ssl.truststore.location=/var/private/ssl/server.truststore.jks
-            ssl.truststore.password=test1234</pre>
+            ssl.truststore.password=test1234</code></pre>
 
            Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
 
@@ -156,7 +145,7 @@
             </ol>
            If you want to enable SSL for inter-broker communication, add the following to the broker properties file (it defaults to PLAINTEXT):
-            <pre>
+            <pre class="line-numbers"><code class="language-text">
-            security.inter.broker.protocol=SSL</pre>
+            security.inter.broker.protocol=SSL</code></pre>
 
             <p>
             Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the <a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">JCE Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed in the JDK/JRE. See the
@@ -165,42 +154,40 @@
 
             <p>
             The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the
-            implementation used with the <pre>ssl.secure.random.implementation</pre>. However, there are performance issues with some implementations (notably, the
-            default chosen on Linux systems, <pre>NativePRNG</pre>, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
-            consider explicitly setting the implementation to be used. The <pre>SHA1PRNG</pre> implementation is non-blocking, and has shown very good performance
+            implementation used with the <code>ssl.secure.random.implementation</code>. However, there are performance issues with some implementations (notably, the
+            default chosen on Linux systems, <code>NativePRNG</code>, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
+            consider explicitly setting the implementation to be used. The <code>SHA1PRNG</code> implementation is non-blocking, and has shown very good performance
             characteristics under heavy load (50 MB/sec of produced messages, plus replication traffic, per-broker).
             </p>
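            <p>For example, to select the non-blocking implementation explicitly (a sketch of the broker setting discussed above):</p>
            <pre class="line-numbers"><code class="language-text">            ssl.secure.random.implementation=SHA1PRNG</code></pre>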
 
            Once you start the broker, you should be able to see the following in server.log:
-            <pre>
+            <pre class="line-numbers"><code class="language-text">
-            with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)</pre>
+            with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)</code></pre>
 
            To quickly check whether the server keystore and truststore are set up properly, you can run the following command:
-            <pre>openssl s_client -debug -connect localhost:9093 -tls1</pre> (Note: TLSv1 should be listed under ssl.enabled.protocols)<br>
+            <pre class="line-numbers"><code class="language-bash">openssl s_client -debug -connect localhost:9093 -tls1</code></pre> (Note: TLSv1 should be listed under ssl.enabled.protocols)<br>
            In the output of this command you should see the server's certificate:
-            <pre>
+            <pre class="line-numbers"><code class="language-text">
             -----BEGIN CERTIFICATE-----
             {variable sized random bytes}
             -----END CERTIFICATE-----
             subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Sriharsha Chintalapani
-            issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com</pre>
+            issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com</code></pre>
            If the certificate does not show up or if there are any other error messages, then your keystore is not set up properly.</li>
 
         <li><h4><a id="security_configclients" href="#security_configclients">Configuring Kafka Clients</a></h4>
            SSL is supported only for the new Kafka Producer and Consumer; the older API is not supported. The configs for SSL will be the same for both producer and consumer.<br>
             If client authentication is not required in the broker, then the following is a minimal configuration example:
-            <pre class="brush: text;">
-            security.protocol=SSL
+            <pre class="line-numbers"><code class="language-text">            security.protocol=SSL
             ssl.truststore.location=/var/private/ssl/client.truststore.jks
-            ssl.truststore.password=test1234</pre>
+            ssl.truststore.password=test1234</code></pre>
 
            Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
 
            If client authentication is required, then a keystore must be created as in step 1 and the following must also be configured:
-            <pre class="brush: text;">
-            ssl.keystore.location=/var/private/ssl/client.keystore.jks
+            <pre class="line-numbers"><code class="language-text">            ssl.keystore.location=/var/private/ssl/client.keystore.jks
             ssl.keystore.password=test1234
-            ssl.key.password=test1234</pre>
+            ssl.key.password=test1234</code></pre>
 			
            Other configuration settings that may also be needed depending on your requirements and the broker configuration:
                 <ol>
@@ -212,9 +199,8 @@
                 </ol>
     <br>
             Examples using console-producer and console-consumer:
-            <pre class="brush: bash;">
-            kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config client-ssl.properties
-            kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</pre>
+            <pre class="line-numbers"><code class="language-bash">            kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config client-ssl.properties
+            kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</code></pre>
         </li>
     </ol>
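    Putting the client pieces together, the following sketch writes the minimal client-ssl.properties shown above and exercises it with the console tools. It is a sketch only; the paths, password and topic name are the hypothetical ones from the earlier examples:
    <pre class="line-numbers"><code class="language-bash"># Minimal client SSL configuration (no client authentication),
# reusing the truststore location and password from the examples above.
cat > client-ssl.properties &lt;&lt;EOF
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234
EOF

# Produce to and consume from the SSL listener on port 9093.
kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config client-ssl.properties
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</code></pre>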
     <h3><a id="security_sasl" href="#security_sasl">7.3 Authentication using SASL</a></h3>
@@ -278,17 +264,16 @@
                     <a href="#security_sasl_scram_clientconfig">SCRAM</a>.
                     For example, <a href="#security_sasl_gssapi_clientconfig">GSSAPI</a>
                     credentials may be configured as:
-                    <pre class="brush: text;">
-        KafkaClient {
+                    <pre class="line-numbers"><code class="language-text">        KafkaClient {
         com.sun.security.auth.module.Krb5LoginModule required
         useKeyTab=true
         storeKey=true
         keyTab="/etc/security/keytabs/kafka_client.keytab"
         principal="kafka-client-1@EXAMPLE.COM";
-    };</pre>
+    };</code></pre>
                 </li>
                 <li>Pass the JAAS config file location as a JVM parameter to each client JVM (a usage sketch follows this list). For example:
-                    <pre class="brush: bash;">    -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</pre></li>
+                    <pre class="line-numbers"><code class="language-bash">    -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</code></pre></li>
 	</ol>
                 </li>
             </ol>
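             As a usage sketch for the JVM parameter above: the launcher scripts that ship with Kafka pass the KAFKA_OPTS environment variable through to the JVM, so the property can be exported before invoking a command-line tool. The JAAS path is the hypothetical one from the example, and consumer.properties is assumed to carry the SASL client properties described later in this section:
             <pre class="line-numbers"><code class="language-bash"># kafka-run-class.sh appends KAFKA_OPTS to the JVM arguments, so the
# JAAS location reaches every tool started from this shell.
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
kafka-console-consumer.sh --bootstrap-server host.name:port --topic test --consumer.config consumer.properties</code></pre>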
@@ -319,11 +304,11 @@
             <li>Configure a SASL port in server.properties, by adding at least one of
                 SASL_PLAINTEXT or SASL_SSL to the <i>listeners</i> parameter, which
                 contains one or more comma-separated values:
-                <pre>    listeners=SASL_PLAINTEXT://host.name:port</pre>
+                <pre class="line-numbers"><code class="language-text">    listeners=SASL_PLAINTEXT://host.name:port</code></pre>
                 If you are only configuring a SASL port (or if you want
                 the Kafka brokers to authenticate each other using SASL), then make sure
                 you set the same SASL protocol for inter-broker communication:
-                <pre>    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)</pre></li>
+                <pre class="line-numbers"><code class="language-text">    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)</code></pre></li>
             <li>Select one or more  <a href="#security_sasl_mechanism">supported mechanisms</a>
                 to enable in the broker and follow the steps to configure SASL for the mechanism.
                 To enable multiple mechanisms in the broker, follow the steps
@@ -350,16 +335,14 @@
             <li><b>Create Kerberos Principals</b><br>
             If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).<br>
             If you have installed your own Kerberos server, you will need to create these principals yourself using the following commands (a concrete sketch follows this list):
-                <pre class="brush: bash;">
-        sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
-        sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"</pre></li>
+                <pre class="line-numbers"><code class="language-bash">        sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
+        sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"</code></pre></li>
             <li><b>Make sure all hosts are reachable using hostnames</b> - it is a Kerberos requirement that all your hosts can be resolved by their FQDNs.</li>
         </ol>
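         As a concrete (hypothetical) instance of the principal-creation commands above, for a broker host kafka1.hostname.com in realm EXAMPLE.COM, matching the names used in the JAAS example below:
         <pre class="line-numbers"><code class="language-bash"># Create a per-host broker principal with a randomly generated key ...
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/kafka1.hostname.com@EXAMPLE.COM'
# ... then export its key into the keytab file referenced by the broker JAAS config.
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/kafka1.hostname.com@EXAMPLE.COM"</code></pre>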
         <li><h5><a id="security_sasl_kerberos_brokerconfig" href="#security_sasl_kerberos_brokerconfig">Configuring Kafka Brokers</a></h5>
         <ol>
             <li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory; for this example, call it kafka_server_jaas.conf (note that each broker should have its own keytab):
-            <pre class="brush: text;">
-        KafkaServer {
+            <pre class="line-numbers"><code class="language-text">        KafkaServer {
             com.sun.security.auth.module.Krb5LoginModule required
             useKeyTab=true
             storeKey=true
@@ -374,14 +357,14 @@
         storeKey=true
         keyTab="/etc/security/keytabs/kafka_server.keytab"
         principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
-        };</pre>
+        };</code></pre>
 
             </li>
             The <tt>KafkaServer</tt> section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
             allows the broker to log in using the keytab specified in this section. See <a href="#security_sasl_brokernotes">notes</a> for more details on Zookeeper SASL configuration.
             <li>Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
-                <pre>    -Djava.security.krb5.conf=/etc/kafka/krb5.conf
+                <pre class="line-numbers"><code class="language-text">    -Djava.security.krb5.conf=/etc/kafka/krb5.conf
-        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre>
+        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre>
             </li>
             <li>Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Kafka broker.</li>
             <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
@@ -389,9 +372,9 @@
         security.inter.broker.protocol=SASL_PLAINTEXT
         sasl.mechanism.inter.broker.protocol=GSSAPI
         sasl.enabled.mechanisms=GSSAPI
-            </pre>
+            </code></pre>
             </li>
             <li>We must also configure the service name in server.properties, which should match the principal name of the Kafka brokers. In the above example, the principal is "kafka/kafka1.hostname.com@EXAMPLE.COM", so:
-            <pre>    sasl.kerberos.service.name=kafka</pre>
+            <pre class="line-numbers"><code class="language-text">    sasl.kerberos.service.name=kafka</code></pre></li>
 
         </ol>
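         As a sketch of how the two JVM flags above are commonly supplied, the broker start script also honors the KAFKA_OPTS environment variable (file paths are the hypothetical ones from this example):
         <pre class="line-numbers"><code class="language-bash"># Both settings are plain JVM system properties, so they can be
# passed to the broker through KAFKA_OPTS at startup.
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties</code></pre>
         </li>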
         <li><h5><a id="security_sasl_kerberos_clientconfig" href="#security_kerberos_sasl_clientconfig">Configuring Kafka Clients</a></h5>
@@ -410,25 +393,25 @@
         useKeyTab=true \
         storeKey=true  \
         keyTab="/etc/security/keytabs/kafka_client.keytab" \
-        principal="kafka-client-1@EXAMPLE.COM";</pre>
+        principal="kafka-client-1@EXAMPLE.COM";</code></pre>
 
                    For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used
                    along with "useTicketCache=true" as in:
-                <pre>
+                <pre class="line-numbers"><code class="language-text">
     sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
-        useTicketCache=true;</pre>
+        useTicketCache=true;</code></pre>
 
                    JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
                    as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
                    <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</li>
                 <li>Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting the Kafka client.</li>
                 <li>Optionally pass the krb5 file locations as JVM parameters to each client JVM (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
-                <pre>    -Djava.security.krb5.conf=/etc/kafka/krb5.conf</pre></li>
+                <pre class="line-numbers"><code class="language-text">    -Djava.security.krb5.conf=/etc/kafka/krb5.conf</code></pre></li>
                 <li>Configure the following properties in producer.properties or consumer.properties:
-                <pre>
+                <pre class="line-numbers"><code class="language-text">
     security.protocol=SASL_PLAINTEXT (or SASL_SSL)
     sasl.mechanism=GSSAPI
-    sasl.kerberos.service.name=kafka</pre></li>
+    sasl.kerberos.service.name=kafka</code></pre></li>
             </ol>
         </li>
         </ol>
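     As an end-to-end client sketch for the ticket-cache flow described above: obtain a ticket with kinit, then run a console tool whose consumer.properties is assumed to contain the GSSAPI settings and the "useTicketCache=true" sasl.jaas.config entry shown earlier (principal and host names are the hypothetical ones from this section):
     <pre class="line-numbers"><code class="language-bash"># Obtain a Kerberos ticket for the client principal first.
kinit kafka-client-1@EXAMPLE.COM

# consumer.properties is assumed to contain security.protocol, sasl.mechanism=GSSAPI,
# sasl.kerberos.service.name=kafka and the useTicketCache JAAS entry shown above.
kafka-console-consumer.sh --bootstrap-server kafka1.hostname.com:9092 --topic test --consumer.config consumer.properties</code></pre>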
@@ -442,26 +425,25 @@
         <li><h5><a id="security_sasl_plain_brokerconfig" href="#security_sasl_plain_brokerconfig">Configuring Kafka Brokers</a></h5>
             <ol>
             <li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory; for this example, call it kafka_server_jaas.conf:
-                <pre class="brush: text;">
-        KafkaServer {
+                <pre class="line-numbers"><code class="language-text">        KafkaServer {
             org.apache.kafka.common.security.plain.PlainLoginModule required
             username="admin"
             password="admin-secret"
             user_admin="admin-secret"
             user_alice="alice-secret";
-        };</pre>
+        };</code></pre>
                 This configuration defines two users (<i>admin</i> and <i>alice</i>). The properties <tt>username</tt> and <tt>password</tt>
                 in the <tt>KafkaServer</tt> section are used by the broker to initiate connections to other brokers. In this example,
                 <i>admin</i> is the user for inter-broker communication. The set of properties <tt>user_<i>userName</i></tt> defines
                 the passwords for all users that connect to the broker, and the broker validates all client connections, including
                 those from other brokers, using these properties.</li>
             <li>Pass the JAAS config file location as a JVM parameter to each Kafka broker:
-                <pre>    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</pre></li>
+                <pre class="line-numbers"><code class="language-text">    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
             <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
-                <pre>    listeners=SASL_SSL://host.name:port
+                <pre class="line-numbers"><code class="language-text">    listeners=SASL_SSL://host.name:port
         security.inter.broker.protocol=SASL_SSL
         sasl.mechanism.inter.broker.protocol=PLAIN
-        sasl.enabled.mechanisms=PLAIN</pre></li>
+        sasl.enabled.mechanisms=PLAIN</code></pre></li>
             </ol>
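             To make the <tt>user_<i>userName</i></tt> convention above concrete: adding one more (hypothetical) user <i>bob</i> is a single extra property in the same section, after which brokers must be restarted, since in this setup PLAIN credentials live in the static JAAS file:
             <pre class="line-numbers"><code class="language-bash"># Hypothetical JAAS file defining admin (used for inter-broker
# connections) plus two client users, alice and bob.
cat > /etc/kafka/kafka_server_jaas.conf &lt;&lt;'EOF'
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret"
    user_bob="bob-secret";
};
EOF</code></pre>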
         </li>
 
@@ -471,10 +453,9 @@
             <li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
                 The login module describes how clients such as the producer and consumer connect to the Kafka broker.
                 The following is an example configuration for a client for the PLAIN mechanism:
-                <pre class="brush: text;">
-    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
+                <pre class="line-numbers"><code class="language-text">    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
         username="alice" \
-        password="alice-secret";</pre>
+        password="alice-secret";</code></pre>
                 <p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure
                 the user for client connections. In this example, clients connect to the broker as user <i>alice</i>.
                 Different clients within a JVM may connect as different users by specifying different user names
... 155489 lines suppressed ...

