kafka-commits mailing list archives

From gwens...@apache.org
Subject [1/2] kafka-site git commit: added new meetup links on events page and remove docs subnav on old versions of the docs
Date Thu, 17 Nov 2016 23:47:25 GMT
Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 9aa3bae98 -> e18293b6c


added new meetup links on events page and remove docs subnav on old versions of the docs

hide subnav on prior docs and make 07 docs consistent with others


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/e8fdfce2
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/e8fdfce2
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/e8fdfce2

Branch: refs/heads/asf-site
Commit: e8fdfce251c576e65a2c746aad5f57e0a8e0d8d9
Parents: 9aa3bae
Author: Derrick Or <derrickor@gmail.com>
Authored: Wed Nov 9 15:35:51 2016 -0800
Committer: Derrick Or <derrickor@gmail.com>
Committed: Thu Nov 17 13:03:49 2016 -0800

----------------------------------------------------------------------
 0100/documentation.html |  1 -
 07/configuration.html   | 19 +++++-------
 07/documentation.html   |  9 ++++--
 07/performance.html     | 22 ++++++--------
 07/quickstart.html      | 46 +++++++++++++---------------
 08/documentation.html   |  1 -
 081/documentation.html  |  1 -
 082/documentation.html  |  1 -
 090/documentation.html  |  1 -
 events.html             | 71 ++++++++++++++++++++++++++++++--------------
 styles.css              |  4 +++
 11 files changed, 96 insertions(+), 80 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/0100/documentation.html
----------------------------------------------------------------------
diff --git a/0100/documentation.html b/0100/documentation.html
index 85a9701..9bfbb1d 100644
--- a/0100/documentation.html
+++ b/0100/documentation.html
@@ -197,4 +197,3 @@
     <!--#include virtual="streams.html" -->
 
 <!--#include virtual="../includes/_footer.htm" -->
-<!--#include virtual="../includes/_docs_footer.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/07/configuration.html
----------------------------------------------------------------------
diff --git a/07/configuration.html b/07/configuration.html
index 92fb139..27a965f 100644
--- a/07/configuration.html
+++ b/07/configuration.html
@@ -1,10 +1,8 @@
-<!--#include virtual="../includes/_header.htm" -->
-
-<h2> Configuration </h2>
+<h2 id="configuration"> Configuration </h2>
 
 <h3> Important configuration properties for Kafka broker: </h3>
 
-<p>More details about server configuration can be found in the scala class <code>kafka.server.KafkaConfig</code>.</p>

+<p>More details about server configuration can be found in the scala class <code>kafka.server.KafkaConfig</code>.</p>
 
 <table class="data-table">
 <tr>
@@ -25,7 +23,7 @@
 <tr>
      <td><code>log.flush.interval</code></td>
      <td>500</td>
-     <td>Controls the number of messages accumulated in each topic (partition) before
the data is flushed to disk and made available to consumers.</td>  
+     <td>Controls the number of messages accumulated in each topic (partition) before
the data is flushed to disk and made available to consumers.</td>
 </tr>
 <tr>
     <td><code>log.default.flush.scheduler.interval.ms</code></td>
@@ -138,7 +136,7 @@
 
 <h3> Important configuration properties for the high-level consumer: </h3>
 
-<p>More details about consumer configuration can be found in the scala class <code>kafka.consumer.ConsumerConfig</code>.</p>

+<p>More details about consumer configuration can be found in the scala class <code>kafka.consumer.ConsumerConfig</code>.</p>
 
 <table class="data-table">
 <tr>
@@ -169,7 +167,7 @@
 <tr>
     <td><code>backoff.increment.ms</code></td>
     <td>1000</td>
-    <td>This parameter avoids repeatedly polling a broker node which has no new data.
We will backoff every time we get an empty set 
+    <td>This parameter avoids repeatedly polling a broker node which has no new data.
We will backoff every time we get an empty set
 from the broker for this time period</td>
 </tr>
 <tr>
@@ -228,7 +226,7 @@ the size of those queues</td>
 
 <h3> Important configuration properties for the producer: </h3>
 
-<p>More details about producer configuration can be found in the scala class <code>kafka.producer.ProducerConfig</code>.</p>

+<p>More details about producer configuration can be found in the scala class <code>kafka.producer.ProducerConfig</code>.</p>
 
 <table class="data-table">
 <tr>
@@ -330,7 +328,7 @@ the size of those queues</td>
 <tr>
     <td><code>event.handler</code></td>
     <td><code>kafka.producer.async.EventHandler&lt;T&gt;</code></td>
-    <td>the class that implements <code>kafka.producer.async.IEventHandler&lt;T&gt;</code>
used to dispatch a batch of produce requests, using an instance of <code>kafka.producer.SyncProducer</code>.

+    <td>the class that implements <code>kafka.producer.async.IEventHandler&lt;T&gt;</code>
used to dispatch a batch of produce requests, using an instance of <code>kafka.producer.SyncProducer</code>.
 </td>
 </tr>
 <tr>
@@ -350,6 +348,3 @@ the size of those queues</td>
     <td>the <code>java.util.Properties()</code> object used to initialize
the custom <code>callback.handler</code> through its <code>init()</code>
API</td>
 </tr>
 </table>
-
-
-<!--#include virtual="../includes/_footer.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/07/documentation.html
----------------------------------------------------------------------
diff --git a/07/documentation.html b/07/documentation.html
index bb35352..d798423 100644
--- a/07/documentation.html
+++ b/07/documentation.html
@@ -11,9 +11,12 @@
 			<li><a href="/07/quickstart.html">Quickstart</a> &ndash; Get up
and running quickly.
 			<li><a href="/07/configuration.html">Configuration</a> &ndash; All
the knobs.
 			<li><a href="/07/performance.html">Performance</a> &ndash; Some
performance results.
-			<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations">Operations</a>
&ndash; Notes on running the system.
-			<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs">API
Docs</a> &ndash; Scaladoc for the api.
+			<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations" target="_blank">Operations</a>
&ndash; Notes on running the system.
+			<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs"
target="_blank">API Docs</a> &ndash; Scaladoc for the api.
 		</ul>
 
+    <!--#include virtual="quickstart.html" -->
+		<!--#include virtual="configuration.html" -->
+		<!--#include virtual="performance.html" -->
+
 <!--#include virtual="../includes/_footer.htm" -->
-<!--#include virtual="../includes/_docs_footer.htm" -->
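This hunk is the core of the consolidation: 07/documentation.html now inlines quickstart, configuration, and performance through Apache SSI `<!--#include virtual="…" -->` directives, while the matching hunks above swap each page's `<h2>` for an `<h2 id="…">` anchor. As an illustrative sketch only (not part of the commit), the server-side inlining that mod_include performs can be previewed with a few lines of Python; the `pages` lookup table stands in for the document root:

```python
import re

# Matches Apache SSI include directives like <!--#include virtual="quickstart.html" -->
INCLUDE_RE = re.compile(r'<!--#include virtual="([^"]+)"\s*-->')

def resolve_includes(text, pages):
    """Inline SSI include directives, looking each target up in `pages`
    (a path -> HTML dict standing in for the site's document root).
    Unresolvable directives are left in place, as mod_include would
    leave an error placeholder."""
    def repl(match):
        body = pages.get(match.group(1))
        return resolve_includes(body, pages) if body is not None else match.group(0)
    return INCLUDE_RE.sub(repl, text)

pages = {"quickstart.html": '<h2 id="quickstart">Quick Start</h2>'}
doc = '<!--#include virtual="quickstart.html" -->'
print(resolve_includes(doc, pages))  # -> <h2 id="quickstart">Quick Start</h2>
```

With the `id` anchors in place, the docs subnav can deep-link into the single consolidated page (e.g. `/07/documentation.html#configuration`) instead of navigating to separate files.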

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/07/performance.html
----------------------------------------------------------------------
diff --git a/07/performance.html b/07/performance.html
index e1c8420..7d65caf 100644
--- a/07/performance.html
+++ b/07/performance.html
@@ -1,6 +1,4 @@
-<!--#include virtual="../includes/_header.htm" -->
-
-<h2>Performance Results</h2>
+<h2 id="performance">Performance Results</h2>
 <p>The following tests give some basic information on Kafka throughput as the number
of topics, consumers and producers and overall data size varies. Since Kafka nodes are independent,
these tests are run with a single producer, consumer, and broker machine. Results can be extrapolated
for a larger cluster.
 </p>
 
@@ -46,13 +44,13 @@ The below graph is an experiment where we used 40 producers and varied the numbe
 
 <p>&nbsp;../run-simulator.sh -kafkaServer=localhost -numTopic=10&nbsp;  -reportFile=report-html/data
-time=15 -numConsumer=20 -numProducer=40  -xaxis=numTopic</p>
 
-<p>It will run a simulator with 40 producer and 20 consumer threads 
+<p>It will run a simulator with 40 producer and 20 consumer threads
           producing/consuming from a local kafkaserver.&nbsp; The simulator is going
to
-          run 15 minutes and the results are going to be saved under 
+          run 15 minutes and the results are going to be saved under
           report-html/data</p>
 
-<p>and they will be plotted from there. Basically it will write MB of 
-          data consumed/produced, number of messages consumed/produced given a 
+<p>and they will be plotted from there. Basically it will write MB of
+          data consumed/produced, number of messages consumed/produced given a
           number of topic and report.html will plot the charts.</p>
 
 
@@ -63,9 +61,9 @@ The below graph is an experiment where we used 40 producers and varied the numbe
 
 
       <p>#!/bin/bash<br />
-      			 
+
          for i in 1 10 20 30 40 50;<br />
-     
+
          do<br />
 
          &nbsp; ../kafka-server.sh server.properties 2>&amp;1 >kafka.out&amp;<br
/>
@@ -73,11 +71,9 @@ The below graph is an experiment where we used 40 producers and varied the numbe
   &nbsp;../run-simulator.sh -kafkaServer=localhost -numTopic=$i&nbsp;  -reportFile=report-html/data
-time=15 -numConsumer=20 -numProducer=40  -xaxis=numTopic<br />
          &nbsp;../stop-server.sh<br />
 	  &nbsp;rm -rf /tmp/kafka-logs<br />
-     
+
          &nbsp;sleep 300<br />
-    	   
+
          done</p>
 
 <p>The charts similar to above graphs can be plotted with report.html automatically.</p>
-
-<!--#include virtual="../includes/_footer.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/07/quickstart.html
----------------------------------------------------------------------
diff --git a/07/quickstart.html b/07/quickstart.html
index 51e61ad..228f7f6 100644
--- a/07/quickstart.html
+++ b/07/quickstart.html
@@ -1,7 +1,5 @@
-<!--#include virtual="../includes/_header.htm" -->
+<h2 id="quickstart">Quick Start</h2>
 
-<h2>Quick Start</h2>
-	
 <h3> Step 1: Download the code </h3>
 
 <a href="../downloads.html" title="Kafka downloads">Download</a> a recent stable
release.
@@ -15,20 +13,20 @@
 
 <h3>Step 2: Start the server</h3>
 
-Kafka brokers and consumers use this for co-ordination. 
+Kafka brokers and consumers use this for co-ordination.
 <p>
 First start the zookeeper server. You can use the convenience script packaged with kafka
to get a quick-and-dirty single-node zookeeper instance.
 
 <pre>
 <b>&gt; bin/zookeeper-server-start.sh config/zookeeper.properties</b>
-[2010-11-21 23:45:02,335] INFO Reading configuration from: config/zookeeper.properties 
+[2010-11-21 23:45:02,335] INFO Reading configuration from: config/zookeeper.properties
 ...
 </pre>
 
 Now start the Kafka server:
 <pre>
 <b>&gt; bin/kafka-server-start.sh config/server.properties</b>
-jkreps-mn-2:kafka-trunk jkreps$ bin/kafka-server-start.sh config/server.properties 
+jkreps-mn-2:kafka-trunk jkreps$ bin/kafka-server-start.sh config/server.properties
 [2010-11-21 23:51:39,608] INFO starting log cleaner every 60000 ms (kafka.log.LogManager)
 [2010-11-21 23:51:39,628] INFO connecting to ZK: localhost:2181 (kafka.server.KafkaZooKeeper)
 ...
@@ -39,7 +37,7 @@ jkreps-mn-2:kafka-trunk jkreps$ bin/kafka-server-start.sh config/server.properti
 Kafka comes with a command line client that will take input from standard in and send it
out as messages to the Kafka cluster. By default each line will be sent as a separate message.
The topic <i>test</i> is created automatically when messages are sent to it. Omitting
logging you should see something like this:
 
 <pre>
-&gt; <b>bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test</b>

+&gt; <b>bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test</b>
 This is a message
 This is another message
 </pre>
@@ -57,7 +55,7 @@ This is another message
 If you have each of the above commands running in a different terminal then you should now
be able to type messages into the producer terminal and see them appear in the consumer terminal.
 </p>
 <p>
-Both of these command line tools have additional options. Running the command with no arguments
will display usage information documenting them in more detail.	
+Both of these command line tools have additional options. Running the command with no arguments
will display usage information documenting them in more detail.
 </p>
 
 <h3>Step 5: Write some code</h3>
@@ -100,7 +98,7 @@ Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(con
 <pre>
 <small>// The message is sent to a randomly selected partition registered in ZK</small>
 ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic",
"test-message");
-producer.send(data);	
+producer.send(data);
 </pre>
 </li>
 <li>Send multiple messages to multiple topics in one request
@@ -113,7 +111,7 @@ ProducerData&lt;String, String&gt; data2 = new ProducerData&lt;String, String&gt
 List&lt;ProducerData&lt;String, String&gt;&gt; dataForMultipleTopics = new
ArrayList&lt;ProducerData&lt;String, String&gt;&gt;();
 dataForMultipleTopics.add(data1);
 dataForMultipleTopics.add(data2);
-producer.send(dataForMultipleTopics);	
+producer.send(dataForMultipleTopics);
 </pre>
 </li>
 <li>Send a message with a partition key. Messages with the same key are sent to the
same partition
@@ -139,7 +137,7 @@ ProducerConfig config = new ProducerConfig(props);
 Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
 </pre>
 </li>
-<li>Use custom Encoder 
+<li>Use custom Encoder
 <p>The producer takes in a required config parameter <code>serializer.class</code>
that specifies an <code>Encoder&lt;T&gt;</code> to convert T to a Kafka
Message. Default is the no-op kafka.serializer.DefaultEncoder.
 Here is an example of a custom Encoder -</p>
 <pre>
@@ -158,23 +156,23 @@ props.put("serializer.class", "xyz.TrackingDataSerializer");
 </pre>
 </li>
 <li>Using static list of brokers, instead of zookeeper based broker discovery
-<p>Some applications would rather not depend on zookeeper. In that case, the config
parameter <code>broker.list</code> 
-can be used to specify the list of all brokers in the Kafka cluster.- the list of all brokers
in your Kafka cluster in the following format - 
+<p>Some applications would rather not depend on zookeeper. In that case, the config
parameter <code>broker.list</code>
+can be used to specify the list of all brokers in the Kafka cluster.- the list of all brokers
in your Kafka cluster in the following format -
 <code>broker_id1:host1:port1, broker_id2:host2:port2...</code></p>
 <pre>
 <small>// you can stop the zookeeper instance as it is no longer required</small>
-./bin/zookeeper-server-stop.sh	
+./bin/zookeeper-server-stop.sh
 <small>// create the producer config object </small>
 Properties props = new Properties();
 props.put(“broker.list”, “0:localhost:9092”);
 props.put("serializer.class", "kafka.serializer.StringEncoder");
 ProducerConfig config = new ProducerConfig(props);
 <small>// send a message using default partitioner </small>
-Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);

+Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
 List&lt;String&gt; messages = new java.util.ArrayList&lt;String&gt;();
 messages.add("test-message");
 ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic",
messages);
-producer.send(data);	
+producer.send(data);
 </pre>
 </li>
 <li>Use the asynchronous producer along with GZIP compression. This buffers writes
in memory until either <code>batch.size</code> or <code>queue.time</code>
is reached. After that, data is sent to the Kafka brokers
@@ -197,7 +195,7 @@ producer.send(data);
 
 <h5>Log4j appender </h5>
 
-Data can also be produced to a Kafka server in the form of a log4j appender. In this way,
minimal code needs to be written in order to send some data across to the Kafka server. 
+Data can also be produced to a Kafka server in the form of a log4j appender. In this way,
minimal code needs to be written in order to send some data across to the Kafka server.
 Here is an example of how to use the Kafka Log4j appender -
 
 Start by defining the Kafka appender in your log4j.properties file.
@@ -221,12 +219,12 @@ log4j.logger.your.test.package=INFO, KAFKA
 Data can be sent using a log4j appender as follows -
 
 <pre>
-Logger logger = Logger.getLogger([your.test.class])    
+Logger logger = Logger.getLogger([your.test.class])
 logger.info("message from log4j appender");
 </pre>
 
-If your log4j appender fails to send messages, please verify that the correct 
-log4j properties file is being used. You can add 
+If your log4j appender fails to send messages, please verify that the correct
+log4j properties file is being used. You can add
 <code>-Dlog4j.debug=true</code> to your VM parameters to verify this.
 
 <h4>Consumer Code</h4>
@@ -245,11 +243,11 @@ ConsumerConfig consumerConfig = new ConsumerConfig(props);
 ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);
 
 // create 4 partitions of the stream for topic “test”, to allow 4 threads to consume
-Map&lt;String, List&lt;KafkaStream&lt;Message&gt;&gt;&gt; topicMessageStreams
= 
+Map&lt;String, List&lt;KafkaStream&lt;Message&gt;&gt;&gt; topicMessageStreams
=
     consumerConnector.createMessageStreams(ImmutableMap.of("test", 4));
 List&lt;KafkaStream&lt;Message&gt;&gt; streams = topicMessageStreams.get("test");
 
-// create list of 4 threads to consume from each of the partitions 
+// create list of 4 threads to consume from each of the partitions
 ExecutorService executor = Executors.newFixedThreadPool(4);
 
 // consume the messages in the threads
@@ -258,7 +256,7 @@ for(final KafkaStream&lt;Message&gt; stream: streams) {
     public void run() {
       for(MessageAndMetadata msgAndMetadata: stream) {
         // process message (msgAndMetadata.message())
-      }	
+      }
     }
   });
 }
@@ -305,5 +303,3 @@ while (true) {
   }
 }
 </pre>
-
-<!--#include virtual="../includes/_footer.htm" -->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/08/documentation.html
----------------------------------------------------------------------
diff --git a/08/documentation.html b/08/documentation.html
index e794dc6..772177e 100644
--- a/08/documentation.html
+++ b/08/documentation.html
@@ -104,4 +104,3 @@
     <!--#include virtual="tools.html" -->
 
 <!--#include virtual="../includes/_footer.htm" -->
-<!--#include virtual="../includes/_docs_footer.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/081/documentation.html
----------------------------------------------------------------------
diff --git a/081/documentation.html b/081/documentation.html
index 5ccb7ea..8e08651 100644
--- a/081/documentation.html
+++ b/081/documentation.html
@@ -120,4 +120,3 @@
     <!--#include virtual="ops.html" -->
 
 <!--#include virtual="../includes/_footer.htm" -->
-<!--#include virtual="../includes/_docs_footer.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/082/documentation.html
----------------------------------------------------------------------
diff --git a/082/documentation.html b/082/documentation.html
index 2dcf738..4f9fe70 100644
--- a/082/documentation.html
+++ b/082/documentation.html
@@ -120,4 +120,3 @@
     <!--#include virtual="ops.html" -->
 
 <!--#include virtual="../includes/_footer.htm" -->
-<!--#include virtual="../includes/_docs_footer.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/090/documentation.html
----------------------------------------------------------------------
diff --git a/090/documentation.html b/090/documentation.html
index 60e42a2..6754281 100644
--- a/090/documentation.html
+++ b/090/documentation.html
@@ -178,4 +178,3 @@
     <!--#include virtual="connect.html" -->
 
 <!--#include virtual="../includes/_footer.htm" -->
-<!--#include virtual="../includes/_docs_footer.htm" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/events.html
----------------------------------------------------------------------
diff --git a/events.html b/events.html
index 56c6760..ffb6309 100644
--- a/events.html
+++ b/events.html
@@ -4,30 +4,57 @@
 	<!--#include virtual="includes/_nav.htm" -->
 	<div class="right">
 		<h1>Events</h1>
-		<h3>Meetups</h3>
 
-		<p>
-		Meetups focused on Kafka and the Kafka ecosystem are currently running in the following
locations:
-		</p>
+		<section style="overflow:hidden;">
+			<h2 style="margin-top:4rem;">Meetups</h2>
+			<p>Meetups focused on Kafka and the Kafka ecosystem are currently running in the
following locations:</p>
 
-		<ul>
-			<li><a href="https://www.meetup.com/http-kafka-apache-org/">Bay Area</a></li>
-			<li><a href="https://www.meetup.com/Apache-Kafka-ATL/">Atlanta, GA</a></li>
-			<li><a href="https://www.meetup.com/Austin-Apache-Kafka-Meetup-Stream-Data-Platform/">Austin,
TX</a></li>
-			<li><a href="https://www.meetup.com/Beijing-Kafka-Meetup/">Beijing, China</a></li>
-			<li><a href="https://www.meetup.com/Chicago-Area-Kafka-Enthusiasts/">Chicago,
IL</a></li>
-			<li><a href="https://www.meetup.com/Front-Range-Apache-Kafka/">Denver, CO</a></li>
-			<li><a href="http://www.meetup.com/Apache-Kafka-London/">London, England</a></li>
-			<li><a href="https://www.meetup.com/apachekafkamadrid/">Madrid, Spain</a></li>
-			<li><a href="http://www.meetup.com/Kafka-Montreal-Meetup/">Montréal, Canada</a></li>
-			<li><a href="https://www.meetup.com/Apache-Kafka-NYC/">New York, NY</a></li>
-			<li><a href="https://www.meetup.com/Paris-Apache-Kafka-Meetup/">Paris, France</a></li>
-			<li><a href="http://www.meetup.com/Apache-Kafka-San-Francisco/">San Francisco,
CA</a></li>
-			<li><a href="https://www.meetup.com/Seattle-Apache-Kafka-Meetup/">Seattle,
WA</a></li>
-			<li><a href="https://www.meetup.com/Logger/">Toronto, Canada</a></li>
-			<li><a href="https://www.meetup.com/Kafka-Meetup-Utrecht/">Utrecht, Holland</a></li>
-			<li><a href="https://www.meetup.com/Apache-Kafka-DC/">Washington, DC</a></li>
-		</ul>
+			<div style="float:left; width: 28rem;">
+				<h5 style="margin-bottom:0;">North America</h5>
+				<ul>
+					<li><a href="https://www.meetup.com/http-kafka-apache-org/" target="_blank">Bay
Area</a></li>
+					<li><a href="https://www.meetup.com/Apache-Kafka-ATL/" target="_blank">Atlanta,
GA</a></li>
+					<li><a href="https://www.meetup.com/Austin-Apache-Kafka-Meetup-Stream-Data-Platform/"
target="_blank">Austin, TX</a></li>
+					<li><a href="https://www.meetup.com/Chicago-Area-Kafka-Enthusiasts/" target="_blank">Chicago,
IL</a></li>
+					<li><a href="https://www.meetup.com/Front-Range-Apache-Kafka/" target="_blank">Denver,
CO</a></li>
+					<li><a href="http://www.meetup.com/Kafka-Montreal-Meetup/" target="_blank">Montréal,
Canada</a></li>
+					<li><a href="https://www.meetup.com/Apache-Kafka-NYC/" target="_blank">New
York, NY</a></li>
+					<li><a href="http://www.meetup.com/Apache-Kafka-San-Francisco/" target="_blank">San
Francisco, CA</a></li>
+					<li><a href="https://www.meetup.com/Seattle-Apache-Kafka-Meetup/" target="_blank">Seattle,
WA</a></li>
+					<li><a href="https://www.meetup.com/Logger/" target="_blank">Toronto, Canada</a></li>
+					<li><a href="https://www.meetup.com/Apache-Kafka-DC/" target="_blank">Washington,
DC</a></li>
+					<li><a href="https://www.meetup.com/Boston-Apache-kafka-Meetup/" target="_blank">Boston,
MA</a></li>
+					<li><a href="https://www.meetup.com/Minneapolis-Apache-Kafka/" target="_blank">Minneapolis,
MN</a></li>
+					<li><a href="https://www.meetup.com/Portland-Apache-Kafka/" target="_blank">Portland,
OR</a></li>
+					<li><a href="https://www.meetup.com/Apache-Kafka-Phoenix-Meetup/" target="_blank">Phoenix,
AZ</a></li>
+				</ul>
+			</div>
+			<div style="float:left; width: 28rem;">
+				<h5 style="margin-bottom:0;">Europe</h5>
+				<ul>
+					<li><a href="http://www.meetup.com/Apache-Kafka-London/" target="_blank">London,
England</a></li>
+					<li><a href="https://www.meetup.com/apachekafkamadrid/" target="_blank">Madrid,
Spain</a></li>
+					<li><a href="https://www.meetup.com/Paris-Apache-Kafka-Meetup/" target="_blank">Paris,
France</a></li>
+					<li><a href="https://www.meetup.com/Kafka-Meetup-Utrecht/" target="_blank">Utrecht,
Holland</a></li>
+				</ul>
+			</div>
+			<div style="float:left; width: 28rem;">
+				<h5 style="margin-bottom:0;">Asia</h5>
+				<ul>
+					<li><a href="https://www.meetup.com/Beijing-Kafka-Meetup/" target="_blank">Beijing,
China</a></li>
+				</ul>
+
+				<h5 style="margin-bottom:0; margin-top:4rem">Australia</h5>
+				<ul>
+					<li><a href="http://www.meetup.com/Apache-Kafka-Sydney-Meetup/" target="_blank">Sydney,
Australia</a></li>
+				</ul>
+			</div>
+		</section>
+		<section style="border-top:.1rem solid #dedede; margin-top:6rem;">
+			<h2 style="margin-top:4rem;">Kafka Summit</h2>
+			<p>Conference for anyone currently using or excited to learn about Kafka and real
time data streaming.</p>
+			<p style="margin-top:2rem;"><a class="btn btn--secondary btn--sm" href="http://kafka-summit.org"
target="_blank">View times &amp; locations</a></p>
+		</section>
 <script>
 // Show selected style on nav item
 $(function() { $('.b-nav__events').addClass('selected'); });
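The reworked events page also standardizes `target="_blank"` on every meetup link. As a hypothetical spot-check of that convention (not part of the commit), Python's standard-library `html.parser` can flag any anchor in a snippet that would not open in a new tab:

```python
from html.parser import HTMLParser

# Small sample in the shape of the meetup lists above (illustrative only).
SNIPPET = """
<ul>
  <li><a href="https://www.meetup.com/Apache-Kafka-NYC/" target="_blank">New York, NY</a></li>
  <li><a href="https://www.meetup.com/Beijing-Kafka-Meetup/" target="_blank">Beijing, China</a></li>
</ul>
"""

class LinkCollector(HTMLParser):
    """Collect (href, target) pairs for every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            self.links.append((d.get("href"), d.get("target")))

def links_missing_blank(html):
    """Return the hrefs of anchors that lack target="_blank"."""
    parser = LinkCollector()
    parser.feed(html)
    return [href for href, target in parser.links if target != "_blank"]

print(links_missing_blank(SNIPPET))  # -> [] when every link opens in a new tab
```

Running such a check against the generated events.html before committing would catch any link that regressed to the old in-tab behavior.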

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/e8fdfce2/styles.css
----------------------------------------------------------------------
diff --git a/styles.css b/styles.css
index 8d10bcb..ac3efeb 100644
--- a/styles.css
+++ b/styles.css
@@ -53,6 +53,9 @@ h3, h4 {
 	font-size: 2rem;
 	font-weight: 700;
 }
+h5 {
+	font-size: 1.6rem;
+}
 h3.bullet {
 	margin-bottom: 1rem;
 }
@@ -97,6 +100,7 @@ img {
 }
 ul {
 	padding-left: 2rem;
+	margin:1rem 0 1rem 0;
 }
 .toc {
 	padding: 0;

