flume-user mailing list archives

From Gwen Shapira <gshap...@cloudera.com>
Subject Re: 1.6 release date?
Date Tue, 06 Jan 2015 01:07:27 GMT
doesn't look like they will overlap. KafkaSinkTest should use:
<currentdir>/target/kafka-logs
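For illustration, a minimal sketch of what that could look like when a test builds its embedded broker's properties; the class name, method, port handling, and ZooKeeper connect string below are assumptions, not the actual KafkaSinkTest code:

    // Hypothetical sketch (not the actual KafkaSinkTest): broker properties for an
    // embedded Kafka 0.8.x instance whose log directory lives under the module's
    // target/ dir, so it never collides with a local broker writing to /tmp/kafka-logs.
    import java.io.File;
    import java.util.Properties;

    public class EmbeddedBrokerProps {
        public static Properties forTest(int port, String zkConnect) {
            Properties props = new Properties();
            props.put("broker.id", "0");
            props.put("port", Integer.toString(port));
            // keep test data inside the build tree, per the point above
            props.put("log.dirs", new File("target/kafka-logs").getAbsolutePath());
            props.put("zookeeper.connect", zkConnect);
            return props;
        }
    }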

On Mon, Jan 5, 2015 at 5:02 PM, Foo Lim <foo.lim@vungle.com> wrote:
> Hi Gwen,
>
> Kafka:
> log.dirs=/tmp/kafka-logs
> (the default)
>
> ZK:
> dataDir=/vagrant/zookeeper-3.4.6/data
> (changed from default)
>
> Thanks
>
>
> On Mon, Jan 5, 2015 at 4:39 PM, Gwen Shapira <gshapira@cloudera.com> wrote:
>>
>> mmmm... normally traces of Kafka or ZK are not an issue. Can you share
>> which directories Kafka uses for logs (i.e. log.dir in the Kafka
>> configuration)? And where does ZK store its data?
>>
>> Thanks,
>> Gwen
>>
>> On Mon, Jan 5, 2015 at 11:43 AM, Foo Lim <foo.lim@vungle.com> wrote:
>> > Thanks Gwen!
>> >
>> > -DskipTests worked! It built
>> > "dist/target/flume-kafka-sink-dist-0.5.0-bin.zip". I didn't know this
>> > was an
>> > option.
>> >
>> > The README said
>> > ## Dependency Versions
>> > - Apache Flume - 1.5.0
>> > - Apache Kafka - 0.8.1.1
>> >
>> > ## Prerequisites
>> > - Java 1.6 or higher
>> > - [Apache Maven 3](http://maven.apache.org)
>> > - An [Apache Flume](https://flume.apache.org) installation (See the
>> > dependent version above)
>> > - An [Apache Kafka](http://kafka.apache.org) installation (See the
>> > dependent version above)
>> >
>> > so I thought I had to have both running.
>> >
>> > Anyway, I stopped both kafka & zookeeper, but I'm still getting the same
>> > error
>> >
>> > kafka.common.NotLeaderForPartitionException
>> >
>> > Does my machine have to NOT have any traces of kafka or ZK?
>> >
>> >
>> > Now, on to testing if this sink works..
>> >
>> > Thanks again!
>> >
>> > Foo
>> >
>> >
>> > On Sun, Jan 4, 2015 at 5:19 PM, Gwen Shapira <gshapira@cloudera.com>
>> > wrote:
>> >>
>> >> I think the only thing that's failing in the build are the unit tests.
>> >>
>> >> 1. To build without unit tests: "mvn clean install -DskipTests"
>> >> 2. I suspect you are building Flume on a machine that has either Kafka
>> >> or Zookeeper installed, and we don't randomize ports properly in the
>> >> tests, which creates a mess. You can build on a dev machine that is not
>> >> running Kafka and Zookeeper. I'll create a patch making sure we don't
>> >> accidentally pick used ports for tests (see the sketch below).
>> >>
>> >> Gwen
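A common way to avoid the port collisions Gwen describes is to ask the OS for a free ephemeral port before starting the test broker or ZooKeeper. A minimal sketch of that idea (not the actual Flume patch) could look like this:

    // Hedged sketch, not the actual Flume patch: grab a free ephemeral port from
    // the OS so an embedded test Kafka/ZooKeeper never lands on a port already
    // used by a locally installed instance.
    import java.io.IOException;
    import java.net.ServerSocket;

    public class FreePort {
        public static int pick() throws IOException {
            ServerSocket socket = new ServerSocket(0); // 0 = let the OS choose
            try {
                socket.setReuseAddress(true);
                return socket.getLocalPort();
            } finally {
                socket.close();
            }
        }
    }

There is a small race between closing the socket and the broker binding to the returned port, but that is usually good enough for unit tests.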
>> >>
>> >> On Mon, Dec 29, 2014 at 7:31 PM, Frank Yao <baniu.yao@gmail.com> wrote:
>> >> > you can get the source code from the 1.6 repo and modify pom.xml to
>> >> > adjust the code to be compatible with 1.5. I have done this once and it
>> >> > works well.
>> >> > Frank Yao
>> >> > @Vipshop, Shanghai
>> >> > from iPhone
>> >> >
>> >> > On Dec 30, 2014, at 10:47, Foo Lim <foo.lim@vungle.com> wrote:
>> >> >
>> >> > Hi again Frank,
>> >> >
>> >> > You are using flume 1.6. I'm trying to get the sink running on 1.5.2
>> >> > for a production machine. Thanks..
>> >> >
>> >> > On Mon, Dec 29, 2014 at 3:53 PM, Frank Yao <baniu.yao@gmail.com> wrote:
>> >> >
>> >> > hi foo
>> >> >
>> >> > it seems your stack trace shows the exception was caused by Kafka itself:
>> >> >
>> >> > Failed to add leader for partitions
>> >> >
>> >> > I have used the kafka sink and source of flume 1.6 for several weeks and
>> >> > it works well.
>> >> >
>> >> > Could you please try to use the kafka console producer first to test if
>> >> > the partition is okay or not?
>> >> >
>> >> > Frank Yao
>> >> > @Vipshop, Shanghai
>> >> > from iPhone
>> >> >
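Frank's suggestion is the console-producer CLI; an equivalent sanity check written against the Kafka 0.8.x Java producer API might look like the sketch below. The broker address, port, and topic name are assumptions loosely based on the log output quoted later in the thread, not values from Frank's message:

    // Hedged sketch: a minimal producer against the Kafka 0.8.x Java API to check
    // that the partition leader accepts writes. Broker address and topic name
    // are assumptions; substitute your own.
    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class PartitionCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("request.required.acks", "1");
            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
            producer.send(new KeyedMessage<String, String>("custom-topic", "ping"));
            producer.close();
        }
    }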
>> >> >
>> >> > On Dec 30, 2014, at 04:21, Foo Lim <foo.lim@vungle.com> wrote:
>> >> >
>> >> > BTW, I followed the directions & ran
>> >> >
>> >> > ~/flume-ng-kafka-sink$ mvn clean install
>> >> >
>> >> > On Mon, Dec 29, 2014 at 12:10 PM, Foo Lim <foo.lim@vungle.com> wrote:
>> >> >
>> >> > Hi Gwen,
>> >> >
>> >> > Thanks for the reply.
>> >> >
>> >> > I'll try the CDH jar file. Where do I put it in the flume directory
>> >> > structure?
>> >> >
>> >> > I have kafka_2.9.2-0.8.1.1 running. Here's the test error (which keeps
>> >> > repeating) in the project
>> >> > git@github.com:thilinamb/flume-ng-kafka-sink.git
>> >> >
>> >> > [2014-12-29 20:02:34,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
>> >> > [2014-12-29 20:02:34,029] INFO Property client.id is overridden to group_1 (kafka.utils.VerifiableProperties)
>> >> > [2014-12-29 20:02:34,030] INFO Property metadata.broker.list is overridden to vagrant-ubuntu-precise-64:50753 (kafka.utils.VerifiableProperties)
>> >> > [2014-12-29 20:02:34,030] INFO Property request.timeout.ms is overridden to 30000 (kafka.utils.VerifiableProperties)
>> >> > [2014-12-29 20:02:34,031] INFO Fetching metadata from broker id:0,host:vagrant-ubuntu-precise-64,port:50753 with correlation id 18 for 1 topic(s) Set(custom-topic) (kafka.client.ClientUtils$)
>> >> > [2014-12-29 20:02:34,032] INFO Connected to vagrant-ubuntu-precise-64:50753 for producing (kafka.producer.SyncProducer)
>> >> > [2014-12-29 20:02:34,035] INFO Disconnecting from vagrant-ubuntu-precise-64:50753 (kafka.producer.SyncProducer)
>> >> > [2014-12-29 20:02:34,036] INFO Closing socket connection to /10.0.2.15. (kafka.network.Processor)
>> >> > [2014-12-29 20:02:34,038] WARN [KafkaApi-0] Offset request with correlation id 0 from client group_1-ConsumerFetcherThread-group_1_vagrant-ubuntu-precise-64-1419883349017-325c06d5-0-0 on partition [custom-topic,1] failed due to Leader not local for partition [custom-topic,1] on broker 0 (kafka.server.KafkaApis)
>> >> > [2014-12-29 20:02:34,040] WARN [group_1_vagrant-ubuntu-precise-64-1419883349017-325c06d5-leader-finder-thread], Failed to add leader for partitions [custom-topic,1],[custom-topic,0]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
>> >> > kafka.common.NotLeaderForPartitionException
>> >> > at sun.reflect.GeneratedConstructorAccessor1.newInstance(Unknown Source)
>> >> > at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> >> > at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>> >> > at java.lang.Class.newInstance(Class.java:379)
>> >> > at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:73)
>> >> > at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:160)
>> >> > at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
>> >> > at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:179)
>> >> > at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:174)
>> >> > at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>> >> > at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
>> >> > at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>> >> > at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:174)
>> >> > at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:86)
>> >> > at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:76)
>> >> > at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>> >> > at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
>> >> > at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>> >> > at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:76)
>> >> > at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
>> >> > at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
>> >> > [2014-12-29 20:02:34,045] INFO [ConsumerFetcherThread-group_1_vagrant-ubuntu-precise-64-1419883349017-325c06d5-0-0], Shutting down (kafka.consumer.ConsumerFetcherThread)
>> >> > [2014-12-29 20:02:34,039] INFO [ConsumerFetcherThread-group_1_vagrant-ubuntu-precise-64-1419883349017-325c06d5-0-0], Starting (kafka.consumer.ConsumerFetcherThread)
>> >> > [2014-12-29 20:02:34,046] INFO [ConsumerFetcherThread-group_1_vagrant-ubuntu-precise-64-1419883349017-325c06d5-0-0], Shutdown completed (kafka.consumer.ConsumerFetcherThread)
>> >> > [2014-12-29 20:02:34,047] INFO Closing socket connection to /10.0.2.15. (kafka.network.Processor)
>> >> > [2014-12-29 20:02:34,048] INFO [ConsumerFetcherThread-group_1_vagrant-ubuntu-precise-64-1419883349017-325c06d5-0-0], Stopped (kafka.consumer.ConsumerFetcherThread)
>> >> >
>> >> > On Fri, Dec 26, 2014 at 4:06 PM, Gwen Shapira <gshapira@cloudera.com> wrote:
>> >> >
>> >> > I can't say when the 1.6 release will be, but I have other solutions :)
>> >> >
>> >> > 1. The packages that are part of the CDH5.3 release will contain that
>> >> > jar. Perhaps use this distro? Or even just get the RPM, unpackage it and
>> >> > dig the jar out?
>> >> > 2. Let us know what the compilation error is; perhaps we can help there?
>> >> >
>> >> > On Fri, Dec 26, 2014 at 3:30 PM, Foo Lim <foo.lim@vungle.com> wrote:
>> >> >
>> >> > Hi all,
>> >> >
>> >> > Happy holidays! Just wondering if there's any ETA on a 1.6 release.
>> >> > Looking forward to the kafka sink plugin that I can't get to compile
>> >> > independently. :-/
>> >> >
>> >> > Thanks!
>> >
>> >
>
>
