flume-user mailing list archives

From "Edward sanks " <edsa...@hotmail.com>
Subject Re: is collectorSink(dest, prefix, millis, format) broken or am istupid?
Date Fri, 16 Sep 2011 23:38:23 GMT
Steve,

If you noticed my mail last week about flume-0.9.4 hitting the roof with just 3 syslogTcp streams
on an AWS large machine, you may want to explore moving to the latest code as well. Having said
that, I have yet to prove that point.

Ed.
-----Original Message-----
From: Stephen Layland <stephen.layland@gmail.com>
Date: Fri, 16 Sep 2011 23:16:49 
To: <flume-user@incubator.apache.org>
Subject: is collectorSink(dest, prefix, millis, format) broken or am i stupid?

Hi!


Forgive the n00b question, but I'm trying to benchmark Flume while building out a Hadoop-based
central log store, and I'm coming across some weirdness. The flume-conf.xml has the default
flume.collector.output.format set to 'avrojson'. I had two simple configs:


test1: syslogTcp(5140) | collectorSink("hdfs://...", "test", 30000, "avrodata")
test2: syslogTcp(5140) | collectorSink("hdfs://...", "test", 30000, "raw") 
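
For reference, the global default mentioned above lives in flume-conf.xml; a minimal sketch of
that entry, assuming the usual Hadoop-style <property> syntax that file uses (the value shown is
just the default the post describes, which the fourth collectorSink argument is meant to override
per sink):

<!-- flume-conf.xml: site-wide default output format for collector sinks;
     the format argument in collectorSink(...) above is expected to override this -->
<property>
  <name>flume.collector.output.format</name>
  <value>avrojson</value>
</property>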


I then mapped a test flume node to each of these logical nodes in turn (exec map node1 test1;
exec refreshAll) and tested it out, but the resulting DFS files all appear to be the same size
and all appear to be avrojson?
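
One quick way to eyeball which format actually landed is to cat the head of one of the written
files: avrojson records look like JSON objects, while raw output should look like the original
syslog lines. A rough sketch, with a placeholder path and file name since the real hdfs://
destination is elided above:

hadoop fs -ls hdfs://namenode/flume/                   # placeholder path, use the real sink dir
hadoop fs -cat hdfs://namenode/flume/SOME_TEST_FILE | head -5
# JSON-looking lines suggest avrojson; plain syslog text suggests raw; binary suggests avrodata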


Am I doing something wrong here?


Using flume version: 0.9.4-cdh3u1.


Thanks,


-Steve
