flume-user mailing list archives

From Jagadish Bihani <jagadish.bih...@pubmatic.com>
Subject HDFS file rolling behaviour
Date Thu, 13 Sep 2012 08:56:56 GMT

I use two Flume agents:
1. flume_agent 1, the source side (exec source - file channel - avro sink)
2. flume_agent 2, the destination side (avro source - file channel - HDFS sink)

I have observed that when the HDFS sink rolls by *file size/number of
events*, it creates a lot of simultaneous connections to the source's
avro sink. But when rolling by *time interval* it proceeds *one by one*,
i.e. it opens one HDFS file, writes to it, and then closes it. I would
expect the other rolling triggers to behave the same way: open a file,
roll it once x events have been written to it, open another, and so on.
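For reference, this is roughly how the three rolling triggers are configured on the HDFS sink (the agent and sink names here are hypothetical; only the hdfs.* properties are standard Flume settings). Note that all three triggers are active by default, so whichever condition fires first causes a roll; any trigger you are not rolling on should be set to 0 to disable it:

```
# Hypothetical agent/sink names for illustration
agent2.sinks.hdfsSink.type = hdfs
agent2.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events

# Time-based rolling: roll every 300 seconds (0 disables)
agent2.sinks.hdfsSink.hdfs.rollInterval = 300

# Size- and count-based rolling: set to 0 to disable when rolling
# purely by time, otherwise the first condition to fire wins
agent2.sinks.hdfsSink.hdfs.rollSize = 0
agent2.sinks.hdfsSink.hdfs.rollCount = 0
```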

In my case data ingestion works fine with time-based rolling, but with
the other triggers, due to the behaviour above, I get exceptions like:
-- too many open files
-- timeout-related exceptions for the file channel, and a few more.

I can increase the values of the parameters involved in these
exceptions, but I don't know what adverse effects that may have.
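If it helps, the two HDFS sink parameters most directly tied to those exceptions are, I believe, hdfs.maxOpenFiles and hdfs.callTimeout (same hypothetical agent/sink names as above; the values shown are just examples, not recommendations):

```
# Cap on the number of HDFS files the sink keeps open at once;
# the oldest file is closed when the cap is exceeded
agent2.sinks.hdfsSink.hdfs.maxOpenFiles = 500

# Timeout in milliseconds for HDFS open/write/flush/close calls
agent2.sinks.hdfsSink.hdfs.callTimeout = 30000
```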

Can somebody throw some light on rolling based on file size/number of
events?

