I don't think it's possible to write one HDFS file per event with the default sink.
But it shouldn't be too hard to extend it to do what you want.
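That said, routing by timestamp (as in your question) is something the stock HDFS sink can already do: the `hdfs.path` property supports escape sequences like `%Y/%m/%d`, which are resolved from the event's `timestamp` header. Since your timestamp lives inside the Avro body, you'd need an interceptor to copy it into that header. A minimal sketch of the agent config (agent, host, topic and path names are placeholders, and Kafka source settings here assume Flume 1.6):

```properties
agent.sources = kafka-src
agent.channels = mem-ch
agent.sinks = hdfs-sink

# Kafka source (Flume 1.6-style, ZooKeeper-based consumer)
agent.sources.kafka-src.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.kafka-src.zookeeperConnect = zk1:2181
agent.sources.kafka-src.topic = my-topic
agent.sources.kafka-src.channels = mem-ch

agent.channels.mem-ch.type = memory

# HDFS sink: %Y/%m/%d are filled in from the event's "timestamp" header,
# so a (custom) interceptor must first copy your Avro timestamp field
# into that header.
agent.sinks.hdfs-sink.type = hdfs
agent.sinks.hdfs-sink.channel = mem-ch
agent.sinks.hdfs-sink.hdfs.path = /flume/events/%Y/%m/%d
agent.sinks.hdfs-sink.hdfs.fileType = DataStream
```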

Kafka works best with small messages, not big files.
It might be a better option to send the files directly to the HDFS HTTP interface (WebHDFS) or to set up an NFS gateway.
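For the WebHDFS route, a file upload is a two-step REST call (a sketch only — the NameNode host/port, path, and file name below are placeholders, and you may also need a `user.name=...` query parameter depending on your security setup):

```shell
# Step 1: ask the NameNode where to write; it replies with a
# 307 redirect whose Location header points at a DataNode.
curl -i -X PUT "http://namenode:50070/webhdfs/v1/data/myfile.avro?op=CREATE"

# Step 2: send the file body to the Location URL from step 1.
curl -i -X PUT -T myfile.avro "<Location-URL-from-step-1>"
```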


On 8 October 2015 at 22:59, Yosi Botzer <yosi.botzer@gmail.com> wrote:

I have a Kafka service with different topics. The events sent to Kafka are in Avro format.

Is there a way with Flume to read events from Kafka and write them to a specific folder in HDFS according to a timestamp field that exists in the events?

According to this documentation: https://flume.apache.org/FlumeUserGuide.html#hdfs-sink

it is only possible to create text or sequence files.