I'm trying to use a SpoolDir source with a FileChannel and an HDFS sink, and I get an exception when the file is quite large.

I'm using this configuration:
tier1.sources  = source1
tier1.channels = channel1
tier1.sinks    = k1

tier1.sources.source1.type = spooldir
tier1.sources.source1.channels = channel1
tier1.sources.source1.spoolDir = /tmp/flume1
tier1.sources.source1.fileHeader = true
tier1.sources.source1.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
tier1.sources.source1.deserializer.maxBlobLength = 1000000000

tier1.sinks.k1.type = hdfs
tier1.sinks.k1.channel = channel1
tier1.sinks.k1.hdfs.path = /tmp/

tier1.channels.channel1.type = file
tier1.channels.channel1.capacity = 10000000
tier1.channels.channel1.dataDirs = /tmp
tier1.channels.channel1.maxFileSize = 1000000000
tier1.channels.channel1.minimumRequiredSpace =  500000000


I have tried different batch sizes; a 30 MB file works, for example, but a 150 MB file fails with this exception:
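To make the failure easy to reproduce, here is a minimal sketch of how I generate the test files, assuming the spool directory from the config above (`/tmp/flume1`) and arbitrary random content; the file names are just examples:

```shell
# Create test files in the spool directory used by the config above.
# The 30 MB file is delivered fine; the 150 MB file triggers the exception.
mkdir -p /tmp/flume1
dd if=/dev/urandom of=/tmp/flume1/small.bin bs=1M count=30
dd if=/dev/urandom of=/tmp/flume1/large.bin bs=1M count=150
```

Both sizes are well under the configured `deserializer.maxBlobLength` of 1000000000 bytes, so each file should be read as a single event.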


2017-02-09 14:24:55,693 ERROR org.apache.flume.SinkRunner: Unable to deliver event. Exception follows.
java.lang.IllegalStateException: Log is closed
at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at org.apache.flume.channel.file.Log.getFlumeEventQueue(Log.java:585)
at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.<init>(FileChannel.java:436)
at org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:356)
at org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:122)
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)