How big are your events? A capacity of 10000 doesn't seem like it should run into any issues, but since it is all held in memory, it's possible your channel is eating up all of your heap.
Note: channel capacity is the number of events, not their physical size in bytes.
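
For reference, a minimal sketch of a memory channel config (the agent and channel names, agent1 and c1, are placeholders for whatever you use):

    agent1.channels = c1
    agent1.channels.c1.type = memory
    # maximum number of events the channel holds at once
    agent1.channels.c1.capacity = 10000
    # maximum number of events per transaction; must be <= capacity
    agent1.channels.c1.transactionCapacity = 500

If your events are large, capacity times average event size has to fit in the heap alongside everything else the agent is doing.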

You can verify what is going on by setting up Ganglia, or by using something like JConsole to pull counter data via JMX: the metric you want is channelFillPercentage.
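
To expose those metrics over JMX, a sketch of the usual JVM flags in flume-env.sh (the port is arbitrary, and you would want authentication enabled anywhere outside a test box):

    # flume-env.sh
    export JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=5445 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false"

Attach JConsole to that port; the channel counters should show up under an MBean named along the lines of org.apache.flume.channel:type=<channel name>.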

On 01/17/2013 07:15 AM, Mohit Anchlia wrote:
The channel transaction capacity is 500 and I've not set any batch size parameter.
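
If I remember right, the HDFS sink's hdfs.batchSize defaults to 100 when unset. A sketch of setting it explicitly (the sink name k1 is a placeholder), keeping it at or below the channel's transaction capacity:

    agent1.sinks.k1.type = hdfs
    # number of events written to HDFS per flush; keep <= transactionCapacity
    agent1.sinks.k1.hdfs.batchSize = 100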

On Wed, Jan 16, 2013 at 1:49 PM, Bhaskar V. Karambelkar <bhaskarvk@gmail.com> wrote:
What is the channel transaction capacity and the HDFS batch size?


On Wed, Jan 16, 2013 at 1:52 PM, Mohit Anchlia <mohitanchlia@gmail.com> wrote:
I often get out-of-memory errors even when there is no load on the system. I am wondering what's the best way to debug this. I have the heap size set to 2G and the memory channel capacity is 10000.
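
One standard way to debug an OOM like this is to capture a heap dump at the moment it happens and inspect it afterwards; a minimal flume-env.sh sketch using stock HotSpot flags (the dump path is just an example):

    # flume-env.sh
    export JAVA_OPTS="$JAVA_OPTS -Xmx2g \
      -XX:+HeapDumpOnOutOfMemoryError \
      -XX:HeapDumpPath=/tmp/flume-oom.hprof"

The resulting .hprof file can be opened with jhat or Eclipse MAT to see what is actually filling the heap.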
 
13/01/16 09:09:38 ERROR hdfs.HDFSEventSink: process failed
java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:2786)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.io.Text.write(Text.java:282)
... 11 lines omitted ...
	at java.lang.Thread.run(Thread.java:662)

Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:2786)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)