David,

This is insightful - I found the need to set an idleTimeout value in the Flume config as well, but we were not running out of memory; we just found that lots of unclosed .tmp files were left lying around after a roll occurred. I believe these are also registering as under-replicated blocks - in my pseudo-distributed testbed I have 5 under-replicated blocks, even though the replication factor in pseudo mode is "1" - so we'd rather not see them in the actual cluster.
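(For reference, something like the following should list which files those under-replicated blocks belong to - the path here is just an example of wherever the sink writes:

  hdfs fsck /flume/events -files -blocks | grep -i "under replicated"

which should make it easy to confirm whether they are all the leftover .tmp files.)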

Can you tell me - in your research, have you found a good way to close out the .tmp files so they are properly acknowledged by HDFS/BucketWriter, or is simply renaming them sufficient? I've been concerned that the manual rename approach might leave stale metadata floating around, which is not ideal.
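For context, the manual approach I have in mind is just a rename in place, something like this (file name made up, assuming the default FlumeData prefix and .tmp in-use suffix):

  hdfs dfs -mv /flume/events/FlumeData.1384182932441.tmp /flume/events/FlumeData.1384182932441

That makes the file visible under its final name, but I'm not sure it does anything to finalize an unclosed last block on the NameNode side - hence the question.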

If you're not sure, don't sweat it, obviously. I was just wondering if you already knew and could save me some empirical research time...

Thanks!
Devin Suiter
Jr. Data Solutions Software Engineer
100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
Google Voice: 412-256-8556 | www.rdx.com


On Mon, Nov 11, 2013 at 10:01 AM, David Sinclair <dsinclair@chariotsolutions.com> wrote:
Hi all,

I have been investigating an OutOfMemory error when using the HDFS event sink. I have determined the problem to be with the 

WriterLinkedHashMap sfWriters;

Depending on how you generate your file name/directory path, you can run out of memory pretty quickly. You need to either set idleTimeout to some non-zero value or limit the number of open files with maxOpenFiles.
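For example, something like the following (agent/sink names and values are only placeholders, and only the relevant sink properties are shown):

  agent.sinks.hdfs-sink.type = hdfs
  agent.sinks.hdfs-sink.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
  agent.sinks.hdfs-sink.hdfs.idleTimeout = 60
  agent.sinks.hdfs-sink.hdfs.maxOpenFiles = 500

With an escaped path like that, every new day (or host, etc.) produces a new entry in sfWriters, which is where the growth comes from if neither setting is in place.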

The map keeps references to BucketWriter objects around longer than they are needed. I was able to reproduce this consistently and took a heap dump to verify that the objects were being kept around.

I will update this Jira to reflect my findings.


dave