flume-user mailing list archives

From Eric Sammer <esam...@cloudera.com>
Subject Re: Configure directory and file permissions on native file system
Date Wed, 03 Aug 2011 17:31:23 GMT

I agree this is crazy and needs to be fixed. The only question in my
mind is how. Java doesn't allow one to atomically create a file with
specific permissions, so there's always a window where one must create
a file and then change its permissions (and even that goes through a
strange API, due to Java's cross-platform goals). Hadoop skirts this
issue when security is enabled by using native code and JNI. We could
do the same at the cost of some (some may argue a lot of) complexity.
I'm 99.9% sure dfs.umask only applies to HDFS (where the API can be
specialized and controlled).
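As a rough sketch of the race window described above, the create-then-chmod pattern looks something like the following (the class name and the chosen permission string are illustrative, not from Flume's code; this assumes a POSIX file system):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PermWindowDemo {
    public static void main(String[] args) throws IOException {
        // Step 1: create the file. Its initial mode is governed by the
        // process umask, which may be wider than we want.
        Path log = Files.createTempFile("flume-demo", ".log");

        // ...any process that looks at (or opens) the file between
        // step 1 and step 2 sees the wider, pre-chmod permissions...

        // Step 2: tighten the permissions after the fact.
        Set<PosixFilePermission> perms =
            PosixFilePermissions.fromString("rw-r-----");
        Files.setPosixFilePermissions(log, perms);

        System.out.println(
            PosixFilePermissions.toString(Files.getPosixFilePermissions(log)));
        Files.delete(log);
    }
}
```

Closing the gap entirely requires passing the desired mode into the `open(2)` call itself, which is exactly what Hadoop's JNI path does when security is enabled.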

On Tue, Aug 2, 2011 at 5:47 PM, Dan Everton <dan@iocaine.org> wrote:
> Howdy,
> Is there a way to configure what permissions Flume creates files and
> directories with? In our configuration we're not currently writing to
> HDFS but to the local file system of the collector.
> In this case it looks like Flume delegates to Hadoop's local file system
> support which has a crazy default of 777 for files. This value doesn't
> seem to be configurable nor does it seem to be affected by the
> "dfs.umask" property. This leaves the log files world readable and
> executable which is dangerous. The intermediate directories are at least
> not created world writeable with 755 permissions, but this still isn't
> ideal.
> Cheers,
> Dan

Eric Sammer
twitter: esammer
data: www.cloudera.com
