Just change the writeFormat of the hdfs sink.
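A minimal sketch of the relevant sink properties, assuming the agent is named "agent1" and the sink "hdfs-sink" (both names are placeholders, not taken from the original message):

```properties
# By default the Flume HDFS sink writes Hadoop SequenceFiles
# (LongWritable key, BytesWritable value), which is where the
# "SEQ!org.apache.hadoop.io.LongWritable..." header comes from.
# Switching to a plain text stream avoids that framing:
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink.hdfs.writeFormat = Text
```

With these settings the event bodies are written to HDFS as plain text lines instead of serialized Writable records.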

On Tue, Feb 19, 2013 at 11:09 AM, 周梦想 <ablozhou@gmail.com> wrote:
hello,
I put some data into HDFS via Flume 1.3.1, but the content changed!

source data:
[zhouhh@Hadoop47 ~]$  echo "<13>Mon Feb 18 18:25:26 2013 hello world zhh " | nc -v hadoop48 5140
Connection to hadoop48 5140 port [tcp/*] succeeded!

the flume agent received:
13/02/19 10:43:46 INFO hdfs.BucketWriter: Creating hdfs://Hadoop48:54310/flume//FlumeData.1361241606972.tmp
13/02/19 10:44:16 INFO hdfs.BucketWriter: Renaming hdfs://Hadoop48:54310/flume/FlumeData.1361241606972.tmp to hdfs://Hadoop48:54310/flume/FlumeData.1361241606972

the content in hdfs:

[zhouhh@Hadoop47 ~]$ hadoop fs -cat  hdfs://Hadoop48:54310/flume/FlumeData.1361241606972
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable▒.FI▒Z▒Q{2▒,\<▒U▒Y)Mon Feb 18 18:25:26 2013 hello world zhh 
[zhouhh@Hadoop47 ~]$

I don't know why the output contains extra data like "org.apache.hadoop.io.LongWritable". Is this a bug?

Best Regards,
Andy