flume-user mailing list archives

From prabhu k <prabhu.fl...@gmail.com>
Subject Re: tail source exec unable to HDFS sink.
Date Tue, 18 Sep 2012 09:21:08 GMT
Hi Nitin,

While executing flume-ng, I updated the flume_test.txt file, but it still
does not sink to HDFS.
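
For reference, a simple way to check whether anything has landed under the
configured hdfs.path (using the same <hostname> placeholder as in the config
further down) would be:

  hadoop fs -ls hdfs://<hostname>:54310/user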

Thanks,
Prabhu.

On Tue, Sep 18, 2012 at 2:35 PM, Nitin Pawar <nitinpawar432@gmail.com> wrote:

> Hi Prabhu,
>
> Are you sure there is continuous text being written to your file
> flume_test.txt?
>
> If nothing is written to that file, Flume will not write anything into
> HDFS.
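>
> For example, a simple test loop like the sketch below (hypothetical; adjust
> the path to point at your flume_test.txt) keeps appending lines so that
> tail -F always has new data to hand to Flume:
>
>   while true; do echo "test event $(date)" >> flume_test.txt; sleep 1; done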
>
> On Tue, Sep 18, 2012 at 2:31 PM, prabhu k <prabhu.flume@gmail.com> wrote:
> > Hi Brock,
> >
> > Thanks for the reply.
> >
> > As per your suggestion, I have modified it, but I still see the same issue.
> >
> > My Hadoop version is 1.0.3 and my Flume version is 1.2.0. Please let us
> > know whether there is any version incompatibility.
> >
> > On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <brock@cloudera.com> wrote:
> >>
> >> Hi,
> >>
> >> I believe this line:
> >> agent1.sinks.HDFS.hdfs.type = hdfs
> >>
> >> should be:
> >> agent1.sinks.HDFS.type = hdfs
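> >>
> >> The channel property also belongs at the sink level rather than under the
> >> hdfs. prefix, so a sketch of how I would expect the sink section to look
> >> (path kept as in your file; fileType/writeFormat values capitalized as in
> >> the Flume user guide, in case that matters):
> >>
> >>   agent1.sinks.HDFS.type = hdfs
> >>   agent1.sinks.HDFS.channel = MemoryChannel-2
> >>   agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
> >>   agent1.sinks.HDFS.hdfs.fileType = DataStream
> >>   agent1.sinks.HDFS.hdfs.writeFormat = Text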
> >>
> >> Brock
> >>
> >> On Mon, Sep 17, 2012 at 5:17 AM, prabhu k <prabhu.flume@gmail.com> wrote:
> >> > Hi Users,
> >> >
> >> > I have followed the link below for a sample setup that sinks a text file to
> >> > HDFS using the tail source:
> >> >
> >> > http://cloudfront.blogspot.in/2012/06/how-to-use-host-escape-sequence-in.html#more
> >> >
> >> > I have executed flume-ng with the below command, and it seems to get stuck.
> >> > The flume.conf file is attached.
> >> >
> >> > #bin/flume-ng agent -n agent1 -c /conf -f conf/flume.conf
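> >> >
> >> > (Aside: running the agent with console logging makes it easier to watch;
> >> > for example, assuming the configuration directory is conf/:
> >> >
> >> > bin/flume-ng agent -n agent1 -c conf -f conf/flume.conf -Dflume.root.logger=DEBUG,console
> >> >
> >> > The log pasted below is from flume.log.)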
> >> >
> >> >
> >> > flume.conf
> >> > ==========
> >> > agent1.sources = tail
> >> > agent1.channels = MemoryChannel-2
> >> > agent1.sinks = HDFS
> >> >
> >> > agent1.sources.tail.type = exec
> >> > agent1.sources.tail.command = tail -F /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
> >> > agent1.sources.tail.channels = MemoryChannel-2
> >> >
> >> > agent1.sources.tail.interceptors = hostint
> >> > agent1.sources.tail.interceptors.hostint.type =
> >> > org.apache.flume.interceptor.HostInterceptor$Builder
> >> > agent1.sources.tail.interceptors.hostint.preserveExisting = true
> >> > agent1.sources.tail.interceptors.hostint.useIP = false
> >> >
> >> > agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
> >> > agent1.sinks.HDFS.hdfs.type = hdfs
> >> > agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
> >> >
> >> > agent1.sinks.HDFS.hdfs.fileType = dataStream
> >> > agent1.sinks.HDFS.hdfs.writeFormat = text
> >> > agent1.channels.MemoryChannel-2.type = memory
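> >> >
> >> > As a sanity check (just a sketch, not part of the file above), the HDFS
> >> > sink could also be swapped for a logger sink to confirm that events are
> >> > flowing from the tail source at all:
> >> >
> >> >   agent1.sinks = LOG
> >> >   agent1.sinks.LOG.type = logger
> >> >   agent1.sinks.LOG.channel = MemoryChannel-2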
> >> >
> >> >
> >> >
> >> > flume.log
> >> > ==========
> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
> >> > 12/09/17 15:40:05 INFO node.FlumeNode: Flume node starting - agent1
> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
> >> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
> >> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:conf/flume.conf
> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Added sinks: HDFS Agent: agent1
> >> > 12/09/17 15:40:05 WARN conf.FlumeConfiguration: Configuration empty for: HDFS.Removed.
> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [agent1]
> >> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider: Creating channels
> >> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider: created channel MemoryChannel-2
> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource@c24c0 }} sinkRunners:{} channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@140c281} }
> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel MemoryChannel-2
> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting Source tail
> >> > 12/09/17 15:40:05 INFO source.ExecSource: Exec source starting with command:tail -F /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
> >> >
> >> > Please suggest and help me on this issue.
> >>
> >>
> >>
> >> --
> >> Apache MRUnit - Unit testing MapReduce -
> >> http://incubator.apache.org/mrunit/
> >
> >
>
>
>
> --
> Nitin Pawar
>
