flume-user mailing list archives

From mardan Khan <mardan8...@gmail.com>
Subject Re: Broken Pipe error
Date Wed, 25 Jul 2012 18:49:23 GMT
Tariq

Let me tell you that I am using Hadoop 0.20.0, which I downloaded from the
Apache website. I did not configure the Hadoop that is automatically
downloaded with CDH4.
I think my Hadoop is accessible.

The configuration files are as follows:


*core-site.xml*

<configuration>
<property>
    <name>fs.default.name</name>
    <value>hdfs://134.83.35.24:9000</value>
  </property>
</configuration>

*hdfs-site.xml*


<configuration>

 <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

</configuration>


*mapred-site.xml*

<configuration>

<property>
    <name>mapred.job.tracker</name>
    <value>134.83.35.24:9001</value>
  </property>
</configuration>
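Since fs.default.name points the client at 134.83.35.24:9000, a first sanity check is whether that port even accepts TCP connections from the machine running Flume. A minimal sketch of such a check (the port_open helper is hypothetical, not part of Hadoop or Flume):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# NameNode RPC endpoint from core-site.xml and the Flume hdfs.path below
print(port_open("134.83.35.24", 9000))
```

If this prints False from the Flume host, the "Broken pipe" is a plain connectivity problem (firewall, NameNode not listening on that interface) rather than a Flume configuration issue.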




*Flume configuration*

agent.sources = avro-AppSrv-source
agent.sinks = hdfs-Cluster1-sink
agent.channels = mem-channel-1
# set channel for sources, sinks
# properties of avro-AppSrv-source
agent.sources.avro-AppSrv-source.type = SEQ

agent.sources.avro-AppSrv-source.bind = localhost
agent.sources.avro-AppSrv-source.port = 10000

agent.sources.avro-AppSrv-source.channels = mem-channel-1

# properties of mem-channel-1
agent.channels.mem-channel-1.type = memory
agent.channels.mem-channel-1.capacity = 1000
agent.channels.mem-channel-1.transactionCapacity = 100
# properties of hdfs-Cluster1-sink
agent.sinks.hdfs-Cluster1-sink.type = hdfs

agent.sinks.hdfs-Cluster1-sink.channel = mem-channel-1
agent.sinks.hdfs-Cluster1-sink.hdfs.path = hdfs://134.83.35.24:9000/flume
agent.sinks.hdfs-Cluster1-sink.hdfs.rollInterval = 30
agent.sinks.hdfs-Cluster1-sink.hdfs.rollSize = 1024
agent.sinks.hdfs-Cluster1-sink.hdfs.batchSize = 1
agent.sinks.hdfs-Cluster1-sink.hdfs.fileType = DataStream
agent.sinks.hdfs-Cluster1-sink.hdfs.writeFormat = Writable
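One more thing worth checking: the stack trace below contains `java.net.UnknownHostException: brunel: brunel`, which usually means the Flume machine cannot resolve its own hostname. A sketch of an /etc/hosts entry on the Flume box that would fix that (the agent IP is a placeholder; "brunel" and "hadoopmr.brunel.ac.uk" are taken from the error messages):

```
# /etc/hosts on the Flume agent machine
127.0.0.1       localhost
<agent-ip>      brunel                          # local hostname from the UnknownHostException
134.83.35.24    hadoopmr.brunel.ac.uk hadoopmr  # NameNode host from the error message
```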



Any suggestions would be appreciated.


Thanks


On Wed, Jul 25, 2012 at 3:43 PM, Mohammad Tariq <dontariq@gmail.com> wrote:

> Hello mardan,
>
>        It seems the host where your NameNode is running is not
> reachable, or you are trying to reach some other host. Could you please
> show us your conf file?
>
> Regards,
>     Mohammad Tariq
>
>
> On Wed, Jul 25, 2012 at 8:03 PM, mardan Khan <mardan8310@gmail.com> wrote:
> > Hi,
> >
> > I am getting the following warning and error message:
> >
> >
> >
> > Warning:               Unexpected error reading responses on connection
> > Thread[IPC Client (421539177) connection to hadoopmr/134.83.35.24:9000 from
> > root,5,main]
> >
> >
> > Error: Broken Pipe
> >
> >
> > 12/07/25 15:26:26 ERROR hdfs.HDFSEventSink: close on
> > hdfs://hadoopmr.brunel.ac.uk:9000/flume/FlumeData; called
> > org.apache.flume.sink.hdfs.HDFSEventSink$3@1d162212
> > java.io.IOException: Failed on local exception: java.io.IOException:
> Broken
> > pipe; Host Details : local host is: "java.net.UnknownHostException:
> brunel:
> > brunel"; destination host is: "hadoopmr.brunel.ac.uk":9000;
> >     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1165)
> >     at
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:184)
> >     at $Proxy9.getFileInfo(Unknown Source)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >     at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >     at java.lang.reflect.Method.invoke(Method.java:597)
> >     at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
> >     at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
> >     at $Proxy9.getFileInfo(Unknown Source)
> >
> >
> > What is the reason for this error?
> >
> >
> > I am using Hadoop version 0.20.0 and Flume 1.x.
> >
> >
> > Thanks
>
