flume-user mailing list archives

From "Edward sanks " <edsa...@hotmail.com>
Subject Re: CPU hits roof with 3 streams of syslogTcp
Date Fri, 09 Sep 2011 19:50:53 GMT
That makes me wonder whether any real change was made to the code to address the problem, or
if it was just magic. The trunk is only about 2-3 months ahead of 0.9.4. It would help if the
dev team could throw some light on this. Should this be posted again on the flume dev list to get a better answer?
-----Original Message-----
From: Eran Kutner <eran@gigya.com>
Date: Fri, 9 Sep 2011 19:45:20 
To: <flume-user@incubator.apache.org>
Subject: Re: CPU hits roof with 3 streams of syslogTcp

I've seen very excessive CPU usage with 0.9.4 as well. You should really try 0.9.5: when I
compiled the latest sources from trunk (taken from Apache), I saw a dramatic improvement in performance.



On Fri, Sep 9, 2011 at 19:30, Edward sanks <edsanks@hotmail.com> wrote:


Any idea why kicking off 3 streams of syslogTcp takes the flume process on 64-bit CentOS 5.4 to 120+% CPU?
This behavior continues even when there is no more traffic, and it hits this condition
on a large AWS machine with 7.5 GB RAM. The amount of data transferred on these 3 streams was
only a few KB.

I am running the 0.9.4 distro. Let me know if I have missed any patches on top of this distro
that I should pick up.

When the stack is dumped for that process, I see exactly 6 threads with the following stack trace:

Thread XXXXX: (state = IN_NATIVE)
 - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, int, int) @bci=0 (Compiled frame; information may be imprecise)
 - java.net.SocketInputStream.read(byte[], int, int) @bci=84, line=129 (Compiled frame)
 - java.io.BufferedInputStream.fill() @bci=175, line=218 (Compiled frame)
 - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=258 (Compiled frame)
 - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=317 (Compiled frame)
 - org.apache.thrift.transport.TIOStreamTransport.read(byte[], int, int) @bci=25, line=127 (Compiled frame)
 - com.cloudera.flume.handlers.thrift.TStatsTransport.read(byte[], int, int) @bci=7, line=59 (Compiled frame)
 - org.apache.thrift.transport.TTransport.readAll(byte[], int, int) @bci=22, line=84 (Compiled frame)
 - org.apache.thrift.protocol.TBinaryProtocol.readAll(byte[], int, int) @bci=12, line=378 (Compiled frame)
 - org.apache.thrift.protocol.TBinaryProtocol.readI32() @bci=52, line=297 (Compiled frame)
 - org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin() @bci=1, line=204 (Compiled frame)
 - com.cloudera.flume.handlers.thrift.ThriftFlumeEventServer$Processor.process(org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol) @bci=1, line=224 (Interpreted frame)
 - org.apache.thrift.server.TSaneThreadPoolServer$WorkerProcess.run() @bci=150, line=280 (Interpreted frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.runTask(java.lang.Runnable) @bci=59, line=886 (Interpreted frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=28, line=908 (Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=662 (Interpreted frame)
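For anyone reading along: a thread parked in socketRead0 is normally just blocked waiting for data, so sustained 120+% CPU with no traffic more often points at a read loop that keeps polling after the peer closes the connection. A minimal sketch of the end-of-stream handling that avoids that kind of spin (hypothetical code, not Flume's actual implementation; ReadLoopDemo and drain are made-up names):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoopDemo {
    // Drains a stream, stopping when read() returns -1 (end of stream).
    // A loop that ignores -1 (or treats it like "0 bytes, try again")
    // keeps calling read() forever once the peer disconnects and pins a core.
    static int drain(InputStream in) throws IOException {
        byte[] buf = new byte[1024];
        int total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {  // -1 signals the stream is closed
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a socket stream, so the sketch is self-contained.
        InputStream in = new ByteArrayInputStream(new byte[4096]);
        System.out.println(drain(in));  // prints 4096
    }
}
```

If the server's per-connection worker loops like this but without the -1 check, each closed syslogTcp connection would leave one thread spinning, which would match seeing exactly 6 hot threads.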

