flume-user mailing list archives

From Jonathan Hsieh <...@cloudera.com>
Subject Re: CPU hits roof with 3 streams of syslogTcp
Date Sat, 10 Sep 2011 20:07:23 GMT
Hi,

We've been working on several fronts recently.  We cut release 0.9.4 with
less testing than we would have liked because we wanted to have a clean
point between pre-Apache development and post-Apache-incubation development.


Currently, there are two efforts going on -- bug fixes on the current trunk,
and a new branch (FLUME-728) that is rearchitecting some of the core to make
it easier to understand and less prone to some of the tricky, hard-to-diagnose
problems in the current implementation.

Jon.

On Fri, Sep 9, 2011 at 12:50 PM, Edward sanks <edsanks@hotmail.com> wrote:

> That makes me wonder if there was any real change made to the code to
> address the problem, or if it was just magic. The trunk is only about 2-3
> months ahead of 0.9.4, so it would help if the dev team could shed some
> light on this. Should this be posted again on the flume dev list to get
> better attention?
> -----Original Message-----
> From: Eran Kutner <eran@gigya.com>
> Date: Fri, 9 Sep 2011 19:45:20
> To: <flume-user@incubator.apache.org>
> Subject: Re: CPU hits roof with 3 streams of syslogTcp
>
> I've seen very excessive CPU usage with 0.9.4 as well. You should really
> try 0.9.5; when I compiled the latest sources from trunk (taken from Apache),
> I saw a dramatic improvement in performance.
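>
> In case it helps anyone else try the same thing, here is roughly what I did
> (a sketch only: the svn URL is the incubator repo as I remember it, and the
> build command may differ depending on when you check out):
>
>   svn checkout http://svn.apache.org/repos/asf/incubator/flume/trunk flume-trunk
>   cd flume-trunk
>   ant package   # or 'mvn package -DskipTests' if the checkout is Maven-based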
>
>
> -eran
>
> On Fri, Sep 9, 2011 at 19:30, Edward sanks <edsanks@hotmail.com> wrote:
>
>
>  Folks,
>
>
> Any idea why kicking off 3 streams of syslogTcp takes the flume process on
> 64-bit centos 5.4 to 120+% CPU? This behavior continues even when there is
> no more traffic, and it hits this condition on a large AWS machine with
> 7.5GB RAM. The amount of data transferred on these 3 streams was only a few KB.
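>
> For context, the dataflow is shaped roughly like this (a sketch, not my
> exact config -- the node names, ports, and HDFS path below are made up):
>
>   syslog1 : syslogTcp(5140) | agentBESink("collector",35853);
>   syslog2 : syslogTcp(5141) | agentBESink("collector",35853);
>   syslog3 : syslogTcp(5142) | agentBESink("collector",35853);
>   collector : collectorSource(35853) | collectorSink("hdfs://namenode/flume/","syslog-");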
>
>
> I am running the 0.9.4 distro. Let me know if I have missed any patches on
> top of this distro that I should pick up.
>
>
> When the stack is dumped for that process, I see exactly 6 threads with the
> following stack trace:
>
>
>
> Thread XXXXX: (state = IN_NATIVE)
>  - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, int, int) @bci=0 (Compiled frame; information may be imprecise)
>  - java.net.SocketInputStream.read(byte[], int, int) @bci=84, line=129 (Compiled frame)
>  - java.io.BufferedInputStream.fill() @bci=175, line=218 (Compiled frame)
>  - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=258 (Compiled frame)
>  - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=317 (Compiled frame)
>  - org.apache.thrift.transport.TIOStreamTransport.read(byte[], int, int) @bci=25, line=127 (Compiled frame)
>  - com.cloudera.flume.handlers.thrift.TStatsTransport.read(byte[], int, int) @bci=7, line=59 (Compiled frame)
>  - org.apache.thrift.transport.TTransport.readAll(byte[], int, int) @bci=22, line=84 (Compiled frame)
>  - org.apache.thrift.protocol.TBinaryProtocol.readAll(byte[], int, int) @bci=12, line=378 (Compiled frame)
>  - org.apache.thrift.protocol.TBinaryProtocol.readI32() @bci=52, line=297 (Compiled frame)
>  - org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin() @bci=1, line=204 (Compiled frame)
>  - com.cloudera.flume.handlers.thrift.ThriftFlumeEventServer$Processor.process(org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol) @bci=1, line=224 (Interpreted frame)
>  - org.apache.thrift.server.TSaneThreadPoolServer$WorkerProcess.run() @bci=150, line=280 (Interpreted frame)
>  - java.util.concurrent.ThreadPoolExecutor$Worker.runTask(java.lang.Runnable) @bci=59, line=886 (Interpreted frame)
>  - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=28, line=908 (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=662 (Interpreted frame)
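>
> (If anyone wants to reproduce the dump: the "state = ... / @bci=" format
> above is what "jstack -F <pid>" prints; the pid here is made up.)
>
>   top -H -p 12345                       # confirm which threads are busy
>   jstack -F 12345 > flume-threads.txt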
>
>
> Thanks,
>  Ed.
>



-- 
// Jonathan Hsieh (shay)
// Software Engineer, Cloudera
// jon@cloudera.com
