flume-user mailing list archives

From Chris Schneider <ch...@christopher-schneider.com>
Subject Re: Fastest way to get data into flume?
Date Fri, 28 Mar 2014 16:50:39 GMT
Thanks for the netcat ack setting - I'll give that a shot.
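
For reference, if I'm reading the docs right, that setting lives in the
netcat source block of the agent config, something like this (the bind/port
values below are placeholders, not my real ones):

agent1.sources.netcatSource.type = netcat
agent1.sources.netcatSource.bind = 0.0.0.0
agent1.sources.netcatSource.port = 44444
# stop writing "OK" back to the client after every event
agent1.sources.netcatSource.ack-every-event = false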

As for data ingestion, we're aiming for ~10000 events per second on our
development VMs, since we want to be able to run the system in time
acceleration (with simulation of incoming data). Each event is ~100-300
bytes.
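
Back-of-the-envelope: 10,000 events per second at 100-300 bytes each is only
about 1-3 MB/sec of raw payload, so per-event overhead is what we're fighting
here, not bandwidth.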

I've gotten massive speedups with the Thrift source -> memory channel -> HDFS
pipeline by batching the events sent to the Thrift source into larger and
larger batches.  Since the consistency requirements are much lower during
development and demos, it's easy to raise that batch size arbitrarily high, so
I think that solution will work for now.
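
For reference, here is a rough sketch of the kind of config I mean (the HDFS
path and the exact batch/capacity numbers are placeholders, not what I'm
actually running):

agent1.sources = thriftSource
agent1.channels = memChannel
agent1.sinks = hdfsSink

agent1.sources.thriftSource.type = thrift
agent1.sources.thriftSource.bind = 0.0.0.0
agent1.sources.thriftSource.port = 4141
agent1.sources.thriftSource.channels = memChannel

agent1.channels.memChannel.type = memory
# transactionCapacity should be at least as large as the sink's batchSize
agent1.channels.memChannel.capacity = 100000
agent1.channels.memChannel.transactionCapacity = 10000

agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.channel = memChannel
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events
# take up to 10,000 events from the channel per HDFS write
agent1.sinks.hdfsSink.hdfs.batchSize = 10000
# roll files on a timer only, not by count or size
agent1.sinks.hdfsSink.hdfs.rollCount = 0
agent1.sinks.hdfsSink.hdfs.rollSize = 0
agent1.sinks.hdfsSink.hdfs.rollInterval = 60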

Thank you, everybody, for helping me think through this and for clarifying
whether what I'm seeing is reasonable or unreasonable.  I think a batched
Thrift source will sustain me for now, so I can move on with my project and
loop back when I have better numbers for what I really need on my VM.


On Fri, Mar 28, 2014 at 3:23 AM, ed <edorsey@gmail.com> wrote:

> Hello,
>
> It looks like you had acks turned on in the config you posted for your
> netcat source.  You might want to try turning them off:
>
> agent1.sources.netcatSource.ack-every-event = false
>
>
> We've gotten up to around 1,400 events per second on a single netcat source
> feeding two HDFS sinks without any issues (using a memory channel).  This is
> on a live network, so we've never tested above that rate; it's the max
> throughput of the events we're storing.
>
> Best,
>
> Ed
>
>
> On Fri, Mar 28, 2014 at 5:58 AM, Asim Zafir <asim.zafir@gmail.com> wrote:
>
>> How much data are you ingesting on a per-minute or per-second basis?
>> How many sources are we talking about here?
>> What kind of channel are you currently using, and what is the memory/storage
>> footprint on the source as well as the sink?
>> Is the traffic uniformly distributed? If not, what is the peak data
>> throughput you expect from a given source?
>>
>>
>>
>> On Thu, Mar 27, 2014 at 11:07 AM, Andrew Ehrlich <andrew@aehrlich.com> wrote:
>>
>>> What about having more than one flume agent?
>>>
>>> You could have two agents that read the small messages and sink to HDFS,
>>> or two agents that read the messages, serialize them, and send them to a
>>> third agent which sinks them into HDFS.
>>>
>>>
>>> On Thu, Mar 27, 2014 at 9:43 AM, Chris Schneider <
>>> chris@christopher-schneider.com> wrote:
>>>
>>>> I have a fair bit of data continually being created in the form of
>>>> smallish messages (a few hundred bytes), which needs to enter Flume and
>>>> eventually sink into HDFS.
>>>>
>>>> I need to be sure that the data lands in persistent storage and won't
>>>> be lost, but otherwise throughput isn't important. It just needs to be fast
>>>> enough to not back up.
>>>>
>>>> I'm running into a bottleneck in the initial ingestion of data.
>>>>
>>>> I've tried the netcat source and the Thrift source, but both have capped
>>>> out at a thousand or so records per second.
>>>>
>>>> Batching the Thrift API events into sets of 10 and using appendBatch
>>>> gives a pretty large speedup, but it's still not enough.
>>>>
>>>> Here's a gist with my Ruby test script, some example runs, and my
>>>> config:
>>>>
>>>> https://gist.github.com/cschneid/9792305
>>>>
>>>>
>>>> 1. Are there any obvious performance changes I can make to speed up
>>>> ingestion?
>>>> 2. How fast can Flume reasonably go? Should I switch my source to
>>>> something else that's faster? If so, what?
>>>> 3. Is there a better tool for this kind of task (rapid, safe ingestion of
>>>> small messages)?
>>>>
>>>> Thanks!
>>>> Chris
>>>>
>>>
>>>
>>
>
