flume-user mailing list archives

From Sverre Bakke <sverre.ba...@gmail.com>
Subject How to handle ChannelFullException
Date Thu, 29 Jan 2015 15:47:57 GMT
Hi,

I have a syslogtcp source feeding a Kafka sink through a memory channel
with default settings. When producing data as fast as possible (about
3,000 syslog events per second), the source seems to accept all the
data, but then fails with a ChannelFullException when putting events
into the channel.

Is there any way to throttle, or otherwise stop receiving more syslog
events until the channel has capacity again, rather than failing
because the channel is full? I would prefer that Flume accept syslog
events more slowly rather than fail and drop events.
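For concreteness, a minimal sketch of this kind of setup. The agent and
component names (a1, r1, c1, k1), the port, and the capacity values are
illustrative placeholders, not my exact config; the Kafka sink settings
are omitted since they depend on which Kafka sink implementation is in
use:

```
# Sketch only -- names and values are placeholders.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = syslogtcp
a1.sources.r1.host = 0.0.0.0
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1

# The memory channel defaults to capacity = 100. Raising capacity and
# transactionCapacity buys headroom for bursts, but a sustained
# source-faster-than-sink rate will still fill the channel eventually.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000

# Kafka sink wired to the same channel; remaining sink properties
# (brokers, topic, batch size) omitted here.
a1.sinks.k1.channel = c1
```

A larger channel only absorbs bursts; it does not by itself apply
backpressure to the syslog clients.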

29 Jan 2015 16:26:56,721 ERROR [New I/O  worker #2]
(org.apache.flume.source.SyslogTcpSource$syslogTcpHandler.messageReceived:94)
 - Error writting to channel, event dropped

Also, the syslogtcp source seems to keep the syslog headers regardless
of the keepFields setting; is there a common reason why this might
happen? In contrast, the multiport syslog TCP source handles this
particular setting as expected.
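To illustrate the comparison, roughly what I am setting on each source
type (names and ports are again placeholders):

```
# Plain syslog TCP source -- here keepFields appears to be ignored
# and the headers stay in the event body:
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.keepFields = false

# Multiport syslog TCP source -- the same setting behaves as expected
# and the headers are stripped:
a1.sources.r2.type = multiport_syslogtcp
a1.sources.r2.ports = 10001 10002
a1.sources.r2.keepFields = false
```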
