flume-user mailing list archives

From Gwen Shapira <gshap...@cloudera.com>
Subject Re: Getting data from IBM MQ to Hadoop
Date Thu, 07 May 2015 09:26:52 GMT
Hi Chhaya,

First, it looks like one agent should be enough. Don't run agents on the
Hadoop cluster itself (i.e., not on the data nodes). You can give the agent
its own machine, share it with other "edge node" services (like Hue), or
install it on the MQ machine (if that machine is not too busy).

Second, the "destination" properties would probably have been better named
"source": destinationName is the name of the JMS queue or topic that
contains the data, and destinationType says whether it is a QUEUE or a
TOPIC.

There is a nice example in the docs:

a1.sources = r1
a1.channels = c1
a1.sources.r1.type = jms
a1.sources.r1.channels = c1
a1.sources.r1.initialContextFactory = org.apache.activemq.jndi.ActiveMQInitialContextFactory
a1.sources.r1.connectionFactory = GenericConnectionFactory
a1.sources.r1.providerURL = tcp://mqserver:61616
a1.sources.r1.destinationName = BUSINESS_DATA
a1.sources.r1.destinationType = QUEUE
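For IBM MQ the shape is the same, but the JNDI context factory and provider
URL differ. A common setup uses a file-based JNDI bindings directory
generated with IBM's JMSAdmin tool. The sketch below is untested and the
class name, bindings path, queue name, and HDFS path are placeholders you
would replace with your own; it shows a single agent reading from MQ and
writing straight into HDFS:

    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # JMS source pointing at IBM MQ via file-based JNDI bindings
    a1.sources.r1.type = jms
    a1.sources.r1.channels = c1
    a1.sources.r1.initialContextFactory = com.sun.jndi.fscontext.RefFSContextFactory
    a1.sources.r1.providerURL = file:///opt/mq/jndi-bindings
    a1.sources.r1.connectionFactory = ConnectionFactory
    a1.sources.r1.destinationName = BUSINESS_DATA
    a1.sources.r1.destinationType = QUEUE

    # Memory channel buffering events between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 10000

    # HDFS sink writing into date-partitioned directories
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /flume/mq-data/%Y-%m-%d
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.useLocalTimeStamp = true

You will also need the IBM MQ client jars (and the fscontext jar) on the
Flume classpath for the source to connect.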


On Thu, May 7, 2015 at 1:50 AM, Vishwakarma, Chhaya <
Chhaya.Vishwakarma@teradata.com> wrote:

>  Hi All,
>
>
>
> I want to read data from IBM MQ and put it into HDFS.
>
>
>
> Looked into the JMS source of Flume; it seems it can connect to IBM MQ, but
> I don't understand what “destinationType” and “destinationName” mean in
> the list of required properties. Can someone please explain?
>
>
>
> Also, how I should be configuring my flume agents
>
>
>
> flumeAgent1 (runs on the same machine as MQ) reads MQ data -->
> flumeAgent2 (runs on the Hadoop cluster) writes into HDFS
>
> OR only one agent is enough on Hadoop cluster
>
>
>
> Can someone help me understand how MQs can be integrated with Flume?
>
>
>
> Thanks,
>
> Chhaya
>
>
>
