flume-user mailing list archives

From khadar basha <khadar...@gmail.com>
Subject Re: regarding flume
Date Sun, 29 Jul 2012 06:48:20 GMT
Hi Hari,

I am also facing the same problem. Here is my configuration.


flume-conf.properties file
==================
agent2Test1.sources = seqGenSrc
agent2Test1.channels = memoryChannel
agent2Test1.sinks = loggerSink

# For each one of the sources, the type is defined
agent2Test1.sources.seqGenSrc.type = avro
agent2Test1.sources.seqGenSrc.bind=localhost
agent2Test1.sources.seqGenSrc.port=41414

# interceptors for host and date
agent2Test1.sources.seqGenSrc.interceptors = time hostInterceptor
agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.type = org.apache.flume.interceptor.HostInterceptor$Builder
agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.hostHeader = host
agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.useIP = false
agent2Test1.sources.seqGenSrc.interceptors.hostInterceptor.preserveExisting = false
agent2Test1.sources.seqGenSrc.interceptors.time.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

# The channel can be defined as follows.
agent2Test1.sources.seqGenSrc.channels = memoryChannel

# Each sink's type must be defined
agent2Test1.sinks.loggerSink.type = hdfs
agent2Test1.sinks.loggerSink.hdfs.path = hdfs://hadoopHost:8020/data/%Y/%m/%d/%{host}/Logs

agent2Test1.sinks.loggerSink.hdfs.fileType = DataStream

#Specify the channel the sink should use
agent2Test1.sinks.loggerSink.channel = memoryChannel

# Each channel's type is defined.
agent2Test1.channels.memoryChannel.type = memory

# Other config values specific to each type of channel (sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent2Test1.channels.memoryChannel.capacity = 1000
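
Note: with hdfs.fileType = DataStream, the HDFS sink's default text serializer
writes only the event body. A possible remedy, sketched on the assumption that
Flume 1.2.0's built-in avro_event serializer is available, is one extra line on
the sink (this line is not part of my original config):

agent2Test1.sinks.loggerSink.serializer = avro_event

The avro_event serializer writes headers and body together as Avro records, at
the cost of producing Avro container files instead of plain text.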



Sample Java program to generate the log messages:
=====================================


package com.test;

import java.util.UUID;

import org.apache.flume.clients.log4jappender.Log4jAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;
import org.apache.log4j.PatternLayout;

public class Main {
    static Logger log = Logger.getLogger(Main.class);

    public static void main(String[] args) {
        try {
            // Point the appender at the agent's Avro source.
            Log4jAppender appender = new Log4jAppender();
            appender.setHostname("localhost");
            appender.setPort(41414);
            appender.setLayout(new PatternLayout("%d [%c] (%t) <%X{user} %X{field}> %m"));
            // appender.setReconnectAttempts(100);
            appender.activateOptions();
            log.addAppender(appender);

            // MDC values referenced by the pattern layout above.
            MDC.put("user", "chris");
            MDC.put("field", UUID.randomUUID().toString());

            log.info("=====> Hello World");
            try {
                throw new Exception("Testing");
            } catch (Exception e) {
                log.error("Gone wrong ===>", e);
            }

            // Keep the JVM alive until Enter is pressed so the
            // appender has time to deliver the events.
            System.in.read();
            System.in.read();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
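
For comparison, the same appender can be wired up declaratively instead of in
code; a minimal log4j.properties sketch, with the host and port assumed to
match the Avro source above:

log4j.rootLogger = INFO, flume
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 41414
log4j.appender.flume.layout = org.apache.log4j.PatternLayout
log4j.appender.flume.layout.ConversionPattern = %d [%c] (%t) <%X{user} %X{field}> %m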

I am receiving only the body of the messages in HDFS, as follows:

=====> Hello World
Gone wrong ===>

Am I missing any config here?

khadar

On Sun, Jul 29, 2012 at 10:04 AM, Hari Shreedharan <hshreedharan@cloudera.com> wrote:

>  + user@
>
> Thamatam,
>
> The Log4J appender adds the date, log level and logger name to the flume
> event headers and the text of the log event to the flume event body. The
> reason the log level and time are missing is that these are in the headers
> and the text serializer does not serialize the headers.
>
> To write to a file or HDFS, please use a Serializer together with the
> RollingFileSink or HDFSEventSink. Please take a look at the plain text
> serializer or Avro serializer to understand this better.
>
> Thanks,
> Hari
>
> --
> Hari Shreedharan
>
> On Saturday, July 28, 2012 at 5:47 PM, thamatam Jayaprakash wrote:
>
> Hi Hari,
>
>
> Actually I'm unable to send this mail to the user and dev groups, so I'm
> mailing you directly.
>
> Could you please point out where I'm going wrong?
> Please suggest which log appender I need to use for a custom logging
> pattern and appender.
>
> I'm working on Flume 1.1.0 and 1.2.0. We are not able to set the log pattern.
> We are using the log4j appender
> log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
>
> but we are getting plain text.
>
> For example, if I log the following messages:
>
> 17:42:55,928  INFO SimpleJdbcServlet:69 - doGet of SimpleJdbcServlet
> ended...
> 17:43:03,489  INFO HelloServlet:29 - HelloServlet of doGet started...
> 17:43:03,489  INFO HelloServlet:33 -
>  Hello from Simple Servlet
> 17:43:03,489  INFO HelloServlet:35 - HelloServlet of doGet end...
> 17:47:46,000  INFO HelloServlet:29 - HelloServlet of doGet started...
> 17:47:46,001  INFO HelloServlet:33 -
>  Hello from Simple Servlet
> 17:47:46,001  INFO HelloServlet:35 - HelloServlet of doGet end...
>
> Using Flume in Hadoop, I'm getting only the following logs:
>
> doGet of SimpleJdbcServlet ended...
> HelloServlet of doGet started...
>
> HelloServlet of doGet end...
> HelloServlet of doGet started...
>
>
> Thanks in advance.
> --
> JP
>
>
>
> --
> Jayaprakash
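
To make the serializer suggestion in Hari's reply above concrete, here is a
rough sketch of a header-aware EventSerializer. The class name is made up, and
the two header keys are my reading of the Log4jAvroHeaders values the Log4j
appender sets; both should be verified against the Flume version in use.

package com.test;

import java.io.IOException;
import java.io.OutputStream;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.serialization.EventSerializer;

// Sketch: prefixes each event body with two Log4j appender headers so
// that the timestamp and log level survive into the HDFS file.
public class HeaderAwareSerializer implements EventSerializer {

    private final OutputStream out;

    private HeaderAwareSerializer(OutputStream out) {
        this.out = out;
    }

    @Override
    public void afterCreate() throws IOException {
        // Plain text output: no file header to write.
    }

    @Override
    public void afterReopen() throws IOException {
        // Nothing special when appending to an existing file.
    }

    @Override
    public void write(Event event) throws IOException {
        // Assumed header keys, taken from Log4jAvroHeaders as I read it.
        String ts = event.getHeaders().get("flume.client.log4j.timestamp");
        String level = event.getHeaders().get("flume.client.log4j.log.level");
        out.write((ts + " " + level + " ").getBytes("UTF-8"));
        out.write(event.getBody());
        out.write('\n');
    }

    @Override
    public void flush() throws IOException {
        // The sink flushes and syncs the underlying stream itself.
    }

    @Override
    public void beforeClose() throws IOException {
        // No trailer to write.
    }

    @Override
    public boolean supportsReopen() {
        return true;
    }

    // Referenced from the sink's serializer property.
    public static class Builder implements EventSerializer.Builder {
        @Override
        public EventSerializer build(Context context, OutputStream out) {
            return new HeaderAwareSerializer(out);
        }
    }
}

It would then be registered on the sink with something like:

agent2Test1.sinks.loggerSink.serializer = com.test.HeaderAwareSerializer$Builder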


-- 
Thanks,
Khadar
