logging-log4cxx-user mailing list archives

From "Lazaro Munoz" <lazaro.mu...@glsettle-us.com>
Subject RE: A design problem!
Date Tue, 01 Nov 2005 20:28:03 GMT
Hi, there won't be any loss of data (log entries), since the entries
always reside in the original (composite) log file.  The log-file-manager
process will continue to scan the original log (until told otherwise)
long after the original server has gone down.  In fact, it need never go
down if you are going to append to the same log file when the
application is brought back up.

The only "trick" is recovery of the log file manager to avoid
duplicating entries if it has to be recovered; but if the entries have
some time-stamps on it, they can easily be used to find its place in
both the composite log and what it has already written out to the client
log files.
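
For illustration, a minimal sketch of that recovery pass; the
"<timestamp> <client-id> <message>" line format is an assumed
convention, not anything log4cxx produces by default:

// Recovery sketch for the log-file-manager process: re-scan the composite
// log and skip any entry at or before the last timestamp already written
// out for that client.  The line format is an assumed convention.
#include <fstream>
#include <map>
#include <sstream>
#include <string>

void recover(std::istream& composite,
             const std::map<std::string, long>& lastWritten) {
    std::string line;
    while (std::getline(composite, line)) {
        std::istringstream fields(line);
        long ts = 0;
        std::string client;
        if (!(fields >> ts >> client))
            continue;                              // malformed line: skip
        auto it = lastWritten.find(client);
        if (it != lastWritten.end() && ts <= it->second)
            continue;                              // already delivered: skip
        std::ofstream out(client + ".log", std::ios::app);
        out << line << '\n';                       // safe to (re)write
    }
}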

--laz


=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
Lazaro Munoz
GL SETTLE, US
Five Hanover Sq., 19th Floor
New York, NY 10004
212 785-4171 (voice)
212 785-4175 (fax)
lazaro.munoz@glsettle-us.com
www.glsettle-us.com


-----Original Message-----
From: thomas.hawker@oocl.com [mailto:thomas.hawker@oocl.com] 
Sent: Tuesday, November 01, 2005 1:34 PM
To: log4cxx-user@logging.apache.org
Subject: RE: A design problem!

Hi!

The only thing you have to be careful about with Laz's approach is a
failure of the application (program, process, or platform) while
buffering all those records.  The solution is viable only if you don't
care about potential data loss.  If data loss is unacceptable, which
might be the case when diagnosing a difficult-to-reproduce problem, you
have to use another technique.

I like Renny's suggestion of a timed-logger pool the best.  However, you
do not need to "destroy" the logger on timeout.  You can have two pools,
one of "active" loggers waiting to time out and one of "discarded"
loggers that can be redirected to another client's file.  This way you
won't be constantly creating and deleting loggers, you'll minimize file
access operations, and so on.  If memory is critical, you can even put a
limit on the discarded pool and destroy items when it gets too large.
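
A structural sketch of the two pools; ClientLogger and its retarget()
hook are stand-ins for whatever wraps the real log4cxx appender:

// Two-pool sketch: "active" loggers keyed by client, plus a capped free
// list of "discarded" loggers that get retargeted instead of destroyed.
#include <chrono>
#include <cstddef>
#include <deque>
#include <memory>
#include <string>
#include <unordered_map>

struct ClientLogger {
    std::string client;                            // current owner
    std::chrono::steady_clock::time_point lastUse;
    void retarget(const std::string& newClient) { client = newClient; }
};

class LoggerPools {
    std::unordered_map<std::string, std::unique_ptr<ClientLogger>> active;
    std::deque<std::unique_ptr<ClientLogger>> discarded;  // reusable
    static constexpr std::size_t kMaxDiscarded = 64;      // memory cap

public:
    ClientLogger& acquire(const std::string& client) {
        auto it = active.find(client);
        if (it == active.end()) {
            std::unique_ptr<ClientLogger> lg;
            if (!discarded.empty()) {              // reuse, don't recreate
                lg = std::move(discarded.front());
                discarded.pop_front();
                lg->retarget(client);
            } else {
                lg = std::make_unique<ClientLogger>();
                lg->client = client;
            }
            it = active.emplace(client, std::move(lg)).first;
        }
        it->second->lastUse = std::chrono::steady_clock::now();
        return *it->second;
    }

    void expire(std::chrono::seconds timeout) {    // run periodically
        auto now = std::chrono::steady_clock::now();
        for (auto it = active.begin(); it != active.end();) {
            if (now - it->second->lastUse > timeout) {
                if (discarded.size() < kMaxDiscarded)  // cap secondary pool
                    discarded.push_back(std::move(it->second));
                it = active.erase(it);             // otherwise destroyed
            } else {
                ++it;
            }
        }
    }
};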

The difficulty with a pool solution is that it behaves undesirably when
many requests arrive from many different clients in a short interval:
then you have to have one logger for each client.  While on average this
may be significantly fewer than all 2000 clients, there is nothing to
prevent all 2000 loggers from being open at once, either.  This idea is
like the qsort algorithm: in general it works very well, but you still
have to allow for the worst case.

If you know something about the clients that allows you to group them
into some meaningful arrangement, you can reduce the number of loggers
by having one per group.  However, that still leaves you with the data
loss or data selection problem.

FWIW, I have stringent fault-tolerance requirements that do not allow me
to lose incoming requests at a partial stage of commitment.  Before I
return to the client, I must either guarantee that the request
submission was saved to backup storage or not process it at all.  This
is complicated by the fact that this logic is actually distributed among
multiple independent servers which don't support two-phase commit.  The
only logical (and workable) solution I could find was to write
everything to one [recovery] file and sort it out later.

Cheers!
 
Tom Hawker
Home    408-274-4128
Office  408-576-6591
Mobile  408-835-3643

-----Original Message-----
From: Lazaro Munoz [mailto:lazaro.munoz@glsettle-us.com] 
Sent: Tuesday, November 1, 2005 10:06 AM
To: Log4CXX User
Subject: RE: A design problem!

Hi, you can also keep a single logger whose entries specify which client
each entry applies to, and build an auxiliary logger-management process
that wakes up periodically, picks up where it left off, and farms the
entries out to the appropriate discrete client files.  Since this is
done periodically, it can even be optimized to collect each client's
entries together, open that client's log file, write out the entries,
and go on to the next client, so the process has only one file open at
a time.
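
A minimal sketch of one pass of that manager process; the
first-token-is-the-client-id line format and the byte-offset checkpoint
are assumptions for illustration:

// One pass of the auxiliary log-manager: resume at a saved byte offset,
// bucket new composite-log entries per client, then write each bucket
// with only one output file open at a time.
#include <fstream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

std::streampos demuxPass(const std::string& compositePath,
                         std::streampos from) {
    std::ifstream in(compositePath);
    in.seekg(from);                                // pick up where we left off

    std::map<std::string, std::vector<std::string>> byClient;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        std::string client;
        if (fields >> client)                      // assumed: first token = client id
            byClient[client].push_back(line);
    }
    in.clear();                                    // clear EOF so tellg() is valid
    std::streampos next = in.tellg();              // checkpoint for the next wake-up

    for (const auto& [client, lines] : byClient) { // one file open at a time
        std::ofstream out(client + ".log", std::ios::app);
        for (const auto& l : lines)
            out << l << '\n';
    }
    return next;
}
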
--laz

=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
Lazaro Munoz
GL SETTLE, US
Five Hanover Sq., 19th Floor
New York, NY 10004
212 785-4171 (voice)
212 785-4175 (fax)
lazaro.munoz@glsettle-us.com
www.glsettle-us.com


-----Original Message-----
From: renny.koshy@rubixinfotech.com
[mailto:renny.koshy@rubixinfotech.com] 
Sent: Tuesday, November 01, 2005 12:02 PM
To: Log4CXX User
Subject: RE: A design problem!

Ken,

Another suggestion may be:

1. Set up a "Logger" pool, then keep a pointer to the current Logger in
your main code.
2. At the start of request processing for a client, set the Logger
pointer to the appropriate logger in the pool if one exists, or create a
new one and save it in the pool (see the sketch below).
3. After a certain timeout, destroy the logger -- so if no requests are
received from a client in, let's say, 5 minutes, the logger expires.
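
A minimal sketch of steps 1 and 2 using log4cxx; this assumes a
narrow-character LogString build, the per-client file naming is made up,
and smart-pointer construction details vary across log4cxx versions:

// Per-client logger pool: create a logger with its own FileAppender on a
// client's first request, reuse it afterwards.  A timer walking this map
// and dropping idle entries would implement step 3; omitted for brevity.
#include <log4cxx/logger.h>
#include <log4cxx/fileappender.h>
#include <log4cxx/patternlayout.h>
#include <map>
#include <string>

using namespace log4cxx;

std::map<std::string, LoggerPtr> pool;

LoggerPtr loggerFor(const std::string& client) {
    auto it = pool.find(client);
    if (it != pool.end())
        return it->second;                         // reuse the pooled logger

    LoggerPtr lg = Logger::getLogger("client." + client);
    lg->setAdditivity(false);                      // don't echo into the root log
    lg->addAppender(new FileAppender(
        new PatternLayout(LOG4CXX_STR("%d %m%n")),
        LogString(client + ".log"),                // per-client file (assumed naming)
        true));                                    // append
    pool[client] = lg;
    return lg;
}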

Renny Koshy
President & CEO

--------------------------------------------
RUBIX Information Technologies, Inc.
www.rubixinfotech.com


 

             "Jeff Davidson"

             <jeffd.rgs@gmail.

             com>
To 
                                       "'Log4CXX User'"

             11/01/2005 11:40          <log4cxx-user@logging.apache.org>

             AM
cc 
 

 
Subject 
             Please respond to         [SPAM] RE: A design problem!

              "Log4CXX User"

             <log4cxx-user@log

             ging.apache.org>

 

 

 





Ken,

Would it be feasible to write to a (SQL) database appender?  Later, you
would simply query the results for each IP address as needed, instead of
forcing the log records to be sorted at the time they are logged.

I'm very new to log4cxx so I don't really know how much effort would be
required, but if it were me, I think I'd want a general solution that
defers the sorting and processing of individual log records until
someone (me?) actually requests them.
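
For example, if the records landed in an SQLite table (SQLite is just a
stand-in here, and the logs(ip, ts, msg) schema is invented), pulling
one client's entries becomes a single query:

// Query-side sketch: fetch one client's log records by IP address.
// The database path and the logs(ip, ts, msg) schema are hypothetical.
#include <sqlite3.h>
#include <cstdio>

int dumpClient(const char* dbPath, const char* ip) {
    sqlite3* db = nullptr;
    if (sqlite3_open(dbPath, &db) != SQLITE_OK)
        return 1;

    // A parameterized statement avoids quoting issues with the IP string.
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db,
        "SELECT ts, msg FROM logs WHERE ip = ?1 ORDER BY ts",
        -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, ip, -1, SQLITE_TRANSIENT);

    while (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("%s %s\n",
                    (const char*)sqlite3_column_text(stmt, 0),
                    (const char*)sqlite3_column_text(stmt, 1));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}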

Anyway, it's food for thought.

Cheers,
~Jeff D.


-----Original Message-----
From: Ken [mailto:yongkwai@gmail.com]
Sent: Monday, October 31, 2005 6:27 PM
To: Log4CXX User
Subject: Re: A design problem!

Hello,
    Currently, my program has one thread that handles all client
requests, with only one appender; a "setFile" call changes the log file
for each client request.  The problem is that when I enable logging, CPU
usage is a little high; with logging disabled everything is OK.  So I
suspect the "setFile" call causes files to be opened and closed
frequently -- will this elevate the CPU usage?
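
Here is roughly the pattern I mean, as a sketch; exact FileAppender
signatures and pointer types vary across log4cxx versions:

// Single shared appender retargeted per request via setFile(): the file
// close/reopen on every request is the suspected source of CPU overhead.
#include <log4cxx/logger.h>
#include <log4cxx/fileappender.h>
#include <log4cxx/patternlayout.h>
#include <log4cxx/helpers/pool.h>
#include <string>

using namespace log4cxx;

LoggerPtr logger;
FileAppenderPtr appender;

void initLogging() {
    logger = Logger::getLogger("server");
    appender = new FileAppender(
        new PatternLayout(LOG4CXX_STR("%d %m%n")),
        LOG4CXX_STR("default.log"),
        true);                                     // append mode
    logger->addAppender(appender);
}

void handleRequest(const LogString& clientLogFile, const std::string& msg) {
    helpers::Pool p;
    appender->setFile(clientLogFile);              // closes the old file...
    appender->activateOptions(p);                  // ...and reopens the new one
    LOG4CXX_INFO(logger, msg);                     // every request pays an open/close
}
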
     Maarten's NDC suggestion is what I thought of before.  The problem
is that each client issues a series of requests, and each request lasts
a variable amount of time, so NDC may interleave the log content.
Creating many appenders, as Thomas said, will run up against OS limits.
Of course awk, sed, or Excel are the best and easiest solutions, but for
certain reasons we can only show related people their own part of the
log file.  I will study using awk or Excel to achieve this later.
However, I would still prefer writing to different files.  Thanks.

--

Ken










