ant-dev mailing list archives

From Matt Benson <>
Subject Re: How to pipe output to input?
Date Thu, 23 Jun 2005 16:38:32 GMT
--- Steve Loughran <> wrote:

> Matt Benson wrote:
> > --- Steve Loughran <> wrote:
> >>>-Matt
> >>
> >>Interesting design.  I would have done a <namedpipe id="" /> datatype that did the synchronisation; producers and consumers would sync by choice of pipename.
> > 
> > 
> > That could work, for isolation.  How would you specify the roles of producer/consumer?  Not that it matters too much, because:
> It depends on the role.  If you ask a named pipe for an input stream, you get an input stream, become a reader, and block until there is a writer.  If you ask one for an output stream, you get an output stream and block once the buffer is full.  If >1 thing wants to be a reader or writer, signal a fault.  (In unix you can have multiple readers or writers, but I don't want to complicate close() semantics with new reference-counted closers.)
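The blocking behavior described above can be sketched with java.io piped streams, which give a JVM-local analogue of a single-reader/single-writer named pipe: the reader blocks until a writer supplies data, and the writer blocks once the internal buffer fills.  A minimal sketch under those assumptions -- NamedPipe and roundTrip are hypothetical names for illustration, not Ant classes:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;

// Hypothetical JVM-local pipe: one reader end, one writer end, connected up front.
public class NamedPipe {
    private final PipedInputStream in = new PipedInputStream();
    private final PipedOutputStream out;

    public NamedPipe() {
        try {
            out = new PipedOutputStream(in); // connect writer end to reader end
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public InputStream getInputStream()  { return in; }  // reader role
    public OutputStream getOutputStream() { return out; } // writer role

    // Demo: bytes written to the writer end come back out the reader end.
    public static String roundTrip(String msg) {
        try {
            NamedPipe p = new NamedPipe();
            byte[] data = msg.getBytes("UTF-8");
            p.getOutputStream().write(data);
            p.getOutputStream().close(); // signal end-of-stream to the reader
            byte[] buf = new byte[data.length];
            new DataInputStream(p.getInputStream()).readFully(buf);
            return new String(buf, "UTF-8");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A real <namedpipe> datatype would additionally need the name registry and the one-reader/one-writer fault described above.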

Okay, there would still be a syntax issue at some point for configuring the producer or consumer role.  But if other issues are insurmountable, that question is, again, moot.  As for multiple producers/consumers, Ant already supports multiple writers via the oata.util.OutputStreamFunneler class, which I had to add to get right the common behavior of output & error going to a single stream.  Multiple readers plays with my head; in plain I/O usage (not message queues or anything) each reader would, one would think, want all the data, so multiple copies, or the illusion thereof, would be desirable... it seems best, then, to allow multiple writers to a pipe, because we can, but to disallow multiple readers.  The workaround for multiple readers would then be to attach a single pass-through reader whose output was split to multiple (differently) named pipes.
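The pass-through splitter idea can be sketched as a stream that duplicates every byte to several downstream sinks, so each consumer gets its own copy of the data.  TeeOutputStream and copies are hypothetical names for illustration; this is a sketch of the splitting idea, not Ant's oata.util.OutputStreamFunneler:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Hypothetical splitter: every byte written is copied to all downstream sinks.
public class TeeOutputStream extends OutputStream {
    private final OutputStream[] sinks;

    public TeeOutputStream(OutputStream... sinks) { this.sinks = sinks; }

    @Override public void write(int b) throws IOException {
        for (OutputStream s : sinks) s.write(b);
    }
    @Override public void write(byte[] b, int off, int len) throws IOException {
        for (OutputStream s : sinks) s.write(b, off, len);
    }
    @Override public void flush() throws IOException {
        for (OutputStream s : sinks) s.flush();
    }
    @Override public void close() throws IOException {
        for (OutputStream s : sinks) s.close();
    }

    // Demo: both sinks receive identical copies of the written data.
    public static String[] copies(String msg) {
        ByteArrayOutputStream a = new ByteArrayOutputStream();
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        try (TeeOutputStream tee = new TeeOutputStream(a, b)) {
            tee.write(msg.getBytes("UTF-8"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return new String[] { a.toString(), b.toString() };
    }
}
```

Each "reader" would then attach to one of the duplicated streams, giving the illusion of multiple readers on a single pipe.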
> > 
> >>But like you say, the problem becomes process co-ordination; handling failures and propagating them gracefully, or just how to keep processes around until their consumer/producer was there.  This is the stuff we normally delegate to the OS, or, for distribution, to some message queue.  Though named pipes work across clusters, at least on the apollo and hpux cluster systems.
> >>
> > 
> > This grows beyond my level of understanding.  :)
> ahh, go and build it, you will soon learn.  I've used unix named pipes in the past, and think they are a very cool form of IPC; like a socket but with better decoupling of produce/consume.

Hmm.  I don't know for sure which part you were referring to.  What I lack here is any clue how to delegate this to the OS as you suggested, especially in elegant Java form, or, further, how to make such an approach applicable to I/O-intensive processes that are NOT native processes.  Java is like a womb I am reluctant to leave, at least in this context.  Or, "Sounds like a PITA."  ;)



To unsubscribe, e-mail:
For additional commands, e-mail:
