serf-dev mailing list archives

From Greg Stein <>
Subject Re: svn commit: r1713936 - in /serf/trunk: buckets/allocator.c buckets/log_wrapper_buckets.c outgoing.c test/mock_buckets.c
Date Thu, 12 Nov 2015 20:57:46 GMT
On Thu, Nov 12, 2015 at 12:00 PM, Bert Huijben <> wrote:

> Hi,
> Yes… and I think I found why… (after >6 hours of trying to get things
> working for the ssl bucket; mostly succeeding via a specific ‘mark unread’
> function that removes the previous read from memory).
> The whole idea of checking if buckets are completely drained is falling
> down for buckets that we ‘write’ to the socket and then find that the
> network buffers are full.

Ah ha!

Okay... so the failure is "not drained [upon return to context run]", as
opposed to "attempted read after EAGAIN/EOF" or "destroyed with no recorded ..."

> At that point we have to stop draining and there is no way to mark all
> buckets in the chain (aggregate within aggregate, inside custom wrapper,
> etc.) as ‘ok not to empty’, as there is no way to walk the entire chain
> upto the inner bucket that was last read.



> - We might want to use different allocators for the input and output
> (during debugging), to tag their use case.

Possible, but likely prone to other failures, such as constructing from the
wrong allocator.

How about a flag in the allocator's tracking code that says "reading
from 'write' buckets" and gets flipped on/off around that portion? We
don't really intermingle reading from both types of buckets. And even if we
*did* read from some response buckets in there, all that happens is a
relaxed check for that bucket. It "shouldn't" break anything (so he says...)

Would that work?
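To make the proposal concrete, here is a minimal sketch of what such a flag could look like. All names here are illustrative assumptions, not serf's actual API: the idea is only that the (debug) tracking allocator carries a boolean that the socket-write path flips on, and that the drain check is relaxed while it is set.

```c
#include <assert.h>

/* Hypothetical tracking allocator carrying the proposed flag.
   (Names are invented for illustration; serf's real structures differ.) */
typedef struct tracking_allocator_t {
    /* ... existing tracking state ... */
    int reading_write_buckets;   /* nonzero while reading 'write' buckets */
} tracking_allocator_t;

/* Relaxed/strict drain check: a partially drained bucket is fine while
   we are inside the socket-write path, since writing legitimately stops
   early when the network buffers are full (EAGAIN). */
static void check_drained(const tracking_allocator_t *alloc, int fully_read)
{
    if (alloc->reading_write_buckets)
        return;              /* relaxed: partial drain is expected here */
    assert(fully_read);      /* strict check everywhere else */
}

/* The write path flips the flag on around the portion that reads from
   the outgoing bucket chain and writes to the socket. */
static void write_to_socket(tracking_allocator_t *alloc /* , ... */)
{
    alloc->reading_write_buckets = 1;

    /* ... read from outgoing buckets, write to the socket; may stop
       with the chain only partially drained when the socket is full ... */

    alloc->reading_write_buckets = 0;
}
```

Since reading from request ('write') buckets and response buckets is not really intermingled, flipping one flag around the write portion should be enough; at worst a response bucket read in that window just gets the relaxed check.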

