On Thu, Nov 12, 2015 at 12:00 PM, Bert Huijben <bert@qqmail.nl> wrote:
> Hi,
>
>
>
> Yes… and I think I found why… (after >6 hours of trying to get things
> working for the ssl bucket; mostly succeeding via a specific ‘mark unread’
> function that removes the previous read from memory).
>
>
>
> The whole idea of checking whether buckets are completely drained falls
> down for buckets that we ‘write’ to the socket and then find that the
> network buffers are full.
>
Ah ha!
Okay... so the failure is "not drained [upon return to context run]", as
opposed to "attempted read after EAGAIN/EOF" or "destroyed with no recorded
error".
> At that point we have to stop draining, and there is no way to mark all
> buckets in the chain (aggregate within aggregate, inside custom wrapper,
> etc.) as ‘ok not to empty’, because there is no way to walk the entire
> chain up to the inner bucket that was last read.
>
Gotcha.
>...
> · We might want to use different allocators for the input and
> output (during debugging), to tag their use case.
>
Possible, but likely prone to other failures, such as constructing from the
wrong allocator.
How about a flag in the allocator within the tracking code that says "reading
from 'write' buckets" and gets flipped on/off around that portion? We don't
really intermingle reading from the two types of buckets. And even if we
*did* read from some response buckets in there, all that happens is a
relaxed check for that bucket. It "shouldn't" break anything (so he says...)
Would that work?
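Roughly what I'm picturing, as a sketch only (all names below are made up,
not the actual tracking-allocator API):

  #include <stdlib.h>

  /* Hypothetical tracking-allocator state with the proposed flag. */
  typedef struct track_alloc_t {
    int reading_write_buckets;   /* set while context run drains request buckets */
    /* ... existing per-bucket tracking state ... */
  } track_alloc_t;

  /* Flipped on/off around the portion of context run that writes to the socket. */
  static void begin_socket_write(track_alloc_t *ta) { ta->reading_write_buckets = 1; }
  static void end_socket_write(track_alloc_t *ta)   { ta->reading_write_buckets = 0; }

  /* The "completely drained" verification relaxes while the flag is set, so
     stopping early on a full network buffer isn't reported as a failure. */
  static void verify_drained(const track_alloc_t *ta, int drained)
  {
    if (!drained && !ta->reading_write_buckets)
      abort();   /* response ("read") buckets must still drain completely */
  }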
Cheers,
-g