serf-dev mailing list archives

From "Bert Huijben" <b...@qqmail.nl>
Subject RE: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c outgoing.c serf_private.h
Date Sat, 14 Nov 2015 14:50:44 GMT
> -----Original Message-----
> From: Ivan Zhakov [mailto:ivan@visualsvn.com]
> Sent: zaterdag 14 november 2015 14:25
> To: rhuijben@apache.org
> Cc: dev@serf.apache.org
> Subject: Re: svn commit: r1714297 - in /serf/trunk: buckets/event_buckets.c
> outgoing.c serf_private.h
> 
> On 14 November 2015 at 12:36,  <rhuijben@apache.org> wrote:
> > Author: rhuijben
> > Date: Sat Nov 14 09:36:08 2015
> > New Revision: 1714297
> >
> > URL: http://svn.apache.org/viewvc?rev=1714297&view=rev
> > Log:
> > Redefine the event bucket as a wrapping bucket, to remove dependencies
> on
> > implementation details on the 'aggregation' stream around the event
> bucket.
> >
> > Calculate total number of bytes read to increase event value.
> >
> > @@ -128,7 +215,7 @@ static void serf_event_destroy(serf_buck
> >  const serf_bucket_type_t serf_bucket_type__event = {
> >      "EVENT",
> >      serf_event_read,
> > -    serf_event_readline,
> > +    serf_default_readline,
> Is it intentional? Why event bucket doesn't forward readline calls
> directly to wrapped bucket?

Third reply...

I added a bit of accounting to the event bucket. That accounting makes it harder to forward calls
like readline() straight to the wrapped bucket, and easier to just fall back to the default implementations.
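To make that concrete, here is a minimal sketch of where the accounting lives. This is purely
illustrative and not the actual r1714297 code; the context struct and function names are
simplified. The point is that if readline() went straight to the wrapped bucket, the bytes it
returns would never pass through this counter.

[[[
/* Illustrative sketch only -- not the code from r1714297. */
#include "serf.h"
#include "serf_bucket_util.h"

typedef struct event_context_t {
    serf_bucket_t *wrapped;    /* the stream this bucket wraps */
    apr_uint64_t bytes_read;   /* accounting added by the event bucket */
} event_context_t;

static apr_status_t event_read(serf_bucket_t *bucket,
                               apr_size_t requested,
                               const char **data, apr_size_t *len)
{
    event_context_t *ctx = bucket->data;
    apr_status_t status;

    status = serf_bucket_read(ctx->wrapped, requested, data, len);

    if (!SERF_BUCKET_READ_ERROR(status))
        ctx->bytes_read += *len;   /* every byte has to pass through here */

    return status;
}
]]]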


If you would like to improve on this, we might want to split the implementation into an event
bucket (start reading, stop reading, destruct) and an accounting bucket (done reading; amount read).

Combining the two buckets into one allows a callback implementation to call get_remaining() in the
'start reading' call, to see whether it already knows the size ahead of time. In the case of a
separate accounting bucket, that behavior might belong in the bucket itself.
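A rough sketch of what the combined variant would allow. The callback signature and names here are
hypothetical; only serf_bucket_get_remaining() and the SERF_LENGTH_UNKNOWN sentinel are meant to be
existing trunk API.

[[[
#include "serf.h"

/* Hypothetical 'start reading' callback, made up for illustration. */
static apr_status_t on_start_reading(void *baton, serf_bucket_t *stream)
{
    apr_uint64_t remaining = serf_bucket_get_remaining(stream);

    if (remaining != SERF_LENGTH_UNKNOWN) {
        /* The total size is known ahead of time, so the caller could
           announce it up front instead of switching to a
           chunked/last-frame style of handling later. */
    }

    return APR_SUCCESS;
}
]]]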


All of this is to allow the decision on how big the chunks will be to be delayed as long as possible.
Writing exactly what we have in RAM is simply more efficient than making every chunk exactly
the maximum size. Our buckets are extremely useful in allowing this.
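In code, the idea looks roughly like the sketch below. The write_frame() helper is hypothetical
and 16384 is just an illustrative frame limit, not a serf constant; the read itself is real bucket API.

[[[
#include "serf.h"

#define MAX_FRAME_SIZE 16384   /* illustrative limit only */

/* Hypothetical helper that actually puts a chunk/frame on the wire. */
static apr_status_t write_frame(void *baton, const char *data, apr_size_t len);

static apr_status_t send_what_we_have(serf_bucket_t *body, void *out_baton)
{
    const char *data;
    apr_size_t len;
    apr_status_t status;

    /* Ask for at most one frame's worth, but happily accept less:
       the chunk size is decided by what the bucket has in RAM right now. */
    status = serf_bucket_read(body, MAX_FRAME_SIZE, &data, &len);
    if (SERF_BUCKET_READ_ERROR(status))
        return status;

    if (len > 0) {
        apr_status_t wstatus = write_frame(out_baton, data, len);
        if (wstatus)
            return wstatus;
    }

    /* APR_EOF / APR_EAGAIN from the read tells the caller whether
       more data will follow. */
    return status;
}
]]]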

I implemented the simple case for headers in r1714307 and a similar testcase in r1714328 to
show what the breaking part is all about.


Next up is the body transfer scheduling, to make the HTTP/2 code fully functional.

	Bert

