serf-dev mailing list archives

From "Bert Huijben" <b...@qqmail.nl>
Subject RE: [serf-dev] http 2 support
Date Sat, 17 Oct 2015 18:45:52 GMT


> -----Original Message-----
> From: serf-dev@googlegroups.com [mailto:serf-dev@googlegroups.com]
> On Behalf Of Lieven Govaerts
> Sent: zaterdag 17 oktober 2015 20:02
> To: serf-dev@googlegroups.com
> Subject: [serf-dev] http 2 support
> 
> Hi Bert,
> 
> I see that you're starting to work on HTTP 2.0 support. Great!
> 
> May I suggest you propose how you want to attack this to the list?
> This is quite a big feature, and more people than just you need to
> know how this code is gonna work.
> 
> I wonder for instance if you plan to develop everything from scratch,
> or if you plan to use some of the code from apache httpd (etc.).

Answering the last part first: httpd doesn't have much code to borrow. Our buckets handle these
things at least as well as, if not better than, the current brigade usage in httpd. The reusable
parts of the HTTP/2 code are all in nghttp2 (see http://nghttp2.org/), but I doubt that we want
to use that library to replace our context loop.


I'm thinking that if we want to perform the HTTP communication in the same zero-copy way as
we currently do for HTTP/1.1, we should at least implement our own frame handling. As all
frames are length-prefixed, this should be easier than in HTTP/1.1. Most of it is in today's
patches, and this code is even fairly transparent for iovecs, assuming large enough frames
are used. I'm guessing most interesting servers will listen if we tell them that we allow
huge frames.
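For reference, the fixed 9-octet frame header from RFC 7540 is all the framing layer has to
parse before it knows the payload length. A minimal sketch (illustrative only, not serf code):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch, not serf's actual API. */
typedef struct {
    uint32_t length;    /* 24-bit payload length */
    uint8_t  type;      /* frame type, e.g. 0x0 DATA, 0x1 HEADERS */
    uint8_t  flags;
    uint32_t stream_id; /* 31 bits; the reserved bit is masked off */
} h2_frame_header_t;

/* Parse the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1).
   Returns 0 on success, -1 if fewer than 9 octets are available. */
static int h2_parse_frame_header(const uint8_t *buf, size_t len,
                                 h2_frame_header_t *hdr)
{
    if (len < 9)
        return -1;
    hdr->length    = ((uint32_t)buf[0] << 16)
                   | ((uint32_t)buf[1] << 8)
                   |  (uint32_t)buf[2];
    hdr->type      = buf[3];
    hdr->flags     = buf[4];
    hdr->stream_id = (((uint32_t)buf[5] & 0x7F) << 24)
                   | ((uint32_t)buf[6] << 16)
                   | ((uint32_t)buf[7] << 8)
                   |  (uint32_t)buf[8];
    return 0;
}
```

Because the length is known up front, a reader bucket can hand out iovecs pointing straight
into the network buffer without copying the payload.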

On top of that, HTTP/2 abstracts streams, which map closely onto how we currently handle our
buckets. We don't need to do much there, as we can reuse quite a bit of the design that we
already use for chunking. (The decode part is already implemented via the two bucket types I
just added.)
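The routing itself is just a lookup from stream id to per-stream state; something along these
lines (names are illustrative, not serf's actual API, and real code would use an apr pool
rather than malloc):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch: map HTTP/2 stream ids onto per-stream state,
   the way frame payloads could be routed into response buckets. */
typedef struct h2_stream_t {
    uint32_t id;
    struct h2_stream_t *next;
    /* in real serf this would hold the response bucket, etc. */
} h2_stream_t;

typedef struct {
    h2_stream_t *streams;   /* singly linked list of open streams */
} h2_conn_t;

/* Find an open stream; create it on first use.
   (Client-initiated stream ids are odd, per RFC 7540.) */
static h2_stream_t *h2_stream_get(h2_conn_t *conn, uint32_t id)
{
    h2_stream_t *s;
    for (s = conn->streams; s; s = s->next)
        if (s->id == id)
            return s;
    s = calloc(1, sizeof(*s));
    s->id = id;
    s->next = conn->streams;
    conn->streams = s;
    return s;
}
```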


I was postponing the interesting decision of whether we should implement our own HPACK
header-compression library until after I get the basic framing done. I don't think it will be
that hard to implement from scratch, or we can use the existing nghttp2 implementation.
(For the prototype I was thinking of announcing that we have no header cache, so we only get
literal headers or headers from the predefined static table.)
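That "no header cache" mode keeps the encoder trivial: advertise SETTINGS_HEADER_TABLE_SIZE 0
and emit only static-table indexed fields (RFC 7541, section 6.1) or literal fields without
indexing (section 6.2.2), never Huffman-coded. A rough sketch of what such an encoder emits
(again illustrative, not serf code):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Emit an indexed header field from the HPACK static table: 1xxxxxxx.
   Only valid for index < 127, which covers the whole static table. */
static size_t hpack_emit_indexed(uint8_t *out, unsigned index)
{
    out[0] = 0x80 | (uint8_t)index;
    return 1;
}

/* Emit a literal header field without indexing, new name (prefix 0x00),
   both strings raw (Huffman bit clear). Lengths must stay below 127
   so each fits in a single length octet. */
static size_t hpack_emit_literal(uint8_t *out,
                                 const char *name, const char *value)
{
    size_t nlen = strlen(name), vlen = strlen(value), p = 0;
    out[p++] = 0x00;                    /* literal, not indexed, new name */
    out[p++] = (uint8_t)nlen;           /* high bit 0: not Huffman coded */
    memcpy(out + p, name, nlen); p += nlen;
    out[p++] = (uint8_t)vlen;
    memcpy(out + p, value, vlen); p += vlen;
    return p;
}
```

With this, `:method: GET` is a single octet 0x82 (static table index 2), and anything not in
the static table goes out as a plain literal.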


After the header parsing we can basically fall through into our already event-driven request
handling. Eventually we should probably optimize some of that to work better with servers
that really schedule multiple streams in parallel, but I doubt any of our current users are
really interested in those scenarios.


For Subversion the most obvious benefit would be that we can rely on framing, instead of having
to test for chunking and then fall back to other schemes. And as the server acknowledges which
headers it received and which it didn't, we should also be able to restart requests more safely.

But all of that is in the far future. Right now I'm trying to get a basic request in the style
of serf_get working as a (multi-)weekend project, while keeping the rest of the code stable.
In this step I would just hardcode the request and its headers to avoid all the HPACK details,
or perhaps encode them in the simplest possible way.
(I'm just hoping that the server I'm currently working against doesn't return its response
headers Huffman-encoded.)



Getting HTTP/2 support into httpd on my FreeBSD box isn't that easy either, as ALPN requires
OpenSSL >= 1.0.2, which isn't yet the default on many platforms.
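For anyone trying the same: SSL_CTX_set_alpn_protos() (the OpenSSL >= 1.0.2 entry point) takes
the protocol names in a length-prefixed wire format, where "h2" is the RFC 7540 ALPN token.
Building that list is easy to get wrong, so here's a small sketch (hypothetical helper, not
serf code):

```c
#include <string.h>
#include <stddef.h>

/* Build the wire-format protocol list that OpenSSL >= 1.0.2's
   SSL_CTX_set_alpn_protos() expects: each name prefixed by its
   one-octet length. Returns the number of octets written, or 0 if
   a name is too long or the output buffer is too small. */
static size_t alpn_protos(unsigned char *out, size_t outlen,
                          const char **names, size_t count)
{
    size_t p = 0, i;
    for (i = 0; i < count; i++) {
        size_t n = strlen(names[i]);
        if (n > 255 || p + 1 + n > outlen)
            return 0;                   /* does not fit */
        out[p++] = (unsigned char)n;    /* length prefix */
        memcpy(out + p, names[i], n);
        p += n;
    }
    return p;
}
```

So offering "h2" with an "http/1.1" fallback produces the 12-octet list
`\x02h2\x08http/1.1`, which is what gets handed to SSL_CTX_set_alpn_protos().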


	Bert

