serf-dev mailing list archives

From Evgeny Kotkov <>
Subject [PATCH] HTTP/2: Avoid accessing a destroyed allocator for HPACK buckets
Date Fri, 17 Mar 2017 11:54:29 GMT

This patch prevents Serf from accessing an already destroyed allocator
when freeing HTTP/2 HPACK buckets in error scenarios.  One possible
reproduction script for the issue is:

    (This is not 100% stable, but close to that.  DEBUG_DOUBLE_FREE
    should be defined.)

    > serf_get -c4 -n4 --http2
    Error running context: (120153) HTTP2 flow control limits exceeded
    Aborted (core dumped)

See serf_http2__stream_setup_next_request() around http2_stream.c:378.
The HPACK bucket is created using the request->allocator, and is then
added into the pump's output queue.  Certain error conditions, such as
the one above, can cause the request to be destroyed _before_ the pump
with the bucket still alive.  Attempting to destroy this bucket will then
result in accessing an already destroyed request->allocator.

While the HTTP/2 implementation can handle error situations with enqueued
output that contains buckets from a request, doing so requires marking the
request with SERF_WRITING_STARTED so that the special cleanup logic in
outgoing_request.c:47 can kick in.  This is how things currently work
for HTTP/2 requests with bodies, where the body bucket originates from
a serf_request_t.

Luckily, in this particular case with HPACK buckets and header-only
requests, it's possible to avoid using the request->allocator, and
allocate the new bucket using the HTTP/2 stream's allocator.  Thus,
the lifetime of the new bucket will no longer depend on the underlying
request.

Please see the attached patch with the fix described above.  The log
message is included in the beginning of the patch file.

Evgeny Kotkov
