Re: Fragmentation for headers: why jumbo != continuation.

Flow control of headers ends up being problematic even when there is no
compressor since, on the flip side of this coin, you can't always predict
which headers are necessary for interpretation of a request. You thus end
up with smart attackers sending all but that last necessary header on
every stream, mounting a much more effective slowloris attack.
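
To make the buffering consequence concrete, here is a minimal sketch (all
names hypothetical, not from any real server) of what happens when an
attacker withholds the final header fragment on every stream: each stream
stays pinned in a partial-headers buffer, and nothing can ever be
dispatched.

```python
# Hypothetical sketch: a server buffering header fragments until the
# end-of-headers flag arrives. An attacker who never sends that flag
# pins one buffer per stream indefinitely.

class Server:
    def __init__(self):
        self.partial = {}  # stream_id -> buffered header bytes

    def on_header_fragment(self, stream_id, data, end_headers):
        buf = self.partial.setdefault(stream_id, bytearray())
        buf.extend(data)
        if end_headers:
            # Only now can the request be interpreted and dispatched.
            del self.partial[stream_id]
            return b"dispatched"
        return None  # must keep buffering; nothing to act on yet

server = Server()
# Attacker opens many streams, sending everything *except* the final
# fragment on each one.
for sid in range(1, 2001, 2):
    server.on_header_fragment(sid, b"x" * 1024, end_headers=False)

print(len(server.partial))                           # 1000 streams pinned
print(sum(len(b) for b in server.partial.values()))  # ~1 MB held hostage
```

Without per-stream flow control on header frames, nothing bounds how much
of this state the receiver accumulates.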

Proxies that implement DoS or other security protections are especially
likely to need to do this. Thankfully, in general, these are the only
ones that need to.
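
As an illustration of why such a proxy must see the complete header set,
here is a small sketch (hypothetical names, not from any real proxy): a
header that arrives late can still change the verdict, so nothing can be
forwarded until everything has been consumed.

```python
# Hypothetical sketch: a filtering proxy cannot forward any header
# until it has seen the complete set, because a later header could
# still flip the verdict (e.g. a denied Host value).

def filter_and_forward(headers, forward, denied_hosts):
    """headers: iterable of (name, value) pairs; forward: per-header sink."""
    buffered = []
    for name, value in headers:          # must consume the full set first
        if name.lower() == "host" and value in denied_hosts:
            return "rejected"            # a late header can still reject
        buffered.append((name, value))
    for header in buffered:              # only now is forwarding safe
        forward(header)
    return "forwarded"

sent = []
verdict = filter_and_forward(
    [(":method", "GET"), ("host", "evil.example")],
    sent.append,
    denied_hosts={"evil.example"},
)
print(verdict, sent)  # rejected; nothing was forwarded
```

The buffering (and the latency it adds) is inherent to the filtering, not
an implementation choice.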

I like some parts of what you proposed there, and don't like other parts.
I'd prefer to discuss that in a separate thread, though, so we don't
dilute this conversation too much.

On Thu, Jul 10, 2014 at 4:38 PM, Greg Wilkins <> wrote:

> Roberto,
> I agree that this is a concern, even if large headers are only 0.01% (or
> whatever it is ) of the traffic.
> But I don't think fragmentation on its own is sufficient. You need
> fragmentation and flow control. To achieve this we need to stop treating
> headers as a special case and forget about any deadlines by the end of
> the year. I would propose that:
>    - We remove HEADERS, CONTINUATION and PUSH_PROMISE from the
>    specification
>    - We retain END_SEGMENT in DATA frames
>    - Streams are created by sending a stream SETTINGS frame with a
>    PROTOCOL parameter!
> We now have a multiplexed framing layer that is totally devoid of any
> knowledge of HTTP! The framing supports segmented data streams that are
> flow controlled and of unlimited size. We then come up with a mapping of
> HTTP semantics to this framing layer:
>    - HTTP streams start with a SETTINGS frame that has PROTOCOL=h2
>    - Odd data segments on the stream carry headers/trailers. So a stream
>    with one segment is just headers; a stream with three segments is
>    headers, data, trailers, etc.
> Now we have to work out how to encode the headers into those data frames.
> The stateless parts of HPACK are a pretty reasonable start: using Static-H
> gives a 0.66 compression factor. However, I think there are probably other
> alternatives that are less order dependent - e.g. sending the header-set
> mutations only on stream 0, so that normal decoding does not mutate the
> table. If we wanted to make HTTP a bit of a special case, we could go to
> Linear-H, with a 0.31 compression factor, but then decoding of the headers
> must be done in the order they are sent - making header compression part
> of the framing layer... but I could live with a little bit of conflation
> for efficiency purposes :)
> With this scheme, we could even support lazy proxies that would send
> HTTP/1.1 by having PROTOCOL=h1 on a stream and just sending the HTTP/1
> bytes unaltered. WebSocket could be supported the same way, or it too
> could have its own segmented data mapping.
> Even if we go for a less drastic way to do fragmentation of headers, I
> think the process has to be the same: start with data frame semantics and
> work out how to transport compressed headers. Don't come up with a
> different fragmentation/flow-control regime based on the content of the
> frame.
> cheers
> On 11 July 2014 06:27, Roberto Peon <> wrote:
>> There are two separate reasons to fragment headers:
>> 1) Dealing with headers of size > X when the max frame size is <= X.
>> 2) Reducing buffer consumption and latency.
>> Most of the discussion thus far has focused on #1.
>> I'm going to ignore it, as those discussions are occurring elsewhere, and
>> in quite some depth :)
>> I wanted to be sure we were also thinking about #2.
>> Without the ability to fragment headers on the wire, one must know the
>> size of the entire set of headers before any of it may be transmitted.
>> This implies that one must encode the entire set of headers before
>> sending if one will ever do transformation of the headers. Encoding the
>> headers in a different HPACK context would count as a transformation, even
>> if none of the headers were modified.
>> This means that the protocol, if it did not have the ability to fragment,
>> would require increased buffering and increased latency for any proxy by
>> design.
>> This is not currently true for HTTP/1-- the headers can be sent/received
>> in a streaming fashion, and implementations may, at their option, choose to
>> buffer in order to simplify code.
>> -=R
> --
> Greg Wilkins <>
> HTTP, SPDY, Websocket server and client that
> scales
>  advice and support for jetty and cometd.

Received on Friday, 11 July 2014 00:42:16 UTC