Re: Stuck in a train -- reading HTTP/2 draft.

In message <53A13961.8090103@treenet.co.nz>, Amos Jeffries writes:

>>>>>> All this trouble could be avoided by only submitting headers for
>>>>>> decompression, as a unit, when the END_HEADERS have been received.
>>>>
>>>> That creates a nice state exhaustion/denial of service opportunity
>>>> that we decided not to permit.
>>>
>>> I really don't understand that answer:  Buffering the compressed
>>> header will take up less space than buffering the uncompressed
>>> header ?
>>>
>> Who's buffering headers? The whole point is that we're streaming them
>> through the HPACK context, *not* buffering them.

I'm starting to really hate the entire HEADERS+CONTINUATION
kludge-upon-kludge-upon-kludge hackery.

My preference would be to impose sanity by simply removing CONTINUATION
and telling cookie monsters that if their HPACK-compressed HTTP
headers do not fit in 16k, they should consider a diet.

If the WG insists on supporting ridiculously large HTTP headers, we
should eat one of the reserved bits in the frame header to indicate
that a length-extension field is present in front of the payload,
so that all headers can fit into a single frame.
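
Roughly what I have in mind, as a sketch only: assume the draft's
8-octet frame header, where the first 16-bit word carries two reserved
bits above a 14-bit length.  The FLAG_LEN_EXT name and the 32-bit
extension field below are my own invention, not anything in the draft.

    #include <stdint.h>
    #include <sys/types.h>

    /* Hypothetical: reuse one of the two reserved bits above the 14-bit length. */
    #define FLAG_LEN_EXT    0x8000u

    struct frame_hdr {
            uint32_t        length;         /* payload length, possibly extended */
            uint8_t         type;
            uint8_t         flags;
            uint32_t        stream_id;
    };

    /*
     * Parse the 8-octet frame header; if the hypothetical length-extension
     * bit is set, a 32-bit length field sits in front of the payload and
     * replaces the 14-bit length.  Returns octets consumed, or -1 if the
     * buffer is too short.
     */
    static ssize_t
    parse_frame_hdr(const uint8_t *buf, size_t len, struct frame_hdr *h)
    {
            uint32_t word;
            size_t need = 8;

            if (len < need)
                    return (-1);
            word = ((uint32_t)buf[0] << 8) | buf[1];
            h->length = word & 0x3fffu;             /* 14-bit length */
            h->type = buf[2];
            h->flags = buf[3];
            h->stream_id = ((uint32_t)(buf[4] & 0x7fu) << 24) |
                ((uint32_t)buf[5] << 16) | ((uint32_t)buf[6] << 8) | buf[7];

            if (word & FLAG_LEN_EXT) {
                    /* Length-extension field in front of the payload. */
                    need += 4;
                    if (len < need)
                            return (-1);
                    h->length = ((uint32_t)buf[8] << 24) |
                        ((uint32_t)buf[9] << 16) |
                        ((uint32_t)buf[10] << 8) | buf[11];
            }
            return ((ssize_t)need);
    }

The point being that the common case still parses a fixed 8-octet
header, and only the cookie-monster case pays the extra four octets.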

The difference between receiving cookie-monster frames and chains
of HEADERS+CONTINUATION, into which no other frames can be interposed,
is that the former is far simpler to implement, faster to process,
and takes up less bandwidth.

The storage requirements will be exactly the same either way.
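
To illustrate with a sketch (hblock_append(), on_header_fragment()
and hpack_decode() are made-up names, not any particular
implementation): a receiver that defers HPACK decoding until
END_HEADERS holds exactly the same compressed octets whether they
arrive as one big frame or as a HEADERS+CONTINUATION chain.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical HPACK decoder entry point. */
    int hpack_decode(const uint8_t *block, size_t len);

    /* Accumulator for one compressed header block. */
    struct hblock {
            uint8_t *buf;
            size_t   len;
    };

    static int
    hblock_append(struct hblock *hb, const uint8_t *frag, size_t fraglen)
    {
            uint8_t *p;

            p = realloc(hb->buf, hb->len + fraglen);
            if (p == NULL)
                    return (-1);
            memcpy(p + hb->len, frag, fraglen);
            hb->buf = p;
            hb->len += fraglen;
            return (0);
    }

    /*
     * Called with the payload of HEADERS and each CONTINUATION frame;
     * decoding happens once, when the block is complete.  A single big
     * frame is simply the case where the first fragment is also the last.
     */
    static int
    on_header_fragment(struct hblock *hb, const uint8_t *frag, size_t fraglen,
        int end_headers)
    {
            if (hblock_append(hb, frag, fraglen) != 0)
                    return (-1);
            if (!end_headers)
                    return (0);
            return (hpack_decode(hb->buf, hb->len));
    }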

A SETTINGS_MAX_FRAME_SIZE defaulting to 16k-1 would apply negative
social pressure on cookie monsters, and keep their WG-sanctioned
escape hatch from becoming a gateway for DoS.
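
Enforcing that limit at the receiver is a trivial check; the constant
and function names below are mine, and the setting itself is of
course only proposed here:

    #include <stdint.h>

    #define DEFAULT_MAX_FRAME_SIZE  16383u  /* 2^14 - 1, the "16k-1" above */

    /*
     * Returns 0 if the frame fits, -1 if it should be rejected
     * (e.g. as a FRAME_SIZE_ERROR / connection error).
     */
    static int
    check_frame_length(uint32_t length, uint32_t advertised_max)
    {
            if (advertised_max == 0)
                    advertised_max = DEFAULT_MAX_FRAME_SIZE;
            return (length > advertised_max ? -1 : 0);
    }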

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
