On 30 May 2014 16:51, "Martin J. Dürst" <duerst@it.aoyama.ac.jp> wrote:
> This is just a thought:
>
> Would it be possible to allow arbitrarily large amounts of header data
> (either via continuations or via multiple header frames), but to limit
> compression to a single header frame?
>
> While in general, there is a stronger need to compress larger stuff, such
> a solution could come with various benefits:
> - Simplified compression (less/no state)
> - Keep the main benefit (quick start)
> - Penalty against large amounts of header data
> (because that's not the way to do things anyway)
>
> Regards, Martin.
>
>
If you send SETTINGS_HEADER_TABLE_SIZE=0 and a HEADERS frame with [0x30, 0x20]
at the start of the first block fragment, you effectively disable the shared
context and are left with only Huffman coding (which carries no state from one
frame to the next).
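For anyone who doesn't have the draft open: my reading of its context-update
encoding is that 0x30 ('0011 0000') empties the reference set and 0x20
('0010 0000') sets the maximum header table size to 0 via the 4-bit prefix.
Here's a tiny Python sketch of that reading; the function name is mine, it
only recognises those two forms, and it ignores multi-octet prefix integers,
so treat it as an illustration rather than a conformant HPACK decoder:

    def decode_context_updates(block):
        # Interpret leading context-update octets (3-bit prefix '001') in a
        # header block fragment. Multi-octet integers (4-bit prefix all ones)
        # are not handled in this sketch.
        actions = []
        for octet in block:
            if (octet & 0xE0) != 0x20:     # not '001x xxxx': header data follows
                break
            if octet & 0x10:               # '0011 0000': empty the reference set
                actions.append("empty reference set")
            else:                          # '0010 xxxx': new maximum table size
                actions.append("max header table size = %d" % (octet & 0x0F))
        return actions

    print(decode_context_updates(bytes([0x30, 0x20])))
    # -> ['empty reference set', 'max header table size = 0']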
As Roberto reminded me yesterday, the thing about a header block is that when
it ends, everything still in the reference set (carried over from the previous
header block) is emitted as well, without being sent again. The biggest gain
in HPACK compression comes from not actually sending identical headers again
and again, which means sharing context not only between multiple frames, but
also between frames belonging to different streams. I don't know whether, in
practice, any per-frame compression scheme would come close to HPACK's
connection-based delta compression, and losing that would be a big hit to the
protocol's appeal.
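To make that concrete, here's a toy (deliberately non-HPACK) sketch of
connection-level delta compression: once the first request has put its fields
into a shared table, later requests on other streams mostly emit one-byte
index references instead of literal name/value pairs. The class and the "~1
byte" figure are made up for illustration; only the principle matches HPACK:

    class ToyDeltaEncoder:
        def __init__(self):
            self.table = {}    # shared, connection-level: (name, value) -> index

        def encode(self, headers):
            out = []
            for field in headers:
                if field in self.table:
                    out.append(("indexed", self.table[field]))   # ~1 byte on the wire
                else:
                    self.table[field] = len(self.table) + 1
                    out.append(("literal+insert", field))        # full name/value bytes
            return out

    enc = ToyDeltaEncoder()
    request = [(":method", "GET"), (":scheme", "https"),
               (":authority", "example.com"), ("user-agent", "toy/1.0")]
    print(enc.encode(request))                        # stream 1: all literals
    print(enc.encode(request + [(":path", "/two")]))  # stream 3: mostly indexed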
--
Matthew Kerwin
http://matthew.kerwin.net.au/