On 8 July 2014 11:02,  <> wrote:
> compressed_size = uncompressed_size[max] * 2 / header_table_size
> And that's your setting.  I must be missing your point?? (An endpoint should know how much it wants to commit for the uncompressed data and how much it wants to commit for the table.)

WTF?  That would be completely nonsensical.  The formula only produces
an upper bound, not a sensible value.

Actually, the more complete formula is slightly worse with Huffman
encoding and the reference set:

uncompressed_size[max] = (compressed_size + 1) * (header_table_size - 32) / 2 / (5/8)
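As a sketch only (my reading of the bound above, not normative HPACK text), the terms are: each table entry carries 32 octets of accounting overhead, the divide-by-2 accounts for the cost of first populating the table, and the divide-by-5/8 accounts for Huffman coding shrinking literal input to as little as 5/8 of its size:

```python
# Hypothetical helper illustrating the worst-case expansion bound quoted
# in this message; the name and structure are mine, not from any spec.
def max_uncompressed(compressed_size, header_table_size):
    # uncompressed_size[max] =
    #   (compressed_size + 1) * (header_table_size - 32) / 2 / (5/8)
    # - (header_table_size - 32): usable octets per table entry after
    #   the 32-octet per-entry overhead
    # - / 2: populating the table consumes compressed octets too
    # - / (5/8): Huffman coding can make each compressed octet represent
    #   up to 8/5 uncompressed octets
    return (compressed_size + 1) * (header_table_size - 32) / 2 / (5 / 8)

# With the default-ish values discussed below (16k frames, 4k table):
print(max_uncompressed(16384, 4096))  # 53270912.0, i.e. ~53 MB
```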

>>So yes, constrained, but not really reasonable.
> Can you please clarify what's not reasonable?  Not a reasonable estimation? Or, not reasonable to commit that much memory?

Let's say that you commit a 4k header table and permit frames of 16k
(i.e., the default values we're talking about).  That means that you
are potentially required to commit ~32M for an uncompressed set of
header fields that exploit the maximum compression (actually, it's a
little higher than this, but you get the idea).
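The ~32M figure follows directly from the simpler formula quoted at the top of this message (compressed_size = uncompressed_size * 2 / header_table_size), rearranged and evaluated at the values in the example; the "little higher" comes from the overhead and Huffman terms in the more complete formula:

```python
# Worked arithmetic for the example above; variable names are mine.
header_table_size = 4096   # the 4k header table
frame_size = 16384         # the 16k frame limit

# uncompressed = compressed * header_table_size / 2
worst_case = frame_size * header_table_size / 2
print(worst_case)  # 33554432.0, i.e. 32 MiB
```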

Received on Tuesday, 8 July 2014 18:30:10 UTC