RE: #541: CONTINUATION

On Tuesday, 08 July 2014 19:44, martin.thomson@gmail.com wrote:
>On 7 July 2014 22:36, Greg Wilkins <gregw@intalio.com> wrote:
>> but I have pointed out in the past that the encoded header size is a
>> reasonable indication of additional memory requirements represented by the
>> header block.   The highly compressed fields within a header block are the
>> indexed ones, and they reference memory in the header set that is
>> already constrained by a setting.
>
>Not really.  The math is pretty simple:
>
>uncompressed_size[max] = compressed_size * header_table_size / 2

Solve for compressed_size...

compressed_size = uncompressed_size[max] * 2 / header_table_size

And that's your setting.  I must be missing your point?  (An endpoint should know how much memory it wants to commit for the uncompressed data and how much it wants to commit for the table.)
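To make that concrete (illustrative numbers only, not from the thread): with header_table_size = 4096 and a willingness to buffer at most 1 MiB of uncompressed header data, the bound above gives

compressed_size = 1048576 * 2 / 4096 = 512

so the endpoint would advertise a 512-octet limit on the compressed header block.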


>So yes, constrained, but not really reasonable.

Can you please clarify what's not reasonable?  Not a reasonable estimate?  Or not reasonable to commit that much memory?


