Re: #541: CONTINUATION

> On 08 Jul 2014, at 20:15, "martin.thomson@gmail.com" <martin.thomson@gmail.com> wrote:
>
>> On 8 July 2014 11:02,  <K.Morgan@iaea.org> wrote:
>> compressed_size = uncompressed_size[max] * 2 / header_table_size
>>
>> And that's your setting.  I must be missing your point?? (An endpoint should know how much it wants to commit for the uncompressed data and how much it wants to commit for the table.)
>
> WTF?  That would be completely nonsensical.  The formula only produces
> an upper bound, not a sensible value.

Yup. Totally missed your point. Got it.
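
(To put a number on that upper bound: a quick back-of-the-envelope in
Python. This is my own sketch using the default values under discussion,
not anything from the spec itself.)

    header_table_size = 4096   # SETTINGS_HEADER_TABLE_SIZE (4k)
    compressed_size = 16384    # one maximum-size frame (16k)

    # Rearranging the formula quoted above:
    #   compressed_size = uncompressed_size_max * 2 / header_table_size
    uncompressed_size_max = compressed_size * header_table_size // 2
    print(f"{uncompressed_size_max:,} bytes")  # 33,554,432 -- the ~32M below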


>>> So yes, constrained, but not really reasonable.
>>
>> Can you please clarify what's not reasonable?  Not a reasonable estimation? Or, not reasonable to commit that much memory?
>
> Let's say that you commit a 4k header table and permit frames of 16k
> (i.e., the default values we're talking about).  That means that you
> are potentially required to commit ~32M for an uncompressed set of
> header fields that exploit the maximum compression (actually, it's a
> little higher than this, but you get the idea).

This part I still don't get. What does it matter whether the uncompressed headers are 32M, or 32G for that matter? As soon as you hit the limit of what you're willing to commit to the uncompressed headers, you stop and respond with 431 (Request Header Fields Too Large). Of course you then have to decide whether to keep decoding just to keep the HPACK state synchronized, but if a fraction of a 16K frame is expanding into megabytes of uncompressed headers, it should be pretty obvious you're under a DoS attack, and you can simply kill the connection. Am I still missing something?
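
For concreteness, here's roughly the check I mean, as a minimal Python
sketch. All the names and numbers are hypothetical; this isn't any real
HPACK implementation's API.

    MAX_UNCOMPRESSED = 64 * 1024  # whatever you're willing to commit per block
    KILL_RATIO = 1000             # expansion factor that smells like a DoS

    class HeaderListTooLarge(Exception):  # caller responds with 431
        pass

    class LikelyDoS(Exception):           # caller kills the connection
        pass

    def check_limits(fields, compressed_size):
        """fields: (name, value) pairs as the decoder emits them."""
        total = 0
        for name, value in fields:
            # 32-byte per-entry overhead, as HPACK counts entry size
            total += len(name) + len(value) + 32
            if total > KILL_RATIO * compressed_size:
                raise LikelyDoS(f"{compressed_size}B expanded to {total}B")
            if total > MAX_UNCOMPRESSED:
                # Stop committing memory here. Whether to keep decoding
                # (and discard the output) to keep HPACK state in sync is
                # the separate decision mentioned above.
                raise HeaderListTooLarge(total)
        return total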




Received on Tuesday, 8 July 2014 19:59:00 UTC