Re: #541: CONTINUATION

On 3 Jul 2014, at 10:55 am, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> wrote:

> My preference is (0).
> (1) adds more complexity just for the 0.02% case.
> h2 should support > 16k headers, so (2) is not an option.
> (3) may be a candidate, but if we can agree now, that would be better.
> 
> So if that 0.02% of requests really causes a problem, specifically in the coalescing scenario, I'd like to suggest that we add a minimum compressed header payload size that must be supported.  I chose compressed size because it can easily be computed from the payload length.  The default size is 64k.  A receiver can freely respond with 413 within this limit, but it has to burn HPACK cycles to decompress the headers (and discard them).  It can terminate the connection when it encounters more than this limit.
> A sender should not send a header block of more than 64k in total compressed size.  It is possible to roughly estimate an upper bound on the compressed size without running HPACK (encode everything as new-name literals, and allow for possible encoding context updates and a reference set toggle-off at the end).
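
For concreteness, a rough sketch of that upper-bound estimate in C. It assumes the draft-era HPACK wire format described above (new-name literals, no Huffman coding, a couple of bytes of slack for context updates and the reference-set toggle-off); the struct and function names are illustrative, not from any implementation:

  #include <stddef.h>
  #include <stdio.h>

  /* A header field, reduced to the lengths that matter for the estimate. */
  struct header {
    size_t name_len;
    size_t value_len;
  };

  /* Bytes needed to encode integer n with an HPACK prefix of prefix_bits. */
  static size_t hpack_int_len(size_t n, unsigned prefix_bits) {
    size_t max_prefix = ((size_t)1 << prefix_bits) - 1;
    if (n < max_prefix)
      return 1;
    size_t len = 2;              /* prefix byte + at least one more octet */
    for (n -= max_prefix; n >= 128; n >>= 7)
      ++len;
    return len;
  }

  /* Worst case for one field: literal representation with a new name and
   * no Huffman coding -- 1 representation byte, then each string as a
   * length-prefixed literal (7-bit length prefix). */
  static size_t field_bound(const struct header *h) {
    return 1 + hpack_int_len(h->name_len, 7) + h->name_len
             + hpack_int_len(h->value_len, 7) + h->value_len;
  }

  /* Upper bound for a whole header list, computed without running HPACK.
   * The +2 of slack for a context update and the reference-set toggle-off
   * is an assumption, not something fixed by the draft. */
  static size_t estimate_compressed_bound(const struct header *hv, size_t n) {
    size_t total = 2;
    for (size_t i = 0; i < n; ++i)
      total += field_bound(&hv[i]);
    return total;
  }

  int main(void) {
    struct header req[] = {
      { sizeof ":method" - 1, sizeof "GET" - 1 },
      { sizeof "x-big-header" - 1, 70000 },   /* deliberately oversized */
    };
    size_t bound = estimate_compressed_bound(req, 2);
    printf("estimated upper bound: %zu bytes (%s the 64k limit)\n",
           bound, bound > 65536 ? "over" : "within");
    return 0;
  }

A sender could run this check before emitting the header block and refuse (or trim) anything whose estimate exceeds the 64k floor, without touching the HPACK encoder state.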

Thank you.

Any thoughts about the subsequent proposal #4?

http://www.w3.org/mid/23C82249-8187-47A7-ADF0-25A29804085C@redhat.com

Cheers,

--
Mark Nottingham   http://www.mnot.net/

Received on Thursday, 3 July 2014 01:16:52 UTC