Re: #541: CONTINUATION

2014/07/02 20:12 "Mark Nottingham" <mnot@mnot.net>:
>
> <https://github.com/http2/http2-spec/issues/541>
>
> There’s been strong pushback on the current design of CONTINUATION
> from some interested parties, and a few implementers. Despite the fact
> that this design is the result of multiple meetings demonstrating
> strong consensus, and the fact that we have a schedule-focused charter,
> this issue deserves a good hearing.
>
> I think everyone now has an idea of the issues and trade-offs involved,
> as well as the potential end-games. We also helpfully have a few
> proposals on how to move forward:
>
> 0) the status quo
>
> 1) <https://github.com/http2/http2-spec/pull/544> and variants thereof
> (e.g., not including CONTINUATION in flow control; alternative syntaxes)
>
> 2) limiting header sizes to 16K (HPACK’d) in HTTP/2, as per PHK’s
> suggestion
>
> There’s also another implicit option;
>
> 3) We can’t know until we get substantial interop and deployment
> experience with draft-13.
>
> I’d like to ask the implementers (as identified on the CC: line) what
> their preferences are, and what they can’t live with. If there’s
> another option, please say so, but only if it’s materially different,
> and you believe it has a chance of achieving consensus.
>
> To be clear, if you don’t say that you can’t live with something, it
> means that it’s an acceptable outcome. If you do, be prepared to back
> up such a strong stance with an equally strong argument.
>
> Note that this is input to help determine consensus, not a vote.
>
> Thanks,
>
> P.S. Please keep in mind that (3) is not “wait until September, then
> decide it’s too late.” Achieving a reasonable consensus now is
> relatively pain-free, if possible; deadlocking right before we (want
> to) ship is something I want to avoid.
>
> P.P.S. To anticipate some responses, a generic “jumbo frame” is off the
> table for this discussion; doing so doesn’t appear to have broad
> support, and there are strong arguments against it.
>
>
> --
> Mark Nottingham   http://www.mnot.net
>
>

My preference is (0).
(1) adds more complexity just for the 0.02% of cases.
h2 should support headers larger than 16k, so (2) is not an option.
(3) may be a candidate, but it is better if we can agree now.

So if the 0.02% of requests really cause a problem, specifically in the
coalescing scenario, I'd like to suggest that we add a minimum compressed
header payload size that must be supported.  I chose the compressed size
because it can easily be computed from the frame payload lengths.  The
default would be 64k.  Within this limit, a receiver can freely respond
with 413, but it still has to run HPACK to decompress the headers (and
discard them).  It can terminate the connection when it encounters more
than this limit.
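
To make the receiver side concrete, here is a minimal sketch in C of how
such a check could look.  The per-stream context, the callback shape and
the limit constant are hypothetical, not from any existing API:

#include <stddef.h>
#include <stdint.h>

/* Proposed default minimum that receivers must support (the 64k value
   suggested above). */
#define MAX_COMPRESSED_HEADER_SIZE (64 * 1024)

typedef struct {
  size_t header_block_len; /* compressed bytes seen so far in the current
                              HEADERS + CONTINUATION sequence */
  int discard;             /* nonzero once we have decided to answer 413 */
} stream_ctx;

/* Called for each HEADERS/CONTINUATION payload fragment on a stream.
   Returns 0 to continue, -1 to signal a connection error. */
static int on_header_block_fragment(stream_ctx *strm,
                                    const uint8_t *buf, size_t len) {
  (void)buf; /* a real implementation feeds buf into the HPACK decoder */

  strm->header_block_len += len;
  if (strm->header_block_len > MAX_COMPRESSED_HEADER_SIZE) {
    /* Sender exceeded the agreed limit: terminate the connection
       (e.g., send GOAWAY). */
    return -1;
  }
  if (strm->discard) {
    /* We already chose to reply 413, but the fragment must still be run
       through HPACK so the shared decoder state stays in sync; the
       decoded headers are simply thrown away. */
    return 0;
  }
  /* Normal path: decode and process the headers. */
  return 0;
}
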
A sender should not send a header block whose total compressed size
exceeds 64k.  It is possible to roughly estimate an upper bound on the
compressed size without running HPACK (encode everything as new-name
literals, plus possible encoding context updates and the reference set
toggle-off at the end).
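
And a rough sketch of that sender-side estimate, again with hypothetical
names.  The per-field arithmetic follows the HPACK literal-with-new-name
layout without Huffman coding; the trailing pad for context updates and
the reference set toggle-off is an assumed constant, not a spec value:

#include <stddef.h>
#include <string.h>

/* Bytes needed to encode `value` as an HPACK integer with an N-bit
   prefix. */
static size_t hpack_int_size(size_t value, unsigned prefix_bits) {
  size_t max_prefix = ((size_t)1 << prefix_bits) - 1;
  size_t n = 1; /* the prefix byte itself */
  if (value < max_prefix) {
    return n;
  }
  value -= max_prefix;
  while (value >= 128) {
    value >>= 7;
    ++n;
  }
  return n + 1;
}

typedef struct {
  const char *name;
  const char *value;
} header;

/* Worst case: every field sent as a literal with a new name, without
   Huffman coding, so the per-field estimate never undershoots what the
   encoder would actually emit for that field. */
static size_t estimate_compressed_upper_bound(const header *hdrs,
                                              size_t nhdrs) {
  size_t total = 0;
  size_t i;
  for (i = 0; i < nhdrs; ++i) {
    size_t nlen = strlen(hdrs[i].name);
    size_t vlen = strlen(hdrs[i].value);
    total += 1                                 /* representation byte  */
             + hpack_int_size(nlen, 7) + nlen  /* name length + name   */
             + hpack_int_size(vlen, 7) + vlen; /* value length + value */
  }
  /* Slack for possible encoding context updates and the reference set
     toggle-off at the end; 16 bytes here is an arbitrary, generous pad. */
  return total + 16;
}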

Best regards,
Tatsuhiro Tsujikawa

Received on Thursday, 3 July 2014 00:56:27 UTC