Re: #540: "jumbo" frames

On 26/06/2014 2:43 p.m., Matthew Kerwin wrote:
> On 26 June 2014 12:14, Jason Greene wrote:
> 
>>
>> I was just thinking perhaps HEADERS w/ CONTINUATION should require a total
>> multi-frame length?
>>
>>
> How do you calculate that ahead of time? Even without HPACK it's a bit of a
> chore; but if you have to mutate your compression context and buffer the
> output so you can prefix it with a final length... well, in another world
> that's why we have T-E:chunked. Too hard.

T-E:chunked is DATA. That works just fine.

The unknown-length CONTINUATION is header data, and it perpetuates the
existing HTTP/1.x problem of effectively unbounded header length.

IMO fixing that is important if we end up with CONTINUATION.
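
To make the receiver-side cost concrete, here is a minimal sketch of
what a peer has to do today when the total size is never declared. The
frame representation and names are hypothetical, purely for
illustration:

from dataclasses import dataclass

MAX_HEADER_BLOCK = 16 * 1024   # local policy; the sender cannot know it

@dataclass
class Frame:                   # stand-in for a parsed HEADERS/CONTINUATION frame
    fragment: bytes
    end_headers: bool

def receive_header_block(frames):
    """Accumulate a header block whose total length is never declared."""
    buffered = bytearray()
    for frame in frames:
        buffered.extend(frame.fragment)
        if len(buffered) > MAX_HEADER_BLOCK:
            # The buffering cost has already been paid by the time we
            # can reject the block (RST_STREAM / 431 at a higher layer).
            raise ValueError("header block too large")
        if frame.end_headers:
            return bytes(buffered)   # hand off to the HPACK decoder
    raise ValueError("peer never sent END_HEADERS")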


Since, as others have put it, "HEADERS+CONTINUATION should be handled and
sent as a unit", the full size of that unit should therefore be available
to the generating client/server before it sends the first frame. A sender
which does not know the size before starting to emit the bytes is probably
sending far too large a frame sequence anyway, and the pain of having to
compute it is yet another incentive on them to be more reasonable.
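
A rough sketch of the sending side, assuming the block really is
produced as a unit: the whole header block is HPACK-encoded before
framing starts, so its total length is trivially available for the
first frame. The "total_length" field here is hypothetical, not part
of the current draft:

MAX_FRAME_PAYLOAD = 16 * 1024   # per-frame limit, for illustration

def frame_header_block(encoded_block: bytes):
    """Split an already-HPACK-encoded block into HEADERS + CONTINUATION."""
    total = len(encoded_block)   # known before any frame goes on the wire
    chunks = [encoded_block[i:i + MAX_FRAME_PAYLOAD]
              for i in range(0, total, MAX_FRAME_PAYLOAD)] or [b""]
    frames = []
    for i, chunk in enumerate(chunks):
        frames.append({
            "type": "HEADERS" if i == 0 else "CONTINUATION",
            "total_length": total if i == 0 else None,   # hypothetical field
            "end_headers": i == len(chunks) - 1,
            "fragment": chunk,
        })
    return frames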


> 
>> One of the biggest problems with CONTINUATION is that a server has no idea
>> how huge the headers will be, and is forced to buffer them until a limit is
>> hit. If this information was known up front it could either RST_STREAM, or
>> simply discard all subsequent CONTINUATION frames and reply with a too
>> large status.
>>
>>
> I don't know if that's universally true. As an endpoint: if the
> framing-layer machinery is buffering the headers in order to emit them as a
> single blob, then yes; but the same machinery could stream the headers
> piecemeal, no buffering required, and thus it wouldn't care how much header
> data there is overall. The higher-level stuff (the application?) might
> store the headers, but then it's that application's responsibility to tell
> the sender to STFU.

IIRC the use-cases for which CONTINUATION/jumbo is required are large
sets of Cookies and large Kerberos authentication tokens. In both of
these cases the size is known far enough in advance not to cause the
server problems.

> 
> I am not a proxy person. I imagine an aggregator would care more about
> buffering; maybe that case really would benefit from the ability to fail
> fast. Calculating the final size of (uncompressed) header data is easier if
> less of said data is compressed, if that's worth anything.

An aggregator cares primarily about speed (RTT + latency + CPU cycles);
*some* extra buffering can be coped with, provided the transaction
completes fast enough to release the buffer again before too much other
traffic is impacted. Buffering comes second since it is limited after
all, and too many large transactions make for either DoS or slower
transactions in a nasty feedback loop.

If you look back over the old discussions, the middleware complaints
against HPACK have largely been about the speed of the mandatory
re-compression step expending time and CPU.
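
As a rough illustration of that cost (using the Python 'hpack' package
purely for the sketch; any implementation pays the same per-hop work):
because HPACK state is per-connection, a re-framing intermediary cannot
copy header blocks byte-for-byte. It decodes against the inbound
connection's context and re-encodes against the outbound one, on every
request:

from hpack import Decoder, Encoder

inbound_decoder = Decoder()    # tracks the client connection's dynamic table
outbound_encoder = Encoder()   # tracks the origin connection's dynamic table

def forward_header_block(encoded_block: bytes) -> bytes:
    headers = inbound_decoder.decode(encoded_block)   # CPU spent here...
    return outbound_encoder.encode(headers)           # ...and again here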

> 
> Also, if it's not compressed it's easier to shout "STFU" on a stream
> without having to either process the headers anyway (to keep your context
> valid), or tear down the connection -- you don't need to know the final
> length up front.
> 


Amos

Received on Thursday, 26 June 2014 08:12:12 UTC