Re: #540: "jumbo" frames

On 26 June 2014 12:14, Jason Greene <jason.greene@redhat.com> wrote:

>
> I was just thinking perhaps HEADERS w/ CONTINUATION should require a total
> multi-frame length?
>
>
How do you calculate that ahead of time? Even without HPACK it's a bit of a
chore; but if you have to mutate your compression context and buffer the
entire output just so you can prefix it with a final length... well, back in
the HTTP/1 world that's exactly why we have Transfer-Encoding: chunked. Too
hard.
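
For what it's worth, here's a rough Python sketch of that (using the hpack
package; the 16384-byte frame payload limit and the helper name are just
assumptions for the example). The compressed size depends on the encoder's
dynamic table, so it's only known once the whole block has been produced:

    # Rough sketch: a hypothetical "total length up front" forces the sender
    # to hold the entire encoded block before the first HEADERS frame goes out.
    from hpack import Encoder

    MAX_FRAME_PAYLOAD = 16384  # assumed frame payload limit for the example

    def frame_header_block(headers):
        encoder = Encoder()
        # Encoding mutates the dynamic table as it goes...
        block = encoder.encode(headers)
        # ...and only now do we know the total length we'd have to prefix,
        # so the whole block sits in memory before anything can be sent.
        total_length = len(block)
        fragments = [block[i:i + MAX_FRAME_PAYLOAD]
                     for i in range(0, len(block), MAX_FRAME_PAYLOAD)]
        return total_length, fragments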


> One of the biggest problems with CONTINUATION is that a server has no idea
> how huge the headers will be, and is forced to buffer them until a limit is
> hit. If this information was known up front it could either RST_STREAM, or
> simply discard all subsequent CONTINUATION frames and reply with a too
> large status.
>
>
I don't know if that's universally true. As an endpoint: if the
framing-layer machinery is buffering the headers in order to emit them as a
single blob, then yes; but the same machinery could stream the headers
piecemeal, no buffering required, and thus it wouldn't care how much header
data there is overall. The higher-level stuff (the application?) might
store the headers, but then it's that application's responsibility to tell
the sender to STFU.
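
Something like this is what I mean by piecemeal (the consume_fragment and
send_rst_stream hooks are hypothetical, and the 64 KB limit is just a number
picked for the example):

    # Hypothetical framing-layer receiver: fragments go straight to a consumer
    # and only a running byte count is kept, so the machinery never needs the
    # whole header block in memory.
    HEADER_BYTE_LIMIT = 64 * 1024  # arbitrary limit for the example

    class StreamState:
        def __init__(self, stream_id):
            self.stream_id = stream_id
            self.header_bytes_seen = 0
            self.refused = False

        def on_header_fragment(self, fragment, consume_fragment,
                               send_rst_stream):
            if self.refused:
                return  # already told the sender to stop; drop the rest
            self.header_bytes_seen += len(fragment)
            if self.header_bytes_seen > HEADER_BYTE_LIMIT:
                self.refused = True
                send_rst_stream(self.stream_id)  # stream-level "STFU"
                return
            consume_fragment(self.stream_id, fragment)  # no buffering here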

I am not a proxy person. I imagine an aggregator would care more about
buffering; maybe that case really would benefit from the ability to fail
fast. Calculating the final size of the header data up front is also easier
if less of it is compressed, for whatever that's worth.

Also, if it's not compressed it's easier to shout "STFU" on a stream: you
don't have to either process the headers anyway (to keep your compression
context valid) or tear down the connection, and you don't need to know the
final length up front to do it.
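
To spell out the compressed case (again the hpack package; send_rst_stream
and send_goaway are hypothetical hooks, and this particular decoder wants
the complete, concatenated block):

    # Refusing an over-large *compressed* header block: the decoder still has
    # to chew through all of it (output discarded) so the shared dynamic table
    # stays valid; otherwise the only safe move is to drop the connection.
    from hpack import Decoder

    def refuse_header_block(decoder: Decoder, complete_block: bytes,
                            stream_id: int, send_rst_stream, send_goaway):
        try:
            decoder.decode(complete_block)  # headers thrown away, context kept
        except Exception:
            # Couldn't process it: the compression context is now suspect for
            # every later stream, so the whole connection has to go.
            send_goaway()
            return
        send_rst_stream(stream_id)  # only this stream dies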


-- 
  Matthew Kerwin
  http://matthew.kerwin.net.au/

Received on Thursday, 26 June 2014 02:43:49 UTC