Re: #541: CONTINUATION - option #4

Just a few comments after letting this sink in...

On 5 Jul 2014, at 4:09 am, Jason Greene <jason.greene@redhat.com> wrote:

> Just to be clear, option 4  is:
> 
> 1) Senders must not send continuations unless they have filled and sent a maximum sized header frame.

Is there any motivation for #1 beyond accommodating implementations that don't want to think about CONTINUATION?


> 2) Senders can not send more than MAX_HEADER_BLOCK_SIZE (or alternatively MAX_HEADER_SIZE based on decoded vs encoded size)

This needs to include padding if the intent is to control use of CONTINUATION.


> 3) MAX_HEADER_[BLOCK_]SIZE defaults to 16KB

Can it be set lower?

Also, I think this needs to default to 2**14 - 10 to account for HEADERS (w/ priority flag) overhead, if the intent is to default to 0 continuations.
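To make the arithmetic explicit (a sketch only; the 10-octet figure for a HEADERS frame with the PRIORITY flag set is my assumption based on the draft of the day, and the frame-payload ceiling may be off by one depending on draft version):

```python
# If the default is meant to yield zero CONTINUATIONs, the default MHS
# must leave room for the HEADERS frame's own fields inside one frame.
FRAME_PAYLOAD_MAX = 2 ** 14   # 16384 octets
HEADERS_OVERHEAD = 10         # assumed: padding + priority fields

assert FRAME_PAYLOAD_MAX - HEADERS_OVERHEAD == 16374  # i.e. 2**14 - 10
```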


> 4) A larger MAX_HEADER_[BLOCK_]SIZE value can be specified by the sender in a SETTINGS frame 

What do you mean by "sender" here?


> The end result is that a proxy or a server does not have to process and/or relay unlimited data when it will ultimately reject a value <= 16KB. In the .02% cases where a value greater than that is needed, a reasonable limit is negotiated thereby mitigating the impact of HOL blocking and reducing or eliminating connection drops.

SETTINGS is explicitly not a negotiation mechanism; it only allows one end to advertise its configuration to its peer.

In this case, the semantic of MHS (MAX_HEADER_SIZE) would be roughly "I allow headers coming at me to be at most THIS BIG." There's no way for the peer to ask for more; it has to accept the limitation.
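To sketch what that one-way semantic means in practice (all names here are illustrative; MHS is the proposed setting, not an existing one):

```python
# The receiver advertises MHS; the sender's only move is to comply.
# There is no "counter-offer" step in SETTINGS.

def can_send_headers(encoded_block_len: int, peer_mhs: int) -> bool:
    """A sender must fit under the peer's advertised MHS or fail the
    message locally -- it cannot ask the peer to raise the limit."""
    return encoded_block_len <= peer_mhs

# Peer advertised 16K; a 20K encoded header block simply cannot be sent.
assert can_send_headers(16_000, 16_384)
assert not can_send_headers(20_000, 16_384)
```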

Thinking through deployment scenarios --

a) Imagine a forward proxy that has a single connection to the user agent. Each time the UA requested an origin that necessitated a new connection, the proxy would have to advertise a new, lower request MHS whenever the origin advertises one lower than the current client connection's. It could only raise the UA->proxy request MHS again when that connection is no longer in use by the client (and it'd likely need a heuristic to figure that out). This isn't great, because the UA effectively gets the smallest MHS of the origins the proxy is currently connected to (for some value of "currently"), and that value is dynamic.
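The bookkeeping above might look something like this (a hypothetical sketch; class and constant names are mine, and I'm assuming a 16K floor):

```python
# Forward-proxy sketch: the UA-facing request MHS is the minimum across
# all origin connections currently in use, and can only rise again once
# the most restrictive origin connection goes idle.

DEFAULT_MHS = 16_384  # assumed floor

class ForwardProxy:
    def __init__(self):
        self.origin_mhs = {}  # origin -> request MHS it advertised

    def effective_ua_mhs(self) -> int:
        # The UA connection is limited by the stingiest live origin.
        return min(self.origin_mhs.values(), default=DEFAULT_MHS)

    def connect_origin(self, origin: str, advertised_mhs: int):
        self.origin_mhs[origin] = advertised_mhs

    def close_origin(self, origin: str):
        self.origin_mhs.pop(origin, None)

p = ForwardProxy()
p.connect_origin("a.example", 16_384)
p.connect_origin("b.example", 8_192)   # forces a lower UA-facing MHS
assert p.effective_ua_mhs() == 8_192
p.close_origin("b.example")            # only now can the MHS rise again
assert p.effective_ua_mhs() == 16_384
```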

Alternatively, it could keep the UA->proxy request MHS constant and synthesise a 431 error response when an origin's request MHS was violated. What's bad here is that something that works without a proxy won't work when the proxy is interposed (and we know how those support calls go). Arguably, though, this isn't much different from e.g. Squid's 20K header limit.

b) In the case of a "reverse" proxy / gateway that multiplexes many UA connections into one origin connection, the proxy's advertised response MHS would need to be the smallest of the UA connections', for the lifetime of that connection. Again, it'd need to be adjusted as clients came and went, but this is pretty nasty, since it gives clients control over others' sessions. 

Alternatively, the gateway could synthesise a 502 (for example) back to a UA that has advertised too low an MHS for a given response. Not sure if it'd be useful to communicate the problem back to the origin (especially since it'd have to be HTTP/2-specific)...
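That fallback is simple enough to sketch (again hypothetical; the 502 is just the example status suggested above):

```python
# Gateway sketch: keep the origin connection's response MHS independent
# of any one UA, and synthesise an error when a response's encoded
# headers exceed what a particular UA advertised.

def relay_status(ua_mhs: int, encoded_resp_headers_len: int,
                 origin_status: int = 200) -> int:
    """Return the status the UA actually sees."""
    if encoded_resp_headers_len > ua_mhs:
        return 502  # response headers too big for this UA's advertised MHS
    return origin_status

assert relay_status(16_384, 4_000) == 200
assert relay_status(8_192, 12_000) == 502   # synthesised, never relayed
```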

Are there any other ways to handle it?

Of course, if MHS can't be set lower than 16K (-1), there's at least a floor, and if it never gets advertised above 16K (-1) by anyone, you don't really see these issues. The question is how likely that is to be the case.

The other interesting aspect to think about is how MHS surfaces in HTTP APIs. Request MHS seems like it would necessitate a synthesised 431 whenever the client attempts to make a request that violates server policy. Response MHS is trickier; by default, the response would just get dropped on the floor, in the same way that a client that doesn't like your response doesn't give you any feedback. However, I suspect that people would invent new APIs on top of the "normal" HTTP API to allow this information to come through...
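The request side of that might surface roughly like this (an illustrative sketch only; the function is mine, and 431 is the status discussed above):

```python
# Client-stack sketch: synthesise a 431 locally when the request's
# encoded headers exceed the server's advertised MHS, rather than
# putting any bytes on the wire.

def attempt_request(encoded_headers_len: int, server_mhs: int):
    """Return None on success, or a synthesised 431 status."""
    if encoded_headers_len > server_mhs:
        return 431  # Request Header Fields Too Large, never sent
    return None

assert attempt_request(20_000, 16_384) == 431
assert attempt_request(1_000, 16_384) is None
```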

That's how I currently understand it; feel free (as always) to poke holes.

Cheers,


--
Mark Nottingham   https://www.mnot.net/

Received on Monday, 7 July 2014 08:10:57 UTC