Re: Zero weight for 100 CONTINUE, instead of flow control

On 2014-04-03, at 5:18 AM, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 2 April 2014 13:52, Mike Bishop <Michael.Bishop@microsoft.com> wrote:
>> Unless we're, in general, supporting that the server can send PRIORITY frames to the client to suggest how the client prioritize its uploads?  I hope not, but maybe I missed that discussion.
> 
> Me too.  And that would be contrary to what is written.  If the server
> were to send PRIORITY, it would be to make a request of the client
> regarding resource allocation at the client.  Which, for HTTP/2, is
> pretty much pointless.

The PRIORITY frame is specifically allowed in the half-closed (remote) state, which corresponds to prioritization by the sender. Nothing currently disallows reprioritization by a server, and the spec as currently written doesn't differentiate between client and server at all.

As far as I can see, priority is a property of the stream applying to both endpoints, and anyone can set it.
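To make that concrete, here is a rough sketch (Python; the class and method names are mine, not the draft's) of priority as a symmetric per-stream property that either endpoint may update:

    # Sketch only: priority modeled as a stream property that either
    # endpoint can set; the last PRIORITY frame received wins.

    class Stream:
        def __init__(self, stream_id, weight=16):  # 16 is the draft's default weight
            self.stream_id = stream_id
            self.weight = weight

    class PriorityTable:
        def __init__(self):
            self.streams = {}

        def on_priority_frame(self, stream_id, weight, sender):
            # Nothing in the current text distinguishes client- from
            # server-initiated reprioritization, so 'sender' is informational.
            stream = self.streams.setdefault(stream_id, Stream(stream_id))
            stream.weight = weight
            print(f"stream {stream_id}: weight={weight} (set by {sender})")

    table = PriorityTable()
    table.on_priority_frame(1, weight=1, sender="server")   # server "parks" an upload
    table.on_priority_frame(1, weight=64, sender="client")  # client later reprioritizes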

In the CONTINUE case, the client would only really care whether the response is RST_STREAM or something else.

It is not reasonable either to require clients to weight their uploads (client libraries are likely to be simplistic) or to declare PRIORITY frames from the server meaningless. More sophisticated libraries will handle client and server stream multiplexing the same way. And do not underestimate the variety of applications: HTTP/2, being more compact, is attractive for M2M communication and data acquisition, where clients might be data producers rather than consumers, yet still have tight resource constraints.

It comes down to quality of implementation. Some things may be left unimplemented because they are meaningless for an application. The protocol is designed to be tolerant in any case, and a client must expect to see uselessly imprecise weighting in practice.

> 100 Continue is addressed perfectly well by setting the initial stream
> flow control window to zero and using WINDOW_UPDATE to open the pipe.
> Either requires a round trip, but weight changing has other
> (side-)effects.
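For concreteness, the flow control approach described above would look roughly like this on the server side (Python sketch; the Connection class and its methods are illustrative stand-ins, not a real API):

    class Connection:
        def send_settings(self, **settings):
            print(f"SETTINGS {settings}")

        def send_window_update(self, stream_id, increment):
            print(f"WINDOW_UPDATE stream={stream_id} increment={increment}")

    conn = Connection()

    # Keep new streams from carrying any body data by default...
    conn.send_settings(SETTINGS_INITIAL_WINDOW_SIZE=0)

    # ...and "open the pipe" once the server decides to accept the upload.
    conn.send_window_update(stream_id=1, increment=65_535)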

When the client makes a request before receiving the server's initial SETTINGS, the flow control windows still start at the 64 KiB default (65,535 bytes). If the server wants to accept nothing and responds with a SETTINGS that zeroes the initial window, the client will likely have filled all the intermediate buffers, at 64 KiB *per hop* including the server itself, before those settings arrive.

As long as those buffers hold the unwanted data, it counts against the connection flow control window. Only the back-propagating RST_STREAM tells forwarders they can free those resources again.
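A back-of-the-envelope example (the hop count is an assumption; the window size is the draft's default):

    DEFAULT_WINDOW = 65_535   # default initial flow control window, ~64 KiB
    hops = 3                  # e.g. client -> proxy -> proxy -> origin (assumed)

    # Until the server's zero-window SETTINGS takes effect end to end, each hop
    # may have accepted up to a full default window of the unwanted body.
    buffered = hops * DEFAULT_WINDOW
    print(f"worst-case buffered upload: {buffered} bytes (~{buffered // 1024} KiB)")

    # Until the RST_STREAM propagates back, that data also counts against each
    # hop's connection-level flow control window.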

Zeroing the initial flow control window adds a round trip of latency to every stream, not only the ones that need a CONTINUE.

Weight changing need not have any side effects at all. I've only proposed a transient condition: the client is free to set a new priority after the server sends its PRIORITY, and it remains largely meaningless until the server starts sending data.
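Roughly, what I have in mind on the server side (sketch only; the Server class and its helpers are hypothetical, not any existing API):

    MIN_WEIGHT = 1       # lowest weight a PRIORITY frame can express
    DEFAULT_WEIGHT = 16  # default stream weight in the current draft

    class Server:
        def send_priority(self, stream_id, weight):
            print(f"PRIORITY stream={stream_id} weight={weight}")

        def send_informational(self, stream_id, status):
            print(f"HEADERS stream={stream_id} :status={status}")

        def send_rst_stream(self, stream_id, error_code):
            print(f"RST_STREAM stream={stream_id} {error_code}")

    def on_expect_continue(server, stream_id):
        # Transiently park the upload instead of zeroing its flow control window.
        server.send_priority(stream_id, weight=MIN_WEIGHT)

    def on_continue_decision(server, stream_id, accept):
        if accept:
            server.send_informational(stream_id, status=100)
            server.send_priority(stream_id, weight=DEFAULT_WEIGHT)
        else:
            # The client mostly cares whether the answer is RST_STREAM or not.
            server.send_rst_stream(stream_id, error_code="REFUSED_STREAM")

    srv = Server()
    on_expect_continue(srv, stream_id=1)
    on_continue_decision(srv, stream_id=1, accept=True)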

Flow-control-based throttling is the approach with side effects. If it were well-behaved, we'd just throttle everything from the client side and never bother with priorities in the first place!

Received on Thursday, 3 April 2014 05:27:03 UTC