Re: Zero weight for 100 CONTINUE, instead of flow control

David,

You seem to want something that a server sets in order to determine
the policies a client follows when sending data.

That doesn't need any protocol.  Maybe your stack needs an API hook for that.
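For what it's worth, such a hook could be as simple as a per-stream setter in the stack's API.  Everything below (class and field names) is a hypothetical sketch of what a stack might expose, not any real implementation's interface:

```python
class SendPolicy:
    """Locally chosen sending policy for one stream (hypothetical)."""
    def __init__(self, weight=16, paused=False):
        self.weight = weight  # local scheduling weight for outgoing DATA
        self.paused = paused  # e.g. hold the request body until 100 (Continue)

class Stream:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.policy = SendPolicy()

    def set_send_policy(self, weight=None, paused=None):
        """The API hook: the application, not the peer, decides how this
        stack schedules outgoing DATA frames for this stream."""
        if weight is not None:
            self.policy.weight = weight
        if paused is not None:
            self.policy.paused = paused
```

Nothing here touches the wire; the policy is purely local to the sending stack.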

[More responses inline.]

On 2 April 2014 22:26, David Krauss <potswa@gmail.com> wrote:
> The PRIORITY frame is specifically allowed in the half closed (remote) state,

Correct.

> [...] which corresponds to prioritization by the sender.

...actually, this corresponds to prioritization by the *receiver*.

> Nothing currently disallows reprioritization by a server.  The spec as currently written doesn’t differentiate at all between client and server.

Technically correct.

> As far as I can see, priority is a property of the stream applying to both endpoints, and anyone can set it.

That's not quite right.  PRIORITY expresses the expectations of the
endpoint sending the PRIORITY frame about how the endpoint receiving
the PRIORITY frame should behave.

The PRIORITY frame does not provide a way to signal the intent of the
sending endpoint.
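Concretely, a PRIORITY frame on stream N carries only the sender's advice about how the *receiver* should treat stream N.  A minimal encoder, sketched from the draft's wire format (type code 0x2, 31-bit dependency with exclusive bit, weight encoded as value minus one):

```python
import struct

def frame_header(length, ftype, flags, stream_id):
    # 24-bit length, 8-bit type, 8-bit flags, 31-bit stream identifier
    return length.to_bytes(3, "big") + bytes([ftype, flags]) + \
        struct.pack("!I", stream_id & 0x7FFFFFFF)

def priority_frame(stream_id, dep_stream_id, weight, exclusive=False):
    """PRIORITY (type 0x2): advice from the frame's sender about how the
    receiving endpoint should allocate resources to stream_id.  It carries
    no information about the sender's own scheduling."""
    payload = struct.pack(
        "!IB",
        (0x80000000 if exclusive else 0) | (dep_stream_id & 0x7FFFFFFF),
        (weight - 1) & 0xFF,  # weight 1..256 is encoded as value - 1
    )
    return frame_header(len(payload), 0x2, 0, stream_id) + payload
```

Note there is no field anywhere in that payload for "this is how I intend to send".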

> It is not reasonable to either require clients to weight uploads (client libraries are likely to often be simplistic), nor to require PRIORITY frames from the server to be meaningless.

I believe it to be perfectly reasonable to have clients weight
uploads; otherwise we lose many of the advantages of a multiplexed,
concurrent protocol.  However, since the client is probably the one
most aware of the relative priority of streams, having an indication
from the server isn't usually of much use.  Clients will most likely
be able to weight streams on their own, without any signals in the
protocol itself.
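A client doesn't need anything from the protocol to do this.  A toy proportional scheduler (purely illustrative; the function and its defaults are mine, not from any spec) shows how little is involved:

```python
def split_quantum(streams, quantum=16384):
    """Split one round of outgoing DATA bytes among concurrent uploads in
    proportion to weights the client chose entirely on its own.
    `streams` maps stream id -> local weight."""
    total = sum(streams.values())
    return {sid: quantum * w // total for sid, w in streams.items()}
```

For example, a client that privately decides stream 3 matters three times as much as stream 1 simply gives it three quarters of each send quantum.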

> When the client makes a request before receiving the initial SETTINGS, the flow control windows are still initially set to 64 KiB. If the server refuses to accept anything but sends a zero-windowed SETTINGS in response, then the client will likely end up filling all the intermediate buffers at 64 KiB *per hop*, including the server itself, before it receives those settings.

This is one aspect of a well-known and thoroughly discussed issue (see
https://github.com/http2/http2-spec/issues/184).  We decided to do
nothing about this, instead requiring that servers handle clients
exercising default settings gracefully.   That means RST_STREAM if the
server can't handle unwanted streams or data.
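In other words, the server's recourse looks roughly like this.  The REFUSED_STREAM code is from the draft's error-code registry; the surrounding class is a hypothetical stand-in for a server-side stream handler:

```python
REFUSED_STREAM = 0x7  # error code defined by the HTTP/2 draft

class Server:
    """Minimal stand-in for a server-side stream handler (hypothetical)."""
    def __init__(self, max_streams):
        self.max_streams = max_streams
        self.open_streams = set()
        self.resets = []

    def on_headers(self, stream_id):
        # A client acting on default settings may open streams before the
        # server's SETTINGS arrives; the graceful answer is RST_STREAM on
        # that stream, not a connection error.
        if len(self.open_streams) >= self.max_streams:
            self.resets.append((stream_id, REFUSED_STREAM))
            return False
        self.open_streams.add(stream_id)
        return True
```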

If you are concerned about bytes in transit that aren't wanted, it's
actually worse than you have stated, because intermediaries can accept
and buffer even more data.  That's because, yes, flow control is
hop-by-hop, and intermediaries are likely unaware of the context in
which a request is made.  Even if the server has an initial window of
0, that doesn't mean intermediaries will follow suit.  Quite the
contrary.  But I don't consider that to be an issue.
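To put a number on it, here's a toy model (mine, and deliberately simplistic: it assumes each intermediary is willing to buffer its full receive window):

```python
def max_buffered_bytes(hop_windows):
    """hop_windows[i] is the receive window of hop i on the path
    client -> intermediaries -> server.  Each hop can absorb up to its
    own window regardless of what later hops advertise, so a zero
    window at the server alone doesn't stop data piling up in the path."""
    return sum(hop_windows)

# Two intermediaries at the 65,535-byte default, server window 0:
# roughly 128 KiB can still end up buffered along the path.
```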

> As long as those buffers contain the unwanted data, it subtracts from the connection flow control window. Only the back-propagating RST_STREAM notifies forwarders to free resources again.

As I said, I don't consider this to be an issue.  This optimizes for
lower latency in the case where the server wants to start accepting
data.  Rather than requiring a full end-to-end round trip, it only
requires a round trip on each affected (i.e., stalled) hop.  That
might result in an end-to-end delay if all hops are stalled, but that
will depend on the buffering policies of intermediaries.

To me, that's a description of a functioning flow control system.  I
don't see a problem there.
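A back-of-the-envelope way to see the trade-off, using a purely illustrative model that assumes WINDOW_UPDATEs are issued independently on each hop:

```python
def restart_cost(hop_rtts, stalled):
    """With per-hop flow control, only the hops that actually stalled pay
    a round trip before data moves again; an end-to-end reservation
    scheme would pay the full path round trip every time.  Returns
    (per_hop_cost, end_to_end_cost) in the same time units as hop_rtts."""
    per_hop = sum(rtt for rtt, s in zip(hop_rtts, stalled) if s)
    end_to_end = sum(hop_rtts)
    return per_hop, end_to_end
```

With only the first of three hops stalled, the per-hop scheme pays just that hop's round trip; only in the worst case, with every hop stalled, do the two costs meet.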

Received on Thursday, 3 April 2014 16:53:35 UTC