HTTP/2 flow control <draft-ietf-httpbis-http2-17>

HTTP/2 folks,

As I said, consider this a late review from a clueful but fresh 
pair of eyes.
My main concerns with the draft are:
* extensibility (previous posting)
* flow control (this posting - apologies for the length - I've tried 
to explain properly)
* numerous open issues left dangling (see subsequent postings)

===HTTP/2 FLOW CONTROL===

The term 'window' as used throughout is incorrect and highly confusing, in:
* 'flow control window' (44 occurrences),
* 'initial window size' (5),
* or just 'window size' (8)
* and the names WINDOW_UPDATE and SETTINGS_INITIAL_WINDOW_SIZE.

The HTTP/2 WINDOW_UPDATE mechanism constrains HTTP/2 to use only 
credit-based flow control, not window-based. At one point, it 
actually says it is credit-based (in flow control principle #2 
<https://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-5.2.1> 
), but otherwise it incorrectly uses the term window.

This is not just an issue of terminology. The more I re-read the flow 
control sections the more I became convinced that this terminology is 
not just /confusing/, rather it's evidence of /confusion/. It raises 
the questions
* "Is HTTP/2 capable of the flow control it says it's capable of?"
* "What type of flow-control protocol ought HTTP/2 to be capable of?"
* "Can the WINDOW_UPDATE frame support the flow-control that HTTP/2 needs?"

To address these questions, it may help if I separate the two 
different cases HTTP/2 flow control attempts to cover (my own 
separation, not from the draft):

a) Intermediate buffer control
Here, a stream's flow enters /and/ leaves a buffer (e.g. at the 
app-layer of an intermediate node).

b) Flow control by the ultimate client app.
Here, the flow never releases memory (at least not during the life of 
the connection); it solely consumes more and more memory (e.g. data 
being rendered into a client app's memory).

==a) Intermediate buffer control==
For this, sliding window-based flow control would be appropriate, 
because the goal is to keep the e2e pipeline full without wasting buffer.

Let me prove HTTP/2 cannot do window flow control. For window flow 
control, the sender needs to be able to advance both the leading and 
trailing edges of the window. In the draft:
* WINDOW_UPDATE frames can only advance the leading edge of a 
'window' (and they are constrained to positive values).
* To advance the trailing edge, window flow control would need a 
continuous stream of acknowledgements back to the sender (like TCP). 
The draft does not provide ACKs at the app-layer, and the app-layer 
cannot monitor ACKs at the transport layer, so the sending app-layer 
cannot advance the trailing edge of a 'window'.

So the protocol can only support credit-based flow control. It is 
incapable of supporting window flow control.
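To make the distinction concrete, here is a minimal sketch (my own, not from the draft) of the only accounting the protocol's signals permit: a credit counter that WINDOW_UPDATE can raise and DATA consumes. Note there is nothing here that could advance a trailing edge, because the sender never sees app-layer ACKs.

```python
class CreditFlowControl:
    """Sketch of credit-only accounting; the class name is invented."""

    def __init__(self, initial_credit):
        self.credit = initial_credit  # octets the sender may still send

    def on_window_update(self, increment):
        # WINDOW_UPDATE carries a positive increment only (1..2^31-1):
        # credit can be added, never revoked.
        if not (1 <= increment <= 2**31 - 1):
            raise ValueError("WINDOW_UPDATE increment must be positive")
        self.credit += increment

    def try_send(self, nbytes):
        # Sending DATA consumes credit; once spent, it is gone until
        # the receiver volunteers more.
        if nbytes > self.credit:
            return False  # blocked: no trailing edge to advance
        self.credit -= nbytes
        return True

fc = CreditFlowControl(initial_credit=10)
assert fc.try_send(8)       # 8 of 10 octets of credit used
assert not fc.try_send(5)   # blocked: only 2 octets of credit left
fc.on_window_update(5)      # receiver grants more credit
assert fc.try_send(5)       # now it fits
```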

Next, I don't understand how a receiver can set the credit in 
'WINDOW_UPDATE' to a useful value. If the sender needed the receiver 
to answer the question "How much more can I send than I have seen 
ACK'd?" that would be easy. But because the protocol is restricted to 
credit, the sender needs the receiver to answer the much harder 
open-ended question, "How much more can I send?" So the sender needs 
the receiver to know how many ACKs the sender has seen, but neither 
of them knows that.

The receiver can try, by taking a guess at the bandwidth-delay 
product, and adjusting the guess up or down, depending on whether its 
buffer is growing or shrinking. But this only works if the unknown 
bandwidth-delay product stays constant.

However, BDP will usually be highly variable, as other streams come 
and go. So, in the time it takes to get a good estimate of the 
per-stream BDP, it will probably have changed radically, or the 
stream will most likely have finished anyway. This is why TCP bases 
flow control on a window, not credit. By complementing window updates 
with ACK stream info, a TCP sender has sufficient info to control the flow.
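To illustrate the guessing game described above, here is a sketch of one way a receiver might adjust its per-stream credit grant from buffer occupancy alone. Every constant and threshold here is invented for illustration; nothing like this appears in the draft, which is precisely the problem.

```python
def next_credit_grant(prev_grant, buf_before, buf_after,
                      lo=0.25, hi=0.75, step=2.0):
    """Return the next WINDOW_UPDATE increment to grant.

    buf_before/buf_after: buffer occupancy (fraction of capacity)
    at the previous and current grant times. Thresholds are arbitrary.
    """
    if buf_after > hi and buf_after > buf_before:
        # Buffer filling: we granted more than the flow is draining.
        return max(1, int(prev_grant / step))
    if buf_after < lo and buf_after <= buf_before:
        # Buffer draining: the pipe may be starved; grant more.
        return int(prev_grant * step)
    return prev_grant  # hold steady

assert next_credit_grant(1000, 0.5, 0.9) == 500   # buffer growing: halve
assert next_credit_grant(1000, 0.3, 0.1) == 2000  # buffer draining: double
assert next_credit_grant(1000, 0.5, 0.5) == 1000  # steady state
```

Such a controller only converges if the BDP it is implicitly estimating stays still, which, as argued above, it will not.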

The draft is indeed correct when it says:
"   this can lead to suboptimal use of available
    network resources if flow control is enabled without knowledge of the
    bandwidth-delay product (see [RFC7323]).
"
Was this meant to be a veiled criticism of the protocol's own design? 
A credit-based flow control protocol like that in the draft does not 
provide sufficient information for either end to estimate the 
bandwidth-delay product, given it will be varying rapidly.

==b) Control by the ultimate client app==
For this case, I believe neither window nor credit-based flow control 
is appropriate:
* There is no memory management issue at the client end - even if 
there's a separate HTTP/2 layer of memory between TCP and the app, it 
would be pointless to limit the memory used by HTTP/2, because the 
data is still going to sit in the same user-space memory (or at least 
about the same amount of memory) when HTTP/2 passes it over for rendering.
* Nonetheless, the receiving client does need to send messages to the 
sender to supplement stream priorities, by notifying when the state 
of the receiving application has changed (e.g. if the user's focus 
switches from one browser tab to another).
* However, credit-based flow control would be very sluggish for such 
control, because credit cannot be taken back once it has been given 
(except HTTP/2 allows SETTINGS_INITIAL_WINDOW_SIZE to be reduced, but 
that's a drastic measure that hits all streams together).

==Flow control problem summary==

With only a credit signal in the protocol, a receiver is going to 
have to allow generous credit in the WINDOW_UPDATEs so as not to hurt 
performance. But then, the receiver will not be able to quickly close 
down one stream (e.g. when the user's focus changes), because it 
cannot claw back the generous credit it gave, it can only stop giving out more.

IOW: between a rock and a hard place... but don't tell them where the rock is.

==Towards a solution?==

I think 'type-a' flow control (for intermediate buffer control) does 
not need to be at stream-granularity. Indeed, I suspect a proxy could 
control its app-layer buffering by controlling the receive window of 
the incoming TCP connection. Has anyone assessed whether this would 
be sufficient?

I can understand the need for 'type-b' per-stream flow control (by 
the ultimate client endpoint). Perhaps it would be useful for the 
receiver to emit a new 'PAUSE_HINT' frame on a stream? Or perhaps 
updating per-stream PRIORITY would be sufficient? Either would 
minimise the response time to a half round trip. Whereas credit 
flow-control will be much more sluggish (see 'Flow control problem summary').

Either approach would correctly propagate e2e. An intermediate node 
would naturally tend to prioritise incoming streams that fed into 
prioritised outgoing streams, so priority updates would tend to 
propagate from the ultimate receiver, through intermediate nodes, up 
to the ultimate sender.

==Flow control coverage==

The draft exempts all TCP payload bytes from flow control except 
HTTP/2 DATA frames. No rationale is given for this decision. The 
draft says it's important to manage per-stream memory, then it 
exempts all the frame types except DATA, even though each byte of a 
non-data frame consumes no less memory than a byte of a data frame.

What message does this put out? "Flow control is not important for 
one type of bytes with unlimited total size, but flow control is so 
important that it has to be mandatory for the other type of bytes."

It is certainly critical that WINDOW_UPDATE messages are not covered 
by flow control, otherwise there would be a real risk of deadlock. It 
might be that there are dependencies on other frame types that would 
lead to a dependency loop and deadlock. It would be good to know what 
the rationale behind these rules was.

==Theory?==

I am concerned that HTTP/2 flow control may have entered new 
theoretical territory, without suitable proof of safety. The only 
reassurance we have is one implementation of a flow control algorithm 
(SPDY), and the anecdotal non-evidence that no-one using SPDY has 
noticed a deadlock yet (however, is anyone monitoring for deadlocks?).

While SPDY has been an existence proof that an approach like HTTP/2 
'works', so far all the flow control algos have been pretty much 
identical (I think that's true?). I am concerned that the draft takes 
the InterWeb into uncharted waters, because it allows unconstrained 
diversity in flow control algos, which is an untested degree of freedom.

The only constraints the draft sets are:
* per-stream flow control is mandatory;
* the only protocol message for flow control algos to use is the 
WINDOW_UPDATE credit message, which cannot be negative;
* otherwise, no constraints on flow control algorithms;
* and all this must work within the outer flow control constraints of TCP.

Some algos might use priority messages to make flow control 
assumptions, while other algos might associate PRIORITY and 
WINDOW_UPDATE with different meanings. What confidence do we have that everyone's 
optimisation algorithms will interoperate? Do we know there will not 
be certain types of application where deadlock is likely?

"   When using flow
    control, the receiver MUST read from the TCP receive buffer in a
    timely fashion.  Failure to do so could lead to a deadlock when
    critical frames, such as WINDOW_UPDATE, are not read and acted upon.
"
I've been convinced (offlist) that deadlock will not occur as long as 
the app consumes data 'greedily' from TCP. That has since been 
articulated in the above normative text. But how sure can we be that 
every implementer's different interpretations of 'timely' will still 
prevent deadlock?

Until a good autotuning algorithm for TCP receive window management 
was developed, good window management code was nearly non-existent. 
Managing hundreds of interdependent stream buffers is a much harder 
problem. But implementers are being allowed to just 'Go forth and 
innovate'. This might work if everyone copies available open source 
algo(s). But they might not, and they don't have to.

This all seems like 'flying by the seat of the pants'.

==Mandatory Flow Control?==

"      3. [...] A sender
        MUST respect flow control limits imposed by a receiver."
This ought to be a 'SHOULD' because it is contradicted later - if 
settings change.

"   6.  Flow control cannot be disabled."
Also effectively contradicted half a page later:
"   Deployments that do not require this capability can advertise a flow
    control window of the maximum size (2^31-1), and by maintaining this
    window by sending a WINDOW_UPDATE frame when any data is received.
    This effectively disables flow control for that receiver."

And contradicted in the definition of half closed (remote):
"  half closed (remote):
       [...] an endpoint is no longer
       obligated to maintain a receiver flow control window.
"

And contradicted in 
<https://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-8.3>8.3. 
The CONNECT Method, which says:
"  Frame types other than DATA
    or stream management frames (RST_STREAM, WINDOW_UPDATE, and PRIORITY)
    MUST NOT be sent on a connected stream, and MUST be treated as a
    stream error (Section 5.4.2) if received.
"

Why is flow control so important that it's mandatory, but so 
unimportant that you MUST NOT do it when using TLS e2e?

Going back to the earlier quote about using the max window size, it 
seems perverse for the spec to require endpoints to go through the 
motions of flow control even when they arrange for it to affect 
nothing: that still costs implementation complexity and wastes 
bandwidth on a load of redundant WINDOW_UPDATE frames.
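The recipe the draft itself gives for 'effectively disabling' flow control makes this cost visible. Here is a sketch of a receiver following it: advertise the maximum window (2^31-1) and restore it with a WINDOW_UPDATE whenever DATA arrives. One redundant control frame per data frame, by construction.

```python
MAX_WINDOW = 2**31 - 1  # maximum flow-control window per the draft

def receive_all(data_frame_lengths):
    """Receiver that wants no flow control, per the draft's recipe."""
    window = MAX_WINDOW
    updates_sent = []  # increments carried in WINDOW_UPDATE frames
    for n in data_frame_lengths:
        window -= n
        # Immediately top the window back up to the maximum.
        updates_sent.append(MAX_WINDOW - window)
        window = MAX_WINDOW
    return updates_sent

sent = receive_all([1000, 200, 16384])
assert sent == [1000, 200, 16384]  # one WINDOW_UPDATE per DATA frame
```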

HTTP is used on a wide range of devices, down to the very small and 
challenged. HTTP/2 might be desirable in such cases, because of the 
improved efficiency (e.g. header compression), but in many cases the 
stream model may not be complex enough to need stream flow control.

So why not make flow control optional on the receiving side, but 
mandatory to implement on the sending side? Then an implementation 
could have no machinery for tuning window sizes, but it would respond 
correctly to those set by the other end, which requires much simpler code.

If a receiving implementation chose not to do stream flow control, it 
could still control flow at the connection (stream 0) level, or at 
least at the TCP level.


==Inefficiency?==

<https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-5.2>5.2. 
Flow Control
"Flow control is used for both individual
    streams and for the connection as a whole."
Does this mean that every WINDOW_UPDATE on a stream has to be 
accompanied by another WINDOW_UPDATE frame on stream zero? If so, 
this seems like 100% message redundancy. Surely I must have misunderstood.
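My reading of the quoted text implies dual-level accounting like the sketch below (names invented): every DATA frame debits both its stream's window and the connection (stream 0) window, so a receiver keeping both topped up must replenish both.

```python
def consume(windows, stream_id, nbytes):
    """Debit a DATA frame against both stream and connection windows.

    windows: dict mapping stream id -> remaining credit;
    stream 0 holds the connection-level window.
    """
    windows[stream_id] -= nbytes
    windows[0] -= nbytes  # connection-level window debited as well
    return windows

w = consume({0: 65535, 1: 65535, 3: 65535}, stream_id=1, nbytes=1000)
assert w == {0: 64535, 1: 64535, 3: 65535}
```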

==Flow Control Requirements==

I'm not convinced that clear understanding of flow control 
requirements has driven flow control design decisions.

The draft states various needs for flow-control without giving me a 
feel of confidence that it has separated out the different cases, and 
chosen a protocol suitable for each. I tried to go back to the early 
draft on flow control requirements 
<http://tools.ietf.org/html/draft-montenegro-httpbis-http2-fc-principles-01>, 
and I was not impressed.

I have quoted below the various sentences in the draft that state 
what flow control is believed to be for. Below that, I have attempted 
to crystallise out the different concepts, each of which I have tagged 
within the quotes.

* 
<https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-2>2. 
HTTP/2 Protocol Overview says
   "Flow control and prioritization ensure that it is possible to 
efficiently use multiplexed streams. [Y]
    Flow control (Section 5.2) helps to ensure that only data that 
can be used by a receiver is transmitted. [X]"
* 
<https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-5.2>5.2. 
Flow Control says:
   "Using streams for multiplexing introduces contention over use of 
the TCP connection [X], resulting in blocked streams [Z]. A flow 
control scheme ensures that streams on the same connection do not 
destructively interfere with each other [Z]."
* 
<https://tools.ietf.org/html/draft-ietf-httpbis-http2-16#section-5.2.2>5.2.2. 
Appropriate Use of Flow Control
"  Flow control is defined to protect endpoints that are operating under
    resource constraints.  For example, a proxy needs to share memory
    between many connections, and also might have a slow upstream
    connection and a fast downstream one [Y].  Flow control addresses cases
    where the receiver is unable to process data on one stream, yet wants
    to continue to process other streams in the same connection [X]."
"  Deployments with constrained resources (for example, memory) can
    employ flow control to limit the amount of memory a peer can consume. [Y]
"

Each requirement has been tagged as follows:
[X] Notification of the receiver's changing utility for each stream
[Y] Prioritisation of streams due to contention over the streaming 
capacity available to the whole connection.
[Z] Ensuring one stream is not blocked by another.

[Z] might be a variant of [Y], but [Z] sounds more binary, whereas 
[Y] sounds more like optimisation across a continuous spectrum.


Regards



Bob


________________________________________________________________
Bob Briscoe,                                                  BT 

Received on Friday, 6 March 2015 14:44:26 UTC