
Re: HTTP/2 flow control <draft-ietf-httpbis-http2-17>

From: Greg Wilkins <gregw@intalio.com>
Date: Fri, 20 Mar 2015 10:58:45 +1100
Message-ID: <CAH_y2NH0zBcA_2g_DSQN=ovJzM9yuw5cmS-C26L3-0OhCVFtKQ@mail.gmail.com>
To: Roberto Peon <grmocg@gmail.com>
Cc: Jason Greene <jason.greene@redhat.com>, Bob Briscoe <bob.briscoe@bt.com>, HTTP Working Group <ietf-http-wg@w3.org>, Patrick McManus <pmcmanus@mozilla.com>
On 20 March 2015 at 09:28, Roberto Peon <grmocg@gmail.com> wrote:

> Suffice it to say that the flow control is good enough to prevent oom on
> servers and intermediaries. It is otherwise imperfect, though simple


The use-case of memory protection is key.    Perhaps the mechanism should
have been called Buffer Control rather than Flow Control?

I do not like the suggestion by Bob that it should be optional with excess
data discarded, as this would necessitate an acknowledgement protocol and
data retention in the sender (thus having a real window).     We have a
reliable channel, so we should not throw away its advantages.  The
question is, how much effective parallelism can we get with a single
reliable channel?

I think the confusion here comes back to poor setting of requirements in
the charter.   We were tasked with preventing HOL blocking, but there are
many ways that can be looked at.

One view of that requirement is that a large stream should never prevent
another small stream from proceeding, even if the bandwidth is
saturated.    This view requires the flow control mechanism to
prevent the TCP flow control from being hit, so that a small frame can
always be sent.  Thus we sacrifice maximal throughput for stream fairness.

An alternative view of preventing HOL is that we just need to prevent small
streams from waiting for the completion of large streams.  So long as we
fragment and interleave our streams, then TCP flow control can be hit and
that will delay all streams on the connection, but the stream
fragmentation/interleaving will ensure that all streams progress in parallel
as fast as the data throughput allows.   This means a sender cannot just
suddenly wake up and expect to be able to immediately send a small frame as
the connection may be saturated... but that's always the case even if
multiple connections are used!
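To illustrate the second view, here is a minimal sketch (not Jetty's or any real implementation's code; the function name and shapes are hypothetical) of round-robin frame interleaving: each stream's payload is fragmented into frames of bounded size and the frames are interleaved, so a small stream emits its one frame after at most one frame of each larger stream rather than waiting for a large stream to complete.

```python
from collections import deque

def interleave(streams, frame_size):
    """Round-robin frame scheduler sketch (hypothetical helper).

    streams: list of (stream_id, payload_bytes) pairs.
    Fragments each payload into frames of at most frame_size bytes and
    interleaves them, so no stream waits for another to finish."""
    queue = deque((sid, memoryview(data)) for sid, data in streams)
    frames = []
    while queue:
        sid, data = queue.popleft()
        # Emit one frame for this stream, then go to the back of the queue
        # if it still has data left.
        frames.append((sid, bytes(data[:frame_size])))
        if len(data) > frame_size:
            queue.append((sid, data[frame_size:]))
    return frames
```

With a 6-byte stream 1 and a 2-byte stream 3 and a 2-byte frame size, stream 3 completes after the first frame of stream 1 has been sent, not after all three.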

I think we often write about the mechanism as if we are striving for the
former, but I think we have specified the latter.  So the confusion comes
from the poor statement of requirements and the perhaps resulting
confused/wrong description of the mechanism.

It appears that rather than a window based flow control mechanism we
actually have a credit based buffer control mechanism - which may or may
not impact maximal throughput depending on how it is configured and the
ratio between available memory and network transfer rates.   We are now
engaged in a global experiment to see how that works out (hence I do think
the idea of this being an experimental standard is not a bad one).
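The credit-based character of the mechanism can be sketched in a few lines (class and method names are hypothetical, not from any spec or implementation): the receiver's advertised window is really a credit that the sender spends with DATA frames and that only WINDOW_UPDATE replenishes, so it bounds receiver buffer occupancy rather than matching the channel's bandwidth-delay product.

```python
class StreamCredit:
    """Sketch of HTTP/2-style credit-based 'buffer control' (hypothetical).

    The receiver advertises how many bytes of buffer it will accept; the
    sender spends that credit with DATA frames and must stop at zero until
    a WINDOW_UPDATE replenishes it.  65,535 is the spec's initial window."""

    def __init__(self, initial=65535):
        self.credit = initial

    def send(self, nbytes):
        """Return how many bytes may actually be sent now (possibly 0)."""
        sendable = min(nbytes, self.credit)
        self.credit -= sendable
        return sendable

    def window_update(self, increment):
        """Receiver has drained its buffer; replenish the sender's credit."""
        self.credit += increment
```

Whether this throttles throughput then depends entirely on how fast the receiver drains its buffer and issues updates relative to the network's transfer rate, which is the experiment the paragraph above describes.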

cheers




-- 
Greg Wilkins <gregw@intalio.com>  @  Webtide - *an Intalio subsidiary*
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Thursday, 19 March 2015 23:59:14 UTC
