Re: http/2 prioritization/fairness bug with proxies

The problem here is that the TCP API isn't sufficiently rich to let us
do the right thing (e.g. read bytes without allowing the sender to send
more). As a result, we need another level of flow control that we could
otherwise do without. Unfortunately, per-priority TCP connections don't
work well for large load balancers, where each of those connections will
likely terminate at a different place. That would create a difficult
server-side synchronization problem, full of races, and likely quite a
bit more complex than getting flow control working well.
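To make the "another level of flow control" concrete, here is a minimal
sketch (my own illustration, not the actual draft design) of per-stream
credit windows layered over one TCP connection: the receiver can pause a
single stream's sender without closing the shared TCP receive window and
stalling everything else. The `StreamWindow` class and its numbers are
hypothetical.

```python
# Hypothetical sketch: per-stream credit accounting layered on top of a
# single TCP connection. Pausing one stream does not touch the TCP-level
# receive window, so other streams keep flowing.

class StreamWindow:
    def __init__(self, initial=65535):
        self.credit = initial  # bytes the peer may still send on this stream

    def on_data(self, nbytes):
        # Called when DATA arrives; a well-behaved sender never exceeds
        # the credit we have granted.
        if nbytes > self.credit:
            raise RuntimeError("flow-control violation")
        self.credit -= nbytes

    def grant(self, nbytes):
        # The application consumed nbytes; hand credit back to the peer
        # (a WINDOW_UPDATE-style control message would carry this value).
        self.credit += nbytes

w = StreamWindow(initial=10)
w.on_data(10)         # sender has used all its credit
assert w.credit == 0  # sender must now pause this stream only
w.grant(4)            # receiver read 4 bytes, re-opening 4 bytes of credit
assert w.credit == 4
```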

Note that the recommendation will be that flow control be effectively
disabled unless you know what you're doing and have a good reason
(e.g. memory pressure) to use it.
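One way to read "effectively disabled": advertise a window so large that
it never binds in practice, and only shrink it when memory pressure
genuinely demands back-pressure. A toy sketch (the `Receiver` class and
the 2**31 - 1 ceiling here are illustrative assumptions, not the draft's
wording):

```python
# Illustrative sketch: a receiver that advertises the largest window it
# can, so flow control never constrains the sender in normal operation.
MAX_WINDOW = 2**31 - 1  # assumed ceiling for a signed-32-bit window field

class Receiver:
    def __init__(self, memory_pressure=False):
        # Only a receiver that genuinely needs back-pressure advertises
        # something smaller than the maximum.
        self.window = 65535 if memory_pressure else MAX_WINDOW

assert Receiver().window == MAX_WINDOW
assert Receiver(memory_pressure=True).window < MAX_WINDOW
```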

-=R


On Tue, Feb 12, 2013 at 8:16 AM, Nico Williams <nico@cryptonector.com> wrote:

> On Tue, Feb 12, 2013 at 1:13 AM, Yoav Nir <ynir@checkpoint.com> wrote:
> > On Feb 12, 2013, at 1:59 AM, Nico Williams <nico@cryptonector.com>
> wrote:
> >> Right.  Don't duplicate the SSHv2 handbrake (Peter Gutmann's term) in
> HTTP/2.0.
> >>
> >> Use percentages of BDP on the sender side.  Have the receiver send
> >> control frames indicating the rate at which it's receiving to help
> >> estimate BDP, or ask TCP.  But do not flow control.
> >>
> >> Another possibility is to have the sender (or a proxy) use
> >> per-priority TCP connections.
> >
> > I don't think that one solves the problem. A server has to consider
> priority as relative to the TCP connection, so that high-priority requests
> trump low-priority requests within the same connection, but not
> low-priority requests in another connection. Otherwise we have a fairness
> issue even without proxies.
>
> Clearly with per-priority TCP connections there's no need for explicit
> priority labels.  The reason for wanting multiple flows is to avoid
> the situation where bulk transfers block the smaller requests (and
> responses) needed for applications to remain responsive to user input.
>  The moment different QoS traffic is multiplexed over one TCP
> connection we need either nested flow control (bad!) or other
> cooperation between the sender and receiver (hop-by-hop too) to ensure
> timely delivery of non-bulk, high-priority requests.
>
> > So you're effectively creating several streams, each with all requests
> having the same priority. The server will then try to be fair to all
> connections, effectively giving the same performance to high-priority and
> low-priority requests.
>
> Not necessarily.  First, the small requests can get through even when
> the TCP connection for bulk transfers is full enough that they would
> otherwise take much longer to arrive (possibly because I/O problems
> with bulk sinks/sources on the server side cause flow control to kick
> in).  Second, the server can probably apply application-specific
> [possibly heuristic] rules to prioritize processing of some requests
> over others regardless of which TCP connections they arrived over.
>
> I'm not advocating per-priority TCP connections.  I'm specifically
> arguing against SSHv2-style per-channel flow control -- a performance
> disaster -- and offering and supporting alternatives.
>
> Nico
> --

Received on Tuesday, 12 February 2013 18:19:08 UTC