
Re: http/2 prioritization/fairness bug with proxies

From: Nico Williams <nico@cryptonector.com>
Date: Tue, 12 Feb 2013 10:16:41 -0600
Message-ID: <CAK3OfOiLVfq7y4SJmoP-JyvgQc5uB8sDTrkDySMZfWqAdYf0yg@mail.gmail.com>
To: Yoav Nir <ynir@checkpoint.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
On Tue, Feb 12, 2013 at 1:13 AM, Yoav Nir <ynir@checkpoint.com> wrote:
> On Feb 12, 2013, at 1:59 AM, Nico Williams <nico@cryptonector.com> wrote:
>> Right.  Don't duplicate the SSHv2 handbrake (Peter Gutmann's term) in HTTP/2.0.
>>
>> Use percentages of BDP on the sender side.  Have the receiver send
>> control frames indicating the rate at which it's receiving to help
>> estimate BDP, or ask TCP.  But do not flow control.
>>
>> Another possibility is to have the sender (or a proxy) use
>> per-priority TCP connections.
>
> I don't think that one solves the problem. A server has to consider priority as relative to the TCP connection, so that high-priority requests trump low-priority requests within the same connection, but not low-priority requests in another connection. Otherwise we have a fairness issue even without proxies.

Clearly, with per-priority TCP connections there's no need for explicit
priority labels.  The reason for wanting multiple flows is to avoid
the situation where bulk transfers block the smaller requests (and
responses) needed for applications to remain responsive to user input.
The moment traffic with different QoS requirements is multiplexed over
one TCP connection we need either nested flow control (bad!) or other
cooperation between the sender and receiver (hop-by-hop, too) to
ensure timely delivery of non-bulk, high-priority requests.
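To make the per-priority-connection idea concrete, here's a toy sketch (names and structure are mine, purely illustrative, not a spec proposal): the client keeps one transport connection per QoS class, so a bulk transfer can never share a connection with, and thus never head-of-line-block, a small interactive request:

```python
# Sketch: one TCP connection per priority class, so bulk transfers
# never share a connection with small, latency-sensitive requests.
# All names here are illustrative, not from any draft.

class PerPriorityPool:
    def __init__(self, connect):
        # connect: a callable that opens a new transport connection
        self._connect = connect
        self._conns = {}  # priority class -> connection

    def connection_for(self, priority):
        # Reuse the connection dedicated to this priority class,
        # opening it lazily on first use.
        if priority not in self._conns:
            self._conns[priority] = self._connect()
        return self._conns[priority]

# Toy demonstration: integers stand in for real sockets.
counter = iter(range(100))
pool = PerPriorityPool(lambda: next(counter))

interactive = pool.connection_for("interactive")
bulk = pool.connection_for("bulk")

assert interactive != bulk                    # separate connection per class
assert pool.connection_for("bulk") == bulk    # reused, not reopened
```

The point of the sketch is only that the transport itself separates the classes, so no in-band priority labels and no per-stream windows are needed.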

> So you're effectively creating several streams, each with all requests having the same priority. The server will then try to be fair to all connections, effectively giving the same performance to high-priority and low-priority requests.

Not necessarily.  First, the small requests can get through on their
own connection even when the connection carrying bulk transfers is
congested enough that they would otherwise take much longer (possibly
because I/O problems with bulk sinks/sources on the server side cause
flow control to kick in).  Second, the server can apply
application-specific [possibly heuristic] rules to prioritize
processing of some requests over others, regardless of which TCP
connection they arrived over.
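For the second point, a minimal sketch of what such server-side heuristic scheduling could look like (the scoring rule and all names are hypothetical): requests from all connections feed one queue, ordered by an application-chosen score rather than by arrival connection:

```python
import heapq

# Sketch: a server-side scheduler that orders request processing by an
# application-specific heuristic, regardless of which TCP connection a
# request arrived on.  The scoring rule below is illustrative only.

def score(request):
    # Lower score = processed sooner.  One plausible heuristic:
    # serve small/interactive requests before large/bulk ones.
    return request["expected_bytes"]

class HeuristicScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves arrival order

    def submit(self, request):
        heapq.heappush(self._heap, (score(request), self._seq, request))
        self._seq += 1

    def next_request(self):
        # Pop the highest-priority (lowest-score) pending request.
        return heapq.heappop(self._heap)[2]

sched = HeuristicScheduler()
sched.submit({"path": "/bulk.iso", "expected_bytes": 10**9})
sched.submit({"path": "/api/ping", "expected_bytes": 200})

assert sched.next_request()["path"] == "/api/ping"  # small request first
```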

I'm not advocating per-priority TCP connections.  I'm specifically
arguing against SSHv2-style per-channel flow control -- a performance
disaster -- and offering and supporting alternatives.

Nico
--
Received on Tuesday, 12 February 2013 16:17:09 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Tuesday, 12 February 2013 16:17:12 GMT