
Re: http/2 prioritization/fairness bug with proxies

From: Roberto Peon <grmocg@gmail.com>
Date: Wed, 13 Feb 2013 13:23:18 -0800
Message-ID: <CAP+FsNegVDNfvzUMwyi-CqURcK_hWG26CFP-Fc=kjPsZL6LLiQ@mail.gmail.com>
To: Nico Williams <nico@cryptonector.com>
Cc: Yoav Nir <ynir@checkpoint.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Wed, Feb 13, 2013 at 12:57 PM, Nico Williams <nico@cryptonector.com> wrote:

> On Tue, Feb 12, 2013 at 12:18 PM, Roberto Peon <grmocg@gmail.com> wrote:
> > The problem that we have here is that the TCP API isn't sufficiently
> rich to
> > allow us to do the right thing (e.g. read bytes without allowing the
> sender
> > to send more). As a result, we have to have another level of flow control
>
> That's not necessary here.
>
> There are two issues here:
>
> a) flow [and congestion] control;
> b) prioritization of "interactive" or "control" traffic over bulk traffic.
>

We have more levels of prioritization than just that, but yes.


>
> These are exactly the same issues that have been faced in SSHv1 and SSHv2.
>
> TCP can handle (a), but if you multiplex traffic of different QoS over
> one TCP connection you run into the issues that SSHv1 and v2 have run
> into.
>

Agreed -- varying QoS for packets on a single in-order stream (i.e.
connection) basically doesn't help, even if the network did the right thing
with them, which it may not.
Even if the network does the right thing and the bytes have arrived, TCP's
API still only lets you access them in order.
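To make that concrete, here's a toy sketch (the frame format is invented,
not from any spec): two logical streams share one in-order byte stream, and
a high-priority frame queued behind a bulk frame simply cannot be read
first, whatever QoS markings the packets carried.

```python
import io
import struct

# Hypothetical frame format: (stream_id: u16, length: u32) header, then payload.
def write_frame(buf, stream_id, payload):
    buf.write(struct.pack("!HI", stream_id, len(payload)))
    buf.write(payload)

def read_frame(buf):
    stream_id, length = struct.unpack("!HI", buf.read(6))
    return stream_id, buf.read(length)

wire = io.BytesIO()
write_frame(wire, 1, b"x" * 100_000)   # bulk, low priority, serialized first
write_frame(wire, 2, b"urgent")        # interactive, high priority, serialized second
wire.seek(0)

# The reader must consume all 100 KB of bulk data before it can even see the
# urgent frame: the single ordered stream imposes head-of-line blocking.
first_id, first_payload = read_frame(wire)
second_id, second_payload = read_frame(wire)
```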



>
> There are two ways to address these issues: either don't do it (it ==
> multiplex diff QoS traffic over the same TCP conn.) or try hard never
> to write more than one BDP's worth of bulk without considering higher
> priority traffic.


QoS for packets on multiple connections also doesn't work -- each entity
owning a connection sends at what it believes is its max rate, induces
packet loss, gets throttled appropriately, and then takes too many RTTs to
recover. You end up not fully utilizing the channel(s).
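A toy simulation shows the shape of the problem (the capacities and window
parameters below are illustrative numbers, not real TCP values): each
connection probes the link independently via AIMD, the aggregate overshoots,
everyone halves, and the sawtooth leaves the link under-utilized.

```python
# Toy simulation of independent AIMD senders sharing one bottleneck link.
LINK_CAPACITY = 100   # packets the shared link can carry per RTT (illustrative)
N_SENDERS = 4

cwnd = [1.0] * N_SENDERS          # each connection's congestion window (packets)
utilization = []
for _ in range(200):              # simulate 200 RTTs
    demand = sum(cwnd)
    utilization.append(min(demand, LINK_CAPACITY) / LINK_CAPACITY)
    if demand > LINK_CAPACITY:
        # Overshoot induces loss; every sender halves its window and then
        # needs many RTTs of additive increase to claw the rate back.
        cwnd = [w / 2 for w in cwnd]
    else:
        cwnd = [w + 1 for w in cwnd]   # additive increase: +1 packet per RTT

avg_util = sum(utilization) / len(utilization)
```

With a single coordinated connection, one sender could keep the pipe full
and reorder its own frames by priority instead.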



>  Determining BDP is non-trivial and it can vary, but
> it's reasonable to estimate it by looking at round-trip times (it'd be
> nice if TCP could expose that to apps so they don't have to measure it
> redundantly!) and growing send bandwidth until receive bandwidth stops
> growing -- not exactly trivial, but reasonable.
>
>
The hard part is "considering higher priority traffic" when that traffic is
being sent from a different machine, as would occur in the
multiple-connection case.
With a single connection, this is easy to coordinate. Agreed that
estimating BDP isn't trivial (however it is something that TCP effectively
has to do).
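The estimate itself is just bandwidth times round-trip time; here's a
minimal sketch, assuming the application measures RTT at its own layer
(e.g. via an application-level ping), since TCP doesn't expose its estimate:

```python
# Rough bandwidth-delay product estimate (assumed helper, not a real API).
def bdp_bytes(bandwidth_bits_per_sec, rtt_seconds):
    """Bytes that must be 'in flight' to keep the pipe full."""
    return bandwidth_bits_per_sec / 8 * rtt_seconds

# Example: a 10 Mbit/s path with a 100 ms round trip.
window = bdp_bytes(10_000_000, 0.100)   # 125000.0 bytes, i.e. ~122 KB
```

The hard part Nico mentions is that both inputs drift over time, so the
estimate has to be continuously refreshed.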


> Now, in practice browsers already use multiple TCP connections to the
> same server anyways, so... what's wrong with per-priority TCP
> connections?  (see below)
>
> > which we'd otherwise be able to do without. Unfortunately, per priority
> TCP
> > connections don't work well for large loadbalancers where each of these
> > connections will likely be terminating at a different place. This would
> > create a difficult synchronization problem server side, full of races and
> > complexity, and likely quite a bit worse in complexity than getting flow
> > control working well.
>
> I think you're saying that because of proxies it's difficult to ensure
> per-priority TCP connections, but this is HTTP/2.0 we're talking
> about.  We have the power to dictate that HTTP/2.0 proxies replicate
> the client's per-priority TCP connection scheme.
>


No, I'm saying that it is somewhere between difficult and impossible to
ensure that separate connections from a client end up on one machine in the
modern loadbalancer world.
From a latency perspective, opening up the multiple connections can be a
loss as well -- it increases server load for both CPU and memory, and vastly
increases the chance that you'll get a lost packet on the SYN, which takes
far longer to recover from since it requires an RTO before an RTT has likely
been computed.
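The arithmetic behind that claim, sketched under RFC 6298's rules (the
1-second initial RTO applies when no RTT sample exists yet, exactly the
lost-SYN case):

```python
# Rough comparison of retransmit delay for a lost SYN vs. a lost data packet.
INITIAL_RTO = 1.0   # seconds; RFC 6298 default before any RTT sample exists

def established_rto(srtt, rttvar):
    # RFC 6298: RTO = SRTT + max(G, 4 * RTTVAR); clock granularity G ignored here.
    return srtt + 4 * rttvar

rtt = 0.050                                   # a 50 ms path
penalty = INITIAL_RTO / established_rto(rtt, rtt / 2)   # ~6.7x slower recovery
```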


> > Note that the recommendation will be that flow control be effectively
> > disabled unless you know what you're doing, and have a good reason
> (memory
> > pressure) to use it.
>
> Huh?  Are you saying "we need and will specify flow control.  It won't
> work.  Therefore we'll have it off by default."  How can that help?!
> I don't see how it can.
>
>
Everyone will be required to implement the flow control mechanism as a
sender.
Only those people who have effective memory limitations will require its
use when receiving (since the receiver dictates policy for flow control).
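In other words, the sender side is mandatory but the receiver sets policy.
A minimal sketch of that split, loosely modeled on SPDY-style window credits
(class and method names here are illustrative, not from any spec):

```python
# Receiver-dictated flow control: the sender may only transmit bytes the
# receiver has granted credit for.
class FlowControlledSender:
    def __init__(self, initial_window):
        self.window = initial_window   # bytes the receiver permits in flight

    def send(self, data):
        """Transmit what the window allows; return what must wait."""
        permitted = data[:self.window]
        self.window -= len(permitted)
        return data[len(permitted):]   # sender buffers or defers the rest

    def on_window_update(self, delta):
        self.window += delta           # receiver grants more credit

# A memory-constrained receiver grants a small window...
s = FlowControlledSender(initial_window=10)
leftover = s.send(b"x" * 25)    # only 10 bytes go out; 15 are held back
s.on_window_update(15)
leftover = s.send(leftover)     # credit replenished, remainder drains
```

A receiver with no memory pressure would simply advertise a very large
window, leaving the mechanism effectively disabled.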

-=R


> Nico
> --
>
Received on Wednesday, 13 February 2013 21:23:45 GMT
