W3C home > Mailing lists > Public > ietf-http-wg@w3.org > April to June 2014

Re: why not multiple, short-lived HTTP/2 connections?

From: William Chan (陈智昌) <willchan@chromium.org>
Date: Mon, 30 Jun 2014 10:40:51 -0700
Message-ID: <CAA4WUYjC8CmkHHE9KQ8kbyEFsj651OSk9nJbW86NvozpsArnKQ@mail.gmail.com>
To: Patrick McManus <mcmanus@ducksong.com>
Cc: Peter Lepeska <bizzbyster@gmail.com>, Mike Belshe <mike@belshe.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Mon, Jun 30, 2014 at 9:58 AM, Patrick McManus <mcmanus@ducksong.com> wrote:

> On Mon, Jun 30, 2014 at 12:04 PM, <bizzbyster@gmail.com> wrote:
>> All,
>> Another huge issue is that for some reason I still see many TCP
>> connections that do not advertise support for window scaling in the SYN
>> packet. I'm really not sure why this is but for instance WPT test instances
>> are running Windows 7 and yet they do not advertise window scaling and so
>> TCP connections max out at a send window of 64 KB. I've seen this in tests
>> run out of multiple different WPT test locations.
It's true that TCP window scaling can be a problem. We definitely see this
issue in a number of places around the world, most prominently in APAC at
certain ISPs (due to network wscale stripping, UGH!). But simply opening up
more connections is not the right solution. You can easily hit the other
problem of too much congestion leading to way worse performance. I talk
about these multiple connection and congestion issues at
https://insouciant.org/tech/network-congestion-and-web-browsing/ and
provide several example traces of problematic congestion. Fundamentally,
this is a transport issue and we should be fixing the transport. Indeed,
we're working on this at Google, both with our Make TCP Fast team and our
QUIC team.

>> The impact of this is that high latency connections max out at very low
>> throughputs. Here's an example (with tcpdump output so you can examine the
>> TCP flow on the wire) where I download data from a SPDY-enabled web server
>> in Virginia from a WPT test instance running in Sydney:
>> http://www.webpagetest.org/result/140629_XG_1JC/1/details/. Average
>> throughput is not even 3 Mbps despite the fact that I chose a 20 Mbps FIOS
>> connection for my test. Note that when I disable SPDY on this web server, I
>> render the page almost twice as fast because I am using multiple
>> connections and therefore overcoming the per connection throughput
>> limitation: http://www.webpagetest.org/result/140629_YB_1K5/1/details/.
>> I don't know the root cause (Windows 7 definitely sends the window scaling
>> option in the SYN in other tests) and have sent a note to the webpagetest.org
>> admin, but in general there are reasons why even Windows 7 machines
>> sometimes appear not to use window scaling, causing single-connection SPDY
>> to perform really badly even beyond the slow start phase.
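The sub-3 Mbps figure is consistent with the bandwidth-delay product ceiling: without window scaling the receive window tops out at 64 KB, so throughput can't exceed window/RTT regardless of link capacity. A quick back-of-the-envelope check (the ~230 ms Sydney-to-Virginia RTT here is my assumption, not a number from the trace):

```python
# Throughput ceiling for a 64 KB receive window with no window scaling.
# The ~230 ms Sydney <-> Virginia RTT is an assumed round-trip time.
window_bytes = 64 * 1024
rtt_seconds = 0.230
ceiling_mbps = window_bytes * 8 / rtt_seconds / 1e6
print(f"~{ceiling_mbps:.1f} Mbps")  # roughly matches the observed < 3 Mbps
```

That's also why opening more connections "fixes" it: each connection gets its own 64 KB window, multiplying the ceiling.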
> I think this is a WPT issue you should take up offlist because, IIRC, the
> issue would just be in the application. It's not an OS or infrastructure
> thing we'll need to cope with.

I agree it's probably specific to WPT. Here's the cloudshark trace for the
same WPT run (http://www.webpagetest.org/result/140630_HY_SGK/) using a
different Chrome instance (from Dulles, VA):
As you can see, the window scaling option is on there. And the packet trace
is taken at the client. So that lends credence to the hypothesis that this
problem is local to the Sydney WPT Chrome instance in your test run.

> IIRC, when I last looked at it, if you used an explicit SO_RCVBUF on your
> socket before opening on Win 7, it would set the scaling factor to the
> smallest factor that was able to accommodate your desired window (so if
> you set it to 64KB or less, scaling would be disabled). Of course there is
> no way to renegotiate scaling, so that sticks with you for the life of the
> connection no matter what you might set RCVBUF to along the way. I believe
> the correct fix is "don't do that", and any new protocol implementation
> should be able to take that into consideration.
> but maybe my info is dated.
> -P
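If that recollection is right, the practical takeaway for implementors is to size the receive buffer before the connection is established, since the scale factor advertised in the SYN can never be renegotiated. A minimal sketch of the "don't do that" advice (behavior and the values the OS reports back are platform-dependent; the 1 MB figure is just an illustrative choice):

```python
import socket

# Sketch: set SO_RCVBUF *before* connecting, so the stack can choose a
# window scale factor large enough for the buffer. Per Patrick's
# recollection, setting it to 64 KB or less up front is what disables
# scaling on Windows 7, and the factor is fixed for the connection's life.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024 * 1024)  # illustrative 1 MB

# Read back what the OS actually granted (exact numbers are OS-specific;
# Linux, for instance, reports roughly double the requested value).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("SO_RCVBUF granted:", granted)
sock.close()
```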
Received on Monday, 30 June 2014 17:41:19 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:14:31 UTC