Re: HTTP/2 and TCP CWND

On Wed, Apr 24, 2013 at 11:52 AM, Peter Lepeska <bizzbyster@gmail.com> wrote:

>
> On Apr 24, 2013, at 12:36 PM, William Chan (陈智昌) <willchan@chromium.org>
> wrote:
>
> On Wed, Apr 24, 2013 at 8:40 AM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>
>> Not sure this has been proposed before, but better than caching would be
>> dynamic initial CWND based on web server object size hinting.
>>
>> Web servers often know the size of the object that will be sent to the
>> browser. The web server therefore can help the transport make smart initial
>> CWND decisions. For instance, if an object is less than 20KB, which is true
>> for the majority of objects on web pages, the web server could tell the
>> transport to increase the CWND to a size that would allow the object to be
>> sent in the initial window.
>>
>
> In the HTTP/2 case, where we are often multiplexing, this doesn't seem to
> make as much sense. Also, I'm not sure that it's a reasonable argument to
> select initcwnd in the absence of any congestion information... or were you
> suggesting merely tweaking the initcwnd a little bit, if that little bit
> would make a difference in terms of fitting the whole object in the
> initcwnd?
>
>
> Right. A small number of multiplexed connections transfers less of a given
> page's data in slow start, so this will have less impact for those
> connections. However, it's worth noting that often the first object
> requested over the multiplexed channel will be the root object alone, and
> of course the number of round trips to download the root object directly
> impacts page load time.
>

We should move away from this assumption that the first request is for the
root object. I've been advising companies on how to do SPDY deployments,
and a common scenario is an origin server hosting the root doc + a
SPDY-capable CDN for the subresources (primarily images served at the
edge). These CDNs are going to serve a burst of traffic immediately, and
those subresources often have a high impact on above-the-fold perceived
latency (in many of today's websites, images form a big part of the initial
viewport's content, so serving those images quickly is vital). In today's
non-SPDY / HTTP/2 case, they just domain shard: 6 connections * [2-4]
sharded hosts gives 12-24 connections with IW10, starting out with an
effective initcwnd of 120+ segments. They are gaming initcwnd to the
benefit of users who don't have a congested path, and to the severe
detriment of users whose paths cannot handle such high bursts. This
situation sucks.
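
To make that arithmetic concrete, here is a small back-of-the-envelope
sketch (mine, purely illustrative) assuming the usual 6-connections-per-host
browser limit, IW10, and a 1460-byte MSS:

# Illustrative only: how domain sharding multiplies the effective initial
# congestion window. Assumes a 6-connections-per-host browser limit, IW10
# (RFC 6928), and a 1460-byte MSS; real numbers vary by browser and path.

CONNS_PER_HOST = 6
IW_SEGMENTS = 10
MSS_BYTES = 1460

for sharded_hosts in (2, 3, 4):
    conns = CONNS_PER_HOST * sharded_hosts
    effective_iw = conns * IW_SEGMENTS
    burst_kb = effective_iw * MSS_BYTES / 1024
    print(f"{sharded_hosts} shards -> {conns} connections, "
          f"effective initcwnd {effective_iw} segments (~{burst_kb:.0f} KB burst)")

That first-round-trip burst of roughly 170-340 KB is what a single
multiplexed connection is competing against.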


>
> Caching attempts to reuse old congestion information, although it has been
> reasonably pointed out that the validity of that information is
> questionable. It's an open research question as far as I'm concerned, and
> I'd love to see any data people had.
>
>
>>
>> For larger objects, the benefit of a large CWND is minimal so the web
>> server could tell the transport to use the default and let the connection
>> ramp slowly.
>>
>
> I'm not sure this makes sense. GMail and Google+ and I'm sure other large
> web apps have rather large scripts and stylesheets, but they still care
> about their initial page load latency. Perhaps you're making the assumption
> that large objects imply the user does not have interactivity /
> low-latency expectations? If so, that's invalid. Those round trips still
> matter, and I can tell you our Google app teams work very hard to eliminate
> them. Or maybe your definition of large is larger than what I'm thinking of.
>
>
> The threshold is tunable. My point here is that if the TCP connection is
> going to be used to download a 100 MB file, or to stream a video, then slow
> start has a negligible impact on the overall download time for the file.
>

Sure, if you're doing non-interactive large data transfers, then the
slow-start latency isn't going to matter much. I don't view that
conversation as very interesting, and no one's agitating for change there.
The contentious and more interesting discussion is how to safely yet
quickly start up TCP connections for interactive, bursty traffic like web
browsing. I include video websites like YouTube in that, even if their
objects are large, since the time to start viewing the video is still
important.
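
As a rough illustration of both points (a toy model of mine that ignores
loss, receive windows, and connection setup), counting idealized slow-start
round trips shows why the initial window is the whole story for a small
interactive fetch but is lost in the noise for a 100 MB download:

# Toy slow-start model, illustrative only: the window doubles every round
# trip starting from the initial window, and we count round trips until the
# cumulative capacity covers the object. Ignores loss, rwnd, and handshakes.

MSS_BYTES = 1460  # assumed MSS

def slow_start_rtts(object_bytes, initcwnd_segments):
    """Round trips for an idealized slow-start sender to deliver the object."""
    sent, cwnd, rtts = 0, initcwnd_segments, 0
    while sent < object_bytes:
        sent += cwnd * MSS_BYTES
        cwnd *= 2
        rtts += 1
    return rtts

for size_kb in (14, 20, 100, 1024, 100 * 1024):  # 14 KB up to 100 MB
    size = size_kb * 1024
    print(f"{size_kb:>7} KB: IW3 -> {slow_start_rtts(size, 3):>2} RTTs, "
          f"IW10 -> {slow_start_rtts(size, 10):>2} RTTs")

For a 20 KB object the entire transfer fits in the first few slow-start
rounds, so the initial window dominates; for the 100 MB file the IW3 vs.
IW10 difference is a couple of round trips out of a transfer that is limited
by available bandwidth, not by the starting window.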


>
>
>
>> Peter
>>
>>
>>
>>
>> On Mon, Apr 15, 2013 at 8:16 PM, Roberto Peon <grmocg@gmail.com> wrote:
>>
>>>
>>>
>>>
>>> On Mon, Apr 15, 2013 at 4:03 PM, Eggert, Lars <lars@netapp.com> wrote:
>>>
>>>> Hi,
>>>>
>>>>
>>>> On Apr 15, 2013, at 15:56, Roberto Peon <grmocg@gmail.com> wrote:
>>>> > The interesting thing about the client mucking with this data is
>>>> > that, so long as the server's TCP implementation is smart enough not
>>>> > to kill itself (and some simple limits accomplish that), the only one
>>>> > the client harms is itself...
>>>>
>>>> I fail to see how you'd be able to achieve this. If the server uses a
>>>> CWND that is too large, it will inject a burst of packets into the network
>>>> that will overflow a queue somewhere. Unless you use WFQ or something
>>>> similar on all bottleneck queues (not generally possible), that burst will
>>>> likely cause packet loss to other flows, and will therefore impact them.
>>>>
>>>
>>> The most obvious way is that the server doesn't use a CWND which is
>>> larger than the largest currently active window to a similar RTT. The other
>>> obvious way is to limit it to something like 32, which is about what we'd
>>> see with the opening of a mere 3 regular HTTP connections! This at least
>>> makes the one connection competitive with the circumventions that HTTP/1.X
>>> currently exhibits.
>>>
>>>
>>>> TCP is a distributed resource sharing algorithm to allocate capacity
>>>> throughout a network. Although the rates for all flows are computed in
>>>> isolation, the effect of that computation is not limited to the flow in
>>>> question, because all flows share the same queues.
>>>>
>>>
>>> Yes, that is what I've been arguing w.r.t. the many connections that the
>>> application-layer currently opens :)
>>> It becomes a question of which dragon is actually most dangerous.
>>>
>>> -=R
>>>
>>>
>>>>
>>>> Lars
>>>
>>>
>>>
>>
>
>
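
For what it's worth, the first limit Roberto describes above could be
sketched roughly like this (hypothetical names, purely illustrative of the
heuristic; a real version would live inside the server's TCP stack rather
than in application code):

# Hypothetical sketch of the cap described above: never start with a window
# larger than the biggest window already in use toward a similar RTT, and
# never above a hard cap of ~32 segments (about 3 HTTP connections of IW10).

HARD_CAP_SEGMENTS = 32
DEFAULT_IW = 10

def capped_initcwnd(requested_segments, active_cwnds_similar_rtt):
    """Clamp a requested initial window using current congestion information."""
    if not active_cwnds_similar_rtt:
        # No live evidence about paths at this RTT: fall back to the default.
        return min(requested_segments, DEFAULT_IW)
    largest_active = max(active_cwnds_similar_rtt)
    return min(requested_segments, largest_active, HARD_CAP_SEGMENTS)

print(capped_initcwnd(64, [12, 24, 18]))  # -> 24: bounded by the busiest live flow
print(capped_initcwnd(64, [40, 50]))      # -> 32: the hard cap kicks in
print(capped_initcwnd(64, []))            # -> 10: no evidence, use the default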

Received on Wednesday, 24 April 2013 19:28:15 UTC