From: Peter Lepeska <bizzbyster@gmail.com>
Date: Wed, 24 Apr 2013 11:40:27 -0400
Message-ID: <CANmPAYFhD8kwiM5F1vG0A5Thkrf4Dmw+64nDhvOjzPDVONU7mQ@mail.gmail.com>
To: Roberto Peon <grmocg@gmail.com>
Cc: "Eggert, Lars" <lars@netapp.com>, Gabriel Montenegro <Gabriel.Montenegro@microsoft.com>, "Simpson, Robby (GE Energy Management)" <robby.simpson@ge.com>, Eliot Lear <lear@cisco.com>, Robert Collins <robertc@squid-cache.org>, Jitu Padhye <padhye@microsoft.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>, "Brian Raymor (MS OPEN TECH)" <Brian.Raymor@microsoft.com>, Rob Trace <Rob.Trace@microsoft.com>, Dave Thaler <dthaler@microsoft.com>, Martin Thomson <martin.thomson@skype.net>, Martin Stiemerling <martin.stiemerling@neclab.eu>
Not sure whether this has been proposed before, but better than caching would
be a dynamic initial CWND based on web-server object-size hinting.

Web servers often know the size of the object that will be sent to the
browser. The web server can therefore help the transport make smart initial
CWND decisions. For instance, if an object is less than 20KB, which is true
for the majority of objects on web pages, the web server could tell the
transport to increase the CWND to a size that would allow the object to be
sent in the initial window.

For larger objects, the benefit of a large initial CWND is minimal, since the
first window is a small fraction of the total transfer, so the web server
could tell the transport to use the default and let the connection ramp up
slowly.
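To make the idea concrete, here is a minimal sketch of the hinting heuristic in Python. The function name, constants, and the notion of the server handing the transport a window in segments are all illustrative assumptions on my part; no such transport API exists today.

```python
# Hypothetical sketch of the object-size hint. None of these names
# correspond to a real socket API; they just illustrate the proposal.

MSS = 1460                      # typical TCP maximum segment size, bytes
DEFAULT_IW = 10                 # RFC 6928 default initial window, segments
SMALL_OBJECT_LIMIT = 20 * 1024  # the 20KB threshold from the proposal

def initial_cwnd_for(object_size: int) -> int:
    """Return an initial CWND (in segments) hinted from the response size."""
    if object_size <= SMALL_OBJECT_LIMIT:
        # Small object: open the window just wide enough to send the whole
        # response in the first flight, so it completes in one RTT.
        segments = -(-object_size // MSS)  # ceiling division
        return max(DEFAULT_IW, segments)
    # Large object: fall back to the default and let slow start ramp up.
    return DEFAULT_IW
```

For example, a 20KB response would get a hint of 15 segments instead of waiting an extra round trip, while a 100KB response would keep the default window of 10.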


On Mon, Apr 15, 2013 at 8:16 PM, Roberto Peon <grmocg@gmail.com> wrote:

> On Mon, Apr 15, 2013 at 4:03 PM, Eggert, Lars <lars@netapp.com> wrote:
>> Hi,
>> On Apr 15, 2013, at 15:56, Roberto Peon <grmocg@gmail.com> wrote:
>> > The interesting thing about the client mucking with this data is that,
>> so
>> > long as the server's TCP implementation is smart enough not to kill
>> itself
>> > (and some simple limits accomplish that), the only on the client harms
>> is
>> > itself...
>> I fail to see how you'd be able to achieve this. If the server uses a
>> CWND that is too large, it will inject a burst of packets into the network
>> that will overflow a queue somewhere. Unless you use WFQ or something
>> similar on all bottleneck queues (not generally possible), that burst will
>> likely cause packet loss to other flows, and will therefore impact them.
> The most obvious way is that the server doesn't use a CWND which is larger
> than the largest currently active window to a similar RTT. The other
> obvious way is to limit it to something like 32, which is about what we'd
> see with the opening of a mere 3 regular HTTP connections! This at least
> makes the one connection competitive with the circumventions that HTTP/1.X
> currently exhibits.
>> TCP is a distributed resource sharing algorithm to allocate capacity
>> throughout a network. Although the rates for all flows are computed in
>> isolation, the effect of that computation is not limited to the flow in
>> question, because all flows share the same queues.
> Yes, that is what I've been arguing w.r.t. the many connections that the
> application-layer currently opens :)
> It becomes a question of which dragon is actually most dangerous.
> -=R
>> Lars
Received on Wednesday, 24 April 2013 15:40:57 UTC
