
Re: p1-message-07 S 7.1.4

From: Mark Nottingham <mnot@mnot.net>
Date: Mon, 20 Jul 2009 16:09:57 +1000
Cc: Henrik Nordstrom <henrik@henriknordstrom.net>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <AE6BF546-83DC-4767-AAA4-0E109DB58D34@mnot.net>
To: Adrien de Croy <adrien@qbik.com>

AIUI, because of the way TCP congestion control works, the impact is
not just on the server -- it's also on the intervening network itself.
TCP shares a congested link roughly equally per connection, not per
application or per user, so someone opening lots of connections can
crowd out everyone else when the network starts to get congested.
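
(Back-of-the-envelope illustration, under the idealised assumption
that long-lived TCP flows with similar round-trip times each get an
equal share of a congested bottleneck: if client A opens n connections
and client B opens one, A gets roughly n/(n+1) of the link. A download
accelerator opening 6 connections against a single-connection client
thus takes about 6/7, or ~86%, of the capacity.)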

As such, it isn't a simple economic relationship -- there's no natural
incentive for clients not to hog shared network resources. This is why
the HTTP spec imposes an artificial limit (RFC 2616 says a single-user
client SHOULD NOT maintain more than 2 connections to any server or
proxy), and clients have largely respected it.


On 20/07/2009, at 4:08 PM, Adrien de Croy wrote:

>> OTOH, I also think completely removing limitations isn't good
>> practice either, because there are still networks out there where
>> congestion is a problem, and having an app open multiple TCP
>> connections (as many "download accelerators" do) to hog resources
>> isn't good for the long-term health of the Internet either.
>
> Even a download accelerator that opens dozens of connections isn't
> necessarily a problem.
>
> It's kinda like market-driven economics vs socialism. If the
> supplier can't keep up with demand, they have the option to increase
> supply. Do we want to take away that option by choking the clients?
>
> I guess in the end, this is all only a SHOULD level recommendation.
> Maybe also then add "clients that implement a connection limit
> SHOULD also provide a mechanism to configure the limit".
>
> Cheers
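
As one concrete illustration of the configurable cap Adrien suggests
-- a minimal sketch only, using Go's net/http (which postdates this
thread; Transport.MaxConnsPerHost is a real field there, while the
program and its flag name are invented for illustration):

    // Hypothetical client whose per-host connection limit is
    // user-configurable, in the spirit of Adrien's suggested SHOULD.
    package main

    import (
        "flag"
        "fmt"
        "net/http"
    )

    func main() {
        // Default of 2 mirrors the RFC 2616 SHOULD; 0 means "no limit".
        limit := flag.Int("max-conns-per-host", 2,
            "maximum simultaneous connections per host (0 = unlimited)")
        flag.Parse()

        client := &http.Client{
            Transport: &http.Transport{
                // Once the cap is reached, further dials to that host
                // block instead of opening extra connections.
                MaxConnsPerHost: *limit,
            },
        }

        resp, err := client.Get("http://example.com/")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }

Whether such a knob is exposed to end users or kept as a vendor
default is, of course, exactly the policy question debated above.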

Mark Nottingham     http://www.mnot.net/
