Re: p1-message-07 § 7.1.4

AIUI, because of the way TCP congestion control works, the impact is
not just on the server -- it's also on the intervening network itself.
Congestion control aims for fairness on a per-connection basis, not a
per-application basis, so a client that opens lots of connections can
crowd out others once the network starts to get congested.

As such, it isn't a simple economic relationship -- there's no natural
incentive for clients not to hog shared network resources. This is why
the HTTP spec imposes an artificial limit, one that clients have by
and large respected.
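
A back-of-the-envelope sketch of the fairness point, for what it's
worth -- the link speed and connection counts below are made-up numbers,
purely illustrative of the idealized per-connection split:

```python
def share(my_conns, other_conns, link_mbps):
    """Idealized model: TCP congestion control splits a bottleneck
    link evenly per *connection*, not per client or application."""
    total = my_conns + other_conns
    return link_mbps * my_conns / total

# Two clients on a 100 Mbit/s bottleneck, one connection each:
# each gets ~50 Mbit/s.
fair = share(1, 1, 100)

# Same link, but one client opens 8 connections against the other's 1:
# the aggressive client now takes ~89 Mbit/s of the 100.
hog = share(8, 1, 100)
```

Real TCP dynamics (RTT, loss patterns, etc.) are messier than this, of
course, but the per-connection bias is the point.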

Cheers,



On 20/07/2009, at 4:08 PM, Adrien de Croy wrote:

>> OTOH, I also think completely removing limitations isn't good  
>> practice either, because there are still networks out there where  
>> congestion is a problem, and having an app open multiple TCP  
>> connections (as many "download accelerators" do) to hog resources  
>> isn't good for the long-term health of the Internet either.
> Even a download accelerator that opens dozens of connections isn't  
> necessarily a problem.
>
> It's kinda like market-driven economics vs socialism.
>
> If the supplier can't keep up with demand, they have the option to  
> increase supply.  Do we want to take away that option by choking the  
> clients?
>
> I guess in the end, this is all only a SHOULD-level recommendation.  
> Maybe also then add: "clients that implement a connection limit  
> SHOULD also provide a mechanism to configure the limit".
>
> Cheers
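
FWIW, a strawman of what that configurable limit could look like in a
client -- `ConnectionLimiter` and its interface are mine, not anything
from the spec text under discussion, though the default of 2 matches
HTTP/1.1's suggested per-server connection cap:

```python
import threading

class ConnectionLimiter:
    """Per-host connection cap with a user-configurable limit, along
    the lines of the 'SHOULD provide a mechanism to configure the
    limit' suggestion above."""

    def __init__(self, limit=2):
        # 2 was the limit RFC 2616 suggested for persistent connections.
        self.limit = limit
        self._sem = threading.BoundedSemaphore(limit)

    def acquire(self, blocking=True, timeout=None):
        """Take a connection slot; returns False if none is free
        (when non-blocking or the timeout expires)."""
        return self._sem.acquire(blocking, timeout)

    def release(self):
        """Return a slot when the connection is closed."""
        self._sem.release()
```

A client would acquire a slot before opening each connection to a host
and release it on close; exposing `limit` as a config knob is the part
the quoted text is asking for.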


--
Mark Nottingham     http://www.mnot.net/

Received on Monday, 20 July 2009 06:10:36 UTC