
Re: [OT] Re: #131: Connection limits (proposal)

From: Jim Gettys <jg@freedesktop.org>
Date: Mon, 19 Oct 2009 14:45:37 -0400
Message-ID: <4ADCB3D1.2010506@freedesktop.org>
To: "William A. Rowe, Jr." <wrowe@rowe-clan.net>
CC: Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>

William A. Rowe, Jr. wrote:
> Mark Nottingham wrote:
>> <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/131>
>> """
>> Clients (including proxies) SHOULD limit the number of simultaneous
>> connections that they maintain to a given server (including proxies).
>> Previous revisions of HTTP gave a specific number of connections as a
>> ceiling, but this was found to be impractical for many applications. As
>> a result, this specification does not mandate a particular maximum
>> number of connections, but instead encourages clients to be conservative
>> when opening multiple connections.
>> """
> It really seems like this is ripe for a Connection: max=# tag recommendation.
> Wherein the application can recommend a number of parallel connections that
> 1) they support and 2) provide optimal user/application experience.  But this
> would be out of scope of 2616bis :)

The problem is that congestion can occur anywhere in the path from 
client to server, and therefore any connection count the server 
recommends may be wrong for the actual path.  So this idea doesn't work.

(One of) the fundamental issues is that many/most implementations of 
HTTP (or other application protocols, for that matter) do not understand 
that it is extremely important to read data from the operating system 
promptly, so that you don't delay acks: otherwise, TCP cannot run at 
anything near full speed.
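To make the point concrete: a receiver that drains the socket as fast as 
data arrives, deferring any expensive processing, keeps the kernel's 
receive buffer empty, so the stack can ACK promptly and keep advertising 
a full window. A minimal sketch (my own illustration, not from the 
original message; a local socket pair stands in for a real server 
connection):

```python
import socket
import threading

def drain_socket(sock, sink):
    """Read from the socket as fast as data arrives, deferring
    processing.  An empty receive buffer lets the kernel ACK
    promptly and advertise a full window, so TCP can keep the
    pipe full; parsing happens later from `sink`."""
    while True:
        chunk = sock.recv(65536)
        if not chunk:          # peer closed the connection
            break
        sink.append(chunk)     # cheap: just buffer, parse later
    return b"".join(sink)

# Local socket pair standing in for a TCP connection to a server.
a, b = socket.socketpair()
payload = b"x" * 1_000_000

def send():
    b.sendall(payload)
    b.close()

t = threading.Thread(target=send)
t.start()
received = drain_socket(a, [])
t.join()
a.close()
```

The same structure applies to a real TCP socket: the key design choice 
is that the read loop never blocks on parsing while data is queued in 
the kernel.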

We showed in the HTTP/1.1 paper that additional parallel connections did 
not actually increase performance; the fastest performance was achieved 
with a single TCP connection.  But for many bad implementations, opening 
parallel connections does help, without the implementers having to 
actually think or restructure their code.
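For reference, the single-connection result relied on a persistent 
connection with pipelined requests: both requests go out back-to-back 
before the first response arrives, so one connection stays full. A toy 
sketch of that pattern (my own illustration; a dummy local server stands 
in for a real one):

```python
import socket
import threading

def tiny_server(srv):
    """Accept one connection and answer two pipelined GETs in order."""
    conn, _ = srv.accept()
    buf = b""
    handled = 0
    while handled < 2:
        buf += conn.recv(4096)
        while b"\r\n\r\n" in buf:        # one complete request header
            _, buf = buf.split(b"\r\n\r\n", 1)
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            handled += 1
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=tiny_server, args=(srv,))
t.start()

c = socket.create_connection(("127.0.0.1", port))
# Pipelining: both requests sent on ONE connection, back to back,
# before reading any response.
c.sendall(b"GET /a HTTP/1.1\r\nHost: example\r\n\r\n"
          b"GET /b HTTP/1.1\r\nHost: example\r\n\r\n")
resp = b""
while resp.count(b"200 OK") < 2:
    resp += c.recv(4096)
c.close()
t.join()
srv.close()
```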

It may (or may not) be as pernicious (to others) now as it was at the 
time 2616 was first issued: in that era, buffering in edge router 
equipment was very limited, so besides screwing yourself, you may be 
less evil now than you were then.  But I've not seen any data (nor 
thought deeply) on how the Internet has changed since then.

Exactly what the right recommendations to implementers should be, and 
whether the spec should try to enforce such behavior, is a different 
question.

				- Jim
Received on Monday, 19 October 2009 18:46:24 UTC
