As discussed previously, I'm not convinced that the number of concurrent connections is the (only?) thing that should be limited, or that limiting should be the client's responsibility (rather than the server's or an intermediary's).

Server resources (memory, disk, CPU) are becoming cheaper every day, so I believe the best place to restrict usage is at the server, where the operator then has a choice about how much service to provide.  Putting the responsibility on the client takes that choice away from the server operator.

If you made cellphones and decided to restrict the number of calls each one could make in any one day, do you think you'd find a telco that would sell them?

Furthermore, restrictions written into the protocol apply regardless of application.  For instance, an in-house client/server system using HTTP may have no need or desire for any sort of limit.

I think, therefore, the best we can do is encourage implementors to act responsibly and to consider other users of the network (where there are any).

So, what about something like:

"Implementors of client applications SHOULD give consideration to effects that a client's use of resources may have on the network (both local and non-local), and design clients to act responsibly within any network they participate in.  Some intermediaries and servers are known to limit the number of concurrent connections, or rate of requests.  An excessive number of connections has also been known to cause issues on congested shared networks.  In the past HTTP has recommended a maximum number of concurrent connections a client should make, however this limit has also caused problems in some applications.  It is also believed that any recommendation on number of concurrent connections made now will not apply properly to all applications, and will become obsolete with advances in technology."

This then potentially covers any resource that should be managed: not just connections, but perhaps also bandwidth, cache space on an intermediary, and so on.
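
To make the intent concrete, here's a rough sketch in Python of a client restraining its own connection count and request rate per server.  This is purely illustrative; the class name, the cap of 4 connections, and the 0.1s pacing interval are my own inventions, not anything from the draft.

import threading
import time
import http.client

class PoliteClient:
    """Caps concurrent connections and paces requests to one host."""

    def __init__(self, host, max_connections=4, min_interval=0.1):
        # max_connections and min_interval are illustrative values only;
        # nothing in the proposed text recommends specific numbers.
        self.host = host
        self._slots = threading.BoundedSemaphore(max_connections)
        self._pace_lock = threading.Lock()
        self._min_interval = min_interval
        self._last_start = 0.0

    def _pace(self):
        # Space out request starts so we don't flood a shared network.
        with self._pace_lock:
            wait = self._last_start + self._min_interval - time.monotonic()
            if wait > 0:
                time.sleep(wait)
            self._last_start = time.monotonic()

    def get(self, path):
        self._pace()
        # Block here rather than open more connections than our cap allows.
        with self._slots:
            conn = http.client.HTTPConnection(self.host, timeout=10)
            try:
                conn.request("GET", path)
                resp = conn.getresponse()
                return resp.status, resp.read()
            finally:
                conn.close()

The point being that the caps belong to the application, which knows its own network, rather than to the protocol.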

Regards

Adrien.


William A. Rowe, Jr. wrote:
Mark Nottingham wrote:
<http://trac.tools.ietf.org/wg/httpbis/trac/ticket/131>

NEW:

"""
Clients (including proxies) SHOULD limit the number of simultaneous
connections that they maintain to a given server (including proxies).

Previous revisions of HTTP gave a specific number of connections as a
ceiling, but this was found to be impractical for many applications. As
a result, this specification does not mandate a particular maximum
number of connections, but instead encourages clients to be conservative
when opening multiple connections.

In particular, while using multiple connections avoids the "head-of-line
blocking" problem (whereby a request that takes significant server-side
processing and/or has a large payload can block subsequent requests on
the same connection), each connection used consumes server resources
(sometimes significantly), and furthermore using multiple connections
can cause undesirable side effects in congested networks.
"""

Is it worthwhile to add the caveat:

"""
Clients attempting to establish simultaneous connections SHOULD anticipate
that the server may reject excessive attempts to establish additional
connections, and gracefully degrade to passing all requests through the
successfully established connection(s), rather than retrying.
"""


-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com