William A. Rowe, Jr. wrote:
Adrien de Croy wrote:
So I believe the best place to restrict usage is at the server, where
the operator then has a choice about how much service will be provided. 
Putting a responsibility on the client takes away this choice from the
server operator.

Which says nothing about the client expectations, which was the point of my
initial response.  The responsibility has been assumed by servers and their
intermediate firewalls/load balancers to ferret out abusive traffic.  Any
poorly constructed client can and will fall into such traps.

Some of the history here is the two-connection limit implemented in IE7, which caused havoc with sites like Facebook, especially through a proxy.  So I guess we are seeing the background to this from different angles.

What do you mean by client expectations?

So, what about something like:

"Implementors of client applications SHOULD give consideration to the
effects that a client's use of resources may have on the network (both
local and non-local), and design clients to act responsibly within any
network they participate in.  Some intermediaries and servers are known
to limit the number of concurrent connections or the rate of requests.  An
excessive number of connections has also been known to cause issues on
congested shared networks.  In the past, HTTP has recommended a maximum
number of concurrent connections a client should make; however, this
limit has also caused problems in some applications.  It is also
believed that any recommendation on the number of concurrent connections
made now will not apply properly to all applications, and will become
obsolete with advances in technology."

This does not address my specific concern, which is to beat into implementors'
heads not to aggressively retry parallel connections where none will be permitted.

Do we therefore need some wording on how a client should detect such cases, and respond?  E.g. since there is no specific status code for a rejection based on the number of concurrent connections, a client can at best assume that this is occurring.   I agree hammering away would be problematic.

Should there be a new status code for this?  Otherwise we need some sort of back-off algorithm to reduce demand client-side.  I think that could cause other problems as well if there is no explicit way to signal this particular problem.
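To make the point concrete, here is a minimal sketch of the kind of client-side back-off under discussion.  It assumes the only signal available is a refused connection attempt (since, as noted above, no status code identifies "too many connections"); `open_connection` is a hypothetical callable standing in for the real transport code, and the delays are illustrative, not a recommendation.

```python
import random
import time

def try_open_connection(open_connection, max_attempts=4, base_delay=0.5):
    """Attempt to open an additional connection, backing off on failure.

    `open_connection` is a hypothetical callable that returns a
    connection object, or raises ConnectionRefusedError when the server
    (or an intermediary) rejects the attempt.
    """
    for attempt in range(max_attempts):
        try:
            return open_connection()
        except ConnectionRefusedError:
            # No explicit status code distinguishes "too many concurrent
            # connections", so the client can only assume that is the cause
            # and back off (with jitter) rather than hammer away.
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
    return None  # give up; caller should reuse existing connections instead
```

The exponential growth and jitter are standard choices to avoid synchronized retry storms; the open question in this thread is precisely that the client is guessing at the cause, which an explicit signal would remove.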



Is it worthwhile to add the caveat:

Clients attempting to establish simultaneous connections SHOULD anticipate
that the server may reject excessive attempts to establish additional
connections, and gracefully degrade to passing all requests through the
successfully established connection(s), rather than retrying.
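The degradation behaviour in that caveat might look something like the sketch below: the client tries to grow its connection pool, but when the server refuses, it reuses whatever connections it already has instead of retrying.  `open_connection` and `send_on` are hypothetical stand-ins for real transport code, and the cap of 6 is just an illustrative default.

```python
import itertools

class ConnectionPool:
    """Sketch of a client-side pool that degrades gracefully.

    When the server rejects a new connection attempt, requests are
    routed over the already-established connections rather than the
    attempt being retried.
    """
    def __init__(self, open_connection, send_on, max_connections=6):
        self.open_connection = open_connection  # hypothetical: opens one connection
        self.send_on = send_on                  # hypothetical: sends a request on a connection
        self.max_connections = max_connections
        self.connections = []
        self._rr = itertools.count()            # round-robin counter

    def request(self, req):
        # Try to grow the pool only while under our own self-imposed cap.
        if len(self.connections) < self.max_connections:
            try:
                self.connections.append(self.open_connection())
            except ConnectionRefusedError:
                # Server said no: degrade gracefully by reusing what we
                # already have instead of retrying the connection attempt.
                pass
        if not self.connections:
            raise ConnectionRefusedError("no connection could be established")
        # Spread requests across whatever connections we do have.
        conn = self.connections[next(self._rr) % len(self.connections)]
        return self.send_on(conn, req)
```

The point of the sketch is the `except` branch: a refused attempt downgrades to reuse rather than triggering another attempt, which is exactly the behaviour the proposed caveat asks for.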

We appear to be in agreement, but addressing two different aspects of the same issue.

Adrien de Croy - WinGate Proxy Server - http://www.wingate.com