
Re: #131: Connection limits (proposal)

From: Adrien de Croy <adrien@qbik.com>
Date: Tue, 20 Oct 2009 09:46:34 +1300
Message-ID: <4ADCD02A.6080801@qbik.com>
To: Jim Gettys <jg@freedesktop.org>
CC: "William A. Rowe, Jr." <wrowe@rowe-clan.net>, Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>


Jim Gettys wrote:
> Adrien de Croy wrote:
>> as discussed previously, I'm not convinced that concurrent 
>> connections is the (only?) thing that should be limited, or that this 
>> should be the responsibility of the client (rather than the server or 
>> an intermediary).
>>
>> Server resources are becoming cheaper every day (memory / disk / 
>> CPU).  So I believe the best place to restrict usage is at the 
>> server, where the operator then has a choice about how much service 
>> will be provided.  Putting a responsibility on the client takes away 
>> this choice from the server operator.
>>
>> If you made cellphones, and decided to restrict the number of calls 
>> it would make in any 1 day, do you think you'd find a telco that 
>> would sell it?
>
> This analogy is badly flawed.
>
I disagree.  We're discussing implementing voluntary restrictions in a 
client to ease the load on a service.

Anything done on the client side is voluntary.  Only the 
server/intermediary can enforce anything.  Basic security principles 
state you should never _rely_ on the client to behave.

My point was that if we decide on an arbitrary limit (and any limit 
would necessarily be arbitrary) on connections made by clients, and get 
co-operating vendors to implement it, then we take away the service 
providers' choice about what level of service they can provide to a 
client.  If I wrote a service that relied on a client opening many 
connections, I'd be hosed.  I'd also be stupid for relying on it, but 
we have set arbitrary limits in the past, and they caused problems.  
Any number we pick now will likely just be a problem later.  The limit 
of 2 has been in the spec for 13 years, but AFAIK it wasn't widely 
implemented for quite a while (and when it was, it caused problems).  
Can you guarantee any number you pick will still be valid 13 years 
from now?
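For what it's worth, the kind of voluntary cap we're discussing is 
trivial for a client to express; a minimal sketch below (the class name 
and the cap of 4 are illustrative assumptions of mine, not values 
anyone has proposed, and that arbitrariness is exactly the problem):

```python
import threading
from collections import defaultdict

class HostConnectionLimiter:
    """Voluntary client-side cap on concurrent connections per host.

    Purely advisory: nothing stops a client from skipping this entirely,
    which is why enforcement belongs at the server/intermediary.
    """

    def __init__(self, max_per_host=4):
        # 4 is an arbitrary illustration; any fixed number ages badly.
        self.max_per_host = max_per_host
        self._lock = threading.Lock()
        self._counts = defaultdict(int)

    def try_acquire(self, host):
        """Return True if a new connection to `host` is within the cap."""
        with self._lock:
            if self._counts[host] >= self.max_per_host:
                return False
            self._counts[host] += 1
            return True

    def release(self, host):
        """Record that one connection to `host` has closed."""
        with self._lock:
            if self._counts[host] > 0:
                self._counts[host] -= 1
```

The client would call try_acquire() before opening a socket and 
release() when it closes; whatever number goes in max_per_host is the 
arbitrary choice being debated.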

> More connections does not result in higher performance.
Agreed, and I believe this provides sufficient incentive for bona fide 
client vendors not to abuse concurrency.  If you sell a client and all 
it does is create problems for your users, you won't sell much.

>
> And such a client can cause congestion, as TCP's congestion control 
> does not detect multiple connections from the same client, degrading 
> not only the user's client, but others as well.
Is this on the backbone, or just a single stack?  Or are you talking 
about elevated backbone priority for SYN packets, etc.?

>
> It is much as if someone opened 50 cell phones all at once and dialed 
> at the same time: you can be sure that your service (and others) would 
> be degraded badly.
As a telco, I'd be happy to charge the user for that, and if demand 
outstripped supply, I would have a choice: either extend capacity and 
earn more, or treat it as a DoS attack.  I believe that choice should 
be the telco's.

So in the HTTP world, servers already need to be able to limit supply, 
if only to cater for many individual clients.  Since such a limiting 
capability is required anyway, it can also cope with individual clients 
doing things the server operator decides should not be allowed.  But 
putting an arbitrary limit in the client takes that choice away from 
the server operator, which removes their opportunity to provide more 
service should they choose to, and could limit their opportunities for 
commercial growth.
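A sketch of what that server-side choice might look like (the names, 
the refusal behaviour, and the default limit are illustrative 
assumptions of mine, not anything the spec mandates; the point is only 
that the limit lives where the operator can change it):

```python
from collections import defaultdict

class PerClientConnectionPolicy:
    """Server-side enforcement sketch: the operator, not the client,
    decides how many concurrent connections a client address may hold,
    and can change that decision at any time."""

    def __init__(self, limit=8):
        self.limit = limit            # operator-tunable, even at runtime
        self._active = defaultdict(int)

    def on_connect(self, client_addr):
        """Return True to accept the connection, False to refuse it."""
        if self._active[client_addr] >= self.limit:
            # Operator's choice: refuse here, but they could equally
            # queue, throttle, or raise self.limit and sell more service.
            return False
        self._active[client_addr] += 1
        return True

    def on_disconnect(self, client_addr):
        """Record that one connection from `client_addr` has closed."""
        if self._active[client_addr] > 0:
            self._active[client_addr] -= 1
```

Because the limit is the server's, raising it to serve a 
many-connection client is a one-line operator decision rather than a 
protocol change.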

Adrien

>                     - Jim
>
>>
>> Furthermore, putting restrictions into the protocol makes them apply 
>> regardless of the application.  For instance, an in-house client-server 
>> system using HTTP may have no need or desire for any sort of limit.
>>
>> I think therefore the best we can do is encourage implementors to act 
>> responsibly, and consider other users of the network (where there are 
>> any).
>>
>> So, what about something like:
>>
>> "Implementors of client applications SHOULD give consideration to 
>> the effects that a client's use of resources may have on the network 
>> (both local and non-local), and design clients to act responsibly 
>> within any network they participate in.  Some intermediaries and 
>> servers are known to limit the number of concurrent connections or 
>> the rate of requests.  An excessive number of connections has also 
>> been known to cause problems on congested shared networks.  In the 
>> past, HTTP recommended a maximum number of concurrent connections a 
>> client should make; however, this limit also caused problems in some 
>> applications.  It is also believed that any recommendation on the 
>> number of concurrent connections made now will not apply properly to 
>> all applications, and will become obsolete with advances in 
>> technology."
>>
>> This then potentially covers any resource that should be managed: 
>> not just connections, but perhaps also bandwidth, cache space on an 
>> intermediary, and so on.
>>
>> Regards
>>
>> Adrien.
>>
>>
>> William A. Rowe, Jr. wrote:
>>> Mark Nottingham wrote:
>>>  
>>>> <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/131>
>>>>
>>>> NEW:
>>>>
>>>> """
>>>> Clients (including proxies) SHOULD limit the number of simultaneous
>>>> connections that they maintain to a given server (including proxies).
>>>>
>>>> Previous revisions of HTTP gave a specific number of connections as a
>>>> ceiling, but this was found to be impractical for many 
>>>> applications. As
>>>> a result, this specification does not mandate a particular maximum
>>>> number of connections, but instead encourages clients to be 
>>>> conservative
>>>> when opening multiple connections.
>>>>
>>>> In particular, while using multiple connections avoids the 
>>>> "head-of-line
>>>> blocking" problem (whereby a request that takes significant 
>>>> server-side
>>>> processing and/or has a large payload can block subsequent requests on
>>>> the same connection), each connection used consumes server resources
>>>> (sometimes significantly), and furthermore using multiple connections
>>>> can cause undesirable side effects in congested networks.
>>>> """
>>>>     
>>>
>>> Is it worthwhile to add the caveat:
>>>
>>> """
>>> Clients attempting to establish simultaneous connections SHOULD 
>>> anticipate that the server may reject excessive attempts to establish 
>>> additional connections, and gracefully degrade to passing all requests 
>>> through the successfully established connection(s), rather than 
>>> retrying.
>>> """
>>>
>>>   
>>
>> -- 
>> Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
>>
>
>

-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Monday, 19 October 2009 20:43:13 GMT
