RE: Connection limits

I would agree.

The connection limit, in addition to the bandwidth limitations of the time, also helped the servers themselves.  Back then, maintaining many simultaneous but short-lived TCP connections was inefficient for many operating systems' TCP stacks.  By switching to fewer, longer-standing connections, together with the hoped-for use of pipelining, we thought we'd address that.
Nowadays, with stacks much better tuned to handle these kinds of connections, and with pipelining never having materialized, I think this could be relaxed a bit.
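
As a rough back-of-the-envelope illustration (Python, with figures picked
out of the air rather than measured), the difference in connection churn
looks something like this:

# Sketch of server-side connection churn under one-connection-per-object
# versus capped persistent connections; every figure here is assumed.

OBJECTS_PER_PAGE = 50      # assumed objects fetched per page view
PAGE_VIEWS_PER_SEC = 100   # assumed server load
TIME_WAIT_SECS = 60        # rough TIME_WAIT hold time on the closing end

# HTTP/1.0 style: a fresh TCP connection (handshake + teardown) per object.
short_lived_per_sec = OBJECTS_PER_PAGE * PAGE_VIEWS_PER_SEC

# Persistent connections capped at 2 per client: two handshakes per page,
# with every object riding on those two open connections.
persistent_per_sec = 2 * PAGE_VIEWS_PER_SEC

for label, rate in (("short-lived", short_lived_per_sec),
                    ("persistent ", persistent_per_sec)):
    lingering = rate * TIME_WAIT_SECS  # sockets in TIME_WAIT at steady state
    print(f"{label}: {rate} handshakes/sec, ~{lingering} sockets in TIME_WAIT")

Even with generous assumptions, that's well over an order of magnitude less
handshake and teardown work for the server's stack.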

(speaking for myself, not an official company position)


-----Original Message-----
From: ietf-http-wg-request@w3.org [mailto:ietf-http-wg-request@w3.org] On Behalf Of David Morris
Sent: Wednesday, March 05, 2008 12:30 AM
To: Mark Nottingham
Cc: ietf-http-wg@w3.org Group
Subject: Re: Connection limits



I think that in the context of where the web has evolved, we should move
this to the category of a recommendation. The nature of the beast I
observe is absurd numbers of objects retrieved to compose individual pages
(e.g., I recently counted 190+ objects on a single major news site page).

Given the 'two' rule, all it takes is two slow objects to seriously block
presentation of the page to the user. If I were the author of an end-user
user agent (aka browser), the motivation would be to get many more than
2 parallel retrievals going. Based on this paragraph, the alternative is
to use discrete connections??? Not really better. A resource-constrained
server can always opt to close connections, either on the basis of too many
from a given client or too many total.
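
To put rough numbers on that (purely illustrative; no pipelining, made-up
response times):

import heapq

# Illustration with assumed numbers: how long the rest of a page is held up
# behind two slow responses, for different per-server connection caps.

def finish_times(num_connections, response_times):
    # Greedy, in-order scheduling: one outstanding request per connection,
    # no pipelining; each request goes to whichever connection frees first.
    conns = [0.0] * num_connections
    heapq.heapify(conns)
    done = []
    for t in response_times:
        start = heapq.heappop(conns)
        done.append(start + t)
        heapq.heappush(conns, start + t)
    return done

# Two slow responses (5s each) requested first, then 30 quick ones (0.1s).
page = [5.0, 5.0] + [0.1] * 30

for n in (2, 4, 8):
    quick_done = max(finish_times(n, page)[2:])
    print(f"{n} connections: other 30 objects finish after ~{quick_done:.1f}s")

With the 'two' rule the whole page sits behind the slow pair (~6.5s here);
with a few more connections the rest of the page is in well before they
finish.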

These limits were established in the days when an end user was lucky to
have a 128kbps connection to the internet. With the latest 10mbps and
higher consumer-grade connections and servers with many GB of memory,
defining limits in the HTTP protocol doesn't make sense to me.

Our mythical implementation guide could address the principles here
much more productively than the specification.

Dave Morris

On Wed, 5 Mar 2008, Mark Nottingham wrote:

>
> RFC2616 Section 8.1.4, Practical Considerations:
>
> > Clients that use persistent connections SHOULD limit the number of
> > simultaneous connections that they maintain to a given server. A
> > single-user client SHOULD NOT maintain more than 2 connections with
> > any server or proxy. A proxy SHOULD use up to 2*N connections to
> > another server or proxy, where N is the number of simultaneously
> > active users. These guidelines are intended to improve HTTP response
> > times and avoid congestion.
>
>
> I'm not sure I want to suggest that these limits should be changed,
> but I think it's worth a discussion. I've seen a number of cases where:
>     * resources have been spread across multiple servers, just to work
> around this limitation
>     * client-side frameworks have been designed to work around this
> limitation by batching multiple representations together (removing the
> possibility of caching)
>     * because of low adoption of pipelining, two slow responses have
> blocked an entire Web page
>     * servers purposefully answer HTTP/1.0, because some clients will
> use four connections with them
>
> Also, considering the wider availability of event-driven Web servers
> and intermediaries, resource limitations on servers aren't necessarily
> the problem they once were.
>
> What do people think? Should we consider upping this to 4? There's
> also the possibility of negotiating a higher number of connections hop-
> by-hop (but that would be outside of HTTPBIS).
>
> Cheers,
>
>
> --
> Mark Nottingham       mnot@yahoo-inc.com
>
>

Received on Wednesday, 5 March 2008 12:53:03 UTC