
Re: Connection limits

From: David Morris <dwm@xpasc.com>
Date: Wed, 5 Mar 2008 00:30:19 -0800 (PST)
To: Mark Nottingham <mnot@yahoo-inc.com>
cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-ID: <Pine.LNX.4.33.0803050015250.22157-100000@egate.xpasc.com>


I think that in the context of where the web has evolved, we should move
this to the category of a recommendation. The nature of the beast I
observe is absurd numbers of objects retrieved to compose individual pages
(e.g., I recently counted 190+ objects on a single major news site page).

Given the 'two' rule, all it takes is two slow objects to seriously block
presentation of the page to the user. If I were the author of an end-user
user agent (aka browser), the motivation would be to get many more than
2 parallel retrievals going. Based on this paragraph, the alternative is
to use discrete connections??? Not really better. A resource-constrained
server can always opt to close connections, either on the basis of too many
from a given client or too many in total.
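The per-host cap under discussion is easy to picture in code. This is a hypothetical sketch (not from either message) of how a user agent might enforce the 2-connection rule with a per-host semaphore; the `fetch` wrapper, the `do_request` callback, and the limit constant are invented names for illustration:

```python
import threading
from collections import defaultdict

MAX_CONNECTIONS_PER_HOST = 2  # the RFC 2616 SHOULD-level limit

# One semaphore per host, created on first use; each permit is one
# simultaneous connection to that host.
_host_slots = defaultdict(lambda: threading.Semaphore(MAX_CONNECTIONS_PER_HOST))

def fetch(host, path, do_request):
    """Run do_request(host, path) while holding one of the host's slots.

    If both slots are held by slow responses, every further request to
    the same host waits here -- the page-blocking behaviour described
    above.
    """
    with _host_slots[host]:
        return do_request(host, path)
```

Raising MAX_CONNECTIONS_PER_HOST is the "more parallel retrievals" option; spreading resources across hostnames (as Mark notes below) sidesteps the cap because each host gets its own semaphore.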

These limits were established in the days when an end user was lucky to
have a 128 kbps connection to the Internet. With the latest 10 Mbps and
higher consumer-grade connections, and servers with many GB of memory,
defining limits in the HTTP protocol doesn't make sense to me.

Our mythical implementation guide could address the principles here
much more productively than the specification.

Dave Morris

On Wed, 5 Mar 2008, Mark Nottingham wrote:

>
> RFC2616 Section 8.1.4, Practical Considerations:
>
> > Clients that use persistent connections SHOULD limit the number of
> > simultaneous connections that they maintain to a given server. A
> > single-user client SHOULD NOT maintain more than 2 connections with
> > any server or proxy. A proxy SHOULD use up to 2*N connections to
> > another server or proxy, where N is the number of simultaneously
> > active users. These guidelines are intended to improve HTTP response
> > times and avoid congestion.
>
>
> I'm not sure I want to suggest that these limits should be changed,
> but I think it's worth a discussion. I've seen a number of cases where:
>     * resources have been spread across multiple servers, just to work
> around this limitation
>     * client-side frameworks have been designed to work around this
> limitation by batching multiple representations together (removing the
> possibility of caching)
>     * because of low adoption of pipelining, two slow responses have
> blocked an entire Web page
>     * servers purposefully answer HTTP/1.0, because some clients will
> use four connections with them
>
> Also, considering the wider availability of event-driven Web servers
> and intermediaries, resource limitations on servers aren't necessarily
> the problem they once were.
>
> What do people think? Should we consider upping this to 4? There's
> also the possibility of negotiating a higher number of connections hop-
> by-hop (but that would be outside of HTTPBIS).
>
> Cheers,
>
>
> --
> Mark Nottingham       mnot@yahoo-inc.com
>
>
Received on Wednesday, 5 March 2008 08:30:40 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 27 April 2012 06:50:37 GMT