W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2008

Re: Connection limits

From: Jamie Lokier <jamie@shareable.org>
Date: Thu, 6 Mar 2008 00:14:42 +0000
To: David Morris <dwm@xpasc.com>
Cc: Mark Nottingham <mnot@yahoo-inc.com>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-ID: <20080306001442.GE1022@shareable.org>

David Morris wrote:
> I think that in the context of where the web has evolved, we should move
> this to the category of a recommendation. The nature of the beast I
> observe is absurd numbers of objects retrieved to compose individual pages
> (e.g., I recently counted 190+ objects on a single major news site page).
> 
> Given the 'two' rule, all it takes is two slow objects to seriously block
> presentation of the page to the user. If I were the author of an end-user
> agent (aka browser), the motivation would be to get many more than
> 2 parallel retrievals going. Based on this paragraph, the alternative is
> to use discrete connections??? Not really better. A resource-constrained
> server can always opt to close connections, either on the basis of too many
> from a given client or too many in total.
> 
> These limits were established in the days when an end user was lucky to
> have a 128 kbps connection to the internet. With the latest 10 Mbps and
> higher consumer-grade connections, and servers with many GB of memory,
> defining limits in the HTTP protocol doesn't make sense to me.
> 
> Our mythical implementation guide could address the principles here
> much more productively than the specification.

From a network efficiency point of view, the optimal way to send 190+
objects is to multiplex them into a single TCP/IP connection.  That's
better for lots and lots of reasons, affecting latency, bandwidth, and
interaction with TCP's backoff heuristics.

But, in HTTP, if an object takes time to generate, it blocks the
delivery of every other object queued behind it on the same connection
(head-of-line blocking).  This does happen.
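As a toy illustration of that blocking (the function name and timings are
mine, not from any spec): with in-order delivery on one connection, one
slow object delays every object queued behind it, even if those are ready
almost immediately:

```python
# Hypothetical sketch: head-of-line blocking with in-order delivery.
# Responses go out in request order on one connection, so one slow
# object delays everything queued behind it.

def delivery_times(generation_times):
    """Time each response reaches the client when responses are
    serialized in request order (illustration only)."""
    finished = 0.0
    out = []
    for t in generation_times:
        # A response can't be sent before it is generated, nor before
        # everything ahead of it in the queue has been sent.
        finished = max(finished, t)
        out.append(finished)
    return out

# Object 1 takes 5s to generate; objects 2-4 are ready at 0.1s
# but still arrive no earlier than 5s:
print(delivery_times([5.0, 0.1, 0.1, 0.1]))  # [5.0, 5.0, 5.0, 5.0]
```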

From an efficiency and responsiveness POV, I suspect the best network
performance on all axes is to multiplex responses in any order onto
few or one stream(s), as they are generated at the server, including
splitting chunks (partial message multiplexing).  Same with requests,
in the case of large requests.  And with appropriate heuristics at the
endpoint, not just random order.
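A rough sketch of what such partial-message multiplexing might look like;
the framing (stream-id, chunk) pairs, the round-robin scheduling, and the
function name are all hypothetical, just to make the idea concrete:

```python
# Hypothetical sketch of partial-message multiplexing: each response is
# split into chunks, tagged with a stream id, and interleaved
# round-robin onto one connection, so no single stream stalls the rest.

from collections import deque

def multiplex(responses, chunk_size=4):
    """Yield (stream_id, chunk) frames round-robin (assumed framing)."""
    queues = deque(
        (sid, deque(body[i:i + chunk_size]
                    for i in range(0, len(body), chunk_size)))
        for sid, body in responses.items()
    )
    while queues:
        sid, chunks = queues.popleft()
        yield sid, chunks.popleft()
        if chunks:                      # stream still has data:
            queues.append((sid, chunks))  # rotate it to the back

frames = list(multiplex({1: b"AAAAAAAA", 2: b"BB"}))
# Stream 2 completes after its single frame; stream 1 keeps going:
print(frames)  # [(1, b'AAAA'), (2, b'BB'), (1, b'AAAA')]
```

A real implementation would of course replace the round-robin with the
endpoint heuristics mentioned above (priorities, readiness), but the
framing idea is the same.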

If we ever develop a multiplexing HTTP protocol (e.g. see SCTP and/or
BEEP), then one connection might be enough and optimal for most
applications. :-)

-- Jamie
Received on Thursday, 6 March 2008 00:14:56 GMT
