Number of connections (was: Re: HTTP Working Group 'issues' list)

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Wed, 14 Feb 96 15:29:23 PST
Message-Id: <9602142329.AA11052@acetes.pa.dec.com>
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
    From: hardie@merlot.arc.nasa.gov (Ted Hardie)

    I notice in the issues list that the persistent connection subgroup
    agreed that the presence of persistent connections would not be
    used to limit connections to a specific small number.  If there is
    more information on that decision, I would like to see it, even in
    a rough form.

We basically agreed that (1) there would be no way to enforce
a limit on the number of connections, and (2) there was clear
evidence that the optimal number is greater than 1.  However,
we also agreed to strongly encourage clients to use the minimal
number of connections, and as much as possible to ensure that
the protocol made this practical.

    I have already seen complaints about Netscape's current behavior on
    several lists, including statements which implied webmasters might
    turn off persistent connections in the face of what they saw as
    "piggish" behavior.  If there is a valid reason for holding open
    the multiple connections, we should probably make it public as soon
    as practicable, or we may find that false beliefs about its
    interaction with multiple connections will slow the spread of
    persistent connections.

Here is why I think two connections (from a browser to any given
server) is probably optimal:

The main reason why Netscape uses multiple connections is to
allow early rendering of pages with inlined images.  Assuming
persistent connections, part of the problem can be solved by
careful use of byte-range requests.  The sequence of operations
done by the client would be something like this:

	Client					Server

	request HTML file
						transmit HTML response
	parse IMG tags to get image URLs
	use GET+Range: to request
		initial bytes of first N images	|
	use GET+Range: to request		| server starts responding
		rest of first N images		| anywhere in this interval
	use GET to request remaining images	|

Note that the only mandatory round-trip in this sequence is to
get the HTML file; the rest of the requests can all be pipelined.
While this does add N additional requests to the server's load,
it does that on a single connection.  This not only reduces the
number of TCP connections that the server must handle, but (and
FAR more important) it flow-controls all of the requests (using
the normal transport-level flow control mechanism) and so the
server is not forced to explicitly schedule a large number of
concurrent responses.

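The pipelined sequence above could be sketched as a single buffer of
requests written to one persistent connection.  This is an
illustrative sketch only (the helper name and the exact Range header
syntax are assumptions; the byte-range syntax was still in flux in
early 1996):

```python
def build_pipelined_requests(image_paths, head_bytes, host):
    """Build one buffer of pipelined requests for a single persistent
    connection: first the initial bytes of every image (so the browser
    can start rendering placeholders early), then the remainders."""
    parts = []
    for path in image_paths:                    # GET+Range: initial bytes
        parts.append(f"GET {path} HTTP/1.1\r\n"
                     f"Host: {host}\r\n"
                     f"Range: bytes=0-{head_bytes - 1}\r\n\r\n")
    for path in image_paths:                    # GET+Range: the rest
        parts.append(f"GET {path} HTTP/1.1\r\n"
                     f"Host: {host}\r\n"
                     f"Range: bytes={head_bytes}-\r\n\r\n")
    return "".join(parts).encode("ascii")
```

Because every request is in one buffer on one connection, the
transport's flow control throttles the whole batch at once.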
It's a little tricky to say what "N" should be.  One could set it
to the total number of images, or to some heuristically chosen
value; for example, if one is reloading a page that one has
already seen, the browser could assume that the image sizes
aren't going to change "much".
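One way to express that heuristic (a hypothetical sketch; the
fallback value of 4 is an arbitrary assumption, not anything the
subgroup agreed on):

```python
def choose_n(image_count, cached_sizes=None, default_n=4):
    """Pick N, the number of images whose initial bytes to request
    up front.  If the page has been seen before (image sizes cached),
    assume the sizes won't change "much" and cover every image;
    otherwise fall back to a small fixed guess."""
    if cached_sizes:
        return image_count
    return min(default_n, image_count)
```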

A naive view of this approach would say "oh, you only need one
connection for that."  The problem is that the HTML file might
be quite large, and it is considered useful to be able to start
rendering the first few pages long before the end of the HTML
file has arrived.  Therefore, it might make sense to open a
second connection for the image retrievals.

I think it is possible to demonstrate formally that two connections
suffice if you are willing to set N = total number of images,
but I'm not going to try to do that.  And this analysis ignores
fancy features such as "frames" and applets, since I don't really
understand how they launch connections.

The situation between a proxy and an origin server (or an inbound
proxy) is more complex, because a proxy may be multiplexing requests
from several clients to a single origin server.   In such a case,
it probably is optimal to have 2*M connections between a proxy
and a server, if M clients are simultaneously retrieving pages from
that server.  However, I would expect that M is rarely much larger
than one.   (Has anyone analyzed proxy logs to see how common this
overlapping actually is?)
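To answer that last question, one could estimate M per origin server
with a simple sweep-line over (origin, start, end) request intervals
extracted from a proxy log.  This is a hypothetical analysis sketch,
not results from any actual log:

```python
from collections import defaultdict

def peak_overlap(log):
    """Given (origin, start, end) request intervals from a proxy log,
    estimate M: the peak number of requests in flight simultaneously
    to each origin server.  Sweep over sorted interval endpoints; an
    end event at the same instant as a start counts as non-overlapping."""
    events = defaultdict(list)
    for origin, start, end in log:
        events[origin].append((start, 1))   # request begins
        events[origin].append((end, -1))    # request ends
    peaks = {}
    for origin, evs in events.items():
        current = peak = 0
        for _, delta in sorted(evs):        # (-1 sorts before +1 at ties)
            current += delta
            peak = max(peak, current)
        peaks[origin] = peak
    return peaks
```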

    From: dmk@allegra.att.com (Dave Kristol)

    IMO, the offensive behavior isn't that multiple connections are
    opened, but that they all send Connection: keepalive, and the
    client never closes them.  So eventually the server has to time
    them out and close them.  Netscape's browser could mitigate the
    damage by either
	- not sending Connection: keepalive if it knows there are no
		follow-up requests coming, or
	- closing any open connections when it knows it's done

I do not believe that it is "offensive behavior" for a client
(or proxy) to hold open a persistent connection once it has finished
using it.  In fact, the simulations I did for my SIGCOMM '95 paper
suggest the opposite; that there is a definite benefit in keeping
connections hanging around for a long time, on the order of a few
tens of minutes, because it's highly likely that a user will make
another request within that period.  This increases the number of
requests per connection, which decreases the total amount of
connection-related overhead for a given request load.

Of course, this should not be done if it means failing to accept
requests from new clients, so the server needs to close idle
connections early enough to maintain a small pool of "free"
connection resources.  (Note that most BSD-based systems don't
have a hard upper limit on the number of open TCP connections,
aside from that set by kernel RAM requirements.  This may or
may not be true for other operating systems.)  But my simulations
showed that even on a moderately busy server, this was not a
significant problem.

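That server-side policy could be sketched roughly as follows.  The
timeout of ten minutes follows the simulation result above; the pool
sizes and the dict representation of a connection are illustrative
assumptions:

```python
def reap_idle(connections, now, idle_timeout=600.0,
              max_conns=256, min_free=32):
    """Let idle persistent connections linger (ten-minute timeout),
    but always maintain a small pool of free connection slots by
    evicting the longest-idle connections first.  Each connection is
    represented as a dict with a 'last_used' timestamp."""
    # Drop anything idle longer than the timeout.
    keep = [c for c in connections if now - c["last_used"] <= idle_timeout]
    # If free slots are still scarce, evict longest-idle first.
    keep.sort(key=lambda c: c["last_used"])
    while max_conns - len(keep) < min_free and keep:
        keep.pop(0)
    return keep
```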
What would be antisocial behavior would be to open far more
TCP connections than is necessary to get the job done effectively.
I think this is a problem whether they are opened serially or
in parallel, since the long-term costs are essentially the same.

Bottom line:
	clients should open as few TCP connections as possible
	but then once one is opened, use it for as long as possible.

Received on Wednesday, 14 February 1996 15:38:39 UTC