W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > September to December 1995

Re: Keep-Alive Notes

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Mon, 16 Oct 95 11:45:11 MDT
Message-Id: <9510161845.AA19772@acetes.pa.dec.com>
To: "David W. Morris" <dwm@shell.portal.com>
Cc: http working group <http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com>
David Morris writes:
    As I recall, Jeffrey Mogul's data was captured at the server and
    roughly represents client-to-server access. I would postulate
    that client-proxy connections will have a very different profile
    in that they will almost always be a long running series of
    requests from the client to the proxy. Do we have any data to
    look at this part of the protocol's implications?

I have no direct data.  However, consider this line of reasoning:
I suspect that client-proxy interactions will show about the same
short-term locality as client-server interactions (since most users
stick to one server at a time, the short-term patterns should be
about the same).  Longer-term, the client-proxy interactions should
show more locality than client-server interactions I observed, since
the client has fewer proxies to choose from than it has servers.

My simulations showed that servers can probably do a good job of
capturing locality of reference using LRU management of connections.
If proxies have even better locality, then LRU should work for them
as well.
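The LRU policy mentioned above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual server code: the server keeps at most `capacity` persistent connections open, and when a new client arrives while the table is full, it closes the connection that has gone longest without activity.

```python
from collections import OrderedDict

class LRUConnectionTable:
    """Sketch of LRU management of persistent connections: keep at
    most `capacity` connections open; when a new client arrives and
    the table is full, evict the least recently used connection."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.conns = OrderedDict()  # client -> connection object

    def touch(self, client, conn=None):
        """Record activity from `client`.  Returns the client whose
        connection was evicted to make room, or None."""
        evicted = None
        if client in self.conns:
            self.conns.move_to_end(client)  # now most recently used
        else:
            if len(self.conns) >= self.capacity:
                # Pop the oldest entry (front of the OrderedDict).
                evicted, _ = self.conns.popitem(last=False)
            self.conns[client] = conn
        return evicted

table = LRUConnectionTable(capacity=2)
table.touch("client-a")
table.touch("client-b")
table.touch("client-a")                  # "a" becomes most recent
evicted = table.touch("client-c")        # table full: "b" is evicted
print(evicted)                           # client-b
```

If proxy traffic really does show better locality than raw client traffic, this eviction rule would close idle connections only rarely, which is the property the simulations suggest.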

    My understanding thus far is that client-proxy and proxy-server
    connections are independent of each other in that an efficient
    proxy might keep a long connection with each client and use
    a pool of long connections to servers to satisfy the client's
    requests.  In particular, there is no requirement that a
    given proxy-server connection be used for a single client.
    This raises some interesting issues:
    1.  Serving from a shared connection may/will have some access
        control issues, similar but not identical to those of caching.
    2.  [per-connection state info]

I would add:
    3.  Flow-control and early termination of responses could be harder.

This is only a concern if multiple clients of the same proxy are
trying to use the same server at more or less the same time.
I have some data to suggest that heavily-used proxies do present
mixed-client requests to busy servers, so it's worth considering
this issue.

One approach might be to simply use multiple proxy-server connections
to handle multiple client-server sessions.  That is, the proxy would
not mingle multiple client requests on the same TCP connection.  Would
this reduce the advantages of persistent connections?

It would increase the number of active TCP connections per server and
per proxy, but probably not by a lot (except for very busy servers and
very busy proxies).  It could cause some inter-connection competition
for the bandwidth between proxy and server.
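One way to picture the "no mingling" approach is a proxy whose upstream connection pool is keyed on the (client, server) pair rather than on the server alone. This is a minimal sketch under assumed names (`Proxy`, `connection_for` are hypothetical), ignoring real TCP setup, timeouts, and the LRU eviction discussed earlier:

```python
class Proxy:
    """Toy proxy that never shares one proxy-server connection
    between two clients: the pool key is (client, server), so each
    client-server session gets its own upstream connection."""

    def __init__(self):
        self.upstream = {}  # (client, server) -> connection

    def connection_for(self, client, server):
        key = (client, server)
        if key not in self.upstream:
            # Stand-in for opening a real TCP connection.
            self.upstream[key] = f"conn<{client}->{server}>"
        return self.upstream[key]

proxy = Proxy()
c1 = proxy.connection_for("client-a", "server-x")
c2 = proxy.connection_for("client-b", "server-x")
assert c1 != c2    # same server, but two upstream connections
assert c1 is proxy.connection_for("client-a", "server-x")  # reused
```

The cost is visible in the pool size: two clients talking to one busy server consume two connections instead of one, which is exactly the per-server and per-proxy growth described above.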

On the other hand, if a proxy does mingle requests from multiple
clients on the same connection, then it becomes much harder to
avoid retransmitting access control and state information with
each request.  In the worst case, the requests would alternate
between clients, so every request would force a retransmission of
the access control and state information.

What might be even harder to deal with is "what happens when
client A and client B are sharing the same connection, and both
start retrievals at the same time?"  In HTTP-NG, we might
have a means of interleaving the retrievals, but in HTTP 1.x,
they will be done serially, and one of the clients is going to
lose.  That is especially bad if the "winning" retrieval is
very long and the losing one is short.  There's probably some
basic result in queueing theory that says this is a bad idea,
but I'm pretty ignorant of queueing theory.
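The queueing-theory result being gestured at is essentially that serving the shortest job first minimizes mean completion time. A toy calculation with made-up numbers (a 10 MB and a 2 KB response over an assumed 100 KB/s link) shows how badly the short retrieval fares when the long one wins the connection:

```python
LINK_KBPS = 100            # assumed link speed, KB per second
LONG_KB, SHORT_KB = 10_000, 2   # hypothetical response sizes in KB

def mean_completion(sizes_kb):
    """Mean time for all responses to finish when they are served
    strictly serially on one connection, in the given order."""
    t, completions = 0.0, []
    for size in sizes_kb:
        t += size / LINK_KBPS
        completions.append(t)
    return sum(completions) / len(completions)

mean_short_first = mean_completion([SHORT_KB, LONG_KB])
mean_long_first = mean_completion([LONG_KB, SHORT_KB])
print(mean_short_first)   # short job finishes at 0.02 s, long at 100.02 s
print(mean_long_first)    # long job finishes at 100 s, short at 100.02 s
```

In the long-first order the short retrieval waits the full 100 seconds for a 0.02-second transfer, and the mean completion time roughly doubles; since neither the proxy nor HTTP 1.x can reorder or interleave mid-response, whichever client loses the race simply stalls.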

So, even though one of our papers suggested that combining
multiple clients onto one proxy-server connection would be
more efficient, I'd now recommend against implementing this
in HTTP 1.x.

Received on Monday, 16 October 1995 12:04:03 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 14:40:15 UTC