
Where to estimate "link speed", and why not

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Wed, 10 May 95 13:13:42 MDT
Message-Id: <9505102013.AA03164@acetes.pa.dec.com>
To: Larry Masinter <masinter@parc.xerox.com>
Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
    As for a client-supplied link-speed estimate, I'll just point out
    that this is something that the server may be in a better position
    to estimate than the client. First, it may be that the server's end
    of the connection is the performance bottleneck (the client is on a T1
    and the server is at the end of a 14.4). Since server load is often
    the determining factor in response time, the server is usually in a
    better position to judge 'how fast' the request will be satisfied.
    The server may have more history of the response time for other
    clients from the same domain.

I'm not sure I agree.  I think part of the problem is that we are
discussing "link speed", as if the term really meant anything.  In
fact, what we really care about is response time.

Link speed does affect response time, but so does a lot of other
stuff.  For example, link congestion, link error rate, and server
loading.  And, since the Internet seems to have a diameter of
ca. 10-15 hops, and my recent experience suggests that it is often
the internal "high-bandwidth" links that are the most flakey, I
do not believe that it is possible to calculate response time from
the characteristics of a few links in the path, any more than it
is possible to predict the weather in Peoria by observing butterfly
flight patterns in Palo Alto (well, perhaps I exaggerate).

I've also noticed that the path taken from our Internet demarcation
point to any specific Internet site varies quite frequently.

So I would argue that "link speed" is a pointless concept.  What we
should really measure is "recent response time", and it should be
measured *by the client* because only this way can one factor in
both the full network behavior and the server's behavior.  I do
not believe that the server can easily map its notion of "load"
to the actual perceived response time, since (1) the server's load
may be hard to measure, and (2) the server application may not see
the queueing delays involved in opening connections, sending packets, etc.

For example, my browser could keep a recent history of (object size,
response time, server) measurements.  One would probably want to
apply some filtering to the results (perhaps estimating mean and
variance), but the goal would be to try to predict how to set
the "mxb" value (or perhaps a more complex indicator of preferences).
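One way a browser might keep such a history is an exponentially weighted
mean/variance filter over observed per-server throughput, discounted by the
variance so that flaky paths get a small byte budget.  This is only a sketch
of the idea in the paragraph above; the names (ResponseHistory, predict_mxb)
and the constants are illustrative assumptions, not anything from this post:

```python
ALPHA = 0.25                  # EWMA weight given to the newest sample
TARGET_SECONDS = 2.0          # response-time budget we are aiming for
CONSERVATIVE_MXB = 8 * 1024   # low "mxb" default when we have no history

class ResponseHistory:
    """Track recent (object size, response time) samples per server and
    predict a max-bytes ("mxb") value for the next request to it."""

    def __init__(self):
        # server -> (ewma throughput in bytes/s, ewma variance of throughput)
        self.stats = {}

    def record(self, server, size_bytes, elapsed_s):
        rate = size_bytes / max(elapsed_s, 1e-6)
        if server not in self.stats:
            self.stats[server] = (rate, 0.0)
            return
        mean, var = self.stats[server]
        delta = rate - mean
        mean += ALPHA * delta
        # standard EWMA variance update: old variance decays, new deviation enters
        var = (1 - ALPHA) * (var + ALPHA * delta * delta)
        self.stats[server] = (mean, var)

    def predict_mxb(self, server):
        if server not in self.stats:
            return CONSERVATIVE_MXB          # never seen it: be conservative
        mean, var = self.stats[server]
        # subtract one standard deviation so noisy paths get a smaller budget
        pessimistic_rate = max(mean - var ** 0.5, 1.0)
        return int(pessimistic_rate * TARGET_SECONDS)
```

A browser would call record() after each completed fetch and predict_mxb()
before issuing the next request to the same server.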

Since the client will not have specific estimation information about
a server it has never used before (or not used recently), this means
that the technique will not work as well for the "front page" of a
server.  Clients could be conservative in this case (e.g., always
sending a low value for "mxb" in the absence of specific information).
But another implication is that Web site designers should perhaps engineer
the front page for low latency rather than high glitz.

Clearly, a client that is connected by a 14.4 tail circuit is going
to have a different default policy than one on a T3 link.  But I would
imagine that this could be "learned" very quickly by the browser software,
and no user would ever have to configure a specific link-speed value.
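The "learned" default could be as simple as one aggregate throughput figure
maintained across all requests, used for servers with no specific history:
a 14.4 client and a T3 client would quickly converge to very different
defaults with no configuration.  Again a hypothetical sketch, with assumed
names (DefaultPolicy, default_mxb) and constants:

```python
AGG_ALPHA = 0.1        # slow-moving average across all servers
TARGET_SECONDS = 2.0   # same response-time budget as above

class DefaultPolicy:
    """Learn a browser-wide default "mxb" from all observed transfers."""

    def __init__(self, floor_bytes=4 * 1024):
        self.agg_rate = None       # bytes/s, learned across every request
        self.floor = floor_bytes   # never offer less than this

    def observe(self, size_bytes, elapsed_s):
        rate = size_bytes / max(elapsed_s, 1e-6)
        if self.agg_rate is None:
            self.agg_rate = rate
        else:
            self.agg_rate += AGG_ALPHA * (rate - self.agg_rate)

    def default_mxb(self):
        if self.agg_rate is None:
            return self.floor      # brand-new browser: smallest budget
        return max(self.floor, int(self.agg_rate * TARGET_SECONDS))
```

The floor keeps the very first requests conservative, matching the
"send a low value for mxb in the absence of specific information" policy
suggested earlier.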

One more thing against putting the onus on the server: it doesn't scale
well.  Clients (in the aggregate) have far more CPU power and memory;
let them do the job.

-Jeff
Received on Wednesday, 10 May 1995 13:17:48 EDT