W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > May to August 1995

Re: Where to estimate "link speed", and why not

From: Jeffrey C. Sedayao <sedayao@argus.intel.com>
Date: Thu, 11 May 95 0:50:18 PDT
Message-Id: <9505110750.AA19142@argus.intel.com>
To: Jeffrey Mogul <mogul@pa.dec.com>
Cc: masinter@parc.xerox.com, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
>     As for a client-supplied link-speed estimate, I'll just point out
>     that this is something that the server may be in a better position
>     to estimate than the client. First, it may be that the server's end of
>     the connection is the performance bottleneck (the client is on a T1
>     and the server is at the end of a 14.4). Since server load is often
>     the determining factor in response time, the server is usually in a
>     better position to judge 'how fast' the request will be satisfied.
>     The server may have more history of the response time for other
>     clients from the same domain.
> I'm not sure I agree.  I think part of the problem is that we are
> discussing "link speed", as if the term really meant anything.  In
> fact, what we really care about is response time.
I don't agree either.  Response time is what's critical.  At Intel, we have had
a number of people complain that WWW performance is FASTER over a 14.4
dial-up line at home than going through a proxy server over one of our 
corporate Internet connections (T1 lines).  It's not surprising if you 
consider all of the delay that a proxy server can impose while it is
spawning processes, looking through caches, doing at least two
more DNS lookups, etc.  A little math shows that for small objects,
a few seconds of fixed delay can easily drag the effective throughput
of a T1 below that of a 14.4 dial-up.
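To make that arithmetic concrete, here is a small sketch.  The numbers are illustrative assumptions, not measurements from this thread: a ~3-second proxy overhead on the T1 path, ~0.5 seconds of latency on the dial-up path, and a 2 KB object.

```python
def effective_throughput(size_bytes, fixed_delay_s, link_bps):
    """Bytes/second actually seen by the user for one small fetch."""
    transfer_s = size_bytes * 8 / link_bps   # time on the wire
    return size_bytes / (fixed_delay_s + transfer_s)

obj = 2048  # a small HTML page

# Assumed, illustrative delays: the proxy path adds ~3 s of
# process-spawning, cache-search, and DNS overhead; the direct
# dial-up path only ~0.5 s.
via_proxy_t1 = effective_throughput(obj, 3.0, 1_544_000)
via_modem    = effective_throughput(obj, 0.5, 14_400)

print(round(via_proxy_t1), round(via_modem))  # 680 1250
```

With these (assumed) delays the T1 path delivers roughly half the effective throughput of the modem for a small object, because the fixed delay swamps the transfer time.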

> Link speed does affect response time, but so does a lot of other
> stuff.  For example, link congestion, link error rate, and server
> loading.  And, since the Internet seems to have a diameter of
> ca. 10-15 hops, and my recent experience suggests that it is often
> the internal "high-bandwidth" links that are the most flakey, I
> do not believe that it is possible to calculate response time from
> the characteristics of a few links in the path, any more than it
> is possible to predict the weather in Peoria by observing butterfly
> flight patterns in Palo Alto (well, perhaps I exaggerate).
> I've also noticed that the path taken from our Internet demarcation
> point to any specific Internet site varies quite frequently.
> So I would argue that "link speed" is a pointless concept.  What we
> should really measure is "recent response time", and it should be
> measured *by the client* because only this way can one factor in
> both the full network behavior and the server's behavior.  I do
> not believe that the server can easily map its notion of "load"
> to the actual perceived response time, since (1) the server's load
> may be hard to measure, (2) the server application may not see the
> queueing delays required to open connections, send packets, etc.
That's an interesting proposition.  It might get thrown off by
initial object retrieval, though.  The first time a server gets a
request from a client, it may go through a series of DNS lookups that it
won't repeat on subsequent requests once it has cached the result.  Note
that doing both reverse and forward DNS mappings can mean a whole
mess of round trips between the server's and the client's networks.  Then
again, some servers save time by not doing any DNS queries at all.
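The first-request penalty is essentially a cache miss.  A toy resolver makes the point (the class, names, and stubbed lookup are mine, purely for illustration): only the first request from a given client pays the DNS round trips.

```python
class CachingResolver:
    """Toy sketch: the first lookup for a name pays the DNS round
    trips; repeat lookups are served from the cache."""
    def __init__(self, lookup_fn):
        self.lookup_fn = lookup_fn   # the slow, network-bound resolver
        self.cache = {}
        self.misses = 0              # each miss costs real round trips

    def resolve(self, name):
        if name not in self.cache:
            self.misses += 1
            self.cache[name] = self.lookup_fn(name)
        return self.cache[name]

# A stub standing in for a real (reverse + forward) DNS query.
resolver = CachingResolver(lambda name: "192.0.2.1")
resolver.resolve("client.example.com")   # slow: goes to the network
resolver.resolve("client.example.com")   # fast: cache hit
print(resolver.misses)  # 1
```

So a client measuring "recent response time" would see an unrepresentatively slow first request to any server that does such lookups.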

> For example, my browser could keep a recent history of (object size,
> response time, server) measurements.  One would probably want to
> apply some filtering to the results (perhaps estimating mean and
> variance), but the goal would be to try to predict how to set
> the "mxb" value (or perhaps a more complex indicator of preferences).
One could verify the usefulness of response-time history by correlating the
response time of a particular request with that of previous requests.  Has
anyone done this on a large scale (or would anyone be willing to)?
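As a sketch of the kind of client-side filtering Jeff describes above (the class name, the window size, the time budget, and the one-standard-deviation discount are all my assumptions, not anything proposed in this thread):

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class ResponseHistory:
    """Per-server window of observed throughput, used to pick 'mxb'."""
    def __init__(self, window=20):
        self.samples = defaultdict(lambda: deque(maxlen=window))

    def record(self, server, size_bytes, response_s):
        # keep throughput (bytes/second) rather than raw response time,
        # so samples for different object sizes are comparable
        self.samples[server].append(size_bytes / response_s)

    def suggest_mxb(self, server, budget_s=5.0, default_bps=1800):
        """Largest object we expect to fetch within budget_s seconds."""
        s = self.samples[server]
        if len(s) < 2:
            return int(default_bps * budget_s)   # conservative default
        # discount one standard deviation to hedge against variance
        est = max(mean(s) - stdev(s), min(s))
        return int(est * budget_s)

hist = ResponseHistory()
hist.record("www.example.com", 10_000, 1.0)   # 10 kB/s observed
hist.record("www.example.com", 24_000, 2.0)   # 12 kB/s observed
print(hist.suggest_mxb("www.example.com"))    # 50000
```

For servers with no history the sketch falls back to a low default, matching the "be conservative about the front page" suggestion below.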

> Since the client will not have specific estimation information about
> a server it has never used before (or not used recently), this means
> that the technique will not work as well for the "front page" of a
> server.  Clients could be conservative in this case (e.g., always
> sending a low value for "mxb" in the absence of specific information).
> But another implication is that Web site designers should perhaps engineer
> the front page for low latency rather than high glitz.
That's a good suggestion, although with image caching, putting images
on the first page that are repeated throughout a site can save
time later.

> Clearly, a client that is connected by a 14.4 tail circuit is going
> to have a different default policy than one on a T3 link.  But I would
> imagine that this could be "learned" very quickly by the browser software,
> and no user would ever have to configure a specific link-speed value.
> One more thing against putting the onus on the server: it doesn't scale
> well.  Clients (in the aggregate) have far more CPU power and memory;
> let them do the job.

I definitely agree with this statement.
> -Jeff
Jeff Sedayao
Intel Corporation
Received on Thursday, 11 May 1995 17:42:45 UTC
