W3C home > Mailing lists > Public > www-talk@w3.org > November to December 1995

Re: two ideas...

From: Shel Kaphan <sjk@amazon.com>
Date: Thu, 7 Dec 1995 17:47:40 -0800
Message-Id: <199512080147.RAA20193@bert.amazon.com>
To: Jeffrey Mogul <mogul@pa.dec.com>
Cc: touch@ISI.EDU, www-talk@www0.cern.ch, www-speed@tipper.oit.unc.edu
Jeffrey Mogul writes:
	...

 > My intuition is that, at the moment, the primary contributor to
 > delay for the average web user (on a 14.4 or 28.8 modem) is the
 > long transmission time over the "tail circuit" dialup link.
 > Prefetching is therefore mostly a way of hiding this part of
 > the latency.  Since these links are typically private to a
 > given client, and are paid for by the minute, not by the packet,
 > it makes sense to try to use as much of their bandwidth as possible.
 > This assumes that the available bandwidth between ISPs is large
 > enough to cover N*28.8K bits/sec when N users are actively using
 > the Web, but presumably one would not prefetch infinitely into
 > the future; that is, there would still be plenty of "think time"
 > per active user.
 > 

There is a sense in which there is a valid issue here: to the extent
that prefetching uses more total bandwidth on the net, it may be
locally good for the individual user benefiting from it at the moment,
yet globally bad in that it makes the net as a whole busier.  If it
becomes a pervasively used technique, that added load could in turn
hurt even the individuals it was meant to benefit.  It's (potentially
anyway) a classic "tragedy of the commons".

The point is that it may be reasonable to include as a design goal of
a prefetching scheme that the total amount of bandwidth used should
decrease even locally, i.e. for the individual user.  Given a
sufficiently accurate prediction system, persistent connections, and
good caching, this seems possible (barely).
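To make the "barely possible" claim concrete, here is a toy expected-bytes model (entirely my own sketch, not from this thread; the constants, names, and probabilities are all assumptions).  Suppose each page's outgoing links have known request probabilities, links above a confidence threshold are prefetched, and persistent connections let prefetches ride the already-open connection so they incur no fresh per-connection overhead:

```python
# Toy model: expected bytes transferred for one navigation step,
# with and without prefetching.  All numbers are illustrative.

REQUEST_OVERHEAD = 400   # assumed bytes of handshake/headers per new connection
PAGE_SIZE = 10_000       # assumed bytes per page

def expected_bytes(link_probs, threshold):
    """Expected bytes with prefetching.

    link_probs: probability that the user's next request is each link.
    Links with probability >= threshold are prefetched over the
    already-open persistent connection (so no per-connection overhead);
    anything else is fetched on demand over a fresh connection.
    """
    prefetched = [p for p in link_probs if p >= threshold]
    miss_prob = sum(p for p in link_probs if p < threshold)
    # every prefetched page is transferred whether or not it is used
    total = len(prefetched) * PAGE_SIZE
    # a miss is fetched on demand, paying the new-connection overhead
    total += miss_prob * (PAGE_SIZE + REQUEST_OVERHEAD)
    return total

def baseline_bytes():
    """No prefetching: the one requested page, on a fresh connection."""
    return PAGE_SIZE + REQUEST_OVERHEAD
```

With one dominant link (say probabilities 0.97, 0.02, 0.01) and a high threshold, the expected total comes in slightly under the no-prefetch baseline: the connection overhead saved on the likely hit outweighs the occasional wasted prefetch.  Drop the threshold to zero (prefetch everything) and total bytes roughly triple, which is exactly the commons problem above.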

--Shel Kaphan
Received on Thursday, 7 December 1995 20:56:33 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 27 October 2010 18:14:18 GMT