Re: two ideas...

Jeffrey Mogul writes:
	...

 > My intuition is that, at the moment, the primary contributor to
 > delay for the average web user (on a 14.4 or 28.8 modem) is the
 > long transmission time over the "tail circuit" dialup link.
 > Prefetching is therefore mostly a way of hiding this part of
 > the latency.  Since these links are typically private to a
 > given client, and are paid for by the minute, not by the packet,
 > it makes sense to try to use as much of their bandwidth as possible.
 > This assumes that the available bandwidth between ISPs is large
 > enough to cover N*28.8K bits/sec when N users are actively using
 > the Web, but presumably one would not prefetch infinitely into
 > the future; that is, there would still be plenty of "think time"
 > per active user.
 > 

There is a sense in which there is a valid issue here: to the extent
that prefetching uses more total bandwidth on the net, it may be
locally good for the individual user benefiting from it at the
moment, but globally bad in that it makes the net as a whole busier.
If prefetching becomes a pervasively used technique, that extra load
could in turn end up hurting even the individuals it was supposed to
benefit.  It's (potentially, anyway) a classic "tragedy of the
commons".

The point is that it may be reasonable to include, as a design goal
of a prefetching scheme, that the total amount of bandwidth used
should decrease even locally.  Given a sufficiently accurate
prediction system, persistent connections, and good caching, this
seems possible (barely).
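
To make the break-even concrete, here is a rough sketch (in Python,
with purely illustrative overhead figures and names) of the kind of
per-document decision rule that design goal implies, under the
assumption that prefetches ride on an already-open persistent
connection and that a cache prevents fetching anything twice:

    # Assumed per-request overheads, in bytes (connection setup plus
    # headers); these numbers are illustrative, not measured.
    OVERHEAD_NEW_CONNECTION = 500   # on-demand fetch on a fresh connection
    OVERHEAD_PERSISTENT     = 300   # prefetch on an open persistent connection

    def should_prefetch(p_request, size_bytes,
                        overhead_demand=OVERHEAD_NEW_CONNECTION,
                        overhead_prefetch=OVERHEAD_PERSISTENT):
        """Prefetch only if the expected bytes on the wire with
        prefetching do not exceed the expected bytes without it.

        Without prefetching: the document is fetched on demand with
        probability p_request, costing size_bytes + overhead_demand.
        With prefetching and caching: it is fetched exactly once,
        costing size_bytes + overhead_prefetch.
        """
        expected_on_demand = p_request * (size_bytes + overhead_demand)
        cost_of_prefetch = size_bytes + overhead_prefetch
        return cost_of_prefetch <= expected_on_demand

    # A small, almost-certain document pays for itself; a large,
    # merely-likely one does not.
    print(should_prefetch(0.99, 2000))    # True  (2300 <= 0.99 * 2500)
    print(should_prefetch(0.70, 50000))   # False (50300 > 0.70 * 50500)

Note that the break-even probability, (size + prefetch overhead) /
(size + demand overhead), approaches 1 as documents get larger, which
is why "barely" seems like the right word: in raw bytes, only small
and very confidently predicted documents pay for themselves.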

--Shel Kaphan

Received on Thursday, 7 December 1995 20:56:33 UTC