Re: two ideas...

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Thu, 07 Dec 95 18:19:49 PST
Message-Id: <9512080219.AA03891@acetes.pa.dec.com>
To: Shel Kaphan <sjk@amazon.com>
Cc: www-talk@www0.cern.ch, www-speed@tipper.oit.unc.edu

    There is a sense in which there is a valid issue here: to the extent
    that prefetching uses more total bandwidth on the net, it may be
    locally good for the individual user benefiting from it at the
    moment, yet globally bad in that it makes the net as a whole busier.
    If it becomes a pervasively used technique, that could in turn make
    it bad even for the individuals benefiting from it.  It's
    (potentially, anyway) a classic "tragedy of the commons".

This depends on a number of subtle issues.  The most basic one is
whether the latency-reduction effects of prefetching are (on the
whole) bigger or smaller than the latency-increasing effects of
high Internet loads.
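
To make the comparison concrete, here is a toy model (in Python;
every parameter is invented for illustration, nothing here is
measured).  A prefetch hit is served locally at essentially zero
latency, but every remaining fetch sees a network delay inflated by
the extra load that prefetching adds:

    # Toy break-even model; all parameters invented.
    # Latency is normalized so demand-only fetching costs 1.0.
    def expected_latency(hit_rate, load_inflation):
        # hits are ~free; misses pay the inflated network delay
        return (1.0 - hit_rate) * load_inflation

    for hit_rate in (0.3, 0.5, 0.7):
        for load_inflation in (1.2, 1.5, 2.0):
            outcome = ("win" if expected_latency(hit_rate, load_inflation) < 1.0
                       else "lose")
            print(hit_rate, load_inflation, outcome)

Even this crude model shows a mediocre predictor losing once the
added load grows large enough.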

If most Web users have one HTTP/TCP connection open at a time, I would
expect that current lower-layer techniques (such as TCP congestion
avoidance) and likely-to-be-employed techniques (such as some
form of fair queueing in the routers) should result in fairness.
That is, everybody will get a roughly equal share of the shared
links.  Then it's up to each client (the software, not the human) to
decide how to allocate its share between prefetching and demand
fetching.
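
One way a client might make that allocation (a hypothetical sketch
in Python, not anything specified by HTTP): give demand fetches
strict priority, so prefetches consume only whatever part of the
client's share would otherwise sit idle:

    from collections import deque

    class FetchScheduler:
        # Toy policy: demand requests strictly preempt prefetch
        # requests, so prefetching never competes with the user's
        # own demand traffic for this client's share of the link.
        def __init__(self):
            self.demand = deque()
            self.prefetch = deque()

        def enqueue(self, url, is_demand):
            (self.demand if is_demand else self.prefetch).append(url)

        def next_request(self):
            if self.demand:
                return self.demand.popleft()
            if self.prefetch:
                return self.prefetch.popleft()
            return None   # nothing queued; the link goes idle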

Whether or not HTTP supports prefetching, there is still a danger
of a tragedy of the commons.  You can waste bandwidth by prefetching,
or by downloading enormous MPEGs.  Or RealAudio, or whatever.
Either the net will ultimately adopt usage pricing, or the current
flat-rate model will generate enough income for ISPs to maintain
a reasonable supply of bandwidth.  Dave Clark has good arguments
to support the latter model, and so far it seems to be working.
(Dave's arguments are based on studies by respectable economists;
usage pricing is not inevitable.)

    The point is that it may be reasonable to include as a design goal of
    a prefetching scheme that the total amount of bandwidth used should
    decrease even locally.  Given a sufficiently accurate prediction
    system, persistent connections, and good caching, this seems possible
    (barely).

Not.  Think about the set of objects retrieved by prefetching+demand.
It must be a superset of the set of objects retrieved by demand alone
(in the absence of dire failures).  It can't be a subset.  And if any
of the predictions are wrong, it will be a strictly larger superset.
Since we can't expect a particularly accurate mechanical prediction
of what any human being will do next (if I had one, I'd play the
stock market with it), prefetching will definitely increase
bandwidth requirements.
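
A back-of-the-envelope version of the same argument, with invented
numbers: a prefetched object that is later demanded would have been
fetched anyway, so only the mispredictions are pure overhead, and
the total can never fall below the demand-only figure:

    # All numbers invented for illustration.
    object_size    = 10 * 1024   # bytes, assumed uniform
    demand_objects = 100         # objects the user actually views
    prefetches     = 100         # objects the predictor fetches ahead
    precision      = 0.6         # fraction of prefetches later demanded

    demand_bytes = demand_objects * object_size
    wasted_bytes = prefetches * (1 - precision) * object_size  # mispredictions
    total_bytes  = demand_bytes + wasted_bytes  # correct guesses counted once

    assert total_bytes >= demand_bytes   # a superset, never a subset

Only a precision of 1.0 makes the two equal; anything less is extra
bandwidth.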

On the other hand, it might result in more efficient use of
server resources, router resources, proxy resources, and client
resources ... by increasing temporal locality.  Or maybe not.

-Jeff