
Re: two ideas...

From: Brian Behlendorf <brian@organic.com>
Date: Sun, 17 Dec 1995 18:03:55 -0800 (PST)
To: www-talk@www0.cern.ch
Message-Id: <Pine.SGI.3.91.951217175507.15954S-100000@fully.organic.com>

Two issues I haven't seen addressed:

1) Prefetching, if it were to be widely deployed, throws web site traffic 
analysis out the window.  Right now we have a pretty good presumed 
mapping between a page access and a likelihood that someone actually saw 
it.  Prefetching would mean that we couldn't tell, on the server side, 
whether a fetched document actually ever got rendered or not.  The 
content provider needs some way of distinguishing between prefetches and 
actual looks.

The solution to this might be to have the prefetcher obtain only the 
first, say, 1500 bytes of the document-to-be-prefetched.  This could be
accomplished via a Range: request, or perhaps even a different method 
could be invented for this.  Then, if the document is actually selected, 
the first 1500 bytes are instantly rendered while the rest is being 
grabbed.  This should increase perceived performance, at least.  Then on 
the server end I can simply look for "give me the rest of this 
document"-type requests.
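A minimal sketch of that server-side logic, in Python (my own illustration, not anything from the post; the names and the 1500-byte cutoff are assumptions): a prefetch pulls only the opening window via Range, and a later request that starts past that window is what gets counted as a real view.

```python
# Hypothetical sketch: distinguish prefetches from actual looks by
# where the byte range starts. Not a real server, just the accounting idea.

PREFETCH_LIMIT = 1500  # bytes handed to a prefetcher

def parse_range(header):
    """Parse a simple 'bytes=start-end' Range header into (start, end)."""
    units, _, spec = header.partition("=")
    if units != "bytes":
        raise ValueError("unsupported range unit")
    start_s, _, end_s = spec.partition("-")
    start = int(start_s) if start_s else 0
    end = int(end_s) if end_s else None   # open-ended range: 'bytes=1500-'
    return start, end

def serve(document, range_header=None):
    """Return (status, body, counts_as_view) for one request."""
    if range_header is None:
        return 200, document, True        # plain GET: a real look
    start, end = parse_range(range_header)
    stop = end + 1 if end is not None else None
    body = document[start:stop]
    # A request starting past the prefetch window is a
    # "give me the rest of this document"-type request.
    return 206, body, start >= PREFETCH_LIMIT
```

Under this scheme a `Range: bytes=0-1499` request shows up in the logs as a prefetch, while `Range: bytes=1500-` (or a plain GET) marks an actual view.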

2) Control of prefetching by the server.  Let's say I have a page with 
900 inlined clickable images, trying to emulate a 30x30 grid Brite-Lite 
(and I'm not using an imagemap because, well, I'm not).  If when the 
person came to that page, each of those 900 were "prefetched", I might 
have a server meltdown.  The content provider needs some way of saying, 
it would seem, that they're not interested in having each element 
pre-fetched.  Perhaps as an attribute to <A>?  I don't have an easy 
answer to this one.
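One shape such an attribute could take, sketched below. The PREFETCH="NO" attribute is invented here purely for illustration (it was never a standard); the sketch shows a prefetching client collecting anchor targets and skipping any anchor that carries the opt-out, using Python's stdlib HTML parser:

```python
# Hypothetical: an anchor-level opt-out, e.g. <A HREF="..." PREFETCH="NO">.
# The attribute name is my invention, not part of any HTML spec.
from html.parser import HTMLParser

class PrefetchCandidates(HTMLParser):
    """Collect hrefs of anchors that have not opted out of prefetching."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        d = dict(attrs)  # html.parser lowercases tag and attribute names
        if d.get("prefetch", "").lower() == "no":
            return       # content provider asked us not to prefetch this one
        if "href" in d:
            self.hrefs.append(d["href"])

parser = PrefetchCandidates()
parser.feed('<A HREF="/cell1" PREFETCH="NO">o</A><A HREF="/about">about</A>')
```

With the 900-image Brite-Lite page, each grid cell's anchor would carry the opt-out and the prefetcher would leave the server alone.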


brian@organic.com  brian@hyperreal.com  http://www.[hyperreal,organic].com/
Received on Sunday, 17 December 1995 23:17:56 UTC
