
Re: prefetching attribute (WAS Re: two ideas...)

From: Brian Behlendorf <brian@organic.com>
Date: Tue, 19 Dec 1995 12:56:14 -0800 (PST)
To: hardie@nasa.gov
Cc: hammond@csc.albany.edu, www-talk@w3.org
Message-Id: <Pine.SGI.3.91.951219124135.22559M-100000@fully.organic.com>
On Mon, 18 Dec 1995, Ted Hardie wrote:
> Under another set of theories, the server manager decides which of the
> pages it serves are related, and "pushes" related pages when one of
> the related set is requested.  This prevents the piggish behavior by
> leaving some control in the hands of the server manager; it works for
> pages only on a single server, but it may work better for such pages
> because the designers understand which ones are related.
> 
> Brian's suggestion seems to be a modification of the server push
> solution, designed to deal with the need he sees for server managers
> to have reasonable numbers for pages seen.  In it, the server pushes
> related pages, but not whole pages--specific byte ranges, so that the
> user agent must request the rest of the document, thus giving a
> trigger event for Brian's statistics.  The disadvantage to this is
> that this forces a new tcp connection and a second round trip, even
> for pages which might normally not require one, and it inherits the
> problem of not working for pages with links to multiple servers.

The second disadvantage you list, I don't have an answer to - yeah, there
will be no prefetching there simply because it's a (potentially) different
administrative domain.  However, the first problem isn't a problem when you
compare it to *no* prefetching - sure, a second document request is needed (I
won't say new tcp connection or round trip because we could be talking
persistent connections here), but at least you have the first screenful of
the document to read while the rest is loading, so the *perception* is that
there was no delay between the "click" and the beginning of the document
rendering.  Furthermore, you don't take the bandwidth hit of having all pages
prefetched in full, only the very beginnings of those pages.  I say make that
"beginning" mark arbitrary, so server/site authors can configure it on an
object-by-object basis. 
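A back-of-envelope sketch of the bandwidth argument (all numbers here are hypothetical, just to make the trade-off concrete): a page with ten HREF-linked documents, prefetched whole versus prefetched only up to a per-object "beginning" mark.

```python
links = 10                # hypothetical number of HREF links on a page
avg_page_bytes = 20_000   # assumed average size of a full linked page
top_bytes = 1_500         # assumed "beginning" mark per object

full_prefetch = links * avg_page_bytes   # speculatively fetch every page whole
partial_prefetch = links * top_bytes     # fetch only the beginnings

print(full_prefetch)     # bytes transferred prefetching everything
print(partial_prefetch)  # bytes transferred prefetching beginnings only
```

Under these assumed numbers the partial scheme moves 15,000 bytes instead of 200,000 - the same perceived zero-delay click, at a fraction of the speculative transfer.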

If we want to push this "smarts" back to the client, we could have a new
method, say "TOP", which means "give me the headers and however many bytes of
content you think I should be able to see before the full request goes
through".  In a typical persistent HTTP session, it means a GET is placed on
a document, the document is parsed for IMG and EMBED-ed objects, those are
fetched using GET, and finally the document is parsed for HREF-linked objects,
and those objects are sent a TOP method.  When an HREF is selected, another
full request happens just like nowadays, but the browser can render the TOP
info it got immediately.  Just how many bytes a TOP request returns is left
up to the server/site author.  Some servers may configure it to be to the
first <HR> in an HTML doc - others may say the first 1500 bytes.  The server
should also have some way of saying "look, the object you wanted was so
small, I gave you the whole thing anyway". 
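The server side of that could be sketched like this (the TOP method itself, the `<HR>` cut-off rule, and the 1500-byte default are all hypothetical, taken straight from the proposal above, not from any real server):

```python
def top_response(body: bytes, limit: int = 1500):
    """Answer a hypothetical TOP request with (partial_body, complete).

    Truncates the entity at the first <HR> if one appears before
    `limit` bytes, otherwise at `limit`.  `complete` is True when the
    whole object was small enough to send anyway - the "look, I gave
    you the whole thing" case.
    """
    hr = body.upper().find(b"<HR>")          # case-insensitive <HR> scan
    cut = hr if 0 <= hr < limit else limit   # per-object "beginning" mark
    if cut >= len(body):
        return body, True                    # entire object fit
    return body[:cut], False                 # client must GET the rest

page = b"First screenful...<HR>...the rest of the document"
head, complete = top_response(page)
print(head, complete)
```

A full GET after the user clicks would then fetch the remainder, while the browser renders the TOP bytes immediately - the trigger event that keeps the server's page-view statistics honest.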

This is academic theory until it's implemented as a test somewhere, so 
I won't press too much more on it.

	Brian

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com  brian@hyperreal.com  http://www.[hyperreal,organic].com/
Received on Tuesday, 19 December 1995 18:14:22 GMT
