- From: Gregg Vanderheiden <po@trace.wisc.edu>
- Date: Wed, 12 Aug 1998 16:56:34 -0500
- To: "'Al Gilman'" <asgilman@access.digex.net>, <w3c-wai-ua@w3.org>
I'm confused. Pre-fetch for a "click here" seems like a lot of effort for
something that is already right in front of you. When it says "click here,"
it always says what to click for in the text immediately before or after.
Why do a pre-fetch? How would it give you more or better info? Maybe I am
missing the point. Help me?

Thx
Gregg

--
------------------------------
Gregg C Vanderheiden Ph.D.
Professor - Human Factors
Dept of Ind. Engr. - U of Wis.
Director - Trace R & D Center
Gv@trace.wisc.edu, http://trace.wisc.edu/
FAX 608/262-8848
For a list of our listserves send "lists" to listproc@trace.wisc.edu

-----Original Message-----
From: w3c-wai-ua-request@w3.org [mailto:w3c-wai-ua-request@w3.org] On Behalf Of Al Gilman
Sent: Wednesday, August 12, 1998 12:45 PM
To: w3c-wai-ua@w3.org
Subject: net load for title pre-fetch

To follow up on what Jon Gunderson said:

> Subject: August 12th Telecon minutes
>
> dd: UA WD doesn't have information about poor web design like "click here"
>     UA can do a lot of potential things
> da: what about image maps?
> dd: client side you can do things and server side not much
>     client could look at URL title, for click heres
> kh: performance would suffer, page author guidelines
> da: could it be a background task?
> hw: browsers would be looking for stuff all the time and there would be
>     performance problems. What would browsers need to look for?
>     poor use of network bandwidth
> mh: It seems like a lot of work and potential performance problems

Sorry I missed this call. I would like to explore the objections to
pre-fetch as a way of repairing nonsense link text. Not that we want to
wire it into the browser guidelines as a demand, but it is a degree of
freedom in making the web work better that we should explore a certain
amount.

Will the browsers be pre-fetching everything all the time, or only on
user mode select and heuristic determination of junk text?

Least net bandwidth: the user exercises an explicit "try harder" function
when encountering an under-explained link, and at that point the browser
creates a document citation by fetching the HTTP HEAD, that plus the
first 1K bytes, or whatever the commercial indexing sites do. We could
even prototype by vectoring a query to a search site tied to this
specific URL [ER WG play item].

Middle ground: the browser has heuristics to recognize simple failure
modes, such as link text that is <null> or "click here" <=href>, and
pre-fetches whenever one of these is in the current page, or within a
10-link look-ahead of the currently focused link, or some such rule.
The user has a mode control to turn this pre-fetching on and off.

High end: all links are pre-fetched enough to paint the first screen of
text, but at a 'nice' reduced priority. If at first the links are
inscrutable, they gradually change to be better explained. Just like
waiting for the images to finish painting.

We could even work on optimizing the servers so that a HEAD+ request was
handled intelligently and more efficiently than a general GET. The
performance burden may be greatest in terms of transaction load on the
servers; the bandwidth would seem to be relatively innocuous.

Al
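
To make Al's "least net bandwidth" option concrete, here is a minimal
sketch in modern Python, not part of the original thread's tooling: the
junk-text phrase list, the 1K byte budget, and the timeout values are
illustrative assumptions. A heuristic flags uninformative link text, and
a citation is then built from the HTTP HEAD response plus the <title>
found in the first kilobyte of the body.

    import urllib.request
    from html.parser import HTMLParser

    # Phrases that carry no information of their own -- the "simple
    # failure modes" the middle-ground option would recognize.
    # Illustrative list, not from the thread.
    JUNK_LINK_TEXT = {"", "here", "click here", "this", "more", "link"}

    def is_junk_link_text(text):
        """Heuristic check: is this link text uninformative on its own?"""
        return text.strip().lower() in JUNK_LINK_TEXT

    class TitleParser(HTMLParser):
        """Collect the contents of the <title> element, if any."""
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    def cite_link(url, peek_bytes=1024):
        """Build a short citation for a link target: Content-Type from
        an HTTP HEAD request, plus the <title> from the first ~1K bytes."""
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head, timeout=5) as resp:
            content_type = resp.headers.get("Content-Type", "unknown")

        # Ask for only the first kilobyte; servers that ignore the Range
        # header still cost us no more than peek_bytes of reading.
        get = urllib.request.Request(
            url, headers={"Range": "bytes=0-%d" % (peek_bytes - 1)})
        with urllib.request.urlopen(get, timeout=5) as resp:
            chunk = resp.read(peek_bytes).decode("utf-8", errors="replace")

        parser = TitleParser()
        parser.feed(chunk)
        title = parser.title.strip() or url
        return "%s [%s]" % (title, content_type)

    if __name__ == "__main__":
        # A user agent would call this only when the user asks it to
        # "try harder" on an under-explained link.
        if is_junk_link_text("click here"):
            print(cite_link("https://www.w3.org/WAI/"))

A real user agent would presumably cache these citations and, for the
high-end option, run the fetches at Al's 'nice' reduced priority in the
background rather than on demand.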
Received on Wednesday, 12 August 1998 18:00:09 UTC