- From: Al Gilman <asgilman@access.digex.net>
- Date: Wed, 12 Aug 1998 13:45:20 -0400 (EDT)
- To: w3c-wai-ua@w3.org
to follow up on what Jon Gunderson said:

> Subject: August 12th Telecon minutes
> dd: UA WD doesn't have information about poor web design like "click here";
>     UA can do a lot of potential things
> da: what about image maps?
> dd: client side you can do things, and server side not much;
>     client could look at URL title, for "click here"s
> kh: performance would suffer; page author guidelines
> da: could it be a background task?
> hw: browsers would be looking for stuff all the time and there would be
>     performance problems. What would browsers need to look for?
>     Poor use of network bandwidth.
> mh: It seems like a lot of work and potential performance problems

Sorry I missed this call.

I would like to explore the objections to pre-fetch as a way of repairing nonsense link text. Not that we want to wire it into the browser guidelines as a demand, but it is a degree of freedom in making the web work better that we should explore to some extent. Will the browsers be pre-fetching everything all the time, or only on user mode selection and heuristic detection of junk text?

Least net bandwidth: the user exercises an explicit "try harder" function when encountering an under-explained link, and at that point the browser creates a document citation by fetching the HTTP HEAD, that plus the first 1K bytes, or whatever the commercial indexing sites do (a rough sketch follows at the end of this note). We could even prototype this by vectoring a query to a search site tied to this specific URL [ER WG play item].

Middle ground: the browser has heuristics to recognize simple failure modes, such as link text that is null, "click here", or just a copy of the href, and pre-fetches whenever one of these appears in the current page, or within a 10-link look-ahead of the currently focused link, or some such rule (also sketched below). The user has a mode control to turn this pre-fetching on and off.

High end: all links are pre-fetched enough to paint the first screen of text, but at a 'nice' reduced priority. If at first the links are inscrutable, they gradually change to be better explained, just like waiting for the images to finish painting.

We could even work on optimizing the servers so that a HEAD+ request was handled intelligently and more efficiently than a general GET. The performance burden may be greatest in terms of transaction load on the servers; the bandwidth would seem to be relatively innocuous.

Al
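
To make the "try harder" fetch concrete, here is a minimal sketch in Python (chosen for illustration); the function name fetch_citation, the 1K cutoff, and the use of a Range request to approximate a HEAD-plus-prefix fetch are my assumptions, not anything settled in the minutes:

    import urllib.request

    def fetch_citation(url, max_bytes=1024):
        """Fetch just enough of a linked document to describe it:
        the response headers plus at most the first max_bytes of body."""
        # Ask the server for only the leading bytes; a server that
        # ignores Range sends everything, so the read is capped too.
        req = urllib.request.Request(
            url, headers={"Range": "bytes=0-%d" % (max_bytes - 1)})
        with urllib.request.urlopen(req, timeout=5) as resp:
            headers = dict(resp.headers)
            snippet = resp.read(max_bytes)
        return {"headers": headers, "snippet": snippet}

The headers alone often carry enough (Content-Type, Content-Length, Last-Modified) to caption a link; the body snippet is there for a title or first line of text.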
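The "simple failure modes" heuristic could be as small as the following sketch; the phrase list and function name are assumptions for illustration:

    JUNK_PHRASES = {"click here", "here", "this link", "link"}

    def is_junk_link_text(text, href):
        """Flag link text that tells the reader nothing."""
        text = (text or "").strip().lower()
        href = (href or "").strip().lower()
        if not text:                      # null link text
            return True
        if text in JUNK_PHRASES:          # stock "click here" text
            return True
        if text == href:                  # text is just the URL itself
            return True
        return False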
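And the high-end mode might look something like this sketch, assuming a single background worker, the fetch_citation helper from the first sketch, and illustrative priority tiers; the point is only that user-initiated fetches always jump the queue:

    import queue
    import threading

    USER_REQUEST = 0   # explicit "try harder" on a focused link
    LOOKAHEAD = 1      # links near the current focus
    BACKGROUND = 2     # everything else on the page

    prefetch_queue = queue.PriorityQueue()

    def enqueue(url, priority=BACKGROUND):
        prefetch_queue.put((priority, url))

    def worker():
        while True:
            priority, url = prefetch_queue.get()  # lowest number first
            try:
                fetch_citation(url)               # sketch above
            except OSError:
                pass                              # citation is best-effort
            prefetch_queue.task_done()

    threading.Thread(target=worker, daemon=True).start()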
Received on Wednesday, 12 August 1998 13:44:59 UTC