- From: Charles McCathieNevile <charles@sidar.org>
- Date: Mon, 21 Mar 2005 15:07:34 +1100
- To: me@gabrielebartolini.it, "Chris Ridpath" <chris.ridpath@utoronto.ca>
- Cc: "Jim Ley" <jim@jibbering.com>, public-wai-ert@w3.org
On Tue, 15 Mar 2005 02:37:25 +1100, Gabriele Bartolini <me@gabrielebartolini.it> wrote:

>> I read through the list threads you suggested but I'm still not sure how
>> your fuzzy pointer system works. Could you provide a short description
>> and perhaps an example or two?
>
> Yep. I agree.
>
> [snip]
>
> However, on the other hand, if some people promote the normalisation
> process of an HTML document (to an XML document, in order to produce
> fuzzy xpointers), I want to raise some doubts about it, which could
> hopefully lead me to a better understanding of the issue.
>
> If we are indeed concerned about locating some content, I may agree that
> this solution would work perfectly. However, if our aim is to provide
> users with information regarding errors or problems in the source, and
> specifically in HTML tags, normalisation could introduce further errors,
> IMHO.
>
> In particular, I'd like to know if you have thought about a strategy for
> presenting the results to the users. If a document gets normalised, what
> information about the structure do you intend to present: the original
> or the normalised one? Do you get my point now?

I think the hard part of this problem is best dealt with by tools. Users who edit their own code by hand are a minority, even in accessibility-aware production. But the question is still important. Any approach to normalisation has to allow for going back to the original, at least in theory.

In practice, I would be happy with a process that normalised HTML and tag soup to XHTML, on the basis that this is what WCAG promotes anyway, and is a good idea. But I don't know whether that will get consensus. (Are there any reasons not to do it?)

cheers

Chaals

--
Charles McCathieNevile       Fundacion Sidar
charles@sidar.org            +61 409 134 136
http://www.sidar.org
Received on Monday, 21 March 2005 04:08:31 UTC
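
[A rough sketch of the kind of tag-soup-to-XHTML normalisation discussed above, not part of the original thread. It uses Python's standard-library `html.parser` only; the class name `XHTMLNormaliser` is invented for illustration. Note its limits, which are exactly why Gabriele's concern matters: it lowercases tag names, quotes attributes, and self-closes void elements, but it does NOT insert missing end tags or repair nesting, so positions in its output already drift from the original source. A full tree builder (e.g. html5lib) would be needed for real normalisation.]

```python
from html import escape
from html.parser import HTMLParser

# HTML elements that take no end tag; in XHTML they must be self-closed.
VOID = {"area", "base", "br", "col", "hr", "img", "input", "link", "meta"}

class XHTMLNormaliser(HTMLParser):
    """Re-emit tag soup with lowercase tags, quoted attributes,
    and self-closed void elements (an XHTML-leaning serialisation).
    Does not add missing end tags -- a real normaliser would."""

    def __init__(self):
        super().__init__()          # convert_charrefs=True by default
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Bare attributes (e.g. DISABLED) become attr="attr" per XHTML.
        parts = "".join(
            f' {k}="{escape(v if v is not None else k, quote=True)}"'
            for k, v in attrs
        )
        if tag in VOID:
            self.out.append(f"<{tag}{parts} />")
        else:
            self.out.append(f"<{tag}{parts}>")

    def handle_endtag(self, tag):
        if tag not in VOID:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Re-escape text content so &, <, > survive as entities.
        self.out.append(escape(data))

norm = XHTMLNormaliser()
norm.feed("<P CLASS=intro>Hello<BR>world")
print("".join(norm.out))  # → <p class="intro">Hello<br />world
```

Note the unterminated `<p>` passes through unclosed: the sketch changes the surface syntax without fixing the structure, which illustrates why any error report against the normalised form needs a mapping back to the original source positions.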