- From: Gabriele Bartolini <me@gabrielebartolini.it>
- Date: Mon, 14 Mar 2005 16:37:25 +0100 (CET)
- To: "Chris Ridpath" <chris.ridpath@utoronto.ca>
- Cc: "Jim Ley" <jim@jibbering.com>, public-wai-ert@w3.org
> I read through the list threads you suggested but I'm still not sure how
> your fuzzy pointer system works. Could you provide a short description and
> perhaps an example or two?

Yep, I agree. However, I have some arguments regarding the "normalisation" process of HTML documents that was discussed in the threads suggested previously (in particular: http://lists.w3.org/Archives/Public/www-annotation/2002JanJun/0156).

I assume that we are trying to assess not only the structure (and partially the content), as I initially thought. Our aim is to produce a more general set of procedures that is able to locate *everything*, from structure-related problems to content ones (e.g. even misspelling errors). Of course this is not the case for the HTML test suite, but I guess we should keep it in mind anyway (I beg your pardon if I initially underestimated this).

On the other hand, if some people promote normalising an HTML document into an XML one in order to produce fuzzy XPointers, I want to raise some doubts about it, which could hopefully lead me to a better understanding of the issue.

If we are indeed concerned with locating some content, I may agree that this solution would work perfectly. However, if our aim is to provide users with information about errors or problems in the source, and specifically in the HTML tags, normalisation could introduce further errors IMHO.

In particular, I'd like to know whether you have thought about a strategy for presenting the results to users. If a document gets normalised, what information about the structure do you intend to present: the original one or the normalised one?

Do you get my point now?

Ciao,
-Gabriele
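
[Editor's sketch, not part of the original message: a minimal illustration of the normalisation concern raised above. It assumes an HTML-to-XML normaliser and uses Python with lxml purely as an example; HTML Tidy or any similar tool would show the same effect. A non-well-formed fragment is turned into a well-formed tree whose serialisation no longer matches the author's source, so a pointer expressed against the normalised structure has to be mapped back before it can be reported to the user.]

    from lxml import etree, html

    # The markup as the author actually wrote it: non-well-formed HTML.
    source = "<P>one<P>two<BR>"

    # Normalising it (here with lxml's HTML parser) yields a well-formed
    # tree: implied <html>/<body> elements appear, tag names are
    # lower-cased, and unclosed elements are closed.
    normalised = html.document_fromstring(source)
    print(etree.tostring(normalised, pretty_print=True).decode())

    # A pointer such as /html/body/p[2] is valid against the normalised
    # tree, but that element boundary never occurs verbatim in the source,
    # so reporting it to the user requires mapping back to the original.
    print(normalised.xpath("/html/body/p[2]")[0].text)   # -> "two"

[The same mismatch applies to line/column positions: they refer to the normalised serialisation, not to what the user sees in the original source.]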
Received on Monday, 14 March 2005 15:37:29 UTC