From: Charles McCathieNevile <charles@w3.org>
Date: Thu, 31 Oct 2002 03:07:40 -0500 (EST)
To: Phill Jenkins <pjenkins@us.ibm.com>
cc: <w3c-wai-er-ig@w3.org>
Well, there are a few approaches. The simplest is to know when the document was last updated - EARL requires that you keep a date of when the resource was tested (and assumes, as far as I can tell, that this is the date on which the document was read). So all you need to do is know that certain assertions are for a resource dated prior to the last change, and others have been validated or added since the last change. (If you follow the work that Nick and Jim did on hashing documents in various ways, then in many cases you could determine whether a particular result you have needs to be retested or not.)

An example scenario: I produce a document, and I do a complete evaluation against WCAG, which I publish as an Annotea annotation on a private server. The evaluation finds lots of faults. I repair the page, with my repair tool accessing the private server via my password information. It then publishes the results of P1 and P2 evaluations to http://earl.w3.org (a public server that is currently available, for people who like playing with alphas). It includes a link to the earl.w3.org data in a meta element, so the public know how to find the latest version of the data.

But for my private use there is some more data, and some obsolete data. What do I do with that? For the Priority 3 checkpoints where the result hasn't changed, I replace them with the same result and a new date (i.e. later than the last-modified date of my document). Where the results have changed, I post a reply with a type that says it obsoletes the thing it replies to (there is a rough sketch of this below). In this way it is possible to follow the thread for my own audit purposes. (For the cases that haven't changed I could instead post a reply that simply reconfirms the existing result.) For the copies of what is now public data on my private server, I post a reply that obsoletes it, and that points to the public information as the now-current version.

So I have now published a document and evaluation information, and a way to find it from the document. Let's say that Phill looks at my document and claims that my evaluation of checkpoint 14.1 is wrong. What does he do? He could just write a page on his website and be done with it. Maybe I will find that, maybe not. Alternatively, he could post a reply to the annotation I put onto http://earl.w3.org that obsoletes my claim for conformance. Marja could do a full conformance evaluation, including testing for triple-A conformance, and post the new results for P3 checkpoints to earl.w3.org.

If they do this, the next time I edit the document my authoring tool will go to my private server for information, and find pointers to some public information as well as some private information (the claims for P3 checkpoints). When it asks for information on my document from the public server, it can either ask for the things I said, or ask for all the relevant EARL assertions. I would like to take advantage of other people's work, so I ask for all the relevant claims.

So I am now editing my document, and have all my evaluation information, plus some provided by third parties. (There may be more that other people have made available but which I haven't found. After all, I didn't use a semantic-search engine to see if there are other things on the Web; I just looked in places I knew already.) I could decide to trust the latest result, irrespective of who made it. But in this case I think I have a better evaluation tool than the one Marja used. I do choose to trust all of Phill's results as if they were my own.
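Here is the rough sketch I promised, in N3. Take the vocabulary with a grain of salt: the earl: and th: (thread) namespaces are roughly the right shape, but the exact property names, the ex: terms (the result value, the obsoleting reply type, the pointer to the current version) and the URIs are all made up for illustration.

  @prefix earl: <http://www.w3.org/2001/03/earl/> .
  @prefix th:   <http://www.w3.org/2001/03/thread#> .
  @prefix ex:   <http://example.org/terms#> .

  # A dated assertion. A tool compares earl:date with the document's
  # last-modified date to decide whether this result needs re-testing.
  <#assertion1>
      a earl:Assertion ;
      earl:subject <http://example.org/mypage> ;
      earl:result ex:Pass ;
      earl:date "2002-10-31" .

  # A reply typed (with a made-up ex: term) to say it obsoletes the
  # annotation it replies to, pointing at the now-current public copy.
  <#reply1>
      a th:Reply, ex:ObsoletingReply ;
      th:inReplyTo <http://my.example.org/private/annotation3> ;
      ex:currentVersion <http://earl.w3.org/public/annotation42> .

The point is just that every result is dated and every annotation can be replied to, so the thread itself carries the audit trail.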
I ask for a report on conflicts between what I said and what Marja says. I can then select which of Marja's results I believe against my own, based on her comments.

I am now ready to set about repairing the problems. But before I begin, I decide to update the information in the document itself. So some of the results will refer to an old version of the document - in particular, Phill's claim that the content isn't clear enough. No problem. Each claim identifies not just the document that it is about, but the location of the problem itself (there is a sketch of such a claim below). It turns out that I wasn't going to touch the pieces Phill said were particularly badly written, but I was going to change some of the things that Marja commented on - after all, I had done my own private check of the P3 items and thought about how to make my page better already.

After I have made the edits, my tool checks the set of claims I already have to find out which ones are still known to be valid and which need re-testing. Then it follows up with the testing process (naturally this includes asking me about some results) to bring the results it has up to date. It posts all the results to my private annotation server, noting that they are replies to various different annotations, some that were public, some private, and dating them.

I go away for the weekend, leaving the editing uncompleted and the document unpublished (I am keeping a copy on my private server for editing), with the record of data on my private server. On Sunday night I miss my train, and decide to stay an extra day, working in the internet cafe. Now, if my editing tool is available via the Web, I can simply tell it to look for my private copy of the document, and the EARL data I am keeping, and continue working, publishing my new version and my results as I did the first time I made some information available. I still keep my P3 thoughts to myself, but I reply to Marja's by noting that they refer to an old version of the document. (Not strictly necessary, but it was quick and easy, and Marja gets notified when her annotations get a response.)

As an alternative, I might use a different tool for editing - perhaps even one which doesn't understand accessibility, or EARL as a tracking mechanism. It simply edits the page. Again, when I get back to work, I can check whether the EARL results I have are affected by changes in the page, and I can do the testing, repair, and publishing.

What are the implications here? Because I use a public server, people can post direct responses, which I can find and use or ignore. (Maybe it's a semi-restricted server, where certain people can post, and certain people can read...) It was easy to find this because I pointed to it in the document. I could also have relied on people harvesting the semantic Web, as some people will point to a document by describing search terms to use in a search engine. This is more flexible and more powerful, but may be less reliable.

I can keep information to myself as well. Nothing requires me to share my private data - I can acknowledge the annotations by someone else that I disagree with, or I can publicly post why I disagree, or I can ignore them completely. When I update my document I can post my new conflicting claims, or I can just note that the document has changed, or I can rely on people checking whether the evaluations are still relevant.

A lot of this relies on the Annotea protocol as the technology for storing the information.
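For concreteness, here is the sketch of a stored claim, again in N3, showing how it identifies the location of the problem as well as the document. The a: terms (annotates, context, created) follow the Annotea annotation schema as I understand it; the earl: and ex: terms, the XPointer, and the URIs are again just illustrative.

  @prefix a:    <http://www.w3.org/2000/10/annotation-ns#> .
  @prefix earl: <http://www.w3.org/2001/03/earl/> .
  @prefix ex:   <http://example.org/terms#> .

  # Phill's claim points into the document with an XPointer, not just
  # at the document as a whole, so my tool can tell whether my edits
  # touched the piece of content the claim is about.
  <http://earl.w3.org/claims/14-1>
      a:annotates <http://example.org/mypage> ;
      a:context "http://example.org/mypage#xpointer(/html[1]/body[1]/p[3])" ;
      a:created "2002-10-30" ;
      earl:result ex:Fail .

After an edit, the tool only needs to re-test the claims whose context points into something that actually changed (or, following Nick and Jim's hashing approach, whose target content hashes to a different value).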
Although there are alternatives, I chose Annotea because it is native RDF, and compatible with a lot of the tools I will use to process EARL itself (shorter development time, because I am always working with RDF information). However, not all these features are clearly implemented in the http://earl.w3.org server - should we do that, and test the scenario in the real world?

Cheers

Chaals

On Thu, 31 Oct 2002, Phill Jenkins wrote:

> Thanks for the discussion.
>
> I need to focus back on the original scenario, which I didn't include in
> the original post. The questions I posted were asked because I was asked
> how I would know if/when the pages were tested and repaired. Independent
> of where the files were stored, for example in a "fixed" directory, I
> needed to leave a piece of meta data in the file itself that said the
> file/page had been tested and repaired.
>
> Many authoring tools leave a string of meta data saying which version of
> the product was used to edit or convert the file. I think this was done
> initially as a marketing tool to leave vendor product names lying around
> inside meta data in web files, but it has also been useful in debugging
> tool-specific problems by knowing the possible heredity of the file.
>
> So if the owner of the file wants to leave some indication that the file
> has been tested and validated, via some tool or process, what is the best
> way to specify that piece of meta data? I agree that most owners won't
> want to leave a trail to their dirty laundry (test reports), but in just
> the same way others recommend visible logos on pages, I would like to
> leave a "good housekeeping seal of approval" somewhere in the file.
>
> Regards,
> Phill Jenkins
> IBM Research Division - Accessibility Center

--
Charles McCathieNevile    http://www.w3.org/People/Charles    tel: +61 409 134 136
SWAD-E http://www.w3.org/2001/sw/Europe ------------ WAI http://www.w3.org/WAI
21 Mitchell street, FOOTSCRAY Vic 3011, Australia    fax(fr): +33 4 92 38 78 22
W3C, 2004 Route des Lucioles, 06902 Sophia Antipolis Cedex, France
Received on Thursday, 31 October 2002 03:07:41 UTC