proposal for blanket statements

Hi,

During this week's teleconference, it became apparent to me that one of the main motivations for blanket statements is to display more compact summaries to end users. At the same time, there is a strong need for precision, so that it is known exactly what has been tested. Here is a proposal that may be a good compromise:

A (simple) class called WebContentCollection (or similar) is an earl:Subject and has the following properties:
  * dc:title
  * dc:description
  * earl:location - a regular expression based on URI strings
  * an RDF list of pointers to individual WebContent instances

Note: "earl:location" is different from "earl:URI". It contains a regular expression that is based on a URI string as proposed by WCAG 2.0. Examples of earl:location are: "http://*.example.org/*" or "http://www.example.org/dir/*" etc.
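To make the intent of such patterns concrete, here is a small Python sketch of matching URIs against an earl:location expression. It treats "*" as a simple wildcard (glob-style matching via the standard fnmatch module); the exact pattern language is of course still open for the group to define, so this is only an illustration.

```python
import fnmatch

def matches_location(uri: str, location: str) -> bool:
    """Check whether a URI falls under an earl:location pattern,
    treating "*" as a wildcard for any character sequence.
    (Sketch only; the exact pattern language is still to be defined.)"""
    return fnmatch.fnmatchcase(uri, location)

print(matches_location("http://www.example.org/dir/page.html",
                       "http://www.example.org/dir/*"))   # True
print(matches_location("http://staff.example.org/index.html",
                       "http://*.example.org/*"))         # True
print(matches_location("http://other.example.com/page.html",
                       "http://www.example.org/dir/*"))   # False
```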

Here is an instance of such a class:

<earl:WebContentCollection rdf:ID="webcontentcollection1">
  <dc:title>Personnel Department</dc:title>
  <dc:description>random selection of pages according to methodology XYZ</dc:description>
  <earl:location>http://example.org/personnel/*</earl:location>
  <!-- the property name "earl:webContent" below is illustrative only -->
  <earl:webContent>
    <rdf:Seq>
      <rdf:li rdf:nodeID="webcontent1"/>
      <rdf:li rdf:nodeID="webcontent2"/>
      <rdf:li rdf:nodeID="webcontent3"/>
    </rdf:Seq>
  </earl:webContent>
</earl:WebContentCollection>

If, in a specific context, it is not important to know exactly which WebContent entities have been tested, then a simple "SELECT ?location FROM ..." type query will return only the human-readable part (and thus reduce data transfer).
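The compact-summary idea can be sketched in a few lines of Python. The record layout below is hypothetical (plain dicts standing in for the RDF graph), but it shows the projection: a summary view keeps only the title and location and drops the per-item detail.

```python
# Hypothetical report records: each collection carries both the
# human-readable summary and the full list of tested WebContent nodes.
reports = [
    {"title": "Personnel Department",
     "location": "http://example.org/personnel/*",
     "webcontent": ["webcontent1", "webcontent2", "webcontent3"]},
]

# A "SELECT ?location"-style projection: keep only the summary fields,
# reducing what has to be transferred to the end user.
summary = [{"title": r["title"], "location": r["location"]} for r in reports]
print(summary)
```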

However, we need to require that the exact WebContent entities that were actually tested be recorded in the RDF list. This way, one could still compare the results from different tools for the same earl:location pattern.

Example:

- Tool 1 checks the pages A, B, and C - result is PASS
- Tool 2 checks the pages A, B, and D - result is FAIL
- Tool 3 checks the pages E, F, and G - result is PASS
- All pages A, B, C, D, E, F, and G are referenced by the same location expression

Now if only the location expression is viewed, it is unclear why two tools claim PASS and one claims FAIL. However, by analyzing the actual pages tested, one can even identify page D as the likely source of the discrepancy.
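The comparison above can be mechanized. This is only a toy illustration (tool names and page sets taken from the example, not EARL syntax): any page that appears only in failing runs, and never in a passing run, is a prime suspect.

```python
# Each tool's result: (set of pages tested, overall outcome).
results = {
    "Tool 1": ({"A", "B", "C"}, "PASS"),
    "Tool 2": ({"A", "B", "D"}, "FAIL"),
    "Tool 3": ({"E", "F", "G"}, "PASS"),
}

# Union of pages covered by passing runs and by failing runs.
passed = set().union(*(pages for pages, r in results.values() if r == "PASS"))
failed = set().union(*(pages for pages, r in results.values() if r == "FAIL"))

# Pages seen only in failing runs are the likely culprits.
suspects = failed - passed
print(sorted(suspects))  # ['D']
```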

Remember, we are actually talking about WebContent rather than pages, so no information should be lost. We should encourage tools to record, in as much detail as they can, exactly what they have tested.

Any thoughts on this?

Regards,
  Shadi


-- 
Shadi Abou-Zahra     Web Accessibility Specialist for Europe | 
Chair & Staff Contact for the Evaluation and Repair Tools WG | 
World Wide Web Consortium (W3C)           http://www.w3.org/ | 
Web Accessibility Initiative (WAI),   http://www.w3.org/WAI/ | 
WAI-TIES Project,                http://www.w3.org/WAI/TIES/ | 
Evaluation and Repair Tools WG,    http://www.w3.org/WAI/ER/ | 
2004, Route des Lucioles - 06560,  Sophia-Antipolis - France | 
Voice: +33(0)4 92 38 50 64          Fax: +33(0)4 92 38 78 22 | 

Received on Friday, 12 May 2006 12:37:44 UTC