Re: FYI: Using EARL for Validator Web Service Format

Hi Olivier,

Olivier Thereaux wrote:
> The validators are an obvious customer for EARL. Actually, 
> EARL identifies from the introduction of the spec the example 
> of a validator output using EARL (See [2], Example 2) as one 
> of their basic use cases.

Indeed, the validators are one of our primary use cases. We would be very 
interested in having the W3C validators serve as reference implementations 
for EARL when we get to the CR stage.

> Q1) how could we use earl and have a way to show, in the 
> results, the list of messages (errors, warnings) that usually 
> come with validation reports? Would we have to extend it with 
> another namespace?
> earl:info could do the job.
> [[
>      Additional information beyond the description of the 
> result. For example warnings or other informative messages 
> that may help a reader better understand the result. It is 
> recommended to use Literal values for such additional messages.
> ]]
> I think the recommendation to parse as Literal is new. Which 
> would make more sense, if we want to list errors, warnings 
> and info messages, possibly along with a count info? Use 
> divs, ol, li in a Literal? Or use something else, in another 
> namespace? What would be simpler, what would be more flexible?

You may be interested in our "Pointer Methods in RDF" document. It is a 
*very* early draft and still quite rough, but it shows the basic 
principle of using pointers to point into the test subject from the 
result (for example, to point to the element that caused a validation error):
  - <>
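
As a sketch in Turtle (the earl: namespace and all pointer terms here are 
illustrative — the Pointers draft's vocabulary is not final, so check the 
document for the actual names), an error message attached via earl:info 
together with a pointer into the subject could look like:

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix ptr:  <http://example.org/pointers#> .   # hypothetical pointer namespace

[] a earl:Assertion ;
   earl:subject <http://example.org/page.html> ;
   earl:result [
       a earl:TestResult ;
       earl:info "end tag for \"li\" omitted"@en ;
       # hypothetical property and class pointing to line/column in the subject:
       ptr:pointer [ a ptr:LineCharPointer ; ptr:line 42 ; ptr:char 7 ]
   ] .
```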

> Q2) In the case of the CSS validator, when checking an html 
> page for example, the results are given for that page and the 
> CSS files it links to. What, then, is the "test subject"?
> Still unsure. I think it would be best to keep the page given 
> by the user to be checked as the test subject, but it would 
> also be useful to identify the fact that there are sub-subjects.
> Maybe the CSS validator could make assertions on the single 
> CSS documents, and have information on the fact that they are 
> "part" of a wider test subject with dct:hasPart or dct:isPartOf:
> [[
> dct:hasPart
>      Relationship to other subjects that are part of this subject
> dct:isPartOf
>      Relationship to other subjects of which this subject is a part
> ]]

This really depends on the context of the *test case*. For example, if 
you are checking that the CSS document has the correct syntax, then the 
HTML document is not part of the test. If however you are testing an 
HTML page *together* with a CSS page (for example to test attribute 
override or such), then you could indeed use the technique above (which 
is also highlighted by example 7 in the document you reference above).
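
In Turtle, the hasPart/isPartOf arrangement you describe might look like 
this (URIs made up for illustration):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# The page the user submitted is the main test subject ...
<http://example.org/page.html> a earl:TestSubject ;
    dct:hasPart <http://example.org/style.css> .

# ... and each linked stylesheet is a sub-subject.
<http://example.org/style.css> a earl:TestSubject ;
    dct:isPartOf <http://example.org/page.html> .
```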

> Q3) What are our Test Criteria?
> I suppose we'd have to have test criterion URIs for each of 
> the validations.
> And consider the "tasks" of Unicorn (general conformance, XHTML+CSS 
> +...) as earl:TestRequirement
> [[
>      A higher-level requirement that is tested by executing 
> one or more sub-tests. For example WCAG 1.0 Checkpoint 1.1 
> which is tested by executing several sub-tests and combining 
> the results.
> ]]

The ERT WG has not been able to identify a compatible "test case 
description language" as a counterpart to EARL. So for now you have to 
invent your own descriptions; EARL does not elaborate on how to do that 
(it is outside the scope of the document). However, you _could_ use the 
built-in earl:TestRequirement and earl:TestCase classes.
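
For instance, a Unicorn "task" could be modelled roughly as follows (the 
test URIs and the use of dct:hasPart to link the sub-tests are my own 
invention — EARL does not fix how a TestRequirement relates to its sub-tests):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix ex:   <http://example.org/tests#> .

# The higher-level requirement (e.g. Unicorn's "general conformance" task) ...
ex:general-conformance a earl:TestRequirement ;
    dct:hasPart ex:xhtml-validity, ex:css-validity .

# ... broken down into the individual validations.
ex:xhtml-validity a earl:TestCase .
ex:css-validity   a earl:TestCase .
```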

> Q4) What does Unicorn do? Collect and present assertions, or 
> is it an assertor too?
> Would it be interesting to consider it as a compound assertor?
> Schema-20070226#compoundassertor
> but that's only interesting if Unicorn itself has an earl 
> OUTput, while for now I think the interesting part is to have 
> the various observers have EARL outputs that are used by 
> unicorn as INput to be merged and processed.

I don't have enough background on Unicorn, but EARL is indeed intended 
to be aggregated. So Unicorn could have EARL as both input (from other 
sub-validators or components) and as output. It may also execute its own 
tests and extend the compilation, or infer certain additional results 
from the input.
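
If you did go the compound assertor route, one possible shape is sketched 
below (only a sketch: earl:CompoundAssertor and earl:mainAssertor appear 
in the schema draft you cite, but check the current document for the 
exact terms, and the URIs are hypothetical):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix ex:   <http://example.org/tools#> .

# Unicorn plus its sub-validators acting as one assertor ...
ex:unicorn-run a earl:CompoundAssertor ;
    earl:mainAssertor ex:unicorn .

# ... where each piece of software is described separately.
ex:unicorn        a earl:Software .
ex:css-validator  a earl:Software .   # a sub-validator contributing EARL input
```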

> Q5) Does the Outcome hard-coded pre-defined values cover all 
> our needs?
> Would be nice asking the ER WG to add some, if not.
> At least, EARL would be more precise than what the Unicorn 
> format currently has, because the Unicorn format says 
> pass/fail without describing what it passes or fails. It is 
> unclear whether it means "did not pass validation" or "could 
> not validate". EARL has values for both.

You can also sub-class these results to get more granularity, for 
example earl:NearlyPassed as a subclass of earl:Failed. What "nearly 
passed" really means depends, again, on your test case. We have observed 
that the current set of values satisfies most use cases.
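
Sketched in Turtle (the ex: namespace is made up, and exactly how an 
outcome attaches to a result depends on the draft you target — verify 
against the current schema):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/outcomes#> .

# Declare the refinement once ...
ex:NearlyPassed rdfs:subClassOf earl:Failed .

# ... then use it for individual results.
[] a earl:TestResult, ex:NearlyPassed ;
   earl:info "Only one minor error remained"@en .
```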

> Q6) Misc Notes
> * earl:sourceCopy could be interesting for "direct input", 
> although I'm unsure whether it would be useful to copy 
> massive chunks of text around like that. Could we just use a 
> hash as earl:subject? We need to find ways to identify the 
> subject for direct input and file upload, in any case.

The earl:Content class and its properties have been revised in the 
current version of the document; we would welcome feedback on them.

> * Our assertors are earl:Software acting in earl:automatic mode.

Seems logical. Unicorn may also act in earl:heuristic mode if it 
*infers* additional results from the input.

> * We can probably make the output formats short by using URIs 
> referring to full descriptions for the assertors and tests, 
> instead of having the full description and classes each time.

The beauty of RDF... ;) You could even *publish* a description of the 
validator _once_, then point to it from your output reports.
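
Concretely, you could publish something like the following once at a 
stable URI, and each generated report then only carries the reference 
(URIs hypothetical):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# Published once, e.g. at http://validator.example/about:
<http://validator.example/about#validator> a earl:Software ;
    dct:title "Markup Validator" .

# Each report then just points to the published description:
[] a earl:Assertion ;
   earl:assertedBy <http://validator.example/about#validator> ;
   earl:mode earl:automatic .
```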

> * ... I had more notes, but I need to find the deadwood I had 
> written them on.

We would be very interested in hearing about your experience with EARL, 
and whether you spot any issues. Especially when we go for Last Call 
later this month, your review comments would be very important to us. 
I'll ping the groups here once we get there...


Shadi Abou-Zahra     Web Accessibility Specialist for Europe
Chair & Staff Contact for the Evaluation and Repair Tools WG
World Wide Web Consortium (W3C)
Web Accessibility Initiative (WAI)
WAI-TIES Project
Evaluation and Repair Tools WG
2004, Route des Lucioles - 06560, Sophia-Antipolis - France
Voice: +33(0)4 92 38 50 64          Fax: +33(0)4 92 38 78 22

Received on Monday, 5 March 2007 12:08:19 UTC