
Re: Conformance requirement markup and test metadata

From: Ms2ger <ms2ger@gmail.com>
Date: Sat, 20 Aug 2011 11:00:23 +0200
Message-ID: <4E4F77A7.6080708@gmail.com>
To: public-css-testsuite@w3.org
On 08/20/2011 01:01 AM, Alan Stearns wrote:
> (Cross-posted to www-style. Please reply to public-css-testsuite)
>
> Vincent, Peter and I have been talking about adding more information to the
> Regions spec for testing. Right now, test cases have metadata that points
> back to the relevant section of the spec. We're planning on adding markup to
> the spec for each individual conformance requirement and having the tests
> also refer to the specific requirement they test.
>
> As an example, there is a sentence in the Regions spec that states:
>
>    The values "none", "inherit", "default" and "initial" are invalid flow
> names.
>
> This will be marked up as a conformance requirement with class="conform" and
> a unique id:
>
>    <span class="conform" id="invalid-flow-idents">The values "none",
> "inherit", "default" and "initial" are invalid flow names.</span>
>
> Then one or more test cases that address this conformance requirement will
> contain additional metadata with that id:
>
>    <meta name="for" content="invalid-flow-idents"/>
>
> (the "for" name and requirement ids come from
> http://www.w3.org/TR/test-methodology/, and class="conform" comes from usage
> in the WOFF spec)
>
> This will give us a finer-grained notion of what each test is for, and allow
> us to ensure that each conformance requirement in the spec has at least one
> test associated with it (via a tool like Shepherd instead of manual
> inspection).
>
> Please let us know what you think of this proposal, if there are
> modifications you'd recommend, and whether you would be interested in
> following this practice in your own spec(s).
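
(For concreteness, a test case following the quoted proposal might carry a head like the sketch below. Only the `<meta name="for">` line comes from the proposal; the title, author, help link and assert text are invented for illustration, following the usual CSS test suite metadata conventions.)

```html
<!DOCTYPE html>
<html>
 <head>
  <title>CSS Regions Test: invalid flow names (illustrative)</title>
  <!-- author and help link are placeholders, not real test metadata -->
  <link rel="author" title="A. Tester" href="mailto:tester@example.org"/>
  <link rel="help" href="http://www.w3.org/TR/css3-regions/"/>
  <meta name="assert" content='Using "none" as a flow name is invalid.'/>
  <!-- the proposed per-requirement pointer -->
  <meta name="for" content="invalid-flow-idents"/>
 </head>
 <body>
  <!-- test content would go here -->
 </body>
</html>
```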

Thanks for your email, Alan.

I think there are a couple of problems with this approach. First, it 
places a significant burden on specification editors. It appears to me 
that editors would either need to annotate requirements as they are 
written (and potentially do a lot of unnecessary work, as requirements 
are added, removed or changed), or do so when tests for the 
specification start being written (which means a lot of not very 
intellectually stimulating work in a rather short time frame, and even 
then changes based on implementation and testing experience will still 
need to be made).

Second, as can be seen in the example you quoted above, this markup 
doubles the size of the specification (or possibly somewhat less, 
depending on the amount of non-normative material in the specification), 
for data that is essentially only ever needed by computers, and not even 
the computers of those who want to read the specification. This probably 
wouldn't be such a problem for your specification, but imagine the 
effect on a specification such as HTML (which I believe is currently 
around 5MB).

Third, with this approach, only the editors of the specification can 
update the annotations, while I believe this is one of the places where 
crowd-sourcing could actually work. Not only would this make it possible 
to spread out the work somewhat, it would also allow test authors to add 
annotations the editors missed or didn't consider necessary.

So, in short, no, I'd rather not use this approach in my own (WebApps) 
specifications; that of course doesn't need to stop the CSS WG from 
adopting it, if *its* editors don't mind.

An approach that I think would be more efficient is something like the 
one Philip Taylor took for his canvas tests. He keeps a list of IDs, 
which map to requirements in the specification, in a YAML file [1]. 
The example above could then become

   - id: invalid-flow-idents
     text: 'The values "none", "inherit", "default" and "initial" are
       invalid flow names.'

He then uses a script [2] to insert spans into a copy of the 
specification, and also inserts links back to the tests [3], which 
makes it easy to figure out which requirements are tested sufficiently, 
even without special tools.
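The core of the span-insertion step can be sketched in a few lines of 
Python. This is my illustration, not Philip's actual code [2]; it 
assumes the YAML has already been parsed (e.g. with yaml.safe_load) 
into a list of id/text entries, and that each requirement's text 
occurs verbatim in the spec markup.

```python
def annotate(spec_html, requirements):
    """Wrap the first verbatim occurrence of each requirement's text
    in a span carrying the requirement's id."""
    for req in requirements:
        text = req["text"]
        span = '<span class="conform" id="%s">%s</span>' % (req["id"], text)
        # Replace only the first occurrence, so duplicated phrases
        # elsewhere in the spec are left alone.
        spec_html = spec_html.replace(text, span, 1)
    return spec_html

# The parsed equivalent of the YAML entry above.
requirements = [
    {"id": "invalid-flow-idents",
     "text": 'The values "none", "inherit", "default" and "initial" '
             'are invalid flow names.'},
]

spec = ('<p>The values "none", "inherit", "default" and "initial" '
        'are invalid flow names.</p>')
print(annotate(spec, requirements))
```

A real implementation also has to cope with requirement text that is 
interrupted by markup, which is presumably where most of the complexity 
in the actual script lives.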

The benefits of this approach are, in my opinion, that (a) this can be 
(and indeed was) done without requiring cooperation from the 
specification editors, (b) this doesn't bloat the actual specification 
with markup that isn't helpful for the majority of readers, (c) this 
implementation is rather robust in the face of changes to the 
specification's markup, while not getting in the way of specification 
edits, 
and (d) this, as I mentioned, provides a list of tests per assertion.

HTH
Ms2ger

[1] 
http://dvcs.w3.org/hg/html/file/a85fcfbbf6b9/tests/submission/PhilipTaylor/tools/canvas/spec.yaml
[2] 
http://dvcs.w3.org/hg/html/file/a85fcfbbf6b9/tests/submission/PhilipTaylor/tools/canvas/gentest.py#l627
[3] 
http://dvcs.w3.org/hg/html/raw-file/a85fcfbbf6b9/tests/submission/PhilipTaylor/annotated-spec/canvas.html
Received on Saturday, 20 August 2011 09:01:05 GMT
