
Re: Conformance requirement markup and test metadata

From: Linss, Peter <peter.linss@hp.com>
Date: Sat, 20 Aug 2011 21:26:21 +0100
To: Ms2ger <ms2ger@gmail.com>
CC: "public-css-testsuite@w3.org" <public-css-testsuite@w3.org>
Message-ID: <2065E83B-AB01-4A99-B401-C79B080CF0C0@hp.com>
On Aug 20, 2011, at 2:00 AM, Ms2ger wrote:

> On 08/20/2011 01:01 AM, Alan Stearns wrote:
>> {snip}
> 
> Thanks for your email, Alan.
> 
> I think there are a couple of problems with this approach. In the first 
> place, this is a significant burden for specification editors. It 
> appears to me that editors would either need to annotate their 
> requirements as they are written (and potentially do a lot of 
> unnecessary work, as requirements are added, removed or changed), or 
> need to do this when tests for the specification start being written 
> (which means a lot of not very intellectually stimulating work in a 
> rather short time frame, and even then changes based on implementation 
> and testing experience will need to be made).

I agree that we don't want to add a system that places an undue burden on spec authors, but I'm not sure this approach actually imposes much of one.

Firstly, what's getting marked up in the spec (in my mind, anyway) is merely the areas of the prose that contain testable assertions. There could be additional markup adding spans down to the sentence level, but I believe there's also merit (and less work) in simply annotating the nearest existing containing element with a "test this" flag.

This could be done by a spec editor (and it probably wouldn't hurt editors to be thinking in terms of testable assertions as they write the prose), but it could also be done by the test suite owner for that spec. And it doesn't need to be done until work begins on the test suite. Someone has to read the spec and find all the testable assertions in order to determine the size and scope of the test suite; that work might as well be done once and shared, rather than repeated by each person writing tests on an ad hoc basis, as is done now. (Of course, for larger specs the job can be divvied up.)
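To make this concrete, annotating the nearest containing element might look like the following (the "testable" class name is purely illustrative, not a settled convention; the id reuses what's already in the spec):

```html
<!-- Before: plain spec prose -->
<p id="invalid-flow-idents">The values "none", "inherit", "default"
and "initial" are invalid flow names.</p>

<!-- After: the same element flagged as containing a testable assertion;
     only a class value is added, and the existing id is relied upon -->
<p class="testable" id="invalid-flow-idents">The values "none", "inherit",
"default" and "initial" are invalid flow names.</p>
```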

I don't see how implementation and testing experience would change the set of testable assertions, except where changes to the spec are required as a result of that experience. Sure, the tests will change during development, but that doesn't affect this markup.

Note that the proposal (at least as I understand it) doesn't specify the actual tests to be written for any given assertion; it merely identifies the presence of a testable assertion in the spec (which, by definition, deserves at least one test).

> 
> Second, as can be seen in the example you quoted above, this markup 
> doubles the size of the specification (or possibly somewhat less, 
> depending on the amount of non-normative material in the specification), 
> for data that is essentially only ever needed by computers, and not even 
> the computers of those who want to read the specification. This probably 
> wouldn't be such a problem for your specification, but imagine the 
> effect on a specification such as HTML (which I believe is currently 
> around 5MB).

I really don't see this doubling the size of any spec. Alan's example was a small snippet. In many cases you're probably only going to be adding a value to an existing class attribute and relying on existing ids. Yes, there's more markup if you make it more granular, but there's also no reason the ids need to be verbose. We could also get creative and look for ways to minimize the markup if it's really an issue, but I don't think the cost is that great (and most specs have all sorts of comments and extra markup already). The size of the HTML5 spec is its own issue, and not something that's likely to be repeated by other groups.

> 
> Third, with this approach, only the editors of the specification can 
> update the annotations, while I believe this is one of the places where 
> crowd-sourcing could actually work. Not only would this make it possible 
> to spread out the work somewhat, it would also allow test authors to add 
> annotations the editors missed or didn't consider necessary.

I don't see why random people can't give feedback on the annotation markup just as they would on any other aspect of the spec. A simple alternate stylesheet can highlight the assertions, making them easily visible without reading the markup or requiring special tools. I don't believe this precludes crowd-sourcing the identification of testable assertions; if anything, it probably helps, because it puts that aspect of the spec in front of anyone who wants to see it.
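Such an alternate stylesheet could be as small as this sketch (assuming the illustrative "testable" class from the annotation example; the styling choices are arbitrary):

```css
/* Alternate stylesheet: highlight every element flagged as containing
   a testable assertion. The "testable" class name is an assumption. */
.testable {
  background: #fffbd6;
  outline: 1px dashed #c90;
}
.testable::before {
  content: "testable: ";
  font-size: smaller;
  color: #c90;
}
```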

> 
> So, in short, no, I'd rather not use this approach in my own (WebApps) 
> specifications; that of course doesn't need to stop the CSS WG from 
> adopting it, if *its* editors don't mind.
> 
> An approach that I think would be more efficient would be something like 
> the approach Philip Taylor took for his canvas tests. He has a list of 
> IDs, which map to requirements in the specification, in a YAML file [1]. 

My main concern with having the list of assertions in a separate file is that the binding is fragile and will likely break with every edit to the spec. We want to encourage early generation of tests; having a fragile component will provide an incentive to delay building tests until the spec stabilizes.

I haven't done an in-depth analysis of Philip's script, but it appears to rely on matching the raw text of the spec; I don't see how that is robust in the face of spec edits (unless I'm missing something).
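A toy illustration of the fragility concern (this is not Philip's actual script, just a sketch of what binding by raw text implies):

```python
# Toy illustration: binding an assertion to the spec by raw-text match
# silently breaks under a purely editorial rewording of the sentence.
assertion = ('The values "none", "inherit", "default" and "initial" '
             'are invalid flow names.')

spec_v1 = 'Flow names are case-sensitive. ' + assertion
# An editorial tweak (adding a serial comma) that changes no normative meaning:
spec_v2 = spec_v1.replace('"default" and "initial"', '"default", and "initial"')

print(assertion in spec_v1)  # the binding holds
print(assertion in spec_v2)  # the binding silently breaks
```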

Also, having the markup in the spec ensures that the assertion data is available to everyone and exists in a central location without any additional management overhead. Stand-alone files will tend to go missing, go out of date, or have multiple versions in random places.

Note again: there's no reason to try to bind individual tests to the spec at this point; just identify the areas of the spec that require testing. As tests are written that link back to the spec, existing tools (with small tweaks) can show testing coverage of the spec (something akin to our test harness's spec annotations, or a simple coverage report).
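Such a coverage report could be derived mechanically. A minimal sketch, assuming a hypothetical "testable" class on annotated elements and tests that declare the spec anchor they cover (e.g. via a link back to the spec):

```python
# Sketch: report which annotated assertions in a spec have at least one
# test linking back to them. The "testable" class and the test-to-anchor
# mapping are illustrative assumptions, not an agreed format.
import re

SPEC_HTML = """
<p class="testable" id="flow-names">The values "none", "inherit",
"default" and "initial" are invalid flow names.</p>
<p>Non-normative note, not annotated.</p>
<p class="testable" id="flow-inherit">Flow names are not inherited.</p>
"""

# Anchors that tests claim to cover, e.g. gathered from each test's
# metadata link back to spec.html#anchor.
TEST_LINKS = {"flow-names"}

def coverage(spec_html, covered):
    """Map each annotated assertion id to whether any test covers it."""
    assertions = re.findall(r'class="testable" id="([^"]+)"', spec_html)
    return {aid: (aid in covered) for aid in assertions}

for aid, tested in sorted(coverage(SPEC_HTML, TEST_LINKS).items()):
    print(f"{aid}: {'covered' if tested else 'NOT covered'}")
```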


> The example above could then become
> 
>   - id: invalid-flow-idents
>     text: 'The values "none", "inherit", "default" and "initial" are 
> invalid flow names.'
> 
> He then uses a script [2] to insert spans into a copy of the 
> specification, and also inserts links back to the tests, [3] which makes 
> it easy to figure out which requirements are tested sufficiently, even 
> without special tools.
> 
> The benefits of this approach are, in my opinion, that (a) this can be 
> (and indeed was) done without requiring cooperation from the 
> specification editors, (b) this doesn't bloat the actual specification 
> with markup that isn't helpful for the majority of readers, (c) this 
> implementation is rather robust in the face of changes in the markup of 
> the specification, while not sitting in the way of specification edits, 
> and (d) this, as I mentioned, provides a list of tests per assertion.
> 
> HTH
> Ms2ger
> 
> [1] 
> http://dvcs.w3.org/hg/html/file/a85fcfbbf6b9/tests/submission/PhilipTaylor/tools/canvas/spec.yaml
> [2] 
> http://dvcs.w3.org/hg/html/file/a85fcfbbf6b9/tests/submission/PhilipTaylor/tools/canvas/gentest.py#l627
> [3] 
> http://dvcs.w3.org/hg/html/raw-file/a85fcfbbf6b9/tests/submission/PhilipTaylor/annotated-spec/canvas.html
> 
Received on Saturday, 20 August 2011 20:26:03 GMT
