Re: Test Case Template/Meta Data

Kris Krueger wrote:
> I'd like to get agreement on a test case template for the HTML testing task force.
> To start with I'd like to propose that the template allow for including other key information about the test using a meta tag.
> 
> For example, a test case that depends upon the Ahem font would contain:
> 
>     <meta name="flags" content="ahem" />
> 
> Other key information that may be necessary:
> 
>     Author       -> Who created the test
>     Status/Phase -> Approved/Draft/Submitted
>     Reviewer     -> Who reviewed the test case
>     Help         -> URI back to the specification
>     Assert       -> Describes what specifically the test case tests
> 
> Thoughts?  Any other information that we would like to have?
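
For concreteness, I'm assuming that proposal would translate into 
markup along these lines - every name and value below is illustrative 
rather than anything agreed:

    <meta name="author" content="(who created the test)" />
    <meta name="status" content="draft" />
    <meta name="reviewer" content="(who reviewed it)" />
    <meta name="help" content="(URI of the relevant spec section)" />
    <meta name="assert" content="(what specifically the test checks)" />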

A few random thoughts:

I'm not quite certain how the concept of a template will work with the 
full range of tests that are needed.

E.g. for parser tests, the individual test case inputs clearly can't 
contain embedded metadata. Currently we have a load of tests like 
<http://code.google.com/p/html5lib/source/browse/testdata/tree-construction/tests1.dat> 
(format described in <http://wiki.whatwg.org/wiki/Parser_tests>), which 
can be run by standalone parsers or run in a web browser via 
<http://gsnedders.html5.org/html5lib-tests/runner.html>.
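
For anyone who hasn't seen those, each case in the .dat files looks 
roughly like this - the input markup, the expected parse errors 
(paraphrased here; the exact error text varies), and the expected DOM 
serialised as an indented tree:

    #data
    <p>One<p>Two
    #errors
    Line 1: unexpected <p> start tag implies an end tag for the open <p>
    #document
    | <html>
    |   <head>
    |   <body>
    |     <p>
    |       "One"
    |     <p>
    |       "Two"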

With that kind of test, what would the metadata be, where would it be 
stored, in what format would the test itself be stored, and how should 
the test-runner script relate to all this?
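
One could imagine bolting the metadata on externally - say, a manifest 
file sitting next to each .dat file, along the lines of this sketch 
(every field name here is hypothetical):

    tests1.dat:
        author: (who created the tests)
        status: submitted
        help: (URI of the parsing section of the spec)

But nothing in the current tools expects anything like that, and it's 
not obvious the extra bookkeeping would pay for itself.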

Validator tests would have similar problems, since making the test case 
conform to a template would often disrupt the thing that the test case 
was trying to test.

One significant issue is that there are currently over five hundred 
tree-construction tests, a few thousand tokeniser tests, over six 
hundred canvas tests, and lots of ad hoc collections of tests, and lots 
of other features will each need hundreds more. Whatever process is 
used for tests, I think scalability is crucial if the effort is going 
to be successful - it needs to be as quick and easy as possible to 
write a test, review a test, run a test against an implementation, and 
fix a test when the spec changes, without letting in an undue number of 
incorrect tests (since they distract or mislead implementors who are 
trying to find and fix bugs).

So I think the amount of process and metadata involved with each test 
case should be kept to a minimum - the question should be how little 
information we can get away with, rather than what extra we would like 
to have.
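
For a test that runs in a browser, that might mean little more than a 
descriptive <title> and a pointer to the relevant spec section - 
something like this sketch (the href is a placeholder):

    <!DOCTYPE html>
    <title>drawImage() with a video element as the source</title>
    <link rel="help" href="(URI of the relevant spec section)">
    <script>
      /* the actual test */
    </script>

Everything else - author, review status, and so on - could arguably 
live in version control rather than be duplicated in every file.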

Given the size and diversity of HTML5, rather than trying to develop a 
single method for testing the whole of it, I wonder whether it would be 
better to split the effort into a number of components (parsing, 
validation, canvas, media elements, forms, etc.) and approach each one 
as a largely independent testing effort.

Each could have its own process and system for writing and running 
tests. Parsing tests could be similar to the current html5lib tests, 
validator tests could be similar to 
<http://wiki.whatwg.org/wiki/Validator.nu_Full-Stack_Tests>, canvas 
tests could be similar to the current canvas tests, etc. They should 
share tools and concepts (with each other and with efforts like 
<http://omocha.w3.org/wiki/>) to avoid duplicated effort, but each 
should use whatever process is most appropriate to its particular 
characteristics.

(Overlap between components is fine - the canvas tests can test what 
happens when you draw a video onto a canvas, etc. Components would be 
defined more by their testing methodology than by spec section.)

That would also provide a relatively straightforward way to begin work 
on the W3C test suite, by importing the existing scattered test suites 
for HTML5 features, and then using the group's resources to focus effort 
on cleaning them up, extending them, and reviewing them, without 
having to start from scratch or significantly rewrite them to conform to 
a common testing process.

There's been a fair amount of testing done already, so I think 
organising that work and focusing on improving it would be a good way to 
get started with concrete results.

-- 
Philip Taylor
pjt47@cam.ac.uk
