- From: James Graham <jgraham@opera.com>
- Date: Thu, 19 Nov 2009 22:49:46 +0000
- To: Kris Krueger <krisk@microsoft.com>
- Cc: "'public-html-testsuite@w3.org'" <public-html-testsuite@w3.org>
Quoting Kris Krueger <krisk@microsoft.com>:

> Would you agree that one test or set of tests should somehow contain
> this meta data?
>
> The meta data doesn't have to be specifically embedded into every
> individual specific test. For example it could be stored in another
> file (xml?) and still provide information about what this test is
> testing, the test status, etc.

I tend to agree with Philip about the general approach we should take.

With regard to metadata, each additional piece of metadata that has to
accompany a test is a disincentive to write that test at all, because it
adds to the effort required to write the test and keep it up to date.
Therefore I think we should be looking to justify each piece of required
metadata as essential, or at least as having a good opportunity cost
(e.g. by making the testsuite more maintainable). From the list you
mentioned:

> Author -> Who created the test

Can be handled by the VCS.

> Status/Phase -> Approved/Draft/Submitted

Can be stored out of band if we actually do this. (This approach
requires that tests have a unique associated identifier; a rough sketch
of such a file is appended at the end of this message.)

> Reviewer -> Who reviewed the test case

Can be stored out of band if we do this.

> Help -> URI back to the specification
> Assert -> Describes what specifically the test case tests

These seem quite similar. Tests that test a particular conformance
criterion could have links to the corresponding fragment id in the spec.
For areas like parsing, tests will inevitably touch on multiple
conformance criteria. With the html5lib tests we haven't really found a
good way to associate the tests with parts of the spec such that it is
obvious which tests need to change when a given part of the spec
changes. All the obvious ways to do this (e.g. listing all the tokenizer
and tree-construction phases that a given input should pass through) end
up being so much effort that no one would bother to write any tests at
all if this were required. Instead we have just dealt with changes to
the spec in an ad-hoc manner. This doesn't seem like a bad strategy,
since the spec should become more stable with time, not less.
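
Purely for illustration, and not a format anyone has agreed on: the
out-of-band file mentioned above could be produced by something as
simple as the sketch below. The file name, element names, example test
id and spec URL are all invented for the example.

# Illustrative sketch only: one way an out-of-band metadata file of the
# kind Kris describes might be generated, keyed on a per-test identifier.
# The file name, element names, example test id and spec URL are made up.
import xml.etree.ElementTree as ET

def write_metadata(entries, path="testinfo.xml"):
    root = ET.Element("tests")
    for test_id, info in entries.items():
        test = ET.SubElement(root, "test", id=test_id)
        ET.SubElement(test, "status").text = info.get("status", "submitted")
        ET.SubElement(test, "reviewer").text = info.get("reviewer", "")
        ET.SubElement(test, "help").text = info.get("help", "")
    ET.ElementTree(root).write(path, encoding="utf-8")

write_metadata({
    "parsing/doctype-001": {
        "status": "submitted",
        "help": "http://www.w3.org/TR/html5/syntax.html#the-doctype",
    },
})

The point is only that the per-test cost stays low: the test file itself
carries nothing extra, and the status/reviewer/help information lives in
one place that can be updated without touching the tests.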
Received on Thursday, 19 November 2009 22:50:32 UTC