Re: Test Case Template/Meta Data

Kris Krueger wrote:
> I agree that having meta data in each test case would be too much to handle if the
> suite ends up with thousands of test cases.  

If we don't end up with thousands of test cases we don't have enough :)

> Though I do think we need to track which parts of the spec have test cases 
> and which parts do not.  Eventually we want to have tests for the whole 
> specification, so that the spec can get to rec.  Right?

Indeed. But for complex parts of the spec (like say, parsing) it is a 
non-trivial amount of effort to enumerate all the parts of the 
specification that a particular test depends on. Saying 
that something tests "9.2.5.10" isn't really useful; that section of the 
specification has a huge number of possible behaviours depending on the 
token being processed and the prior state of the tree. Understanding 
whether you have complete coverage or not relies on knowing that, for 
example:

<!doctype html><table>x</table>

has quite different behaviour to:

<!doctype html><table> </table>

That difference can only be learned by reading the "Anything else" 
clause in 9.2.5.13. On the other hand, getting the right resulting tree 
depends on 
getting a huge number of other sections of the parsing spec right and, 
in particular, one might naively identify the first test as a test of 
9.2.5.3 alone. So it is quite unclear to me how one would add the right 
metadata to determine which sections a test covered without adding a 
full list of states that a test was expected to pass through and which 
actions it was expected to take in each.
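To make the distinction concrete, here is a minimal sketch (not a real 
parser; the function name and return strings are purely illustrative) of 
how the "in table" insertion mode treats the character tokens in the two 
examples above: whitespace is inserted into the table, while any other 
character falls into the "Anything else" clause and gets foster-parented 
before the table:

```python
def in_table_character_token(ch):
    """Classify a character token received while 'in table'.

    Sketch only: whitespace characters are inserted into the table
    itself, while anything else hits the "Anything else" clause and is
    foster-parented *before* the table element.
    """
    if ch in "\t\n\f\r ":
        return "insert into table"       # the whitespace branch
    return "foster-parent before table"  # the "Anything else" branch

# <!doctype html><table>x</table>  -> 'x' ends up before the table
# <!doctype html><table> </table>  -> the space stays inside the table
```

So the tree you expect from the first example depends on a clause that a 
naive section-number annotation would never mention.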

Perhaps for the case of parsing we need a more automated approach to 
determining which tests cover which aspects of the specification?
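One hedged sketch of what "automated" could mean here: instrument the 
implementation so that each spec clause records itself when executed, 
then run every test and collect the set of clauses it actually 
exercised. Everything below is hypothetical (the clause IDs, the 
`parse_fragment` stand-in, and the tests themselves); the point is only 
the shape of the approach:

```python
# Hypothetical instrumentation: each clause handler registers the
# section it implements, so coverage is measured, not hand-annotated.
covered = set()

def clause(section_id):
    """Decorator marking a function as implementing one spec clause."""
    def wrap(fn):
        def run(*args, **kwargs):
            covered.add(section_id)
            return fn(*args, **kwargs)
        return run
    return wrap

@clause("9.2.5.13 whitespace")
def handle_table_whitespace(ch): ...

@clause("9.2.5.13 anything-else")
def handle_table_other(ch): ...

def parse_fragment(s):
    # Stand-in for the real token loop of an instrumented parser.
    for ch in s:
        (handle_table_whitespace if ch.isspace() else handle_table_other)(ch)

coverage = {}
for name, case in {"ws": " ", "text": "x"}.items():
    covered.clear()
    parse_fragment(case)
    coverage[name] = set(covered)
# coverage now maps each test to the clauses it actually exercised,
# rather than to whatever section number someone guessed at.
```

Aggregating those sets across the whole suite, and diffing against the 
full list of clauses, would answer the "which parts of the spec lack 
tests" question without any per-test metadata at all.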

Received on Friday, 20 November 2009 10:19:50 UTC