Re: Automated Test Runner

On 02/18/2011 12:23 AM, L. David Baron wrote:
> On Tuesday 2010-11-16 11:21 +0100, James Graham wrote:
>> {tests:{"001.html":{type:"javascript",
>>                      flags:["SVG"],
>>                      expected_results:10,
>>                      top_level_browsing_context:false
>>                     }
>>         },
>>   subdirs: ["more_tests"]
>> }
>>

[...]

> I don't see why this is needed, and it's extra work to maintain,
> especially if people are contributing tests written elsewhere.

A manifest of some form is needed for any automated test runner to 
know what tests exist. Some of the information above may be strictly 
unnecessary; for example, maybe we can live without the "flags" 
parameter.
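As a concrete illustration, here is a minimal sketch of how a runner 
could discover tests from such manifests. It assumes the manifest is 
stored as JSON in a file named MANIFEST.json; the file name and the 
key names are taken from the example above and are assumptions, not a 
settled format.

import json
import os

def iter_tests(manifest_dir):
    """Yield (test_path, metadata) pairs from the manifest in
    manifest_dir, recursing into any listed subdirectories."""
    with open(os.path.join(manifest_dir, "MANIFEST.json")) as f:
        manifest = json.load(f)
    for test_id, meta in manifest.get("tests", {}).items():
        yield os.path.join(manifest_dir, test_id), meta
    for subdir in manifest.get("subdirs", []):
        # Each subdirectory is expected to carry its own manifest.
        for item in iter_tests(os.path.join(manifest_dir, subdir)):
            yield item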

I note that the above assumes tests are identified by filenames. In 
general this need not be true: one could write a test that depends on 
query parameters (I have done this) or on fragment identifiers (I 
have never done this).
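Purely as an illustration (the URL-style keys below are my 
assumption, not part of the sketch quoted above), the manifest could 
key tests by relative URL, so that variants of one file distinguished 
by query string or fragment identifier each get their own entry and 
their own expectations:

# Illustrative only: one file appearing under several keys, each
# distinguished by query string or fragment identifier.
manifest = {
    "tests": {
        "001.html":             {"type": "javascript", "expected_results": 10},
        "002.html?mode=strict": {"type": "javascript", "expected_results": 4},
        "002.html?mode=quirks": {"type": "javascript", "expected_results": 4},
        "003.html#run":         {"type": "javascript", "expected_results": 1},
    },
    "subdirs": [],
}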

> The number of tests isn't important (and is not a good measure of
> testing coverage); what matters is whether any of them failed.

The number of tests is important. If you expect a test file to return 
100 results and you only get 50, then something went wrong, even if 
all 50 results were reported as pass.
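A minimal sketch of that check, assuming the harness hands back a 
list of (name, status) pairs for each test file it runs (that 
interface is an assumption, not something the manifest defines):

def check_results(meta, results):
    """Flag a run that looks wrong: any non-passing result, or a
    result count differing from the manifest's expected_results."""
    problems = [name for name, status in results if status != "PASS"]
    if len(results) != meta["expected_results"]:
        problems.append("expected %d results, got %d"
                        % (meta["expected_results"], len(results)))
    return problems

A file that crashes halfway through and silently reports only the 
first 50 passes then still shows up as a problem.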

I agree that forcing people to add this metadata manually is not the 
nicest approach. But I can't think of a better one either.
