
Re: Automated Test Runner

From: James Graham <jgraham@opera.com>
Date: Fri, 18 Feb 2011 11:32:47 +0100
Message-ID: <4D5E4ACF.20805@opera.com>
To: "L. David Baron" <dbaron@dbaron.org>
CC: Kris Krueger <krisk@microsoft.com>, Anne van Kesteren <annevk@opera.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>, "Jonas Sicking (jonas@sicking.cc)" <jonas@sicking.cc>
On 02/18/2011 12:23 AM, L. David Baron wrote:
> On Tuesday 2010-11-16 11:21 +0100, James Graham wrote:
>> {tests:{"001.html":{type:"javascript",
>>                      flags:["SVG"],
>>                      expected_results:10,
>>                      top_level_browsing_context:false
>>                     }
>>         },
>>   subdirs: ["more_tests"]
>> }
>>

[...]

> I don't see why this is needed, and it's extra work to maintain,
> especially if people are contributing tests written elsewhere.

A manifest of some form is needed in order for any automated test runner 
to know what tests there are. Some of the above information may be 
strictly unnecessary; e.g. maybe we can live without the "flags" parameter.
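
Just to make that concrete, here is a rough sketch of how a runner might 
consume such manifests (not a real implementation; it assumes a Node-style 
environment and one manifest.json per directory in roughly the shape quoted 
above, with the keys quoted so that it parses as JSON):

  // Walk a directory tree, collecting [path, metadata] pairs for every
  // test listed in the per-directory manifests.
  var fs = require("fs");
  var path = require("path");

  function collectTests(dir) {
    var manifest = JSON.parse(
        fs.readFileSync(path.join(dir, "manifest.json"), "utf8"));
    var tests = [];
    Object.keys(manifest.tests || {}).forEach(function(name) {
      tests.push([path.join(dir, name), manifest.tests[name]]);
    });
    (manifest.subdirs || []).forEach(function(sub) {
      tests = tests.concat(collectTests(path.join(dir, sub)));
    });
    return tests;
  }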

I note that the above assumes tests are identified by filenames. In 
general this need not be true: one could write a test that depends on 
query parameters (I have done this) or fragment ids (I have never done 
this).
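
So whatever format we settle on, the manifest keys probably want to be 
test URLs relative to the manifest rather than bare filenames, e.g. 
(the query strings here are invented, just to show the shape):

  {tests:{"001.html?variant=a":{type:"javascript",
                                expected_results:5},
          "001.html?variant=b":{type:"javascript",
                                expected_results:5}
         }
  }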

> The number of tests isn't important (and is not a good measure of
> testing coverage); what matters is whether any of them failed.

The number of tests is important. If you expect a test file to return 
100 results and you only get 50, then something went wrong, even if all 
50 results were reported as pass.
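
On the runner side that check is trivial; something along these lines 
(again only a sketch, and the shape of the per-test results is whatever 
the harness actually reports):

  // Flag a test file whose result count does not match the manifest,
  // even if every result it did report was a pass.
  function checkResultCount(testPath, meta, results) {
    if (typeof meta.expected_results === "number" &&
        results.length !== meta.expected_results) {
      return "ERROR: " + testPath + " reported " + results.length +
             " results, expected " + meta.expected_results;
    }
    return null;  // count matches, or no expectation recorded
  }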

I agree that forcing people to add this metadata manually is not the 
nicest approach. But I can't think of a better one either.
