Re: Number of tests in a test suite

From: James Graham <jgraham@opera.com>
Date: Tue, 16 Aug 2011 13:07:43 +0200
Message-ID: <4E4A4F7F.1070307@opera.com>
To: public-test-infra@w3.org
On 08/16/2011 12:22 PM, Francois Daoust wrote:

> I don't see how one can extract subtests from a script test
> automatically unless we put constraints on the way these tests are
> written, but I'd be more than happy to be wrong. Many tests have
> conditional subtests that only get added and run when e.g. a first
> subtest passes.

Whilst I consider it bad style to have conditional tests, I also believe 
that you can't extract the names and number of tests automatically; you 
either have to do it manually or run the test case and see what you get.

> A constraint that would work: provided no error occurs, all subtests
> must be run when the test runs, no matter whether subtests pass or fail.
> If that constraint is respected, running the test once would make it
> possible to extract information about subtests with a simple script and
> to prepare the required input file to be imported into the harness.

That constraint is good as a statement of intent but you can't 
realistically enforce it in the face of possible browser bugs.
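The run-once extraction idea could be sketched roughly as follows. Everything here is invented for illustration: `run_test_once()` stands in for loading the page in a browser and collecting the (name, status) pairs the harness reports back, and the manifest shape is hypothetical:

```python
import json

def run_test_once():
    # Stand-in for running the test in a browser and collecting the
    # (subtest name, status) pairs reported by the harness.
    return [("subtest-a", "PASS"), ("subtest-b", "FAIL")]

def build_manifest(test_id):
    """Record every subtest name seen in a single run of the test."""
    names = [name for name, _status in run_test_once()]
    return {"test": test_id, "subtests": names}

manifest = build_manifest("dom/example.html")
print(json.dumps(manifest))

# The enforcement problem: if a browser bug prevents a subtest from
# running at all, its name never reaches this manifest, and the
# "all subtests must run" constraint is silently violated.
```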

> I'm not sure that is reasonable though. Some subtests could be triggered
> by event firing, and the event might not fire in some implementations
> for some reason.

Right, exactly. When you have tests like this you need to deal with the 
fact that the set of results you get back might randomly change whenever 
you run the test. That means you either need a fixed list to compare 
against (a huge pain for authors) or you need to work with deltas 
between different test runs.
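The delta approach could look something like this (invented names and data): given the result sets from two runs, report which subtests disappeared, which appeared, and which changed status, rather than comparing against a fixed expectation list:

```python
def delta(old, new):
    """Return (missing, unexpected, changed) between two result dicts
    mapping subtest name -> status."""
    missing = sorted(set(old) - set(new))
    unexpected = sorted(set(new) - set(old))
    changed = sorted(n for n in set(old) & set(new) if old[n] != new[n])
    return missing, unexpected, changed

run1 = {"load": "PASS", "event-fired": "PASS", "after-event": "PASS"}
# Second run: the event never fired, so the dependent subtest vanished
# entirely rather than failing.
run2 = {"load": "PASS", "event-fired": "FAIL"}

missing, unexpected, changed = delta(run1, run2)
print(missing, unexpected, changed)
```

A vanished subtest ("missing") and a failing subtest ("changed") are different signals, which is why a plain pass/fail comparison is not enough when the subtest set itself can vary between runs.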
Received on Tuesday, 16 August 2011 11:08:17 GMT