W3C home > Mailing lists > Public > public-html-testsuite@w3.org > February 2011

Re: Automated Test Runner

From: L. David Baron <dbaron@dbaron.org>
Date: Fri, 18 Feb 2011 09:11:42 -0800
To: James Graham <jgraham@opera.com>
Cc: Kris Krueger <krisk@microsoft.com>, Anne van Kesteren <annevk@opera.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>, "Jonas Sicking (jonas@sicking.cc)" <jonas@sicking.cc>
Message-ID: <20110218171142.GA3804@pickering.dbaron.org>
On Friday 2011-02-18 11:32 +0100, James Graham wrote:
> On 02/18/2011 12:23 AM, L. David Baron wrote:
> >On Tuesday 2010-11-16 11:21 +0100, James Graham wrote:
> >>{tests:{"001.html":{type:"javascript",
> >>                     flags:["SVG"],
> >>                     expected_results:10,
> >>                     top_level_browsing_context:false
> >>                    }
> >>        },
> >>  subdirs: ["more_tests"]
> >>}
> >>
> 
> [...]
> 
> >I don't see why this is needed, and it's extra work to maintain,
> >especially if people are contributing tests written elsewhere.
> 
> A manifest of some form is needed in order for any automated test
> runner to know what tests there are. Some of the above information
> may be strictly unnecessary; e.g., maybe we can live without the
> "flags" parameter.
> 
> I note that the above assumes tests are identified by filenames. In
> general this need not be true; one could write a test that depends
> on query parameters (I have done this) or fragment IDs (I have never
> done this).

Yes, a list of the filenames is needed.  (Mozilla has tests with
both query parameters and fragment IDs.)
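(A manifest can accommodate both cases by keying tests on URL strings relative to the directory rather than on bare filenames. The following is a hypothetical variant of James's sketch above; the entries and counts are purely illustrative:)

```javascript
// Hypothetical manifest variant: keys are relative URLs, so tests
// distinguished by query parameters or fragment IDs get their own entries.
{tests: {"001.html":        {type: "javascript", expected_results: 10},
         "001.html?mode=2": {type: "javascript", expected_results: 10},
         "002.html#frag":   {type: "javascript", expected_results: 3}},
 subdirs: ["more_tests"]}
```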

> >The number of tests isn't important (and is not a good measure of
> >testing coverage); what matters is whether any of them failed.
> 
> The number of tests is important. If you expect a test file to
> return 100 results and you only get 50 then something went wrong,
> even if all 50 results were reported as pass.
> 
> I agree that forcing people to add this metadata manually is not the
> nicest approach. But I can't think of a better one either.

Two things solve the problem of a test unexpectedly terminating
without actually finishing:

 (1) the harness goes on to the next test when the current test
 tells the harness that it is finished, so if the test never says
 it's finished, the run stops.  (And this is needed anyway to run
 anywhere close to efficiently; allotting tests a fixed amount of
 time is a huge waste of time.)

 (2) an onerror handler catches uncaught exceptions or script parse
 errors, counts them as a failure, and goes on.
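(A minimal sketch of those two mechanisms, not any actual harness; the `done` callback and runner API are made up for illustration. In a browser, (2) would hook `window.onerror`; here a try/catch stands in, which only covers synchronous throws:)

```javascript
// Sketch of a sequential runner: (1) advance only when the current test
// signals completion via its done() callback; (2) count an uncaught
// exception as a failure for that test and move on instead of hanging.
function runTests(tests, report) {
  let i = 0;
  function next() {
    if (i >= tests.length) return;
    const test = tests[i++];
    try {
      // (1) The test must call done() to signal it is finished;
      // the runner advances only in response to that call.
      test.run(function done(result) {
        report(test.name, result);
        next();
      });
    } catch (e) {
      // (2) An uncaught (synchronous) exception counts as a failure
      // for this test, and the run continues with the next one.
      report(test.name, "FAIL: " + e.message);
      next();
    }
  }
  next();
}
```

A real browser harness would need `window.onerror` rather than try/catch to catch exceptions thrown from asynchronous callbacks and script parse errors.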

-David

-- 
L. David Baron                                 http://dbaron.org/
Mozilla Corporation                       http://www.mozilla.com/
Received on Friday, 18 February 2011 17:12:33 GMT
