
Re: Automated Test Runner

From: L. David Baron <dbaron@dbaron.org>
Date: Fri, 18 Feb 2011 13:01:23 -0800
To: Aryeh Gregor <Simetrical+w3c@gmail.com>
Cc: James Graham <jgraham@opera.com>, Kris Krueger <krisk@microsoft.com>, Anne van Kesteren <annevk@opera.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>, "Jonas Sicking (jonas@sicking.cc)" <jonas@sicking.cc>
Message-ID: <20110218210123.GA9502@pickering.dbaron.org>
On Friday 2011-02-18 15:05 -0500, Aryeh Gregor wrote:
> numbers of tests.  But it's not okay if we're going to publish pass
> percentages for different browsers, because then fixing a failure
> might decrease the pass percentage if it opens up new failures, or,
> conversely, causing a new failure might increase the pass percentage.
> IMO, we should publish pass percentages for different browsers for any
> sufficiently complete part of the test suite, to encourage them to
> compete on getting to 100% conformance.  But for that to work, fixing
> failures needs to consistently increase your pass percentage, and that
> might not happen if it can change the number of tests that run.
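
To make the arithmetic in the quoted point concrete, here is a minimal
sketch in Python with made-up numbers (nothing below comes from any
real test run): a harness file that throws during setup reports none
of its subtests, so fixing the throw can lower the overall percentage.

    def pass_percentage(passed, run):
        return 100.0 * passed / run

    # Before the fix: the broken file contributes nothing to the totals.
    before = pass_percentage(passed=900, run=1000)              # 90.0%

    # After the fix: 200 new subtests run, but only 120 of them pass.
    after = pass_percentage(passed=900 + 120, run=1000 + 200)   # 85.0%

    print("before fix: %.1f%%, after fix: %.1f%%" % (before, after))
    # before fix: 90.0%, after fix: 85.0% -- fixing a bug lowered the score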

This is one of several reasons I don't think it makes sense to
publish pass percentages.  They're not a useful metric,
particularly in an environment where different vendors can
contribute tests (including large numbers of tests that provide
little real coverage) in order to skew that metric.

Instead, it would make more sense to publish "does implementation X
pass all tests for feature Y".
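
As a minimal sketch of that per-feature report (the results mapping
below is a hypothetical shape, not any real harness's output): a
feature only flips to PASS once every one of its tests passes, so
contributing more tests can never inflate the score.

    results = {
        "canvas": [True, True, True],
        "video":  [True, False, True],
        "forms":  [True, True],
    }

    for feature, outcomes in results.items():
        verdict = "PASS" if all(outcomes) else "FAIL"
        print("%s: %s" % (feature, verdict))
    # canvas: PASS
    # video: FAIL
    # forms: PASS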

-David

-- 
L. David Baron                                 http://dbaron.org/
Mozilla Corporation                       http://www.mozilla.com/
