
Re: Automated Test Runner

From: L. David Baron <dbaron@dbaron.org>
Date: Sun, 20 Feb 2011 10:03:52 -0800
To: Aryeh Gregor <Simetrical+w3c@gmail.com>
Cc: James Graham <jgraham@opera.com>, Kris Krueger <krisk@microsoft.com>, Anne van Kesteren <annevk@opera.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>, "Jonas Sicking (jonas@sicking.cc)" <jonas@sicking.cc>
Message-ID: <20110220180352.GA15384@pickering.dbaron.org>

On Saturday 2011-02-19 19:42 -0500, Aryeh Gregor wrote:
> On Fri, Feb 18, 2011 at 4:01 PM, L. David Baron <dbaron@dbaron.org> wrote:
> > Instead, it would make more sense to publish "does implementation X
> > pass all tests for feature Y".
> That makes sense to me.  We want some report of how well different
> implementations implement various parts of the standard, but "all
> tests for feature Y" seems like it would be good enough for that
> purpose, and it's much more meaningful than a percentage.  Of course,
> this would only be for things where we have a decent approved test
> suite -- within HTML, it seems that means only canvas right now.
> (IMO, my base64 and reflection tests also qualify, but no one's
> reviewed them yet.)

Why is inclusion in the test suite being gated on a review process?
I thought we agreed during the testing discussion at TPAC in
November that it wouldn't be.

I think a model where tests are included when their author thinks
they're ready, and then others can challenge them, works much better
than one gated on reviews for all tests.
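(As an aside, to make the alternative concrete: below is a minimal
sketch, in Python, of the per-feature report Aryeh describes above.
The data, names, and output format are hypothetical, not taken from
any actual harness.)

  from collections import defaultdict

  # results: (implementation, feature, test_id) -> did the test pass?
  # Hypothetical data; a real harness would collect this from test runs.
  results = {
      ("ImplA", "canvas", "2d.fillStyle.basic"): True,
      ("ImplA", "canvas", "2d.strokeStyle.basic"): False,
      ("ImplB", "canvas", "2d.fillStyle.basic"): True,
      ("ImplB", "canvas", "2d.strokeStyle.basic"): True,
  }

  # Group outcomes per (implementation, feature) pair.
  per_feature = defaultdict(list)
  for (impl, feature, _test), passed in results.items():
      per_feature[(impl, feature)].append(passed)

  # Report "passes all tests for feature Y", not an overall percentage.
  for (impl, feature), outcomes in sorted(per_feature.items()):
      verdict = "passes all tests" if all(outcomes) else "fails some tests"
      print("%s / %s: %s (%d of %d)"
            % (impl, feature, verdict, sum(outcomes), len(outcomes)))

A line like "ImplA / canvas: fails some tests" points at a specific
feature to fix, in a way a bare aggregate percentage does not.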


L. David Baron                                 http://dbaron.org/
Mozilla Corporation                       http://www.mozilla.com/
Received on Sunday, 20 February 2011 18:04:42 UTC
