Re: Automated Test Runner

On Saturday 2011-02-19 19:42 -0500, Aryeh Gregor wrote:
> On Fri, Feb 18, 2011 at 4:01 PM, L. David Baron <dbaron@dbaron.org> wrote:
> > Instead, it would make more sense to publish "does implementation X
> > pass all tests for feature Y".
> 
> That makes sense to me.  We want some report of how well different
> implementations implement various parts of the standard, and "all
> tests for feature Y" seems like it would be good enough for that
> purpose, and it's much more meaningful than a percentage.  Of course,
> this would only be for things where we have a decent approved test
> suite -- within HTML, it seems that means only canvas right now.
> (IMO, my base64 and reflection tests also qualify, but no one's
> reviewed them yet.)

Why is inclusion in the test suite being gated on a review process?
I thought we agreed during the testing discussion at TPAC in
November that it wouldn't be.

I think a model where tests are included when their author thinks
they're ready, and others can then challenge them, works much better
than one where every test is gated on review.

-David

-- 
L. David Baron                                 http://dbaron.org/
Mozilla Corporation                       http://www.mozilla.com/

Received on Sunday, 20 February 2011 18:04:42 UTC