Re: Test review procedure

On 03/18/2011 09:24 AM, James Graham wrote:

> e) Trying to address d) by having continually updated implementation
> reports has a number of bad side effects; it encourages the erroneous use
> of the testsuite as a metric of quality when it is in a largely incomplete
> state. This in turn can increase the pressure on potential contributors
> to submit only tests that cast their favoured browser in a good light.

It occurs to me that one way around this would be to make tables of the 
number of browsers failing each test, but not list the browsers that 
fail in each case. Such an approach would have a number of advantages:

It would be easy to identify tests that fail in multiple browsers, 
which are the most likely to be problematic.

It would require people examining the tests to rerun them for themselves 
rather than trusting the submitted results.

It would not lend itself to use as a (misleading) browser comparison metric.
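The aggregation described above could be sketched roughly as follows. This is only an illustration of the counts-without-names idea; the data shape, browser names, and function name are all invented for the example, not part of any actual tooling.

```python
# Hypothetical sketch: count how many browsers fail each test,
# publishing only the counts, never which browsers failed.
from collections import Counter

# Assumed input shape: browser name -> set of tests that browser fails.
# (Invented example data.)
results = {
    "browser-a": {"test-001", "test-003"},
    "browser-b": {"test-003"},
    "browser-c": {"test-002", "test-003"},
}

def failure_counts(results):
    """Aggregate per-test failure counts, discarding browser identities."""
    counts = Counter()
    for failed_tests in results.values():
        counts.update(failed_tests)
    return counts

counts = failure_counts(results)
# Tests failing in multiple browsers sort to the top for review.
for test, n in counts.most_common():
    print(f"{test}: {n} browser(s) failing")
```

Because the published table carries only counts, it cannot be read back as a per-browser scoreboard, while tests with high counts still stand out as candidates for review.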

Received on Friday, 18 March 2011 14:46:47 UTC