- From: James Graham <jgraham@opera.com>
- Date: Fri, 18 Mar 2011 15:46:04 +0100
- To: "L. David Baron" <dbaron@dbaron.org>
- CC: Kris Krueger <krisk@microsoft.com>, Aryeh Gregor <Simetrical+w3c@gmail.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>
On 03/18/2011 09:24 AM, James Graham wrote:

> e) Trying to address d) by having continually updated implementation
> reports has a number of bad side effects; it encourages the erroneous
> use of the testsuite as a metric of quality when it is in a largely
> incomplete state. This in turn can increase the pressure on potential
> contributors to submit only tests that cast their favoured browser in
> a good light.

It occurs to me that one way around this would be to publish tables of the number of browsers failing each test, but not list which browsers fail in each case. Such an approach would have a number of advantages:

- It would be easy to identify tests that fail in multiple browsers, which are the most likely to be problematic.
- It would require people examining the tests to rerun them for themselves rather than trusting the submitted results.
- It would not lend itself to use as a (misleading) browser comparison metric.
Received on Friday, 18 March 2011 14:46:47 UTC