- From: Øyvind Stenhaug <oyvinds@opera.com>
- Date: Fri, 23 Sep 2011 14:07:24 +0200
- To: fantasai <fantasai.lists@inkedblade.net>
- Cc: public-css-testsuite@w3.org
On Fri, 23 Sep 2011 02:40:56 +0200, fantasai <fantasai.lists@inkedblade.net> wrote:

> On 09/22/2011 04:40 PM, Linss, Peter wrote:
>> On Sep 22, 2011, at 1:51 PM, fantasai wrote:

(...)

>>> Needs Work - Incorrect /* The test is wrong and should not be
>>> passed, or doesn't test what's claimed. */
>>> Needs Work - Metadata /* The test metadata needs correction or
>>> improvement. */
>>> Needs Work - Usability /* The test is confusing or hard to judge. */
>>> Needs Work - Precision /* The test is imprecise and may give false
>>> positives. */
>>> Needs Work - Format /* Syntax errors, format violations, etc. */

(...)

>> While the harness and Shepherd don't talk to each other (yet), the
>> harness does have a notion of tests reported as invalid. They're
>> still presented as part of the suite and listed in results, but they
>> get de-prioritized in testing order and counted separately in the
>> reports. I would think a test that needs work for any of the reasons
>> listed above should fall into that category, as the results shouldn't
>> be trusted (except for really minor issues like typos in the
>> metadata).
>
> I disagree; if the test's metadata is wrong, or it has a validation
> issue that doesn't affect its results, or it's just awkward to use,
> that's no reason to distrust the pass/fail results that are recorded.

If a test is "confusing or hard to judge", its results may have been mislabeled. As I recall, that was the case for most such tests that I reported (that would be how I noticed the issues - by examining test failures).

-- 
Øyvind Stenhaug
Core Norway, Opera Software ASA
Received on Friday, 23 September 2011 12:08:10 UTC