- From: Alex Rousskov <rousskov@measurement-factory.com>
- Date: Thu, 28 Feb 2002 13:36:01 -0700 (MST)
- To: Rob Lanphier <robla@real.com>
- cc: "www-qa@w3.org" <www-qa@w3.org>
On Thu, 28 Feb 2002, Rob Lanphier wrote:

> I was going to object to this point, but in writing my response, I
> find myself in the middle. I think Alex is right in saying that
> this group won't be able to construct a checklist that is safe to
> mechanically apply to test suites to determine their conformance
> level. There's going to be qualitative judgements made.
>
> That said, the W3C makes qualitative judgements all of the time,
> any time a specification is promoted to "Recommendation" status.
> So, there may need to be various levels of blessing of test
> suites, and a process for getting those test suites blessed.
> Part of the process would most likely involve measuring the test
> suite against a checklist, but there should be a process to ensure
> that we arrive at a consensus judgement regarding the satisfaction
> of that checklist.

IMO, there is a big difference between being judgmental about decisions in Recommendations and being judgmental about the quality of products.

Imagine that the W3C issues a Recommendation that makes so many wrong choices that nobody uses it. Such a Recommendation would have little negative impact, if any. Now compare that with a situation where a better test suite A is assigned a rating of 10 and a worse test suite B is assigned a rating of 99. What is that going to do to suite A's chances of gaining acceptance and recognition, especially if the W3C starts promoting the "winner" and demoting the "loser"?

Judgment calls are required and expected when designing specifications. If rating test materials requires a lot of judgment calls, then having no rating at all is better.

Alex.
Received on Thursday, 28 February 2002 15:36:03 UTC