
Re: test suite distinctions [was: Re: Feedback on "The Matrix"]

From: <skall@nist.gov>
Date: Fri, 01 Mar 2002 03:09:33 -0500 (EST)
To: Alex Rousskov <rousskov@measurement-factory.com>
Message-ID: <1014970173.3c7f373d26a78@email.nist.gov>
Cc: Mark Skall <skall@nist.gov>, www-qa@w3.org
Quoting Alex Rousskov <rousskov@measurement-factory.com>:
> 
> In my opinion, such a meta-level checklist would have very little
> utility and should not be used for rating test tools:

We should not be "rating" tools if by "rating" we mean assigning degrees of 
goodness.  As in everything we do, we should be determining conformance to our 
document(s). That is all a recommendation should do, and it is a yes-or-no 
decision.  Thus, the checklist is very appropriate.

> 
> The things you mention are obvious qualities of a good test suite. Any
> sane test suite author would try to implement them, and nobody will
> get them right 100%. 

Huh? Many should get these 100% right (i.e., get a "yes" for each item on the 
checklist).

> Thus, there should be little value in spending
> time on making the "obvious" checklist available. On the other hand,
> it would be impossible to use that checklist for rating (i.e.,
> assigning comparable scores) of "test materials" because meta criteria
> cannot be converted to scores in an algorithmic fashion:
> 
> 	Test suite A tests 75% of MUSTs in RFC 2616 (HTTP)
> 	Test suite B tests 95% of MUSTs in XML 1.0 Recommendation

Again, this should not be done.  It is not appropriate.

> 
> Which test suite is "better"? Which should get a higher score? Will
> many XML-processing users care about HTTP test score? Etc., etc.

We don't rate implementations according to which is better; we only 
provide criteria to determine conformance (yes or no).  Why should test 
suites be treated any differently?
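To make the distinction concrete, here is a minimal sketch of checklist-based conformance as a pure yes/no decision. The checklist items and function names are invented for illustration only; they are not actual W3C QA criteria.

```python
# Hypothetical illustration: conformance as a yes/no decision, not a score.
# The checklist items below are invented examples, not real QA criteria.

CHECKLIST = [
    "Each test cites the spec clause it exercises",
    "Expected results are documented for every test",
    "Tests are runnable without manual intervention",
]

def conforms(results):
    """A test suite conforms only if every item is answered 'yes'.

    There is no partial credit: 2 out of 3 is a 'no', not a 67% score.
    """
    return all(results.get(item, False) for item in CHECKLIST)

suite_a = {item: True for item in CHECKLIST}          # all items: yes
suite_b = dict(suite_a, **{CHECKLIST[2]: False})      # one item: no

print(conforms(suite_a))  # True  -> conforms
print(conforms(suite_b))  # False -> does not conform
```

Note that `conforms` never produces a number to compare across suites, which is exactly why checklist results for, say, an HTTP suite and an XML suite cannot be ranked against each other.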


> Alex.
> 
> 
Received on Friday, 1 March 2002 03:09:35 UTC
