
Re: The bad side of test cases

From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Sun, 22 Jun 2003 23:16:55 -0600 (MDT)
To: Karl Dubost <karl@w3.org>
cc: www-qa@w3.org
Message-ID: <Pine.BSF.4.53.0306222300360.15225@measurement-factory.com>

On Sun, 22 Jun 2003, Karl Dubost wrote:

> In the completion of a Test Suite with test cases, how do we define
> the depth of it, where does it stop? I pointed to this comment
> because we have started to write the Test Guidelines.

I do not think it is possible to usefully define the depth of test
suite coverage in general. If you are lucky enough to have more than
one test suite, you can compare their results for a given
implementation and see whether one finds more bugs than the other. An
obvious but untestable Test Guideline is to write tests as deep and
broad as possible given external constraints such as time and budget.
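The comparison idea above can be sketched as simple set arithmetic over
the bugs each suite exposes in one implementation. This is only an
illustration; the suite names and bug identifiers below are hypothetical:

```python
# Bugs exposed in one implementation by two hypothetical test suites.
suite_a_bugs = {"chunked-encoding", "date-header", "vary-caching"}
suite_b_bugs = {"chunked-encoding", "expect-100"}

# Bugs only one suite finds hint at the other suite's blind spots.
only_a = suite_a_bugs - suite_b_bugs   # found by A, missed by B
only_b = suite_b_bugs - suite_a_bugs   # found by B, missed by A
both = suite_a_bugs & suite_b_bugs     # found by both suites

print("only suite A:", sorted(only_a))
print("only suite B:", sorted(only_b))
print("both suites: ", sorted(both))
```

Note this compares the two suites against each other, not against the
(unknowable) set of all bugs, which is why it cannot measure absolute depth.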

When we get customer feedback saying that a relevant test case did not
expose a bug (one they knew about or that their own customer found), we
thank the customer and add or improve test cases. This deepens our test
collection, but we cannot measure by how much.

Moreover, it is always possible to create an implementation that gets
a perfect test score but violates many of the requirements in the real
world. People still test because they assume implementors are
interested in finding actual bugs, not just in passing all tests. Our
customers often ask for tougher/deeper tests.
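A toy sketch of the perfect-score trap described above: an
"implementation" that hardcodes replies for the known test inputs
passes the whole suite while failing on everything else. All names and
inputs here are hypothetical:

```python
# Hypothetical fixed test inputs and their expected replies.
KNOWN_TESTS = {"GET /": "200", "HEAD /": "200"}

def handle(request: str) -> str:
    # Canned replies for the suite's inputs; nonsense for anything else.
    return KNOWN_TESTS.get(request, "500")

# The suite, using only the known inputs, reports a perfect score...
assert all(handle(req) == want for req, want in KNOWN_TESTS.items())

# ...yet any request outside the suite is mishandled.
assert handle("GET /index.html") == "500"
```

This is why a test score is only meaningful under the assumption that
the implementor is not gaming the suite.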


                            | HTTP performance - Web Polygraph benchmark
www.measurement-factory.com | HTTP compliance+ - Co-Advisor test suite
                            | all of the above - PolyBox appliance
Received on Monday, 23 June 2003 01:17:13 UTC
