Re: The bad side of test cases

On Sun, 22 Jun 2003, Karl Dubost wrote:

> In the completion of a Test Suite with test cases, how do we define
> the depth of it, where does it stop? I pointed to this comment
> because we have started to write the Test Guidelines.

I do not think it is possible to usefully define the depth of test
suite coverage in general. If you are lucky enough to have more than
one test suite, you can compare their results for a given
implementation and see whether one finds more bugs than the other. An
obvious but untestable Test Guideline is to write tests as deep and
broad as possible given external constraints such as time and budget.
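To illustrate the comparison idea, here is a minimal sketch (Python,
with made-up bug identifiers): treat each suite's run against the same
implementation as the set of distinct bugs its failures expose, then
diff the sets.

    # Hypothetical example: each set holds the distinct bugs that a
    # suite's failing test cases exposed in the same implementation.
    bugs_suite_a = {"chunked-trailer", "date-parsing", "close-on-error"}
    bugs_suite_b = {"date-parsing", "close-on-error"}

    only_a = bugs_suite_a - bugs_suite_b  # bugs only suite A finds
    only_b = bugs_suite_b - bugs_suite_a  # bugs only suite B finds

    print("suite A alone finds:", sorted(only_a))
    print("suite B alone finds:", sorted(only_b))

Of course, this only ranks the suites relative to each other for one
implementation; it says nothing about the bugs neither suite finds.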

When we get customer feedback saying that a relevant test case did not
expose a bug (one they knew about or that their own customer found),
we thank the customer and add or improve test cases. This improves the
depth of our test collection, but we still cannot measure that depth.

Moreover, it is always possible to create an implementation that gets
a perfect test score but violates many of the requirements in the real
world. People still test because they assume that implementors are
interested in finding actual bugs, not just in passing all tests. Our
customers often ask for tougher, deeper test cases.

Alex.

-- 
                            | HTTP performance - Web Polygraph benchmark
www.measurement-factory.com | HTTP compliance+ - Co-Advisor test suite
                            | all of the above - PolyBox appliance
