
Re: OpsGL QA-commitment-group

From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Fri, 9 May 2003 15:41:53 -0600 (MDT)
To: Mark Skall <mark.skall@nist.gov>
cc: Lofton Henderson <lofton@rockynet.com>, www-qa@w3.org
Message-ID: <Pine.BSF.4.53.0305091508530.18383@measurement-factory.com>

On Fri, 9 May 2003, Mark Skall wrote:

> At 01:27 PM 5/9/2003 -0600, Alex Rousskov wrote:
> >Now, let's see if you can give an example of a UNtestable requirement.
> >I bet I will be able to use your own logic to show that that
> >requirement is testable.
> Actually, I like your example.  The requirement that all
> requirements MUST be testable is untestable.  One cannot write a
> test suite without knowing the exact requirement.

I think "all requirements MUST be testable" is testable using your
approach. Here is an algorithm:

	1. Assign a test case to each requirement if
	   it is obvious for the tester how to do that
	2. For each requirement that remains without a
	   test case after (1), assign any test case from
	   step (1).
This is a finite algorithm that assigns test cases to each requirement
and, thus, proves that all requirements are testable (including the
"all requirements MUST be testable" requirement). Some test cases
would not be likely to detect any relevant violations, but that is OK
(according to your "no matter how inefficient" definition).
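The two-step assignment above can be sketched in a few lines of Python. This is only an illustration of the argument, not anything from the original exchange; the requirement strings, the `obvious_test_for` mapping, and the `fallback_test` name are all hypothetical.

```python
def assign_tests(requirements, obvious_test_for, fallback_test):
    """Step 1: use an obvious test where the tester knows one.
    Step 2: give every remaining requirement an arbitrary test case,
    so that, trivially, every requirement ends up with a test."""
    assignment = {}
    for req in requirements:
        if req in obvious_test_for:      # step 1
            assignment[req] = obvious_test_for[req]
    for req in requirements:
        if req not in assignment:        # step 2
            assignment[req] = fallback_test
    return assignment

# Hypothetical requirements, including the self-referential one:
requirements = [
    "servers MUST echo the request ID",
    "all requirements MUST be testable",
]
obvious = {"servers MUST echo the request ID": "test_echo_request_id"}
tests = assign_tests(requirements, obvious, "test_noop")
print(tests["all requirements MUST be testable"])  # prints "test_noop"
```

The algorithm terminates after two passes over a finite requirement list, and every requirement leaves with *some* test case attached — which is the whole point of the reductio: under a "no matter how inefficient" definition, testability becomes vacuous.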

It may seem absurd, but it is no more absurd than trying to test
black-box internal state in order to declare the "implementations MUST
ignore extensions" requirement testable!

> >If you disagree, you have to prove, among many other things, that
> >your test tool is _always_ correct. What if your test tool is
> >lying? Why should I trust the vendor less than you?
> Because the vendor is biased (This is not necessarily a bad thing -
> it's their business to promote their product).  The ones who
> designed the test are neutral (or at least they should be).

Everybody should be neutral, but nobody is. If my mood, paycheck,
fame, or spare time depends on the test outcome (and often they do), I
have a clear incentive to falsify test results or test tools, just
like the vendor has an incentive to lie about compliance. I (and
vendors) have other, conflicting incentives and motivations as well,
of course.

> >Are we going to talk about vendor incentives to lie versus test lab
> >incentives to lie? Is it above motive?
> Yes.

Great. Vendors have a motive to lie. Testers do not. Let's assume that
for a second. Since your test tool usually depends on an operating
system, compilers, processor firmware, and such, it is, by your own
admission, tainted by the motives of the vendors that produced the
environment you rely on. Thus, we cannot trust your test tool -- we
have to assume that it may not be functioning correctly.

> > > Again, we can usually come up with some test, no matter how
> > > inefficient.
> >
> >The "no matter how inefficient" part makes all requirements testable.
> >Does it not?
> No. See my example above.

I provided an algorithm that makes your example testable, given your
assumption that efficiency is irrelevant.

Again, I think this discussion would be much more productive if
"testable" were defined.


                            | HTTP performance - Web Polygraph benchmark
www.measurement-factory.com | HTTP compliance+ - Co-Advisor test suite
                            | all of the above - PolyBox appliance
Received on Friday, 9 May 2003 18:51:35 UTC
