Re: OpsGL QA-commitment-group

As usual, the discussion seems to boil down to the definition of
"testable". If Marks (implied) definition is used, then (IMO) all
requirements become "testable". If my (implied) definition is used,
then some requirements become "untestable", but remain valid/useful.

Detailed comments are inlined, but it may be wise to agree on the
explicit definition before proceeding further. If we cannot come up
with a definition, then perhaps we should not use the word (which
would be pretty sad!).

On Fri, 9 May 2003, Mark Skall wrote:
> Remember, we're talking about "testable", not proving it has been
> implemented correctly. Even the best tests only improve our
> confidence level in the fact that the implementation is correct.
> We certainly can't prove that it's correct, but, it seems to me, we
> can probe to find out whether certain states have changed. If we
> can't determine state changes through standard functions (I presume
> that's going to be your counter-argument), one can determine how
> those state changes have impacted other functions. This would
> obviously not be as efficient and may only result in "success"
> (determining non-conformance) in a small percentage of tries - but
> that's what falsification testing is all about.

I am afraid that if you follow the above logic, you will find that
_all_ requirements are testable!

You are saying that you cannot test something directly, but will try
to see if there is some unknown indirect effect that you might notice.
If you fail to see that indirect effect, you will say "well, we tried!
We tested! It is testable!".

Now, let's see if you can give an example of an UNtestable requirement.
I bet I will be able to use your own logic to show that that
requirement is testable.
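
To make the "indirect effect" idea concrete, here is a minimal sketch
(in Python; the Cache class and its methods are invented for the
example and do not come from OpsGL or any real spec). The internal
state change cannot be observed directly, so the test probes it
through the behaviour of another function; a failure demonstrates
non-conformance, while a pass only raises our confidence:

    # Hypothetical implementation under test. The (imaginary) spec says
    # "invalidate() MUST clear all internally stored entries", but no
    # API exposes the internal table directly.
    class Cache:
        def __init__(self):
            self._entries = {}      # internal state: not directly observable

        def put(self, key, value):
            self._entries[key] = value

        def get(self, key, loader):
            # Return the cached value if present, else load a fresh one.
            if key in self._entries:
                return self._entries[key]
            value = loader()
            self._entries[key] = value
            return value

        def invalidate(self):
            self._entries.clear()   # the requirement under test

    def indirect_invalidation_test():
        # Probe the unobservable state change through its effect on get().
        cache = Cache()
        cache.put("k", "stale")
        cache.invalidate()
        # If invalidate() really cleared the table, get() must call the
        # loader again instead of returning the stale entry.
        return cache.get("k", loader=lambda: "fresh") == "fresh"

    if __name__ == "__main__":
        print("pass" if indirect_invalidation_test()
              else "FAIL: non-conformance demonstrated")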

> So vendors will say they conform but we can't determine if they're
> telling the truth?

Yes. That's the informal world we live in. It is impractical to
change that, even in the virtual world, unless all specs and all
implementations are based on some formal, verifiable model.

If you disagree, you have to prove, among many other things, that your
test tool is _always_ correct. What if your test tool is lying? Why
should I trust the vendor less than you? Are we going to talk about
vendor incentives to lie versus test lab incentives to lie? Is
anyone above motive?

> Again, it may be possible to implement correctly, but it still is
> impossible to know if it has been.

True, but we have to live with that uncertainty. It is not possible to
eliminate it unless we formalize the entire virtual environment. See
above.

> Again, we can usually come up with some test, no matter how
> inefficient.

The "no matter how inefficient" part makes all requirements testable.
Does it not?

> Since compliance is the ultimate goal, every requirement needs to be
> testable to ensure compliance.

Compliance is the ultimate goal. Being able to ensure (test for)
compliance is not the ultimate goal; it is only a highly desirable
feature (i.e., a SHOULD).

Thanks,

Alex.

Received on Friday, 9 May 2003 22:48:26 UTC