Re: lack of testability definition

On Wed, 5 Nov 2003, Karl Dubost wrote:

>       Definition: Machine Testable: There is a known algorithm that
>       will determine, with complete reliability, whether the
>       technique has been implemented or not. Probabilistic
>       algorithms are not sufficient.]

> For example
> 		""" Proxy MUST delete Connection headers. """
>
> I would say: Why it's not machine testable?
>
> Except if I'm wrong about HTTP, this is completely machine testable
> and completely defined.

The above assertion is not "testable" according to the above
definition because, no matter how many test cases you write or
execute, there is always a non-zero possibility that a given proxy
fails to delete a header in some case you did not test. These tests
are probabilistic.

For example, a proxy may delete all Connection:-listed headers that
are up to 16 characters long. Or it might delete all
Connection:-listed headers unless there are two Connection: headers
(and, hence, two "delete me" lists to go through). And so on. A good
test suite will cover many scenarios, but cannot cover all: there is
an infinite number of scenarios, even if you do not account for
things like a proxy that does not delete on Mondays but does on
Tuesdays.
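To make the argument concrete, here is a minimal sketch (the proxy function, its bug, and the test cases are all hypothetical) of a buggy proxy that deletes Connection:-listed headers only when the header name is at most 16 characters long. Every case in a finite black-box suite can pass while the MUST is still violated for an untested input:

```python
def buggy_proxy_forward(headers):
    """Simulate forwarding: the proxy SHOULD delete every header
    named in the Connection header before forwarding."""
    listed = [h.strip() for h in headers.get("Connection", "").split(",")
              if h.strip()]
    out = dict(headers)
    for name in listed:
        if len(name) <= 16:  # the hidden bug: longer names survive
            out.pop(name, None)
    out.pop("Connection", None)  # Connection itself is hop-by-hop too
    return out

# A finite black-box test suite: every case here passes...
cases = [
    {"Connection": "Keep-Alive", "Keep-Alive": "timeout=5"},
    {"Connection": "Upgrade", "Upgrade": "h2c"},
]
for case in cases:
    forwarded = buggy_proxy_forward(case)
    assert "Keep-Alive" not in forwarded
    assert "Upgrade" not in forwarded

# ...yet the proxy still violates the MUST for an untested input,
# because "X-Very-Long-Header-Name" is longer than 16 characters:
tricky = {"Connection": "X-Very-Long-Header-Name",
          "X-Very-Long-Header-Name": "oops"}
assert "X-Very-Long-Header-Name" in buggy_proxy_forward(tricky)
```

No matter how many entries you add to `cases`, some such bug can always hide outside the tested set.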

The only way to avoid the probabilistic nature of these tests is to
have access to the proxy source code. Then, in theory, there is an
algorithm you can use to get a definitive answer. That algorithm may
take an eternity to finish, but that is beyond the scope of the
above definition. Here on Earth, we mostly deal with black-box
testing, of course.

> When you do an experiment of any kind. You precise the conditions of
> your test, the fact you can't determine the answer is not part of
> your test but of the external conditions.

I can determine the answer for every test case I run, but no number
of test cases achieves the "complete reliability" that the above
definition requires. As it says, probabilistic algorithms are not
sufficient.

> I think in the discussion people confuse two things.
>
> Random behaviour determined by a spec:
> 	-> for example a random number generator you have to test.
> 	It's testable by verifying the accordance to the random law which has
> been determined.
>
> Random behaviour which are not dependant of the spec:
> 	-> for example a proxy on the network.
> 	If you can't control the proxy, it doesn't make your test invalid and
> your spec not testable, it just says that your experimentation
> conditions are not good.
> 	The spec is still testable.

No, this is not the problem here. We are not talking about random
behavior. We are talking about 100% deterministic behavior but such
that a violation is a needle in an infinite haystack: it is impossible
to be 100% sure there is no needle because the haystack is infinite.

> For example, you could test the performance to render a simple HTML
> page. If one visual user agent takes 0.01s to do it and another one
> 10s but both display the document with right rendering. You still
> have conformance to the spec.

... but only for that given document! Conformance on one, or one
thousand, documents is not the conformance defined in the spec.
The spec says the implementation MUST render _all_ valid documents
correctly, and the number of possible valid documents is infinite.

I hope the above clarifies.

Alex.

Received on Wednesday, 5 November 2003 17:33:28 UTC