- From: Karl Dubost <karl@w3.org>
- Date: Wed, 5 Nov 2003 15:38:22 -0500
- To: www-qa@w3.org
me in the process of nailing down

On Tuesday, 4 November 2003, at 13:16 America/Montreal, Alex Rousskov wrote:

> Sure. Pretty much every MUST from HTTP (RFC 2616) or any other
> behavioral spec. For example,
>
>     Definition: Machine Testable: There is a known algorithm that
>     will determine, with complete reliability, whether the
>     technique has been implemented or not. Probabilistic
>     algorithms are not sufficient.]

The discussion is interesting, and we are starting to define the pros
and cons of "testable". I have a question, though. For example:

"""
A proxy MUST delete Connection headers.
"""

I would ask: why is this not machine testable? Unless I am wrong about
HTTP, this is completely machine testable and completely defined (a
test along these lines is sketched below).

One could answer that it is not machine testable in a particular
environment, for example on a network where you do not have access to
the proxy server. BUT then we are no longer talking about the test
itself. When you do an experiment of any kind, you specify the
conditions of your test; the fact that you cannot determine the answer
is not a property of your test but of the external conditions.

I think that in this discussion people confuse two things.

Random behaviour determined by the spec:
-> for example, a random number generator you have to test. It is
   testable by verifying conformance to the random law the spec has
   defined (also sketched below).

Random behaviour which does not depend on the spec:
-> for example, a proxy on the network. If you cannot control the
   proxy, that does not make your test invalid or your spec
   untestable; it just means that your experimental conditions are
   not good. The spec is still testable.

We have this kind of case in astrophysics all the time: except for
planetary exploration, we do not have access to the objects. That does
not mean the science is not testable, just that the experimental
conditions are sometimes difficult. You might have to redo experiments
in different contexts, etc. But unpredictability is not part of the
solution:

- your test can be wrong;
- the science can be wrong;
- the noise can be too great to make any valuable measurement, but in
  that case you cannot use the test, so it is not part of the
  framework.

Defining tests has no truth per se outside of the spec. You define the
tests and create things which demonstrate the spec (the framework).

For example, you could test the performance of rendering a simple HTML
page. If one visual user agent takes 0.01 s and another one takes
10 s, but both display the document with the right rendering, you
still have conformance to the spec, and the performance test is
outside the scope of the spec (there is no mention of performance in
HTML 4.01). Now you could test that you get the right rendering, which
is in the scope of the spec, BUT your test has a time-out of 5 s. You
might then say the second agent is not conforming, when in fact it is
a mistake of the test and a lack of precision in the conditions of
testing (a last sketch below illustrates this pitfall).

-- 
Karl Dubost - http://www.w3.org/People/karl/
W3C Conformance Manager
*** Be Strict To Be Cool ***
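To make the proxy example concrete, here is a minimal sketch of a
machine test for "a proxy MUST delete Connection headers". It assumes
a proxy under test reachable at the hypothetical address
PROXY_HOST:PROXY_PORT that can forward requests to the throwaway
origin server the test starts; the addresses, port numbers and header
names are illustrative, not taken from RFC 2616.

```python
# Hypothetical setup: the proxy under test listens on PROXY_HOST:PROXY_PORT
# and can reach the origin server this test starts on ORIGIN_PORT.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

PROXY_HOST, PROXY_PORT = "127.0.0.1", 3128   # assumed address of the proxy under test
ORIGIN_PORT = 8099                           # port for the throwaway origin server

received = {}                                # headers as seen by the origin

class Origin(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record exactly what the proxy forwarded to us.
        received.update({k.lower(): v for k, v in self.headers.items()})
        self.send_response(200)
        self.end_headers()

def test_proxy_deletes_connection_headers():
    origin = HTTPServer(("127.0.0.1", ORIGIN_PORT), Origin)
    threading.Thread(target=origin.handle_request, daemon=True).start()

    client = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT)
    # A client talking to a proxy sends the absolute URI of the origin.
    client.request("GET", "http://127.0.0.1:%d/" % ORIGIN_PORT,
                   headers={"Connection": "X-Hop", "X-Hop": "1"})
    client.getresponse()
    origin.server_close()

    # The Connection header, and the hop-by-hop header it names,
    # must not have survived the proxy.
    assert "connection" not in received
    assert "x-hop" not in received
```

Given access to both ends of the exchange, the verdict is a plain
yes/no computed by a known algorithm, which is the sense in which the
requirement is machine testable.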
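For the first kind of randomness, randomness prescribed by the spec
itself, a rough sketch of "verifying conformance to the random law":
the assumed requirement here is "generate() returns integers 0-9,
uniformly distributed", and generate(), the sample size and the
significance threshold are all illustrative choices, not taken from
any spec.

```python
# Chi-square goodness-of-fit check of a generator against the uniform
# law its (assumed) spec prescribes.
import random
from collections import Counter

def generate():
    return random.randint(0, 9)          # stand-in for the implementation under test

def chi_square_uniform(samples, categories=10):
    counts = Counter(samples)
    expected = len(samples) / categories
    return sum((counts.get(k, 0) - expected) ** 2 / expected
               for k in range(categories))

def test_generator_is_uniform(n=10_000):
    stat = chi_square_uniform([generate() for _ in range(n)])
    # Critical value of chi-square with 9 degrees of freedom at the
    # 0.001 level is about 27.9; exceeding it means the generator very
    # likely does not follow the specified uniform law.
    assert stat < 27.9
```

The verdict is statistical by construction: the sample size and the
significance level are choices made by the tester, not fixed by the
spec, and they belong to the conditions of the experiment.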
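Finally, a sketch of the rendering time-out pitfall: the harness below
distinguishes "the test could not complete under these conditions"
from "the implementation violates the spec". render_and_compare() and
the 5 s limit are hypothetical; the function is assumed to return True
when the rendering matches the reference.

```python
# Distinguish a harness limitation (time-out) from non-conformance.
import concurrent.futures

def verdict(render_and_compare, timeout=5.0):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(render_and_compare)
        try:
            ok = future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            # The slow agent is allowed to finish; we simply refuse to
            # report a limitation of the test as a spec violation.
            return "untested (harness time-out)"
    return "pass" if ok else "fail"
```

A harness that collapsed the time-out case into "fail" would be making
exactly the mistake described above: blaming the implementation for a
lack of precision in the conditions of the test.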
Received on Wednesday, 5 November 2003 17:08:32 UTC