- From: Alex Rousskov <rousskov@measurement-factory.com>
- Date: Thu, 23 May 2002 19:13:54 -0600 (MDT)
- To: Karl Dubost <karl@w3.org>
- cc: www-qa@w3.org
On Thu, 23 May 2002, Karl Dubost wrote:

> The QA glossary has been updated
> http://www.w3.org/QA/glossary
> The document is open for your review.

Atomic Test

1. Do we need this term?

2. The definition says "maps back to exactly one [test] assertion". Test assertion is defined as a "set of [...] rules that are known to be true by definition in the spec". Since two sets of rules are still a single set of rules, it is not clear what "exactly one set of rules" means. The whole standard is a single test assertion, by definition. The definition further says "this is in contrast to some test cases that may test a combination of rules". However, again, a combination (set) of rules is a (single?) assertion, by definition.

3. I doubt many atomic tests exist, even if we manage to define them correctly. Any standard is a set of rules, many of which are dependent/related. For example, to test a single MUST in HTTP, I need to test that MUST plus many of the rules related to HTTP header parsing and interpretation. I simply cannot avoid testing parsing rules because most MUSTs rely on something being parsed first. I am sure the same is true for non-protocol specifications such as XML. For example, one cannot test that an XML document has exactly one root node without also relying on (and hence testing) node syntax.
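To make point 3 concrete, here is a minimal sketch (Python; the function name and the test inputs are mine, purely illustrative). The only way to count root elements is to parse, so even this "atomic" test necessarily exercises the node-syntax rules, and a syntax violation is indistinguishable from a root-count violation in the test's verdict:

    import xml.parsers.expat

    def root_rule_holds(document):
        """True iff the document parses AND has exactly one root element."""
        roots = 0
        depth = 0

        def start(name, attrs):
            nonlocal roots, depth
            if depth == 0:
                roots += 1
            depth += 1

        def end(name):
            nonlocal depth
            depth -= 1

        parser = xml.parsers.expat.ParserCreate()
        parser.StartElementHandler = start
        parser.EndElementHandler = end
        try:
            parser.Parse(document, True)  # node-syntax rules are exercised here
        except xml.parsers.expat.ExpatError:
            return False  # a syntax rule failed, not (necessarily) the root rule
        return roots == 1

    print(root_rule_holds("<a><b/></a>"))  # True: syntax and root rule both hold
    print(root_rule_holds("<a><b></a>"))   # False: but it is a *syntax* rule that failed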
Conformance

1. The definition says "ability to meet requirements claimed to be supported". Is that not a definition of "claims conformance" rather than "standard conformance"? We should probably limit the glossary to standard conformance only and rephrase the definition accordingly.

2. Do we need to add Compliance? Is Compliance any different from Conformance?

Conforming Document

1. Do we need Conforming Application/Device/Mechanism? Is there a word that can describe all "subjects" of specifications and standards?

Conformance / Conformance Testing

1. The definitions do not answer a fundamental question: what exactly is sufficient to claim (or certify) Conformance based on a test result? For example, if my HTTP test suite has test cases for all HTTP MUSTs, and the device under test passes all those cases, is the device conformant? In other words, does the absence of known violations mean conformance? It is a difficult but very important question to answer. On one hand, if we cannot find any violations, then we have no objective evidence that the device violates the specs. On the other hand, it is impossible to test all combinations of inputs, so we may be missing a test case that detects violations. Thus, we either end up certifying non-conformant devices or refusing to certify any device at all! (The postscript below sketches this problem in code.)

While the above problem is most obvious for protocols, I believe it also exists for specifications like XML and for Conforming Documents: while the "input" in this case is fixed (it is the document itself), the very program that checks for conformance may be buggy. So, again, we cannot say for sure that the document is conformant.

I do not know how to solve this "uncertainty" problem, but it needs a solution if we want to talk about conformance tools and especially certification. As a conformance test suite author, I need to know how my tool can issue a "Conforms to the [W3C] Standard" certificate using the W3C definition of Conformance. I hope that the "uncertainty" problem has been solved elsewhere, and we just need to pick the best solution. Does anybody know of any good solutions? I can only think of "Designed for MS Windows" and UL "safety" certification, which boil down to "if company X says you conform, then you conform". Neither is very applicable to the W3C situation, where W3C develops a set of objective rules rather than issuing subjective conformance certificates. It is easy for the specification to say that a device is conformant if it meets A, B, and C. We need to define how one can test/prove/certify that the device meets A, B, and C. Do we need a "[beyond] reasonable doubt" clause? Ugh.

Test Case

1. The definition says "an individual test that corresponds to a test purpose". IMO, a test case should correspond to a test assertion (a set of rules), not to a test purpose.

Test Purpose

1. A test purpose is to check/exercise/test an assertion. The current definition says "an explanation of why the test was written", which is vague and misleading. I do not think we need this term at all.

I hope the above comments are useful. My primary concern is to be able to use these definitions in practice when testing for protocol conformance.

Thank you,

Alex.
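P.S. A minimal sketch of the "uncertainty" problem (Python; the suite and the device under test are hypothetical stand-ins I made up, not a real W3C tool). A failure is objective evidence of a violation, but passing every case can only support the weaker claim "no known violations":

    def run_suite(test_cases, device):
        """Each test case returns True (pass) or False (violation found)."""
        for case in test_cases:
            if not case(device):
                return "VIOLATES the spec (objective evidence)"
        # All cases passed. The suite covers a finite sample of inputs,
        # so the strongest claim it can justify is:
        return "NO KNOWN violations (not proof of conformance)"

    # Toy "spec": f(x) must equal x + 1 for every integer x. We can only
    # ever test finitely many x, so a latent violation can survive.
    suite = [lambda f, x=x: f(x) == x + 1 for x in range(100)]
    buggy = lambda x: x + 1 if x < 1000 else x  # violates the spec at x >= 1000
    print(run_suite(suite, buggy))  # prints the "NO KNOWN violations" verdict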
Received on Thursday, 23 May 2002 21:13:56 UTC