- From: Kirill Gavrylyuk <kirillg@microsoft.com>
- Date: Tue, 2 Jul 2002 18:24:31 -0700
- To: "Lofton Henderson" <lofton@rockynet.com>, <www-qa-wg@w3.org>
Hi, Lofton! Thanks for a pretty extensive list.

1) Agreed.

2) Alignment with the Spec Guidelines (at least terminology-wise) is definitely on the plate, but we haven't looked into it yet.

3) Agreed, and we don't really call them Untestables. Agree about the Glossary. The last three (1.6-1.8) pursue two goals:
   - feedback to the spec to initiate future errata;
   - cataloging incoming tests that fall into these areas, for future revision when corresponding errata are issued.

4) I interpret Dimitris' comment as: list the available ones in the Test Guidelines or ExTech document. The checkpoint then still asks to identify "how" and, if possible, to reuse the existing methodology.

5-9) Good summary of issues. The short answer is that most of them are known issues, but we haven't gotten to fixing them yet.

10) I see the Framework checkpoints as requirements. "Goodness principles" are a solution you can use to satisfy a requirement. We will have them, most likely in the ExTech document. Your two examples are addressed by requirements in Ck 4.11, 4.2, 4.3.

11) This is exactly the reason for Ck 2.3 and 2.4. I call them "sample tests". We definitely need to work on the prose more, but it is good that we already have examples out there.

http://www.w3.org/QA/WG/2002/07/qaframe-test-0701.html

-----Original Message-----
From: Lofton Henderson [mailto:lofton@rockynet.com]
Sent: Tuesday, July 02, 2002 6:00 PM
To: www-qa-wg@w3.org
Subject: Re: Updated Test Guidelines draft - 0701

Here are some comments about the 0701 draft Test Guidelines.

1.) TestGL Glossary: we should start such a chapter and throw in the terms that need to be defined. (We can work out the definitions later and migrate at least some to the QA Glossary if appropriate. Or migrate generic versions to the QA Glossary, leaving expanded "functional" definitions here. Or ... But it will make the document more readable during development.)

2.) "Levels of Conformance" (1.3, 4.9, other?): this doesn't fit well with SpecGL.
We need to rework it in terms of the "conformance variability" model. No suggestion yet.

3.) "The Untestables": 1.4 (discretionary), 1.5 (optional), 1.6 (undefined/ambiguous), 1.7 (explicitly undefined), 1.8 (contradictory). Actually, they're not all untestable. The first two are conditionally testable, based on choices the product has made, and the test results can be factored into conformance statements about the product. The last three do not lend themselves to any *conformance* testing (although one could write diagnostic tests to see what a product does). Perhaps these are useful distinctions. And I can see at least 5 entries for the TGL Glossary from these 5 checkpoints.

4.) CK1.3: there looks to be an issue here, as the dd and WG interpretations seem to be opposite: choice from an enumerated list versus custom definition.

5.) CK2.3: "sample test scenario" seems like a candidate for further definition, explanation, example (or all three).

6.) CK2.4: reference to "discretionary and vague": I can see "discretionary" here (and "optional"), but I'm not sure about the other "untestables".

7.) "The Unverifiables" 4.1, 4.4, 4.5, 5.1, 5.4, 5.6, 5.7: these are unverifiable or borderline unverifiable as stated. There are a couple of distinct problems: subjective metrics ("easy" in 4.4, "ease of use" in 4.5 and 5.4); difficult to measure or confirm ("review available..adopt if applicable" in 4.1, 5.1); borderline ("suitable to publish" in 5.6, "sufficient to investigate" in 5.7). I think we can and should have a checkpoint on each of these topics, but they need some creative reworking. Some, e.g. the last two, could be salvaged by careful definition of criteria in the (future) prose explanation of the checkpoint, or in the ExTech companion.

8.) Candidates for the TGL Glossary: test framework, test case management.

9.) CK1.2, [KG] "need definition of test assertion": for a functional definition for TestGL, I like the one at http://lists.w3.org/Archives/Public/www-qa/2002May/0023.html.

10.) Do we intend to address any goodness principles for test materials? We don't seem to go beyond frameworks. E.g., should they (TM) be self-documenting? Should they use W3C technologies and standards instead of proprietary or non-standard technology (e.g., ECMAScript versus a private scripting language)?

11.) Finally, CK6.2 raises a question. In SVG conformance testing, we made a design that called for a light-weight, breadth-first TS ("Basic Effectivity") to be written first. Its tests were simple and not necessarily even atomic, but rather provided some light coverage of all of the SVG functional areas. Its value is diagnostic, "pre-conformance" if you like: has the implementation made any effort at all in the given functional areas? Its application would typically precede a detailed atomic probing of all of the bits of the functional areas. The overall approach is something we called "progressive testing", where "progressive" referred both to the allocation of resources to build the TS and to the application of the TS to an implementation under test. (In fact, due to the hideous cost of building graphics tests, SVG has never gotten beyond the ~125 BE tests to the thousands of DT tests that would make a high-confidence conformance suite, and has never adopted a framework like XSLT for accepting and integrating DT tests.) We considered this BE TS immensely valuable, but I'm not seeing how it, or our overall approach, would fit into the TestGL. (Btw, there are things in the SVG TS that will fail, and ought rightly to fail, some checkpoints, so I'm not just being an SVG TS chauvinist here.)

All for now,
-Lofton.
Received on Tuesday, 2 July 2002 21:25:48 UTC