
TestGL comments

From: Lynne Rosenthal <lynne.rosenthal@nist.gov>
Date: Sun, 02 Mar 2003 14:11:25 -0500
Message-Id: <5.1.0.14.2.20030302141032.00a9e828@mailserver.nist.gov>
To: www-qa-wg@w3.org

Comments on the TestGL, Dec 2002 version

The document seems overly complex in its message.  The information is 
there, but as presented it may be daunting to the novice.  Some things to 
keep in mind for this document are:
- to serve both the novice and the experienced in developing test suites
- to be simple, clear, and globally applicable
- to capture good practices without adding arbitrary requirements or 
overburdening the test suite developer
- to guide the development of tests

General comments:
1.  Prior to the Guideline section, provide an overview of the test 
development process, addressing general concepts and the deliverable path, 
i.e., spec → test assertions → test cases → reporting → maintenance.  If 
possible, the guidelines should follow this progression.  Also, describe the 
key aspects of a test suite: traceability, verdict criteria, 
self-explanatory, valid, short/atomic.
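To make these key aspects concrete, here is a minimal sketch of the record a test case might carry (all names and the example data are invented for illustration, not taken from TestGL):

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"

@dataclass
class TestCase:
    test_id: str       # stable identifier for reporting
    spec_section: str  # traceability: the spec section under test
    assertion: str     # the testable statement being checked
    purpose: str       # self-explanatory: what the test demonstrates
    verdict: Verdict = Verdict.INCONCLUSIVE  # verdict criteria

# One short, atomic test, traced back to its spec section.
tc = TestCase(
    test_id="xml-001",
    spec_section="2.1",
    assertion="A document must contain exactly one root element.",
    purpose="Checks that a two-root document is rejected.",
)
tc.verdict = Verdict.FAIL
print(tc.test_id, tc.spec_section, tc.verdict.value)
```

Each field maps to one of the key aspects above; a test suite would then be a collection of such records plus the machinery to run them.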

2. The importance and need for traceability back to the spec must be 
clearly conveyed.

3. Make clear that there is no order to satisfying the CPs, as long as at 
the end of the day, they are satisfied.

4.  Although we can’t assume that the OpsGL and SpecGL were followed, if 
they were, some of the CPs may already be satisfied; we should provide a 
table cross-referencing where this occurs.

5. Since we advocate developing test materials for CR, we should explain 
that test materials for CR may serve a different purpose and have different 
coverage than test materials for Rec, and this is O.K.

6.  Can we incorporate some of the ideas from other WGs?  SVG and CSS have 
good test documentation, explaining not only how to build tests, but also 
some of the test suite principles.

7.  Although the formatting changes necessary for the GL and CPs will help, 
try to keep it simple.  Perhaps start out with guidelines related to:
GL 1: getting started (some of this may have been done in OpsGL)
- decide whether development will be within the WG, a public partnership, 
or adoption of an existing test suite
- determine objectives, e.g., tests for CR may have a different focus than 
tests for Rec
- determine the test domain (whole spec, module, coverage)
GL 2: read the spec
- determine what explicitly to test (test the technology being tested, not 
other technologies utilized in the construction of the test suite)
- determine how to divide up the spec into testing areas, in whatever way 
makes sense
- look at the underlying structure of the spec and see how it lends itself 
to automated techniques (e.g., are testable statements tagged?  Is a schema 
used to generate tests?)
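As an illustration of that last point, a hedged sketch of pulling tagged testable statements out of a spec source (the `assert` tag name and the fragment itself are invented; real specs would use whatever markup their WG defines):

```python
import xml.etree.ElementTree as ET

# Hypothetical spec fragment in which testable statements are tagged.
SPEC = """<spec>
  <section id="2.1">
    <assert id="a1">A document must contain exactly one root element.</assert>
    <p>Non-normative prose is not tagged.</p>
    <assert id="a2">Element names are case-sensitive.</assert>
  </section>
</spec>"""

root = ET.fromstring(SPEC)
# Each tagged assertion becomes a candidate test, traced to its section.
candidates = [
    (section.get("id"), a.get("id"), a.text.strip())
    for section in root.iter("section")
    for a in section.iter("assert")
]
for sec, aid, text in candidates:
    print(f"[{sec}/{aid}] {text}")
```

When the spec's source is structured this way, test assertions (and their traceability links) fall out mechanically instead of being maintained by hand.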

8.  Suggest a separate Guideline for Test Results and Reporting.   It 
should also mention something about EARL.
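EARL is an RDF vocabulary for test results; a loose sketch of the fields a per-test EARL-style assertion carries (property names follow EARL's core terms from memory and should be checked against the EARL schema; the URIs are placeholders):

```python
# A loose, non-RDF sketch of an EARL-style assertion: who asserted it,
# about which subject, for which test, with what outcome.
def earl_assertion(assertor, subject, test, outcome):
    return {
        "assertedBy": assertor,  # who/what ran the test
        "subject": subject,      # the implementation under test
        "test": test,            # the test case, ideally a spec-traceable URI
        "result": {"outcome": outcome},  # e.g. "passed" / "failed"
    }

a = earl_assertion(
    assertor="https://example.org/test-harness",
    subject="https://example.org/some-implementation",
    test="https://example.org/tests/xml-001",
    outcome="failed",
)
print(a["result"]["outcome"])
```

A real report would serialize such assertions as RDF via an RDF library rather than hand-rolled dicts; the point here is only which information travels with each result.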


Specific Comments
9. CP 1.1 Identify the target set of specifications being tested
How extensive does this set need to be?  For example, DOM must use valid 
components of other specifications, so would it be necessary to list all 
those specifications?  Is this target set limited to those specifications 
that are explicitly being tested rather than those specs that are utilized?

10.  CP 1.5, 1.6, 1.7, 1.8, 1.9: Related to discretionary choices.
These CPs are related, and breaking them into separate CPs has resulted in 
confusion as to the difference between them.  What is the difference between 
“defined ambiguously” (CP 1.7) and “contradictory behaviors” (CP 1.9)?  Suggest 
combining them, as: Identify behavior: undefined, ambiguous, and contradictory.

11. CP 2.1 Document the structure for the test suite and GL3 Document the 
testing methodology.
What is the difference between these?  I think there is a difference, but 
my simple mind is having trouble making sense of all this.

12. CP 3.2 Identify publicly available testing techniques.
What does ‘testing techniques’ mean?  How is this different from test 
automation and framework?   How much of a search for these techniques needs 
to be done?

13. CP 4.1 List available test frameworks and applicable automation and 
justify why new frameworks are needed…..
a) Define these terms and their scope.  How is this different from 3.2?
b) Why must a justification be given?  Who is the justification for?  This 
may add extra work for the WG with minimal, if any, benefit.

14. CP 4.2 Ensure the framework and automation are platform independent.
a) Is it clear what platform independent means?  Is it only the computer, or 
does it also include the operating system?
b) Are the framework and automation coupled?  Can’t you have one without the 
other?  This implies that automation is required.  Some test suites don’t 
include automation (harness?), e.g., XML and Schema.
c) Why 3 platforms?  You don’t always have 3 platforms, especially when 
developing tests for a CR and building tests in parallel with the building 
of implementations.

15. CP 4.4 Ensure the framework makes it easy to add tests for any of the 
spec areas
What is easy?

16. CP 4.5 Ensure the ease of use for the test automation
This is a judgment as to ‘ease of use’.  It is important to understand who 
the audience is for using the test automation and build and document the 
automation accordingly. What is important is to document how it can be 
used.  Also recognize that if people can’t figure out how to run the tests, 
they won’t use them.

17. CP 4.11 Ensure the framework supports test results verification
Define ‘results verification’.  Again, you don’t always have 3 different 
products.

18. CP 5.2 Ensure the ease of use for results reporting.  Demonstrate that 
results reporting has sorting and filtering.
Why is this P1?  Although nice to have, why are sorting and filtering required?
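For what sorting and filtering would even mean here, a minimal sketch over a flat list of results (the data is invented; in practice it would come from the harness's results reporting):

```python
# Hypothetical flat results table produced by a test run.
results = [
    {"test": "xml-003", "section": "2.3", "outcome": "fail"},
    {"test": "xml-001", "section": "2.1", "outcome": "pass"},
    {"test": "xml-002", "section": "2.1", "outcome": "fail"},
]

# Filter to failures, then sort by spec section and test id.
failures = sorted(
    (r for r in results if r["outcome"] == "fail"),
    key=lambda r: (r["section"], r["test"]),
)
for r in failures:
    print(r["section"], r["test"])
```

If the report format is this simple and tabular, readers can do their own sorting and filtering with everyday tools, which is one argument against making built-in sorting/filtering a P1 requirement.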
Received on Sunday, 2 March 2003 14:14:03 GMT
