
Tokyo Test demos

From: Lofton Henderson <lofton@rockynet.com>
Date: Tue, 01 Oct 2002 15:11:44 -0600
Message-Id: <5.1.0.14.2.20021001112328.033b0090@rockynet.com>
To: www-qa-wg@w3.org

About the Tokyo demo/discussion of existing test suites -- here is what I 
currently have for presentations, plus a list of topics that I think we 
presenters should (minimally) cover.  Comments on the topics list are 
requested -- additions, changes, etc.

Note:  at 30 minutes per demo, the five demos would take 2-1/2 hours -- 
almost the whole half day (Day 1 PM), in which this is supposed to be one of 
three topics.  Either we have to shorten the demos, or borrow time from 
Day 2 AM or elsewhere.

Presentations:
=====

This list gives us an interesting variety of categories (per 
GL2) -- protocols, APIs, processors, and content/data:

DOM -- Dimitris
Graphics -- Lofton
XSLT -- Kirill

HTTP -- Lofton**
SOAP -- Kirill

** I went to Alex Rousskov's offices, 2 blocks away, and got a demo and 
discussion.  I should be able to cover most of the aspects of interest.

Topics
=====

I see the utility of the presented information as two-fold:  we can compare 
some real test suites against TestGL; and, we can start to establish some 
useful base data for when we start looking at potential general purpose 
test tools projects.

Here are some suggested topics to present.

1.) Level/Coverage/Structure.  Level:  is it detailed, or at the "Basic 
Effectivity" level (what Dimitris calls a "smoke test")?  Relatedly, how 
comprehensive is the coverage, and what structural organization does the 
suite have (testing areas, functional modules, ...)?

2.) TCDL (Test Case Description Language):  how are test cases described, 
catalogued, etc.?
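To make topic 2 concrete: a hypothetical sketch (in Python, with field names invented for illustration -- not any agreed TCDL schema) of the kind of record a test case description might carry:

```python
from dataclasses import dataclass, field

# Hypothetical test-case description record.  All field names here are
# invented for illustration; they do not reflect any WG-agreed format.
@dataclass
class TestCaseDescription:
    test_id: str                  # unique identifier within the suite
    spec_section: str             # section of the spec being exercised
    assertion: str                # the test assertion this case checks
    purpose: str                  # human-readable statement of intent
    inputs: list = field(default_factory=list)  # input files/parameters
    expected: str = ""            # expected outcome or reference result

# Example entry (values are made up).
case = TestCaseDescription(
    test_id="dom-core-001",
    spec_section="1.2",
    assertion="createElement returns an Element node",
    purpose="Check the node type of a newly created element",
    inputs=["staff.xml"],
    expected="ELEMENT_NODE",
)
print(case.test_id, case.spec_section)
```

A catalogue is then just a collection of such records, which also gives topic 3 (traceability) a hook: each record points back at its assertion and spec section.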

3.) Test Assertions, Traceability:  how are test assertions identified in 
the specification, extracted or pointed to, etc.  How are tests linked to 
their test assertions?

4.) Assessing/Evaluating.  How are the results of applying a test case 
assessed or evaluated?

5.) Reporting:  How are test results recorded and/or reported?

6.) Framework/harness/UI (we might need to make some definitions 
here):  what is the user interface to the test suite, what harnesses and/or 
frameworks are there for presenting the tests and navigating through the 
test suite?
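Topics 4-6 hang together: a harness runs cases, assesses each result, and records a report.  As a minimal sketch (invented names, not any presenter's actual tool), that loop might look like:

```python
# Minimal harness sketch: run each case, assess actual vs. expected,
# and collect a pass/fail report.  Purely illustrative.
def run_suite(cases, execute):
    """cases: list of (test_id, expected); execute: test_id -> actual."""
    report = []
    for test_id, expected in cases:
        actual = execute(test_id)
        verdict = "pass" if actual == expected else "fail"
        report.append((test_id, verdict))
    return report

# Toy executor standing in for a real test runner.
results = {"t1": "ok", "t2": "bad"}
report = run_suite([("t1", "ok"), ("t2", "ok")], results.get)
print(report)  # [('t1', 'pass'), ('t2', 'fail')]
```

The interesting per-suite differences are exactly the three pluggable parts: how `execute` applies a test (topic 6), how the verdict is decided (topic 4), and what the report records (topic 5).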

7.) Handling discretion, optionality, etc:  if these are a feature of the 
base spec, how are they handled by the test suite?

8.) IUT description, TS Configuration:  is there any language for 
describing the implementation under test (IUT), and are there any 
facilities to configure custom test suites to the implementation?
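One simple reading of topic 8: if the IUT description declares which features the implementation supports, the suite can be filtered down to the applicable cases.  A sketch, with capability names invented for illustration:

```python
# Sketch of configuring a suite to an implementation under test (IUT):
# keep only cases whose required features the IUT declares.  The
# capability names and case structure are invented for illustration.
def configure_suite(cases, iut_capabilities):
    """Select the cases applicable to this IUT."""
    return [c for c in cases if c["requires"] <= iut_capabilities]

all_cases = [
    {"id": "basic-001", "requires": set()},            # always applicable
    {"id": "ext-007", "requires": {"extensions"}},     # optional feature
]
selected = configure_suite(all_cases, iut_capabilities=set())
print([c["id"] for c in selected])  # ['basic-001']
```

This also connects to topic 7: discretionary or optional items in the base spec become entries in the IUT description that gate the corresponding tests.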

9.) Automation features:  applicable both to the generation of test 
materials from TCDL and other sources, and to the configuration and 
application of test regimes.

Other?

-Lofton.
Received on Tuesday, 1 October 2002 17:10:18 GMT
