
draft minutes f2f 20040302

From: Dimitris Dimitriadis <dimitris@ontologicon.com>
Date: Tue, 2 Mar 2004 13:11:50 +0200
To: www-qa-wg@w3.org
Message-Id: <672E1824-6C3A-11D8-B95B-000393556882@ontologicon.com>

Tuesday AM -- 0830 - 1200 -- QAWG topics
(Morning break: 1015)
Scribe: Dominique Hazaël-Massieux (switched with Dimitris)

MB Mary Brady
MS Mark Skall
LR Lynne Rosenthal
DH Dominique Hazaël-Massieux
OT Olivier Théreaux
LH Lofton Henderson
KD Karl Dubost
PC Patrick Curran
DD Dimitris Dimitriadis

Observers:
Eric Velleman
Jean-Louis Carves
Susan Lesch
Nicolas Duboc
Vivien Lacourba
Wendy Chisholm

Test questionnaire, results

presentation by DH
LR: one of Dimitris' tasks is to write a report on current practices, 
which can build on the results of the questionnaire about test practices
LR: Agree with Dom that the WGs should not have to do anything extra

Presentation by Mary Brady
MB: Lots of efforts over the past years
- XML Core: first use of a test description metafile, atomic tests, 
pointer back to test, but no assertions. Releases available from the WG 
test suite page. Process and FAQ similar to DOM's, mailing list, issues 
list. No results reporting
- DOM: markup language generated from the spec, stylesheet-generated 
output in two languages (more have been seen since), no standard results 
reporting
- XSL-FO: used an XML DTD and added test results elements; first attempt 
at automatic test generation. Created 600 tests in 3 months to exit CR. 
A good feature is that you can catch changes in the spec, since tests are 
automatically generated.
- XML Schema: developed an automated test generation approach (Java tool) 
iterating over datatype/facet combinations, test control in XML, 23,000 
tests generated

PC: problem for Sun: a run can take 12-18 hours, and if you test all of 
the API it can take weeks. Must choose which tests are valuable 
MB: the WG wanted invalid in addition to valid values for tests, in 
addition to lists, so the number of tests grew
MB: lots of WGs want separate mailing lists for tests

- XML Query: Java approach again; functions and operators translated 
from the specification; the test generator iterates over functions and operators
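The test description metafile that XML Core pioneered might look roughly like this. The element and attribute names here are invented for illustration only, not the WG's actual format:

```xml
<!-- Hypothetical test metafile entry: one atomic test with a pointer
     back to the test file, but no test assertions (as noted above). -->
<testcase id="valid-001" type="valid">
  <file>xmltest/valid/001.xml</file>
  <description>Attribute defaults in the internal subset</description>
  <spec-section>3.3.2</spec-section>
</testcase>
```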

What is needed to do testing:

- Specs - if they help with auto-generation, they give a starting point; 
otherwise you need some work up front
- Define work process - process document, framework, issues list, email 
list, FAQ
- Publicize work
- Test Development - manual and/or automatic, mapping to spec, test 
coverage, categorization

Automatic test generation
- allows quick response to spec changes, good for "boring tests"; 
looking into defining "interesting" tests
- each WG uses its own format; syntax is easy to capture, but not semantics
- flexible output formats
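The "flexible output formats" point can be sketched as follows: keep one internal test record and render it into whatever formats a WG needs. The record fields and output shapes below are hypothetical, chosen only to illustrate the separation of test data from presentation.

```python
def to_xml(test):
    """Render a test record as a minimal XML metafile entry
    (illustrative element names, not any WG's real format)."""
    return (f'<test id="{test["id"]}" section="{test["section"]}">'
            f'{test["purpose"]}</test>')

def to_manifest_line(test):
    """Render the same record as a tab-separated manifest line."""
    return "\t".join([test["id"], test["section"], test["purpose"]])

# One record, two output formats -- adding a third format touches
# only the renderers, never the test data itself.
record = {"id": "t001", "section": "3.2.1", "purpose": "check minLength"}
print(to_xml(record))
print(to_manifest_line(record))
```'
assert to_manifest_line(record) == "t001\t3.2.1\tcheck minLength"
assert to_manifest_line(record).count("\t") == 2
</test>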

NIST is currently trying to build a test accelerator, including a NIST TestML

(coffee break)

QAF re-factorization (continued from Monday)
Presentation of QAF for the observers by LH. The proposal endorsed by the 
QA WG during yesterday's meeting is to simplify the QAF. Thinking of 
discontinuing OpsGL, rolling its best material into a user-friendly QA 
handbook. Strip down and simplify SpecGL. Some ambiguity remained 
yesterday; trying to refactor and collectively publish in the 
October-November timeframe, after the fall QA WG F2F.
LH: Propose not to look at Ops right now, since we will take it out of 
the normative track.
MS: Thought we said looking at Spec and Test, and treating Ops as the 
way to reach the end (Spec and Test).


- conformance section, most things could go in there
- variability affects interoperability (DoV goes here, version, edit, 
and so forth)
- undergo an internal assessment: the specification must be testable: once 
you write it, go back, have an independent review. Do not develop test 
assertions, which is a technique

DH: conformance could be seen as a model. variability is a way of 
determining the conformance model. testability shows spec quality, not 
a part of the conformance model, it helps you assess the conformance 
model by describing it.
PC: what about implementability/testability?
KD: what about defining what it means to make a specification 
LR: tried to capture the test assertion guideline and consistency 
PC: quality too broad
MS: precise, clear, unambiguous requirement
PC: maybe we should use that wording instead of high quality
DH: precise, unambiguous, complete and consistent
MS: first two are for individual conformance requirements, the last two 
are for the collections
KD: afraid we're going to end up at the same place if we try to 
precisely define everything
Eric Velleman: I've used the ISO/IEC part 2, I've also used part of 
your documents and got a bit lost, where does the conformance 
requirement belong?
OT: We need to be careful not to do too much in order not to end up 
with a huge document again

LR: Where are we now? What do we have consensus on?
DH: one way of moving forward is to have the editors produce an example 
of how the new version could look
PC: are we still using the guidelines/checkpoint idea?
LR: you need to address success criteria, how does one know they've 

Received on Tuesday, 2 March 2004 06:11:48 UTC
