draft minutes f2f 20040302

Tuesday AM -- 0830 - 1200 -- QAWG topics
(Morning break: 1015)
Scribe: Dominique Hazaël-Massieux (switched with Dimitris)

Participants
MB Mary Brady
MS Marc Skall
LR Lynne Rosenthal
DH Dominique Hazaël-Massieux
OT Olivier Théreaux
LH Lofton Henderson
KD Karl Dubost
PC Patrick Curran
DD Dimitris Dimitriadis

Observers:
Eric Velleman
Jean-Louis Carves
Susan Lesch
Nicolas Duboc
Vivien Lacourba
Wendy Chisholm

Minutes:
Test questionnaire, results

Presentation by DH
LR: one of Dimitris' tasks is to write a report on current practices 
which can build on the results of the questionnaire about test practices
LR: Agree with Dom that the WGs should not have to do anything extra

Presentation by Mary Brady
MB: Lots of efforts over the past years
- XML Core: first use of a test description metafile, atomic tests, 
pointer back to the test, but not assertions. Releases available from 
the WG test suite page. Process and FAQ similar to DOM, mailing list, 
issues list. No results reporting.
- DOM: markup language generated from the spec, stylesheet-generated 
output in two languages, more have been seen since, no standard results 
reporting.
- XSL-FO: used the XML DTD and added test results elements, first 
attempt at automatic test generation. Created 600 tests in 3 months to 
exit CR. A good feature is that you can catch changes in the spec since 
tests are automatically generated.
- XML Schema: developed an automated test generation approach (Java 
tool) iterating over datatype/facet combinations, test control in XML, 
23,000 tests generated.
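
As a rough illustration of the combinational approach described for XML 
Schema (iterate over datatype/facet pairs and write test control in 
XML), here is a minimal Java sketch. The datatypes, facets, element 
names, and class name are invented for illustration; the actual NIST 
tool and its formats are not shown in these minutes.

    import java.util.List;

    // Sketch: iterate over datatype/facet pairs and emit one test-control
    // entry per combination, with a valid and an invalid case each
    // (the minutes note the WG wanted invalid values as well as valid ones).
    public class SchemaTestSketch {
        public static void main(String[] args) {
            List<String> datatypes = List.of("string", "decimal", "date");
            List<String> facets = List.of("minLength", "maxLength", "pattern", "enumeration");

            StringBuilder control = new StringBuilder("<testSuite>\n");
            int id = 0;
            for (String datatype : datatypes) {
                for (String facet : facets) {
                    control.append(entry(++id, datatype, facet, "valid"));
                    control.append(entry(++id, datatype, facet, "invalid"));
                }
            }
            control.append("</testSuite>\n");
            System.out.print(control);
        }

        private static String entry(int id, String datatype, String facet, String expected) {
            return String.format("  <test id=\"t%04d\" datatype=\"%s\" facet=\"%s\" expected=\"%s\"/>%n",
                    id, datatype, facet, expected);
        }
    }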

PC: Problem for Sun: a run can take 12-18 hours; if you test all of 
the API it can take weeks. Must choose what is valuable and test 
smarter.
MB: The WG wanted invalid in addition to valid values for tests, in 
addition to lists, so the number of tests grew
MB: Lots of WGs want separate mailing lists for tests

- XML Query: Java approach again, functions and operators translated 
from the specification; the test generator iterates over the functions 
and operators.
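
A minimal sketch of that iteration, assuming the functions and 
operators have already been extracted from the spec into a table. The 
function names and sample arguments below are illustrative, not taken 
from the actual test suite.

    import java.util.Map;

    // Sketch: given a (hypothetical) table of functions extracted from the
    // spec, emit one skeleton test query per entry.
    public class QueryTestSketch {
        public static void main(String[] args) {
            // function name -> sample argument list (entries invented here)
            Map<String, String> functions = Map.of(
                    "fn:upper-case", "\"abc\"",
                    "fn:string-length", "\"abc\"",
                    "fn:round", "2.5");

            functions.forEach((name, sampleArgs) ->
                    System.out.printf("test for %s: %s(%s)%n", name, name, sampleArgs));
        }
    }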

What is needed to do testing:

- Specs - if they help auto-generation they give a starting point, 
otherwise you need some work up front
- Define work process - process document, framework, issues list, email 
list, FAQ
- Publicize work
- Test Development - manual and/or automatic, mapping to spec, test 
coverage, categorization

Automatic test generation
- allows quick response to spec changes, good for "boring tests"; 
looking into defining "interesting" tests
- each WG uses its own format; the syntax is easy to capture, the 
semantics are not
- flexible output formats
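
To illustrate the flexible-output point: once a test description is 
captured in a common structure, it can be rendered to more than one 
output. The fields and both renderings below are invented for 
illustration.

    // Sketch: one captured test description rendered to two output formats.
    public class OutputFormatSketch {
        public static void main(String[] args) {
            // A single (invented) test description
            String id = "t0001";
            String assertion = "whitespace is normalized";
            String expected = "a b";

            // Rendering 1: an XML metadata entry
            System.out.printf("<test id=\"%s\" assertion=\"%s\" expected=\"%s\"/>%n",
                    id, assertion, expected);

            // Rendering 2: a plain-text results-report row
            System.out.printf("%s | %s | expected: %s%n", id, assertion, expected);
        }
    }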

NIST is currently trying to build a test accelerator, including a NIST TestML.

(coffee break)

QAF re-factorization (continued from Monday)
Presentation of QAF for the observers by LH. The proposal endorsed by 
the QA WG during yesterday's meeting is to simplify the QAF. Thinking 
of discontinuing OpsGL, rolling its best material into a user-friendly 
QA handbook. Strip down and simplify SpecGL. Some ambiguity yesterday; 
trying to refactor and collectively publish in the October-November 
timeframe, after the fall QA WG F2F.
LH: Propose not to look at Ops right now, since we will take it out of 
the normative track.
MS: Thought we said we were looking at Spec and Test, and treating Ops 
as the way to reach the end (Spec and Test).

OpsGL

SpecGL
LR:
- conformance section, most things could go in there
- variability affects interoperability (DoV goes here: version, 
edition, and so forth)
- undergo an internal assessment: the specification must be testable; 
once you write it, go back and have an independent review. Do not 
develop test assertions, which is a technique

DH: Conformance could be seen as a model. Variability is a way of 
determining the conformance model. Testability shows spec quality; it 
is not part of the conformance model, but it helps you assess the 
conformance model by describing it.
PC: what about implementability/testability?
KD: what about defining what it means to make a specification 
implementable?
LR: tried to capture the test assertion guideline and consistency 
guideline
PC: quality too broad
MS: precise, clear, unambiguous requirement
PC: maybe we should use that wording instead of high quality
DH: precise, unambiguous, complete and consistent
MS: first two are for individual conformance requirements, the last two 
are for the collections
KD: afraid we're going to end up at the same place if we try to 
precisely define everything
Eric Velleman: I've used the ISO/IEC part 2; I've also used part of 
your documents and got a bit lost. Where does the conformance 
requirement belong?
OT: We need to be careful not to do too much in order not to end up 
with a huge document again

LR: Where are we now? What do we have consensus on?
DH: one way of moving forward is to have the editors produce an example 
of how the new version could look
PC: are we still using the guidelines/checkpoint idea?
LR: you need to address success criteria: how does one know they've 
succeeded?

TestGL
