
Minutes of 20040209 QAWG telecon

From: Lofton Henderson <lofton@rockynet.com>
Date: Sun, 15 Feb 2004 14:47:28 -0700
Message-Id: <5.1.0.14.2.20040210131513.03b7f180@rockynet.com>
To: www-qa-wg@w3.org


QA Working Group Teleconference
Monday, 09-February-2004
--
Scribe: Lofton Henderson

Attendees:
(PC) Patrick Curran (Sun Microsystems)
(DD) Dimitris Dimitriadis (Ontologicon)
(KD) Karl Dubost (W3C, WG co-chair)
(DH) Dominique Hazaël-Massieux (W3C)
(LH) Lofton Henderson (CGMO - WG co-chair)
(LR) Lynne Rosenthal (NIST - IG co-chair)
(AT) Andrew Thackrah (Open Group)

Regrets:
(MC) Martin Chamberlain (Microsoft)
(MS) Mark Skall (NIST)
(SM) Sandra Martinez (NIST)

Absent:
(VV) Vanitha Venkatraman (Sun Microsystems)

Guest:
David Marston (for a few minutes)

Summary of New Action Items:
no new action items.

Agenda:
http://lists.w3.org/Archives/Public/www-qa-wg/2004Feb/0027.html

Previous Telcon Minutes:
http://lists.w3.org/Archives/Public/www-qa-wg/2004Feb/0026.html

Minutes:
-----

Routine business:

-- Next telecon (scheduled for 16-Feb) is about Test Materials for our
Guidelines documents.  LH will recirculate some proposals by email, for
discussion.  Because of a holiday in the U.S., it looks like 4 regulars
won't attend, so we'll try to switch it to Wednesday (18-Feb).

-- First draft f2f agenda is posted (agenda ref [2])

-- No discussion of WWW2004 this time... put it on next agenda.

DD presenting DOM Test Methodology.  Agenda reference,

[1] http://www.ontologicon.com/NIST/3_3.htm

DD: Thanks to NIST for sponsoring the work.

LH: Note that this is an early deliverable on the list on the QAWG home
page (currently, the 2nd bulleted deliverable).  The report will be
linked from there.

DD:  Referring to reference [1], going through and summarizing (summary not
minuted -- see reference [1]).

DD: Relationship to QAWG work: the DOM TS only partially covers the QAWG
test guidelines, because TestGL came (much) later.  Ops and Spec came
later as well, so those were not fully implemented either.

DD:  Motivation for automation -- spend less time writing TCs.  Minimize time
to produce TS.  (See more at [1]).

DD:  History:  in 2001 NIST released the initial DOM (Level 1) TS, which
tested only the JS binding.  200+ working tests.

DD: Issues:  Tests for each language binding?  Solution:  an abstract TS
ML that could be used to generate each binding's tests.  Took 6 months.
Design choice:  a pseudo-language?  No, mimic the DOM language in the
TS ML.  Reason:  to avoid writing an abstract language for an abstract
language.
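To make the design choice concrete, here is a hypothetical fragment in
the spirit of the DOMTS ML (element and attribute names below are
illustrative, not the actual schema): the markup vocabulary mirrors DOM
interface members directly, rather than inventing a separate
pseudo-language.

```xml
<!-- Hypothetical DOMTS-ML-style test: element names echo DOM methods. -->
<test name="nodeAppendChild01">
  <load var="doc" href="staff"/>
  <createElement var="child" obj="doc" tagName="em"/>
  <appendChild var="appended" obj="doc" newChild="child"/>
  <assertSame actual="appended" expected="child" id="returnsAppendedNode"/>
</test>
```

Because each element corresponds to a DOM call, a per-binding stylesheet
can expand the same test into Java, ECMAScript, etc.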

Questions/discussion about DOMTS ML versus generic TCDL:

AT:  TC markup specific to DOM.  Couldn't use a generic TCDL?
DD:  Could have done it in theory, but that would be using markup
to describe IDL.  In the DOM TS work, we recognized two kinds of specs:
ones like DOM can use automation like the DOMTS.  I'm uncertain
whether XMLspec can be used for every specification in W3C.
The other kind of spec lends itself less readily to automated
processes (where you express functionality in prose, for example
about user interaction).

AT:  Is there any applicability for a generic TCDL?
DD:  I don't think we should put energy into a generic TCDL -- not worth
the payoff.  The simple metadata needs can be met in other ways (CVS,
for example).
LH:  His DOMTS ML is two things:  automatically derivable pseudo-code for
the test itself; plus TCDL-like metadata (that may involve some manual
intervention).

Questions/discussion about automation and many-to-one (TC-to-interface)
generation:

DD:  Many-to-one TC to TA is possible.  Specifically, a TC can point to
a TA (in the specification); the TA in the specification does not point
back to the TC.  At the level above the TC, aggregation can generate a
table (or similar) resolving pointers from TC to TA (and, conversely, it
can give a picture of which TA gets tested by what TC).
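The aggregation step DD describes can be sketched as follows -- a
minimal Python illustration (test-case and assertion IDs are made up,
and this is not the actual DOM TS tooling) of inverting one-way TC-to-TA
pointers into a coverage table:

```python
# Sketch: invert one-way TC -> TA pointers into a TA -> TC coverage table.

def invert_coverage(tc_to_ta):
    """Build a TA -> [TC] table from TC -> [TA] pointers."""
    ta_to_tc = {}
    for tc, tas in tc_to_ta.items():
        for ta in tas:
            ta_to_tc.setdefault(ta, []).append(tc)
    return ta_to_tc

# Each test case declares which assertion(s) it exercises; the
# specification itself carries no pointers back to the tests.
tc_to_ta = {
    "nodeAppendChild01": ["Node.appendChild"],
    "nodeAppendChild02": ["Node.appendChild"],
    "nodeRemoveChild01": ["Node.removeChild"],
}

coverage = invert_coverage(tc_to_ta)
# coverage["Node.appendChild"] -> ["nodeAppendChild01", "nodeAppendChild02"]
```

The inverted table is what "gives a picture of which TA gets tested by
what TC" without requiring the spec to point at the tests.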

AT/DD/LH discussion:  Automatically derived?  No problem tracing multiple
tests to 1 interface; currently can't *auto-generate* multiple tests from
single interface.

DD:  Multiple bindings generatable from the TC ML.
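Generating multiple bindings from one abstract test description might
look like this -- a hedged Python sketch (the abstract step format and
templates are invented for illustration; the real DOM TS used XSLT-style
transforms over its ML):

```python
# Sketch: render one abstract method-call step into two language bindings.

TEMPLATES = {
    "java": "{var} = {obj}.{method}({args});",
    "ecmascript": "var {var} = {obj}.{method}({args});",
}

def render(step, binding):
    """Expand an abstract method-call step for the named binding."""
    return TEMPLATES[binding].format(
        var=step["var"],
        obj=step["obj"],
        method=step["method"],
        args=", ".join(step["args"]),
    )

step = {"var": "child", "obj": "node", "method": "appendChild",
        "args": ["newChild"]}
print(render(step, "java"))        # child = node.appendChild(newChild);
print(render(step, "ecmascript"))  # var child = node.appendChild(newChild);
```

One abstract step, two renderings: the point of keeping the TS ML close
to the DOM vocabulary is that this expansion stays mechanical.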

DD:  Issues/lessons relevant for QAWG

-- The DOM WG couldn't implement all the GLs that we had at the time,
circa 2002.  There was too much GL stuff, too raw (early maturity
levels), and too late relative to the DOMTS development timeline.

-- The DOM WG used a granular grammar, XMLspec, which enabled the
automation.  XHTML-plus wouldn't cut it.  XMLspec allowed use of the
same vocabulary in the DOMTS ML as in the DOM spec.

-- Lesson:  DOM TS development was slowed by member/corporate/licensing
problems -- hassles with members' legal/licensing departments.  IMPT!
It's wise to write a clear internal process in the WG before you start
an effort like the DOMTS, including, especially, resolving licensing
questions in advance.

LH: will someone in DOM WG answer our Test questionnaire?  (DD -- not
me.  DH -- probably team/staff.)

Adjourn:  12:05pm EST.