TestGL issues for discussion on Monday Aug 25

Proposed discussion topics for TestGL teleconference on Monday August 25.

We've received several comments on the current public TestGL draft, and 
on the revised list of Guidelines that I published last week. I believe 
that many of these are non-controversial (for example, avoiding passive 
voice, and making sure that the wording reflects that the target of our 
guidelines is the test materials rather than the WG itself), and I will 
incorporate them into the next draft. Instead, I'd like to focus here on 
some more substantial issues:

1) TestGL issues raised by Jeremy Carroll

Jeremy has raised a number of issues, many of which Karl and David have 
already responded to. I propose that we discuss two of them on Monday:

1a) Need for outreach

Jeremy points out in 
http://lists.w3.org/Archives/Public/www-qa/2003Jul/0001.html that we 
claim that our guidelines "capture the experiences, good practices, 
activities, and lessons learned of the Working Groups", but he believes 
that we aren't doing active outreach.

Karl responded that we have incorporated practices from several groups, 
but it's probably true that we could do more active outreach. Should we?

1b) Test-driven development

In this thread:

Jeremy's original message: 
http://lists.w3.org/Archives/Public/www-qa/2003Jul/0004.html
Karl's response: 
http://lists.w3.org/Archives/Public/www-qa/2003Jul/0020.html
Jeremy's response to Karl: 
http://lists.w3.org/Archives/Public/www-qa/2003Jul/0023.html

Jeremy argues that we imply/suggest a waterfall model (each step builds 
on or follows the previous one) rather than a more cyclical model or 
even a "test first" approach such as XP's.

He argues that WGs shouldn't necessarily wait until the spec is complete 
to develop tests. Issue-driven testing, "in which test cases often form 
part of an issue resolution", can be a useful approach.

How to decide which tests will be most useful? Once again, issue-driven 
testing can help.

Jeremy points out that we have explicitly scoped our guidelines to 
conformance testing.

He points out that "both RDF Core and WebOnt WGs have had issue driven 
test processes, where the proposed tests help clarify different 
positions between members of the WG, and the approved tests clarify the 
issue resolution. Parts of the specs that had no related issues are 
typically unproblematic, and building tests for those parts is less 
critical, and less cost effective."

While Karl has pointed out in his responses to Jeremy that our other 
Framework docs can be interpreted as supporting multiple development 
models, and even as encouraging "early testing", I believe that Jeremy's 
comments are essentially correct: TestGL as written does focus on 
conformance testing, and it tends to assume that the bulk of testing 
occurs after the spec has been developed.

This was deliberate. As Jeremy points out, the scope of our document is 
explicitly stated as conformance testing, and I believe that conformance 
testing implies that tests are developed based on assertions identified 
within, or derived from, the specification.

Our Charter (http://www.w3.org/QA/WG/charter.html) does not restrict us 
to conformance testing, and we could, if we so chose, expand the scope 
of TestGL to cover additional types of testing (for example, testing 
devised to clarify the spec, or interoperability testing).

My own opinion: these other kinds of testing are extremely valuable, but 
they aren't conformance testing. Since we have limited resources, I 
would rather that we focus primarily on conformance testing. This should 
not preclude us from:

* making reference to other kinds of testing and the ways in which they 
can contribute to the conformance-testing process,
* pointing out the contribution that the conformance-test development 
process makes to the clarification of the spec, and
* emphasizing in TestGL, as we have in our other docs, that the 
"waterfall model" is not necessarily the only way to develop tests.

2) Continuation of discussion of overlap between TestGL and OpsGL

We agreed in Crete on the need to address licensing issues, and 
specifically on the need to clarify that different licenses may be 
required for Test Cases, Test Software, and Test Documentation. However, 
licensing is explicitly addressed in OpsGL checkpoints 5.3 and 6.2. 
Since OpsGL seems like the right place for licensing checkpoints I 
propose that we adopt the same approach here as we have with other 
overlaps; add the material to TestGL, but be prepared to move to OpsGL 
later.

3) Discussion on assertions and metadata

Last week we discussed assertions, metadata, and whether there would be 
any need for assertion lists if tests were automatically derived from 
the spec.

If we agree that no assertion list is needed when tests are generated 
automatically, then all metadata currently associated with assertions 
would have to be associated with the tests instead. (This might be the 
right thing to do anyway.)

This message from Carmelo explicitly addresses automated test 
generation: 
http://lists.w3.org/Archives/Public/www-qa/2002Jun/0000.html

My position: I would be happy to extend the definition of "assertion" to 
include expressions like this:

FLWRexpr := (ForClause | LetClause)+ WhereClause? return Expr

This would eliminate the need to artificially derive an English-language 
assertion such as "A FLWR expression MUST consist of a ForClause or a 
LetClause which MAY be followed by a WhereClause and which MUST be 
followed by a Return expression".
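
To make this concrete, here is a rough sketch in Python (purely 
illustrative; the clause strings and names below are invented for this 
message, not taken from any spec or WG document) of how a generator 
might treat the production itself as the assertion and enumerate 
structural test variants directly from it:

# Illustrative sketch only: enumerate structural test variants from
#   FLWRexpr := (ForClause | LetClause)+ WhereClause? return Expr
# treating the production itself as the assertion. The clause texts
# below are invented placeholders, not real spec examples.

FOR_CLAUSE = "for $x in /items"
LET_CLAUSE = "let $y := 1"
WHERE_CLAUSE = "where $x > $y"
RETURN_EXPR = "return $x"

def flwr_variants():
    """Yield (description, expression, should_parse) triples."""
    # (ForClause | LetClause)+ : one of each alternative, plus a mix.
    heads = {
        "for only": [FOR_CLAUSE],
        "let only": [LET_CLAUSE],
        "for then let": [FOR_CLAUSE, LET_CLAUSE],
    }
    for head_name, clauses in heads.items():
        for with_where in (False, True):       # WhereClause? is optional
            parts = clauses + ([WHERE_CLAUSE] if with_where else [])
            desc = "%s, where=%s" % (head_name, with_where)
            # Positive test: the return clause is mandatory.
            yield (desc, " ".join(parts + [RETURN_EXPR]), True)
            # Negative test: omit the mandatory return clause.
            yield (desc + ", no return", " ".join(parts), False)

for desc, expr, ok in flwr_variants():
    label = "valid" if ok else "invalid"
    print("%-8s %-28s %s" % (label, desc, expr))

Each generated variant is a test whose expected outcome follows 
mechanically from the production, with no English-language restatement 
needed.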

For discussion: what are assertions, how do we deal with cases of 
automated test generation such as this, what's the minimum set of 
metadata we need to associate with assertions and/or tests, and are 
there additional sets of "optional" metadata over and above the minimum?
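
As a straw man for that discussion, one possible shape for a "minimum" 
metadata record is sketched below, with the assertion field holding the 
production itself rather than a derived sentence. The field names are 
invented for illustration and are not drawn from any existing QA WG 
document:

# Straw man only: a hypothetical minimal metadata record for one
# generated test; all field names here are invented for discussion.
test_metadata = {
    "id": "flwr-001",        # stable, unique identifier
    "assertion":
        "FLWRexpr := (ForClause | LetClause)+ WhereClause? return Expr",
    "spec_ref": "XQuery, FLWR Expressions",  # pointer back into the spec
    "purpose": "for-only FLWR expression with mandatory return clause",
    "expected": "accept",    # accept/reject, for a parser-level test
    "status": "proposed",    # lifecycle: proposed / approved / rejected
}

Anything beyond a minimum along these lines (contributor, license, 
revision history, and so on) could then form the "optional" sets 
mentioned above.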

Received on Friday, 22 August 2003 19:13:08 UTC