Re: Dimitris' TCDL Review (AI-20030618-9)

There are some detail points I want to make about Dimitris' review,
but first I'll make a high-level observation.

Dimitris seems to assume that tailored versions of the test suite
would be "checked out" from CVS or equivalent by the test lab. What I
propose is that the WG (or maintainer of the test suite) would do the
check-out and produce a complete suite for download. It is this
downloadable package that would contain the TCDL describing all the
cases contained in the package. In particular, the package could be
applicable to several versions of the spec. The best illustration of
the difference is when a test lab wants to test several products of the
same class that may exhibit allowable variability. In my proposal, the
lab downloads the test suite once and tailors it by filtering the cases
using TCDL data. Dimitris seems to be describing a situation in which
the lab does not simply download but extracts separately from CVS for
each test subject. I think that would be very harmful to the ability of
other labs to reproduce the results, especially if they try to do so
later. Can CVS handle all the many filtering criteria that need to be
applied, and apply those criteria to the repository as it existed on a
particular date? Even if it can, can those criteria be expressed as a
formula that can be published and emailed? For XSLT filtering, the
formula is likely to be an XPath expression and could be complicated.
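To make the contrast concrete, here is a sketch of the kind of
publishable "formula" I have in mind, applied to a hypothetical TCDL
catalog. The element names (<test-case>, <module>) and the expression
itself are invented for illustration, not actual TCDL vocabulary:

```python
# Sketch: tailoring a downloaded suite by filtering its catalog with a
# short, publishable XPath-style expression. Element names are assumed.
import xml.etree.ElementTree as ET

CATALOG = """\
<catalog>
  <test-case id="StringFunc005"><module>core</module></test-case>
  <test-case id="StringFunc006"><module>core</module></test-case>
  <test-case id="NodeTest001"><module>traversal</module></test-case>
</catalog>"""

# The "formula" a lab could publish or email alongside its results:
FORMULA = ".//test-case[module='core']"

root = ET.fromstring(CATALOG)
selected = [tc.get("id") for tc in root.findall(FORMULA)]
print(selected)  # the cases the tailored run would include
```

A one-line expression like FORMULA can be quoted verbatim in a test
report, so another lab can reproduce the same tailoring against the
same downloadable snapshot.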

>However, any TCDL-like language should not be used to do what other 
>things can do already...

The only sense in which TCDL does that is that it needs some meta-data
that other tools may also use. As I mentioned on the phone last
week, the driving notion is to provide all the meta-data at download
time that the lab will need to run tests. Since the test suite, with
catalog, is a deliverable of the WG, I think that's a good gateway for
collecting meta-data into the package. Maintenance of test cases over
time is a WG concern, so they may need the same meta-data. But the
deliverable item contains a snapshot of the suite, so our planning of
tools for the WG can work backward from the deliverable and possibly add
more data items not needed for the snapshot.

Likewise, the planning of tools can work forward from the notion that
various parties will contribute test cases. The WG can devise one or
more schemes to group or categorize the tests. Whenever such a scheme
for subdividing the suite should be visible to downloaders, the WG
should define data items that extend TCDL, as will become clearer in
the TCDL document.

Dimitris gives some examples from DOM, where the "levels" are not true
cumulative levels in the sense that QAWG has been discussing. They are
really modules, so the WG would devise a sub-element of <test-case> that
allows each case to designate the module(s) to which it applies. Test
case contributors could be required to make the designation as part of
their submission. For a test regime involving true levels, a simple
numeric value or two, again a sub-element of <test-case>, might suffice.
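As a sketch of the distinction (all element names invented for
illustration): a case designates its module(s) with repeatable
sub-elements, while a true cumulative level could be a single numeric
sub-element compared with an ordering:

```python
# Sketch: module designation (set membership) vs. true levels (numeric
# ordering). The <module> and <level> names are hypothetical.
import xml.etree.ElementTree as ET

CATALOG = """\
<catalog>
  <test-case id="Events001">
    <module>events</module><module>html</module>
    <level>2</level>
  </test-case>
  <test-case id="Core001"><module>core</module><level>1</level></test-case>
</catalog>"""

root = ET.fromstring(CATALOG)
# Module filtering: keep cases that designate the 'events' module.
events_cases = [tc.get("id")
                for tc in root.findall(".//test-case[module='events']")]
# Level filtering: cumulative levels compare numerically, so level 1
# testing takes every case at or below level 1.
level1_cases = [tc.get("id") for tc in root.findall("test-case")
                if int(tc.findtext("level")) <= 1]
print(events_cases, level1_cases)
```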

Version data need not be applied to a <test-case> element until there
exists a version to which the case does *not* apply. (For this purpose,
errata may trigger the need.) In other words, version info is relevant
only when it becomes a filtering criterion.
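One way to model that (the <excluded-version> name is purely
hypothetical): a case carries version data only when some version must
be excluded, and cases with no version data apply everywhere:

```python
# Sketch: version info as a filtering criterion. A case applies to a
# version unless that version is explicitly excluded; cases with no
# exclusions apply to every version. Element names are assumed.
import xml.etree.ElementTree as ET

CATALOG = """\
<catalog>
  <test-case id="Old001"><excluded-version>1.1</excluded-version></test-case>
  <test-case id="Any001"/>
</catalog>"""

def applies(tc, version):
    # A case applies unless this version is explicitly excluded.
    return version not in [e.text for e in tc.findall("excluded-version")]

root = ET.fromstring(CATALOG)
for_v11 = [tc.get("id") for tc in root.findall("test-case")
           if applies(tc, "1.1")]
for_v10 = [tc.get("id") for tc in root.findall("test-case")
           if applies(tc, "1.0")]
print(for_v11, for_v10)
```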

TCDL doesn't attempt to address all the needs that the WG may have for
"suite building" that leads up to the deliverable package. QAWG should
vote on whether the Status-Tracking Feature should be optional (as I
have it), required permanently, or required until such time as more
management tools have been identified. If required, then we can talk
about whether and how to interface it with a full-powered tracking tool
when the time comes. What I put in the draft anticipates that the test
lab needs just snapshot-level information about the status of each case.
I want the TCDL document to state explicitly that it is not dictating
any status codes, and that any codes shown are for explanatory purposes.

>Could descriptive strings be part of the test itself...?

If the WG agrees that there is a harmless place for them 100% of the time.
In my work on XQuery test cases, we agreed that comments could be stored
in the files containing the test queries, but not in the test data. The
reason for the latter restriction is that some test cases require XML
data that has no comments whatsoever. Even if the descriptive string is in
the input file, as we had with the queries, it is still desirable to have
it in the catalog. Furthermore, every string should be different. (To see
why, consider the outcome that StringFunc005 passes but StringFunc006
fails, and the two are very similar. The first step in diagnosis is to
read the descriptive strings of the respective tests and see what's
different.) I can explain the uses and benefits of these strings at length
upon request.
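A suite-building tool could enforce the uniqueness requirement
mechanically. A sketch (element names assumed, not actual TCDL):

```python
# Sketch: checking that every descriptive string in the catalog is
# distinct, so that comparing the descriptions of two similar cases
# (e.g. StringFunc005 vs. StringFunc006) is a useful diagnostic step.
import xml.etree.ElementTree as ET
from collections import Counter

CATALOG = """\
<catalog>
  <test-case id="StringFunc005">
    <description>substring() with positive offsets</description>
  </test-case>
  <test-case id="StringFunc006">
    <description>substring() with a negative offset</description>
  </test-case>
</catalog>"""

root = ET.fromstring(CATALOG)
descriptions = [tc.findtext("description")
                for tc in root.findall("test-case")]
duplicates = [d for d, n in Counter(descriptions).items() if n > 1]
print(duplicates)  # empty when every string is different
```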
.................David Marston

Received on Monday, 3 November 2003 10:50:10 UTC