Re: Testable assertion tagging for W3C specifications

----- Original Message -----
From: "Jonathan Robie" <jonathan.robie@datadirect-technologies.com>
To: "Paul Cotton" <pcotton@microsoft.com>; "Lofton Henderson"
<lofton@rockynet.com>; <scott_boag@us.ibm.com>
Cc: <spec-prod@w3.org>; <w3c-query-editors@w3.org>; <www-qa@w3.org>
Sent: Tuesday, May 07, 2002 9:14 AM
Subject: RE: Testable assertion tagging for W3C specifications


> At 10:37 PM 5/6/2002 -0400, Paul Cotton wrote:
> >Another technique is to have a separate test/conformance document that
> >points directly into a technical specification or quotes test from the
> >normative document.  For example, see the SOAP 1.2 test/conformance
> >document [1].
> >
> >[1] http://www.w3.org/2000/xp/Group/2/03/11/soap-1.2-conformance.html
>
> I like this. This allows someone else to work on the assertions without
> having to share the source of the document.
>
> Also, I think the assertions in this document helped me understand
> what you are trying to accomplish. I think this is a good thing to
> do - particularly if it leads to the development of a test suite.
>

Yes, I like this approach as well.  When we develop test suites here at
NIST, we use something similar.  We develop a set of assertions that
point directly back to the spec, and we then use the assertions to
develop our tests.  It would be useful to have this information defined
within the working group -- it doesn't matter whether it is inline in
the spec or a separate component.  What's important is that the WG
agrees that the assertions cover the functionality defined in the spec.
This approach leads to the following:

1) A distinct set of testable assertions, which leads to concrete tests.
2) A review of the spec for testability.
3) A way to report errors and ambiguities back to the spec authors.

The only relatively minor problem we have had in the past is that there
are often a large number of assertions, and it is difficult to find
folks who are willing to review them all.  Invariably, a handful of
them -- probably less than 5% -- lead to problems in the tests
themselves.  These problems ultimately work themselves out as
implementers begin using the tests and report issues back.

An upside to this approach that we'd like to investigate further: if
the assertions can be defined in a machine-processable way, we might be
able to automate some of the test generation.  We've used a variation
of this approach to automatically generate tests for XSL-FO, and we've
also defined tests in XML for the DOM and used XSLT to transform them
into machine-specific bindings.  In another effort, we generated tests
for XML Schema Datatypes by supplementing the schema for schemas with
additional tagged information and feeding it to a test generator.

--Mary

Mary Brady
NIST, Web Technologies
mbrady@nist.gov

Received on Tuesday, 7 May 2002 12:10:47 UTC