
Test Cases

From: Sean Bechhofer <sean.bechhofer@manchester.ac.uk>
Date: Mon, 4 Jun 2007 15:20:09 +0200
Message-Id: <DC1481A1-51A7-4F6B-A644-A4B72BF9EA3F@manchester.ac.uk>
To: SWD WG <public-swd-wg@w3.org>


I'm unlikely to make tomorrow's telecon, so here are a few brief
thoughts on test cases, following up on a discussion we had on last
week's call.

I think there are a couple of things that we need to make clear in any
test case work. The issues that are being raised include aspects
relating both to the details of the recommendations and to how
applications should (could?) deal with SKOS.

For example, if we take ISSUE-33, in the alternative proposals being
presented, we have questions of consistency. These can clearly be
turned into test cases along the lines of OWL's consistency or
inconsistency tests -- we have a well defined mechanism for describing
the expected outcome. For those who aren't aware of the OWL Test Case
work [1], tests are all described in machine readable form (using
RDF). So a consistency test includes an RDF model -- the expected
outcome of running the test through a consistency checker will be
"true". As a slightly more complicated example, an entailment test has
an input ontology and a collection of statements that are entailed by
the ontology. The key thing is that the expected outcome of the test
can be unambiguously described. It's then easy for developers to write
test harnesses that automate the testing process. It was also possible
to integrate results from different systems to show an overall picture
of the state of play while gathering implementation experience [2].
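To illustrate the point about unambiguous expected outcomes, here is a minimal sketch of such a harness in Python. Everything in it is invented for illustration -- the "ex:" predicates, the toy consistency rule, and the data structures are stand-ins, not real SKOS/OWL machinery -- but it shows the shape of the thing: machine-readable input plus a declared expected result, so any implementation can be run against the suite automatically.

```python
# Sketch of an automatable test harness in the spirit of the OWL Test Cases:
# each test pairs a machine-readable input with an unambiguous expected
# outcome, so results from different systems can be compared mechanically.
# The predicates and the checker below are toy stand-ins for illustration.

from dataclasses import dataclass

Triple = tuple[str, str, str]  # (subject, predicate, object)

@dataclass
class ConsistencyTest:
    name: str
    triples: list[Triple]  # the input RDF model, as simple triples
    expected: bool         # the declared expected result of the check

def toy_consistency_check(triples: list[Triple]) -> bool:
    """Toy checker: calls a graph inconsistent if the same pair of
    resources is related by both 'ex:broader' and 'ex:disjointWith'.
    (Illustrative only; a real checker would apply the full semantics.)"""
    broader = {(s, o) for s, p, o in triples if p == "ex:broader"}
    disjoint = {(s, o) for s, p, o in triples if p == "ex:disjointWith"}
    return not (broader & disjoint)

def run(tests: list[ConsistencyTest]) -> dict[str, bool]:
    # The harness just compares actual outcome to expected outcome per test.
    return {t.name: toy_consistency_check(t.triples) == t.expected
            for t in tests}

tests = [
    ConsistencyTest("consistent",
                    [("ex:a", "ex:broader", "ex:b")], True),
    ConsistencyTest("inconsistent",
                    [("ex:a", "ex:broader", "ex:b"),
                     ("ex:a", "ex:disjointWith", "ex:b")], False),
]

results = run(tests)
```

An entailment test would look the same except that the expected value is a second set of triples rather than a boolean; the key property in both cases is that pass/fail is decidable without human judgement.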

The second aspect is less clear. For example, again in ISSUE-33 there
is some discussion of how an application could display hierarchies
that make use of Bundles. It would be desirable to have "test cases"
to accompany this, but I don't think that these have the same weight
as things like the consistency tests. It's also difficult to see how
we would precisely capture the expected results of such tests (in a
way that would allow us to automate the testing process). However, I
think this would be a useful resource that would provide some kind of
best practice/recipe advice for implementors. I'm not sure whether
there's some precedent for this kind of thing -- is this an approach
that other WGs have taken? Question for Ralph I guess.

I'll take a look at the current issues and see if there are any more
concrete proposals that spring to mind.

      Sean

[1] http://www.w3.org/TR/owl-test/
[2] http://www.w3.org/2003/08/owl-systems/test-results-out

--
Sean Bechhofer
School of Computer Science
University of Manchester
sean.bechhofer@manchester.ac.uk
http://www.cs.manchester.ac.uk/people/bechhofer
Received on Monday, 4 June 2007 13:20:22 UTC
