Re: TEST: scope

(Chair neutrality off)


At 10:49 AM +0100 6/26/02, Jeremy Carroll wrote:
>I wished to write down my concerns about the scoping of the TEST work.
>
>
>Test suites have many different purposes:
>- checking correctness
>    - at particular points (issue driven)
>    - generally (conformance testing)
>- exercising difficult problems
>- performance testing
>- scale testing
>
>Test suites have at least two different audiences:
>- systems and their developers
>- other humans
>
>The latter audience prefer small tests that can be easily understood.
>This audience may be reading the tests in order to better understand 
>some text.
>
>=========================
>
>
>My view is that we need to be clear as to what we are trying to achieve.
>
>I suggest that we should generate tests that illustrate our issue resolutions.

I disagree strongly with this.

>I suggest that we should keep all our tests as small as possible.

Agree with this.

>
>I suggest that we should not aim at a conformance test suite.

Mostly agree with this.

>I suggest that performance and scalability tests (and the like) are out of
>scope.

Absolutely agree with this.
>
>===
>
>We also need to get process in place to generate and agree test cases.
>Each test case is quite expensive and in my view we should not generate too
>many. (Not that there is any danger at the moment!)
>
>Smaller tests are much cheaper than bigger tests because they are much more
>likely to be right first time.
>
>Jeremy

Jeremy - the problem I have with being issue oriented is that we 
would then have test cases showing how we compare to a non-existent 
entity. If we were OWL 2, I'd think it would make sense to do issues 
only, but as OWL 1 it seems to me we need to help folks with all the 
main features.  Looking at Mike's OWL document (our most important 
one, since it is the reference description for the normative exchange 
syntax), I think we should have a small test for each language 
feature -- because there is no place to go to find such things, and 
if we don't produce them, where will they come from?  I think the 
number of tests added beyond the issue-driven ones would be small 
(most of the language features are being discussed in some way), and 
since we wouldn't be aiming at conformance testing, they wouldn't 
have to be exhaustive.  But for an implementor trying to understand 
some language feature, it's easier to say "I get it" from a test than 
to have to work through the whole model theory.

Here's an example: one of my students is developing a multi-ontology, 
namespace-aware tool (the RDF Instance Creator - new version out on 
RDFIG soon).  He was asking me today what sameClassAs and 
samePropertyAs do.  A couple of tests would have made it easier for 
him to decide whether he wanted to pursue developing code for these 
or not (in fact, I sent him a couple of examples, much as if they 
were test cases).  Since these features are not in our issues list 
(we all agree they are useful), we wouldn't have test cases for them, 
which seems arbitrary to me.
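For instance, a minimal sameClassAs test of the kind I mean might 
look like the following - an illustrative sketch only, not a 
WG-approved test.  The example.org URIs are made up, and I've used 
the DAML+OIL namespace for sameClassAs since OWL has not yet fixed 
its own.  Premise document (N-Triples):

```
<http://example.org/vocab#Car> <http://www.daml.org/2001/03/daml+oil#sameClassAs> <http://example.org/vocab#Automobile> .
<http://example.org/data#myCar> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.org/vocab#Car> .
```

Entailed conclusion document:

```
<http://example.org/data#myCar> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.org/vocab#Automobile> .
```

From a pair like this a reader can see at a glance that sameClassAs 
lets class membership flow between the two classes, without having to 
work through the model theory.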

  -JH
-- 
Professor James Hendler				  hendler@cs.umd.edu
Director, Semantic Web and Agent Technologies	  301-405-2696
Maryland Information and Network Dynamics Lab.	  301-405-6707 (Fax)
Univ of Maryland, College Park, MD 20742	  240-731-3822 (Cell)
http://www.cs.umd.edu/users/hendler

Received on Wednesday, 26 June 2002 19:07:14 UTC