
RE: Manual Rewriting and Passing Entailments

From: Peter Crowther <Peter.Crowther@melandra.com>
Date: Tue, 16 Sep 2003 12:41:58 +0100
Message-ID: <DDBBD1E00935D144AB9563D57EF98D625342@raccoon.melandra.net>
To: "Ian Horrocks" <horrocks@cs.man.ac.uk>
Cc: <www-webont-wg@w3.org>

> From: Ian Horrocks [mailto:horrocks@cs.man.ac.uk] 
[...]
> As an implementor, tests that I can (easily) pass are of little
> value. Tests that I can't pass, in particular small tests, are of
> great value.

As another implementor, I have to disagree slightly.  One starts with being unable to pass *any* tests and works up; therefore, tests that I can now pass easily may previously have been of great value to me.

However, I think the WG may need to split out the (at least) three possible uses of the tests in order to make progress on this discussion:

1) Political expediency dictating that {all, many} of the tests should be passable by {all, many} of the implementations.  Simple tests like these get the standard through in a timely manner and make us friends among all the implementors.  They may give a falsely rosy picture of a given implementation - it may pass 98% of the test cases simply because only 2% of the tests exercise any significant portion of the implementation.

2) Useful test cases for implementors, intended to exercise everything from the trivial to the complex in a series of advances that allow controlled testing of new features.  These make us friends among the serious implementors of particular levels of the standard (as opposed to those who would like an OWL badge on their existing systems with the minimum of work).

3) Political or corporate will dictating that {all, many} of the tests should be passable by few of the implementations, in order to demonstrate some form of superiority of implementation.  Torture tests like these make us few friends among the implementors; they may or may not assist in adoption of the standard by users, as they may better differentiate between systems that (for example) pass only the simple test cases and systems that (for example) pass the full set of test cases.

I think different WG members may be arguing from different views of the use of these tests.  As an ex-implementor, I'm all for (2) as it makes my life much easier.  As someone who wants to see OWL deployed in robust, industrial-strength interchange systems, with a high degree of interoperability, I'm against (1).  Oddly, I can't find anything good or bad to say about (3); it's just another point of view.

Just my 0.03.

		- Peter
Received on Tuesday, 16 September 2003 07:42:00 GMT
