
Re: OWL Test Results page, built from RDF

From: Sandro Hawke <sandro@w3.org>
Date: Sun, 07 Sep 2003 07:38:42 -0400
Message-Id: <200309071138.h87BcgFM030265@roke.hawke.org>
To: Ian Horrocks <horrocks@cs.man.ac.uk>
Cc: "Jos De_Roo" <jos.deroo@agfa.com>, "Jeremy Carroll <jjc" <jjc@hplb.hpl.hp.com>, www-webont-wg@w3.org


Ian Horrocks writes:
> I don't believe that it is either desirable or sensible for the
> results to distinguish good/bad incompleteness. Bad incompleteness is
> unsoundness and can simply be reported as "fail".

When I'm working on Surnia (based on otter+axioms), I'm trying to turn
the Incompletes for Positive Entailment Tests and Inconsistency Tests
into Passes (while being very careful to avoid getting any Fails).  I
have no expectation of making any progress on the Negative Entailment
Tests or Consistency Tests, however.  Is there really no point in
distinguishing between those two kinds of expectation here?

I've split the test results page into different sections for the
different kinds of tests; maybe I'll just produce no column for any
system which reports no-data on all the tests in some section.  Then,
by producing no-data for the tests which a system has no hope of
passing, it won't even be considered in the running.  Does that make
sense?
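
(To make that concrete, roughly, the filtering could look like the
Python sketch below.  This is only an illustration of the rule, not
the actual results-page code; the (system, test, outcome) layout and
the test names are made up.)

    # Sketch of the rule: within a section, a system gets a column
    # only if it reports something other than "no-data" for at least
    # one test in that section.
    from collections import defaultdict

    def columns_for_section(results):
        """results: iterable of (system, test, outcome) for one section."""
        outcomes = defaultdict(list)
        for system, _test, outcome in results:
            outcomes[system].append(outcome)
        # Drop any system whose outcomes are all "no-data".
        return [s for s, os in outcomes.items()
                if any(o != "no-data" for o in os)]

    section = [
        ("Surnia",   "test-a", "pass"),
        ("Surnia",   "test-b", "incomplete"),
        ("OtherSys", "test-a", "no-data"),
        ("OtherSys", "test-b", "no-data"),
    ]
    print(columns_for_section(section))   # -> ['Surnia']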

Another issue is whether it's fair to say Surnia passes a test when it
only does so with manual (test-specific) guidance in finding a proof.
That guidance only makes it complete sooner, so it's really a
Would-Pass-if-given-enough-computing-resources.  I'd like to call that
a "Pass (_note_)" (where the note is a link to an explanation); does
that seem fair?  By CADE/CASC/TPTP standards that's not a Pass, but
they might be after something different.

   -- sandro
Received on Sunday, 7 September 2003 07:44:25 GMT
