Re: Reporting Test Results (was Re: Regrets for 2003-09-04 telecon)

[I've added the WG on the CC, I hope you don't mind.   I'm sure this
is a common issue.]

> In your latest version you added duplicates for all the tests - one
> for passing the test, a second for being able to handle the syntax
> of the test. I think that a better solution would be to add an
> additional name to the single list.  We could have passed,
> passedsyntax, failed, timeout, unknown as the options.  If a tester
> can pass a test, odds are pretty good that the tester can handle the
> syntax, so there is no real need to have duplicates of all the tests
> in the list.
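The single-list scheme quoted above could be sketched roughly as follows (the names and the `handled_syntax` helper are hypothetical, just to illustrate the idea that passing subsumes handling the syntax):

```python
from enum import Enum

class Result(Enum):
    """Possible outcomes for a single test, per the proposal above.

    PASSED_SYNTAX records that a system could handle the test's
    syntax without fully passing the test itself.
    """
    PASSED = "passed"
    PASSED_SYNTAX = "passedsyntax"
    FAILED = "failed"
    TIMEOUT = "timeout"
    UNKNOWN = "unknown"

def handled_syntax(r: Result) -> bool:
    """A passing result implies the syntax was handled, so no
    duplicate "syntax" entry is needed for each test."""
    return r in (Result.PASSED, Result.PASSED_SYNTAX)
```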

What I mean by "Syntactic Level Test for ..." is not that the system
can parse the inputs, but that it can determine the syntactic level of
the input files (identify their "species", as we used to say), and do so
correctly.   As I understand it, this is what Peter's OWLP does.  

But this operation is something that OWL Full reasoners like Euler and
Surnia have no reason to even attempt.  I wouldn't imagine DL
reasoners would bother to distinguish between DL and Lite, and Lite
reasoners wouldn't bother to distinguish between DL and Full.  So the
only reason to do species identification, I guess, is if you can route
the ontology to one of several reasoners.
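The routing idea above could be sketched like this (the reasoner names and the fallback-to-Full policy are my assumptions, not anything any implementation actually does):

```python
# Hypothetical dispatcher: species identification only earns its keep
# if the answer is used to route the ontology to one of several
# reasoners.
REASONERS = {
    "Lite": "lite_reasoner",
    "DL": "dl_reasoner",
    "Full": "full_reasoner",
}

def route(species: str) -> str:
    """Pick a reasoner for the identified species; a Full reasoner
    can accept any input, so fall back to it for anything else."""
    return REASONERS.get(species, REASONERS["Full"])
```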

It seems to me that Syntactic Level Tests should be thought of as
another category (like Positive Entailment Tests, Tests for Incorrect
Use of OWL Namespace, and Import Level Tests), although I'm okay with
them being implicit in "OWL Test Cases" as they are now.  Most of
these categories (there are 9 others) correspond to system features
that may or may not be present, and folks will be interested in the
results for those tests only when they care about that feature (e.g.,
determining species).

I've started the code change to give ten output tables (corresponding
to the test types), but it's not done yet.
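The per-type tables could be produced by a grouping step along these lines (the record shape and the sample test names are made up for illustration):

```python
from collections import defaultdict

# Hypothetical result records: (test_name, test_type, outcome).
results = [
    ("I5.8/001", "Positive Entailment Test", "passed"),
    ("I5.8/002", "Positive Entailment Test", "failed"),
    ("misc/001", "Syntactic Level Test", "passed"),
]

def tables_by_type(records):
    """Group results into one output table per test type, so readers
    can look only at the categories (features) they care about."""
    tables = defaultdict(list)
    for name, test_type, outcome in records:
        tables[test_type].append((name, outcome))
    return tables
```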

      -- sandro

Received on Friday, 5 September 2003 17:09:20 UTC