Testing the (RL) testing...

Mike, Markus,

Following the questions I asked on the call yesterday, I tried to walk
through the process that I, as an average implementer, have to go
through for testing. I guess we all agree that we should make this
process as smooth as possible if we want implementers, possibly
external ones, to come in and provide their test results for CR...

My premises: (1) I am interested in RL; (2) I am interested in the RDF
version only, i.e., what I want to test (and contribute to the CR
process) is an implementation using the OWL RL rule set; and (3) I do
not really understand the Functional Syntax, nor do I want to spend a
lot of time learning it (but I am familiar and comfortable with the RDF
encoding thereof). I believe these premises characterize a number of
potential OWL RL implementers. In what follows I play the role of such
a person...

Here are some of my experiences so far. Nothing major, just smallish
things whose fixing would, I believe, help those implementers a lot.

- Markus, I did download the RL tests[1]. However, I must admit that,
at least for me, this has only limited usability as is. To test my
implementation, I need the individual 'premise' ontologies
independently of one another, and all in RDF/XML. The file[1] includes
all of these as string literals, so I would have to write an extra
script that extracts those string literals and stores the results in
separate RDF/XML files (converting the XML entities back so that they
become real XML again). This is of course doable with some SPARQL
queries (see the sketch below), but I wonder whether it would be
possible to provide implementers with a more, shall we say, pre-chewed
version of those (as a start, the premise and conclusion ontologies
could be encoded as XMLLiterals to make the job of, say, a SPARQL
extraction easier...).
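
For illustration, I imagine something along the lines of the query
below; the test: namespace URI and the premise property name are just
my guesses and would have to be checked against the actual export, so
please take this as a sketch only:

PREFIX test: <http://www.w3.org/2007/OWL/testOntology#>

SELECT ?id ?premise
WHERE {
    # each test case is assumed to carry its identifier and its
    # premise ontology serialized as a string literal
    ?testcase test:identifier ?id ;
              test:rdfXmlPremiseOntology ?premise .
}

Even then, each binding of ?premise still has to be un-escaped and
written out into a separate file, hence my plea above.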

- That being said, the test page filtered for RL[2] looks perfect to me
as a starting point. It rocks :-)

- I picked one test (DisjointClasses-001[3]). It is a bit discomforting
that the whole test is described in Functional Syntax, which, as I
said, I do not understand ;-). There are links to the RDF versions
(they do not work right now, but I presume our M'ter friends are busy
fixing that), although the fact that these are labelled 'informative'
is somewhat discomforting for an external user, for whom the RDF
version should be the authoritative one...

- However, I found the link at the bottom labelled 'Auxiliary syntax
documents', which does present the whole test in RDF/XML[4]. This is
what I really need! Great.

I wonder whether that link should not appear in a more prominent place
on [3], labelled not as 'Auxiliary' but simply as 'RDF/XML version'.
Alternatively, we could have a complete alternative to [3], with all
the additional information there, but in RDF/XML instead of FS. That
could then be linked from [2], i.e., we could save the user some extra
hops.

- [4] has the download links, which I can either use to fetch the
RDF/XML locally or whose URIs I can feed into an implementation if the
latter is online. That rocks again :-)

One tiny issue, though: the URI for the premise ontology in RDF[5]
returns the data as application/xml. It may be better to use
application/rdf+xml, as processors may depend on this.

- This particular test is labelled (on [3]) as 'applicable under both
direct and RDF-based semantics'. However, as far as I can see, this
test cannot be completed using the OWL RL rule set. This may be an
example where the direct semantics of RL and the RDF-based semantics
with the rules diverge or, more exactly, where the rule set is
incomplete. This is fine per se, as long as it is clearly stated
somewhere on the test page; otherwise implementers may not understand
why they cannot complete this test.

- Provided I run the test, eyeball the result, and am happy with what I
see, I presume I have to record it using [6]. First of all, it would be
good to add some comments/annotations to that ontology, because it is
not 100% clear what the various terms mean. Also, my premise was that
the implementer does not understand FS, which makes it a bit of a
challenge for him/her... :-( But o.k., let us put this aside; there are
examples. So one would have to say something like:

[]
    a :InconsistencyRun , :PassingRun ;
    :test [ test:identifier "DisjointClasses-001"^^xsd:string ] ;
    :runner ex:my-reasoner .

but I am not sure whether this is an 'InconsistencyRun' or something
else, because that information is not prominent on [3]... Well, there
is a note at the bottom of the page saying 'Positive Entailment Test',
so I presume what I have to use there is 'PositiveEntailmentRun'
(judging from the FS on [6]). Maybe this information should be more
prominent on the page.
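
In other words, presumably (using the same ad-hoc prefixes as in the
snippet above, and assuming PositiveEntailmentRun is indeed the class
name defined in [6]), the record should rather read:

[]
    a :PositiveEntailmentRun , :PassingRun ;
    :test [ test:identifier "DisjointClasses-001"^^xsd:string ] ;
    :runner ex:my-reasoner .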

- Would it be possible to facilitate this last step a bit? After all,
this Turtle extract could be generated by some sort of form, or
provided as a pre-chewed Turtle snippet on the wiki side, so that the
user would only have to put in the reference to his/her own reasoner
and add PassingRun (or FailingRun or IncompleteRun); see the sketch
below.
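
Just to illustrate what I have in mind (again, the prefixes and class
names are placeholders to be taken from [6]), the wiki could offer,
next to each test, a ready-made snippet along these lines:

# fill in your reasoner's URI and, if necessary, change :PassingRun
# to :FailingRun or :IncompleteRun
[]
    a :PositiveEntailmentRun , :PassingRun ;
    :test [ test:identifier "DisjointClasses-001"^^xsd:string ] ;
    :runner <http://example.org/your-reasoner> .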

- At the moment it is not clear where I should submit the Turtle
results of my test runs, but I presume that is something Mike will
write down...

That is it for now! Sorry if I sound critical; I am not, really... What
we have, as I said here and there, rocks :-)

I hope this helps

Cheers

Ivan

[1] http://wiki.webont.org/exports/profile-RL.rdf
[2] http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/Test:RL
[3]
http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/DisjointClasses-001
[4]
http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/DisjointClasses-001-RDFXML
[5]
http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/Special:GetOntology/DisjointClasses-001-RDFXML?m=p
[6] http://www.w3.org/2007/OWL/wiki/Test_Result_Format


-- 

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf
