- From: Ivan Herman <ivan@w3.org>
- Date: Wed, 03 Jun 2009 11:01:04 +0200
- To: Mike Smith <msmith@clarkparsia.com>
- CC: mak@aifb.uni-karlsruhe.de, W3C OWL Working Group <public-owl-wg@w3.org>
- Message-ID: <4A263BD0.3060607@w3.org>
Hi Mike,

Thanks! Comments below...

(Just a side remark: I tried to look at the tests from an OWL RL point of view. Many remarks might very well be valid for other profiles or for OWL Full, but I concentrated on this case only for now...)

Mike Smith wrote:
> I've quoted and responded to those bits for which I have useful feedback.
>
> On Thu, May 28, 2009 at 06:15, Ivan Herman <ivan@w3.org> wrote:
>
>> - Markus, I did download the RL tests[1]. However, I must admit that, at least for me, this has only a limited usability as is. To test my implementation, I need the individual 'premise' ontologies independently of one another, and all in RDF/XML. The file[1] includes all these as string literals, so I'd have to make an extra script that extracts those string literals and stores the results in separate RDF/XML files.
>
> Alternatively, people can do this by writing a small amount of code with the harness, even if the goal is to run tests with a non-Java tool. I added a bit to the Test_Running_Guide page to hint at this.

O.k. I have not tried to run the tool, and http://github.com/msmithcp/owlwg-test/tree/master does not hint at using this tool just to extract the specific tests (I presume this is on your t.b.d. list), but that is indeed a good way to do it.

I (and the testers) would actually be interested to know how the harness can be run. What would it require to use this harness if I have, say, a web service returning an expanded RDF graph using the RL rules?

My comments below are related to the case when the harness cannot be run...

>> - I picked one test (DisjointClasses-001[3]). It is a bit discomforting that the whole test is described in Functional Syntax that, as I said, I do not understand.
>
>> - However, I find the link at the bottom that says 'Auxiliary syntax documents', which does present the whole test in RDF/XML[4]. This is what I really need! Great.
>
> Each test page shows the format the test was initially created in - for most this is RDF, for some it is functional syntax. Some tests (mostly those with fs) have multiple normative formats. If an auxiliary syntax link is available (as it was in this case), it is because the test was manually translated to have multiple normative formats. Both formats are included in the "download owl" link and the exports, and the test may be used as a syntax translation test.

I know I am a pain in the back side here, my apologies:-( But, at the moment, the syntax translators via the M'ter service do not work. When do we plan to have that up and running? We have already contacted some of our potential implementers/testers, and the deadline we give them to complete the tests (mid July) is fairly short. Ie, these translations to other formats should be available very soon...

A cosmetic issue: the page says 'Normative syntax: Functional'. I am not sure what this means, and I think we should be careful using the word 'normative' in this case. It of course makes sense for tests that convert one syntax to the other, but not for others...

>> I wonder whether that link should not appear in a more prominent place on[3] and not be labelled as 'Auxiliary' but simply as 'RDF/XML version'. Alternatively, we could have a complete alternative of [3], with all the additional info there, but in RDF/XML instead of FS. That could then be linked from[2], ie, we can save the user some extra hops.
>
> That link is not just for RDF/XML. A test could be initially in RDF/XML, and that link would provide a functional syntax version, or an OWL/XML version.

So if the M'ter conversion service works for all the tests, I am not really sure what the reason for having those links is. Aren't these just a source of confusion then?

>> - This particular test is labelled (on [3]) as 'applicable under both direct and RDF-based semantics'. However, as far as I can see, this test cannot be completed using the OWL RL Rule set. This may be an example where the Direct Semantics of RL and the RDF-based semantics with the rules diverge or, more exactly, where the rule set is incomplete. This is fine per se, as long as this is clearly stated on the test page somewhere; otherwise implementers may not understand why they cannot complete this test.
>
> The entailed ontology in this test does not satisfy the requirements of Theorem PR1. I believe, then, that the RL + RDF Semantics entailment checker could return unknown.

I would rather say 'not applicable'. Maybe an extra class should be added to the result ontology indicating this. At the moment I see 'failing run', 'passing run', or 'incomplete run', and none of these really describes this case...
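Just to make that suggestion a bit more concrete, what I have in mind is something along these lines; ':NotApplicableRun' is a class name I have simply made up for illustration, it is not part of the result ontology today:

[[[
@prefix : <http://www.w3.org/2007/OWL/testResultOntology> .

# A run that is, formally, a positive entailment run, but that an RL rule
# based implementation flags as being outside the scope of the rule set
# (eg, because Theorem PR1 does not apply).
# ':NotApplicableRun' is a made-up name, only to illustrate the idea.
[] a :PositiveEntailmentRun , :NotApplicableRun .
]]]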
> The test cases indicate applicability of Direct Semantics and RDF-Based Semantics. They do not have an indicator for the partial axiomatization of the RDF-Based Semantics provided by the RL rules.
>
> ***
> I believe this was discussed in the past but no action was taken. Would you like to propose enhancing the metadata for RL tests to indicate if PR1 is satisfied?
> ***

I think this is certainly good to have.

>> - Provided I run the test, eyeball the result, and I am happy with what I see, I presume I have to record it using[6]. First of all, it would be good to add some comments/annotations to that ontology, because it is not 100% clear what the various terms mean. Also, the premise was that the implementer does not understand FS, which makes it a bit of a challenge for him/her...
>
> I've modified the page to include a description of the example and provided a link to the ontology in RDF/XML. Hopefully that makes it more approachable.

Yes, thank you. But that was not really my point. What I am wondering is whether it would be possible to add an extra field to, say, http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/DisjointClasses-001 that provides most of the necessary answer data. Ie, a field saying:

[[[
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix : <http://www.w3.org/2007/OWL/testResultOntology> .

[] a :PositiveEntailmentRun , ADDYOURRESULT ;
   :test [ test:identifier "DisjointClasses-001"^^xsd:string ] ;
   :runner ADDIDTOYOURTEST .
]]]

So that the tester can just copy/edit this. The field that is the most complicated to 'find' for this test is :PositiveEntailmentRun; I would expect a number of responses going wrong...
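With the placeholders filled in, a passing run could then look something like the record below. The runner IRI is of course fictitious, I am assuming that :PassingRun is the class corresponding to a 'passing run' in the result ontology, and the namespace I put behind the test: prefix should be double-checked against the test ontology:

[[[
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix : <http://www.w3.org/2007/OWL/testResultOntology> .
# assumed namespace for the test: prefix, to be checked:
@prefix test: <http://www.w3.org/2007/OWL/testOntology#> .

# The tester only has to replace the result class (:PassingRun here) and the
# runner IRI (a made-up example below) in the pre-filled field.
[] a :PositiveEntailmentRun , :PassingRun ;
   :test [ test:identifier "DisjointClasses-001"^^xsd:string ] ;
   :runner <http://example.org/my-owl-rl-reasoner> .
]]]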
Thanks!

Ivan

--
Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf

Received on Wednesday, 3 June 2009 09:01:37 UTC