Re: Testing the (RL) testing...

Markus Krötzsch wrote:
> Hi Ivan,
> 
> I will just answer directly to your remarks on multiple syntaxes, conversion, and 
> "normativity" in this context. The test harness questions I can only leave to 
> Mike.
> 
> The reason why the field is called "normative syntax" is that this is a 
> syntactic form that is normative *for the test*, i.e. one that tools can use 
> to check if they pass the test (the test ontology also allows non-normative 
> syntax forms that do not have an official status). The normative versions have 
> been carefully checked before approving a test case in the working group, so 
> there is some quality commitment that could not really be given for 
> automatically generated translations. This is why these syntaxes are 
> specifically marked, even for tests that do not involve syntax conversions. 
> Maybe we should change "normative" to some other term in order to avoid 
> confusion between "normative syntaxes for OWL" and "syntax of a normative 
> ontology used in this test"?

Sandro offered 'original', and it works for me. I just want to avoid
unnecessary misunderstandings with regard to the word 'normative'...

> 
> Even if the UMan conversion at some point becomes so reliable that we completely 
> trust it, we would still need the "alternative syntax" option; otherwise the 
> tests would rely on the current availability and correctness of the conversion 
> service (which may change after tests were approved). The alternative syntaxes 
> are also the ones that are included in the exported test metadata.
> 
> The alternative syntaxes might become less relevant for users who browse the 
> wiki and who may be happy with the online conversion -- this hoped-for 
> situation is what motivated the current layout, where "alternative syntax" is 
> not very prominently displayed. One could of course hide these links even 
> further once the conversion is more reliable.
> 
> Alternatively, one could re-design the UI to have a better labeling and 
> placing of the "alternative syntax" links -- but if there is hope that the 
> UMan conversion will be fixed soon, I would rather not change all this (in any 
> case, the alternative syntaxes will never be as complete as the service's 
> conversions).
> 


O.k., I understand. I propose we postpone this issue until we have a
clearer view of the status of the M'ter conversion service.

And, as Michael just said: thanks for all the work. I am whining here,
but I should have started by saying that all this really looks good :-)

Cheers

Ivan


> -- Markus
> 
> 
> On Wednesday, 3 June 2009, Ivan Herman wrote:
>> Hi Mike,
>>
>> Thanks! Comments below...
>>
>> (Just a side remark: I tried to look at the tests from an OWL RL point
>> of view. Many remarks might very well be valid for other profiles or for
>> OWL Full, but I concentrated on this case only for now...)
>>
>> Mike Smith wrote:
>>> I've quoted and responded to those bits for which I have useful feedback.
>>>
>>> On Thu, May 28, 2009 at 06:15, Ivan Herman <ivan@w3.org> wrote:
>>>> - Markus, I did download the RL tests [1]. However, I must admit that, at
>>>> least for me, this has only limited usability as is. To test my
>>>> implementation, I need the individual 'premise' ontologies independently
>>>> of one another, and all in RDF/XML. The file [1] includes all these as
>>>> string literals, so I'd have to write an extra script that extracts those
>>>> string literals and stores the results in separate RDF/XML files.
>>> Alternatively, people can do this by writing a small amount of code with
>>> the harness, even if the goal is to run tests with a non-Java tool.  I
>>> added a bit to the Test_Running_Guide page to hint at this.
>> O.k. I have not tried to run the tool, and the
>>
>> http://github.com/msmithcp/owlwg-test/tree/master
>>
>> does not hint at using this tool just to extract the specific tests (I
>> presume this is on your t.b.d. list), but that is indeed a good way to
>> do it.
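>>
>> Failing that, the extra script I mentioned need not be big. A minimal
>> sketch in Python with rdflib; the property names are my guess at the
>> test ontology vocabulary and should be checked against the actual
>> schema:
>>
>> [[[
>> # Extract each RDF/XML premise ontology from the downloaded test file
>> # into its own file, named after the test identifier.
>> # ASSUMPTION: the test ontology uses test:identifier and
>> # test:rdfXmlPremiseOntology as below.
>> from rdflib import Graph, Namespace
>>
>> TEST = Namespace("http://www.w3.org/2007/OWL/testOntology#")
>>
>> g = Graph()
>> g.parse("all-rl.rdf", format="xml")  # the test file from [1]
>>
>> for t in g.subjects(TEST.rdfXmlPremiseOntology, None):
>>     ident = g.value(t, TEST.identifier)
>>     premise = g.value(t, TEST.rdfXmlPremiseOntology)
>>     with open("%s-premise.rdf" % ident, "w") as out:
>>         out.write(str(premise))
>> ]]]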
>>
>> I (and the testers) would actually be interested to know how the harness
>> can be run. What does it take to use this harness if I have, say, a web
>> service that returns an RDF graph expanded using the RL rules?
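>>
>> For concreteness, without the harness I would do roughly the following
>> (a sketch only; the endpoint URL and calling convention are made up,
>> and a real checker would also have to handle bnode renaming):
>>
>> [[[
>> # Check a positive entailment test against a hypothetical RL
>> # expansion web service that accepts and returns RDF/XML.
>> import urllib.request
>> from rdflib import Graph
>>
>> def passes(premise_rdfxml, conclusion_rdfxml):
>>     req = urllib.request.Request(
>>         "http://example.org/rl-expand",  # hypothetical endpoint
>>         data=premise_rdfxml.encode("utf-8"),
>>         headers={"Content-Type": "application/rdf+xml"},
>>     )
>>     expanded = Graph()
>>     expanded.parse(data=urllib.request.urlopen(req).read(), format="xml")
>>     conclusion = Graph()
>>     conclusion.parse(data=conclusion_rdfxml, format="xml")
>>     # Pass if every conclusion triple is in the expanded graph.
>>     return all(triple in expanded for triple in conclusion)
>> ]]]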
>>
>> My comments below are related to the case when the harness cannot be run...
>>
>>>> - I picked one test (DisjointClasses-001 [3]). It is a bit disconcerting
>>>> that the whole test is described in Functional Syntax, which, as I said,
>>>> I do not understand.
>>>>
>>>> - However, I found the link at the bottom labelled 'Auxiliary syntax
>>>> documents', which does present the whole test in RDF/XML [4]. This is
>>>> what I really need! Great.
>>> Each test page shows the format the test was initially created in -
>>> for most this is RDF, for some it is functional syntax.  Some tests
>>> (mostly those with fs) have multiple normative formats.  If an
>>> auxiliary syntax link is available (as it was in this case), it is
>>> because the test was manually translated to have multiple normative
>>> formats.  Both formats are included in the "download owl" link and the
>>> exports, and the test may be used as a syntax translation test.
>> I know I am a pain in the backside here, my apologies :-( But, at the
>> moment, the syntax translators via the M'ter service do not work. When
>> do we plan to have that up and running? We have already contacted some
>> of our potential implementers/testers, and the deadline we gave them to
>> complete the tests (mid-July) is fairly short. I.e., these translations
>> to other formats should be available very soon...
>>
>> A cosmetic issue: the page says 'Normative syntax: Functional'. I am not
>> sure what this means, and I think we should be careful using the word
>> 'normative' in this case. It of course makes sense for tests that
>> convert one syntax to another, but not for others...
>>
>>>> I wonder whether that link should not appear in a more prominent place
>>>> on [3] and be labelled not as 'Auxiliary' but simply as 'RDF/XML
>>>> version'. Alternatively, we could have a complete alternative to [3],
>>>> with all the additional info there, but in RDF/XML instead of FS. That
>>>> could then be linked from [2], i.e., we could save the user some extra
>>>> hops.
>>> That link is not just for RDF/XML.  A test could be initially in
>>> RDF/XML and that link would provide a functional syntax version, or an
>>> OWL/XML version.
>> So if the M'ter conversion service works for all the tests, I am not
>> really sure what the reason for having those links is. Aren't they just
>> a source of confusion then?
>>
>>>> - This particular test is labelled (on [3]) as 'applicable under both
>>>> direct and RDF-based semantics'. However, as far as I can see, this test
>>>> cannot be completed using the OWL RL rule set. This may be an example
>>>> where the Direct Semantics of RL and the RDF-Based Semantics with the
>>>> rules diverge or, more exactly, where the rule set is incomplete. This
>>>> is fine per se, as long as this is clearly stated on the test page
>>>> somewhere; otherwise implementers may not understand why they cannot
>>>> complete this test.
>>> The entailed ontology in this test does not satisfy the requirements
>>> of Theorem PR1.  I believe, then, that the RL + RDF Semantics
>>> entailment checker could return unknown.
>> I would rather say 'not applicable'. Maybe an extra class should be
>> added to the result ontology indicating this. At the moment I see
>> 'failing run', 'passing run', and 'incomplete run', and none of these
>> really describes this case...
>>
>>>                                            The test cases indicate
>>> applicability of Direct Semantics and RDF-Based Semantics.  They do
>>> not have an indicator for the partial axiomatization of the RDF-Based
>>> Semantics provided by the RL rules.
>>>
>>> ***
>>> I believe this was discussed in the past but no action was taken.
>>> Would you like to propose enhancing the metadata for RL tests to
>>> indicate if PR1 is satisfied?
>>> ***
>> I think this is certainly good to have.
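>>
>> To make the proposal concrete: with such a flag (the property name
>> below is made up for illustration), a tester could filter out the
>> tests that the RL rules cannot complete:
>>
>> [[[
>> # Skip RL tests whose metadata marks the RL rules as incomplete.
>> # test:rlRulesComplete is a hypothetical name for the proposed flag.
>> from rdflib import Graph, Literal, Namespace
>>
>> TEST = Namespace("http://www.w3.org/2007/OWL/testOntology#")
>>
>> g = Graph()
>> g.parse("all-rl.rdf", format="xml")
>>
>> runnable = [
>>     t for t in g.subjects(TEST.identifier, None)
>>     if g.value(t, TEST.rlRulesComplete) != Literal(False)
>> ]
>> ]]]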
>>
>>>> - Provided I run the test, eyeball the result, and I am happy with what
>>>> I see, I presume I have to record it using [6]. First of all, it would be
>>>> good to add some comments/annotations to that ontology because it is not
>>>> 100% clear what the various terms mean. Also, the premise was that the
>>>> implementer does not understand FS, which makes it a bit of a challenge
>>>> for him/her...
>>> I've modified the page to include a description of the example and
>>> provided a link to the ontology in RDF/XML.  Hopefully that makes it
>>> more approachable.
>> Yes, thank you. But that was not really my point. What I am wondering is
>> whether it would be possible to add an extra field to, say, the
>>
>> http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/DisjointClasses-001
>>
>> page that provides most of the necessary answer data, i.e., a field
>> saying:
>>
>> [[[
>> @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
>> @prefix :     <http://www.w3.org/2007/OWL/testResultOntology#> .
>> @prefix test: <http://www.w3.org/2007/OWL/testOntology#> .
>>
>> # ADDYOURRESULT: your result class, e.g. the passing-run class.
>> # ADDIDTOYOURTEST: an identifier for your implementation.
>> []
>>     a :PositiveEntailmentRun , ADDYOURRESULT ;
>>     :test [ test:identifier "DisjointClasses-001"^^xsd:string ] ;
>>     :runner ADDIDTOYOURTEST .
>> ]]]
>>
>> The tester could then just copy/edit this. The field that is the most
>> complicated to 'find' for this test is :PositiveEntailmentRun; I would
>> expect a number of responses to go wrong...
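>>
>> Alternatively, a tester could generate the report programmatically. A
>> minimal sketch with rdflib, reusing the class and property names from
>> the template above (the runner URI is a made-up placeholder):
>>
>> [[[
>> # Emit a test result report like the template above.
>> from rdflib import BNode, Graph, Literal, Namespace, URIRef
>> from rdflib.namespace import RDF, XSD
>>
>> RESULT = Namespace("http://www.w3.org/2007/OWL/testResultOntology#")
>> TEST = Namespace("http://www.w3.org/2007/OWL/testOntology#")
>>
>> g = Graph()
>> run = BNode()
>> g.add((run, RDF.type, RESULT.PositiveEntailmentRun))
>> g.add((run, RDF.type, RESULT.PassingRun))  # your result class here
>> t = BNode()
>> g.add((run, RESULT.test, t))
>> g.add((t, TEST.identifier,
>>        Literal("DisjointClasses-001", datatype=XSD.string)))
>> # Placeholder: replace with your implementation's identifier.
>> g.add((run, RESULT.runner, URIRef("http://example.org/my-reasoner")))
>> print(g.serialize(format="turtle"))
>> ]]]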
>>
>> Thanks!
>>
>> Ivan
> 
> 

-- 

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf
