RE: Please help on description-logic-xxx test cases.

Hi,

Thanks so much for your comment and links to the docs.
I'm carefully reviewing the documents and my OWL inference
rules...
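
To be concrete, the kind of rule I'm double-checking looks roughly like
the sketch below. This is just illustrative Python over a set of RDF
triples, not Bossam's actual rule syntax; the function and variable names
are placeholders of my own.

    # Illustrative sketch only (not Bossam syntax): one forward-chaining
    # step for owl:intersectionOf over (subject, predicate, object) triples.
    # If ?x is typed with every member class of an intersection, infer
    # that ?x is typed with the intersection class itself.

    RDF_TYPE = "rdf:type"

    def infer_intersection_members(triples, intersection_lists):
        inferred = set()
        for intersection_class, members in intersection_lists.items():
            subjects = {s for (s, p, o) in triples if p == RDF_TYPE}
            for s in subjects:
                if all((s, RDF_TYPE, c) in triples for c in members):
                    inferred.add((s, RDF_TYPE, intersection_class))
        return inferred - triples

    # Example: C owl:intersectionOf (A B); john is typed as both A and B.
    triples = {("john", RDF_TYPE, "A"), ("john", RDF_TYPE, "B")}
    intersection_lists = {"C": ["A", "B"]}
    print(infer_intersection_members(triples, intersection_lists))
    # -> {('john', 'rdf:type', 'C')}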

Regarding my previous message: I was just asking for some
help from DL gurus and implementors, not making a formal
comment or request. Thanks.

Best,
Minsu

> -----Original Message-----
> From: public-webont-comments-request@w3.org 
> [mailto:public-webont-comments-request@w3.org] On Behalf Of 
> Jeremy Carroll
> Sent: Saturday, November 29, 2003 8:14 AM
> To: public-webont-comments@w3.org; Minsu Jang
> Cc: Sean Bechhofer
> Subject: Re: Please help on description-logic-xxx test cases.
> 
> Hi Minsu
> 
> I am copying Sean on this message; he was the author of these particular
> tests (as you can see from the dc:creator in the Manifest files).
> 
> The tests themselves come from DL'98.
> 
> The following link gives the introduction:
> http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-11/Intro.ps
> 
> Page 2 of that document is probably the best description of 
> these tests.
> 
> The original test data is still available from Horrocks:
> http://www.cs.man.ac.uk/~horrocks/FaCT/dl98-test.tar.gz
> 
> 
> (I have just noticed that the link in the CR document was broken; I
> believe the above links are OK.)
> See [DL 98] in the references in the OWL Test Cases document.
> 
> > I have added to my OWL inference rulebase a bunch of inference
> > rules for owl:intersectionOf and owl:complementOf, and it made
> > my Bossam engine successfully pass five description-logic-2xx tests,
> > which are 201, 202, 204, 205, and 207. :-)
> > But I got two failures on 203 and 206. :-(
> 
> > What are the purposes of these tests? The descriptions of
> > the tests just say something cryptic like k_branch, k_d4,
> > k_dum, k_grz, k_lin, k_path, and k_ph. I cannot see any
> > differences between the tests by reading the premise documents.
> > They just look very similar to each other.
> 
> From my point of view, your message reveals the purpose of these tests:
> to break your system! (And other people's.) A failing test is an
> opportunity to improve your code.
> 
> We chose to include tests from previous work by the Description Logic
> community. We hoped to benefit from their experience of the things that
> are difficult to implement.
> The tests in the test suite are intended to have a range of difficulty,
> so that even the best systems struggle to pass all of them. We have
> tried to avoid really impossible tests (except perhaps in the extra
> credit section).
> 
> In the acknowledgements section there is the list of test authors. You
> will see that it is fairly long, and because of that the tests themselves
> show a variety of flavours. Those authored by myself and Sean tend to
> have rather cryptic abstract concept names, and we do not appear to be
> thinking about a real-world problem. I personally tend to think about
> OWL in a fairly abstract way, and my tests are merely symbolic
> manipulation. Those from Dan Connolly or Jos De Roo tend in general to
> refer to real-world problems, and hence tend to be easier to understand.
> 
> I hope this message helps. I take it that your comment was not a request
> to change the document in any way, merely one implementor talking to
> another ...
> 
> If you actually want it to be taken as a formal comment, perhaps as a
> request for additional clarifying text to be included in the document,
> please reply and I will take such a request to the working group.
> (Personally I would not be too happy, because it looks like a lot of
> work to do that for every test.)
> 
> Good luck; I hope you get the rules right soon.
> 
> Jeremy
> 
> 

Received on Friday, 28 November 2003 19:51:12 UTC