- From: Jim Hendler <hendler@cs.rpi.edu>
- Date: Thu, 14 Aug 2008 18:18:19 -0400
- To: Ian Horrocks <ian.horrocks@comlab.ox.ac.uk>
- Cc: Michael Schneider <schneid@fzi.de>, "Ivan Herman" <ivan@w3.org>, <public-owl-wg@w3.org>, "Alan Wu" <alan.wu@oracle.com>
On Aug 14, 2008, at 5:46 PM, Ian Horrocks wrote:

> [snip]
>
> Actually, I much prefer your idea of defining conformance w.r.t. the
> syntactic fragment, i.e., conformant OWL RL reasoners must be
> complete for query answering as defined in the current section 4.4.
> This would be a slightly less strict condition, but it has the
> advantage of being non-procedural and of coinciding with the
> syntactically defined class of ontologies for which rule-based
> implementations are complete.

I was with you (and Michael) up to here - but now I get worried. The real problem is that while this is easy to say, most implementors don't have PhDs in AI or strong enough logical backgrounds to prove that their implementations are "complete for query answering as defined in section 4.4". Certainly a lot of people may well write programs, in either procedural languages or logic programming, that would likely be sound and may well cover the cases - but they'd have no way to prove it (and as the code may be proprietary or part of a much larger system, there'd be no easy way to "outsource" the proof).

Seems to me this could have one of two bad results:

1 - people will simply ignore the definition and claim conformance, which is bad, or

2 - people will take it seriously, which will keep them from building the easy rule-based implementations they could have by simply using their favorite rule systems (and integrating them into whatever application they are using), thus hindering the growth of the market for OWL ontologies.

For OWL 1.0, we realized that there was no easy way to deal with this, and we developed the idea of having soundness as a stated goal and a test suite -- the more tests you passed, the more likely you were to be correct with respect to the semantics. Not a great solution, but it worked pretty well in its day. The problem is that this WG isn't doing a test suite as far as I can tell, so we cannot use that way out.

So it seems to me the WG could take a hard line on what conformance is, at the risk of it being mainly ignored (and I can tell you from personal experience that going out into the blogosphere and saying that some company pushing a product isn't conformant is not a smart thing to do if you value your sanity), or we could take an easier line and at least get people to think about the issue of soundness, which is easier to assert (although not necessarily to prove).

My personal opinion is to take the easier line, because OWL 1.0 has shown us that the market is actually pretty good at working these things out -- a lot of OWL DL reasoners started to emerge, but some, like Pellet, FaCT and Racer, by dint of being well grounded, became widely used, while a lot of "partial implementations" (many of which were actually rule-based) didn't make it.

Heck, I have a PhD in AI, but it's a lot of years out of date with respect to the knowledge I'd need to prove conformance as defined in the quote above -- so perhaps it would be better to stick to things that us "simple folk" out here can understand.

-JH

p.s. an alternative is to bring back the test suite, but that is a lot of work, as the OWL 1.0 folks can attest!
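As a rough illustration of the kind of "easy rule-based implementation" discussed above, the sketch below applies a single OWL 2 RL rule (cax-sco: from ?c1 rdfs:subClassOf ?c2 and ?x rdf:type ?c1, infer ?x rdf:type ?c2) by naive forward chaining over a toy in-memory set of triples. The helper names, vocabulary constants, and data are hypothetical, and a real reasoner would need the full OWL 2 RL rule set, plus an argument that it covers the query answering defined in section 4.4, before any claim of completeness could be made.

```python
# Minimal, assumed sketch of a rule-based OWL RL reasoner: naive forward
# chaining of one OWL 2 RL rule (cax-sco) over a set of RDF triples.

RDF_TYPE = "rdf:type"
RDFS_SUBCLASSOF = "rdfs:subClassOf"

def cax_sco(triples):
    """Apply cax-sco once and return only the newly inferred triples."""
    subclass_of = [(s, o) for (s, p, o) in triples if p == RDFS_SUBCLASSOF]
    types = [(s, o) for (s, p, o) in triples if p == RDF_TYPE]
    inferred = {(x, RDF_TYPE, c2)
                for (c1, c2) in subclass_of
                for (x, c) in types if c == c1}
    return inferred - triples

def forward_chain(triples):
    """Iterate the rule to a fixpoint; sound for this rule, but not a
    complete OWL RL reasoner on its own."""
    triples = set(triples)
    while True:
        new = cax_sco(triples)
        if not new:
            return triples
        triples |= new

# Toy data: infer that ex:fido is an ex:Mammal and an ex:Animal.
data = {
    ("ex:Dog", RDFS_SUBCLASSOF, "ex:Mammal"),
    ("ex:Mammal", RDFS_SUBCLASSOF, "ex:Animal"),
    ("ex:fido", RDF_TYPE, "ex:Dog"),
}
print(forward_chain(data))
```

Writing such rules is easy, which is exactly the email's point: the hard part is not producing a sound rule engine but proving that a full rule set is complete for the syntactic fragment.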
Received on Thursday, 14 August 2008 22:19:56 UTC