- From: Alan Ruttenberg <alanruttenberg@gmail.com>
- Date: Wed, 6 Aug 2008 06:53:15 -0400
- To: Jeff Thompson <jeff@thefirst.org>
- Cc: Owl Dev <public-owl-dev@w3.org>, Michael Schneider <schneid@fzi.de>
- Message-Id: <049AEB17-B475-4E53-9CC3-F3D579DE9162@gmail.com>
>> It seems that people have taken these approaches:
>> 1) Only use small toy ontologies like "pizza",
>> 2) Use a large ontology but with only very simple structure such as names and addresses,
>> 3) Rely on Pellet to "perform a consistency check before it starts a reasoning task" without knowing how long it will take or whether the algorithm it uses is valid.

A couple of comments: a consistency check *is* a reasoning task. It can be shown that the other common reasoning tasks for OWL can be reduced to consistency checks (a standard example of such a reduction is sketched at the end of this message).

There is another option, which is the approach we have taken with the Neurocommons:

4) Build large ontologies that use as much of the expressiveness of OWL as we need to. Do sound and complete reasoning on portions of it using Pellet or FaCT++ as a sanity check or for specific tasks. Do incomplete reasoning on the rest of it to get possibly incomplete results on particular tasks using whatever strategies we can. Challenge the theorists and reasoner developers to do more with these artifacts (they seem to like it when they are given real, well-modeled challenges).

Among recent developments on the reasoning front, I can point to the release of SHER, the development of HermiT, and the implementations (current and upcoming) of the tractable fragments Michael alludes to.

> Only if none of the internally known languages match will an
> original OWL-DL reasoner be used as a default reasoning
> engine, which may or may not lead to poor efficiency.
>
> I hope other people on this list can confirm my guess here, and can
> elaborate on the state of the art a bit.

It's my understanding that this is already the case with current OWL-DL reasoners. For instance, one issue I hit a while ago was that the mere presence of an inverse property statement (but no use of the property) meant that an ontology of mine could no longer be classified within the available memory/time. The reason, it turned out, was that Pellet was deciding which reasoning strategy (implemented by different code) to use, and the one that understood inverse properties did worse on my ontology.

I don't know whether the simple check for the presence of the definition is still all that is done, but one can see obvious modifications that would improve this, for example checking that the inverse property is only used in simple facts and, in that case, rewriting them backwards to avoid the use of the inverse (a sketch of this idea also follows at the end of this message).

Regards,
Alan
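To make the point about reductions concrete, here is the textbook description-logic reduction of subsumption checking to a consistency check; it is a standard result and not specific to Pellet or this thread, and the ontology O, classes C and D, and individual a are purely illustrative:

```latex
% Subsumption reduces to (un)satisfiability, which reduces to consistency:
% O entails C subsumed-by D  iff  C and-not D is unsatisfiable w.r.t. O
%                            iff  O plus the assertion (C and-not D)(a) is inconsistent,
%                                 for a fresh individual a not occurring in O.
\mathcal{O} \models C \sqsubseteq D
  \;\iff\; C \sqcap \neg D \text{ is unsatisfiable w.r.t. } \mathcal{O}
  \;\iff\; \mathcal{O} \cup \{\, (C \sqcap \neg D)(a) \,\} \text{ is inconsistent},
  \quad a \text{ a fresh individual}.
```

Instance checking reduces the same way: O entails C(a) exactly when O together with the assertion (¬C)(a) is inconsistent.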
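And here is a minimal sketch of the inverse-rewriting idea from the last paragraph, assuming "simple facts" means plain subject/property/object assertions and that the check that the inverse property appears nowhere else has already been done. The triples, property names, and function are illustrative only; this is not Pellet's actual code nor any real RDF library's API:

```python
# Hypothetical sketch: if a property is only ever used in simple facts and is
# declared the inverse of a forward property, restate those facts "backwards"
# with the forward property, so the reasoner never needs to handle the inverse.

def rewrite_inverses(triples, inverse_of):
    """triples: list of (subject, property, object) assertions.
    inverse_of: dict mapping an inverse property to its forward property,
                e.g. {"hasParent": "hasChild"} for hasParent inverseOf hasChild.
    Returns new triples with every fact stated via an inverse flipped around."""
    rewritten = []
    for s, p, o in triples:
        if p in inverse_of:
            # simple fact using the inverse property: state it the other way
            rewritten.append((o, inverse_of[p], s))
        else:
            rewritten.append((s, p, o))
    return rewritten

if __name__ == "__main__":
    facts = [("mary", "hasParent", "john"),   # uses the inverse property
             ("john", "hasChild", "susan")]   # already uses the forward property
    print(rewrite_inverses(facts, {"hasParent": "hasChild"}))
    # [('john', 'hasChild', 'mary'), ('john', 'hasChild', 'susan')]
```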
Received on Wednesday, 6 August 2008 16:03:39 UTC