Re: Performance issues with OWL Reasoners => subclass vs instance-of

From: Phillip Lord <phillip.lord@newcastle.ac.uk>
Date: Mon, 18 Sep 2006 11:20:10 +0100
To: Chimezie Ogbuji <ogbujic@bio.ri.ccf.org>
Cc: William Bug <William.Bug@drexelmed.edu>, "Kashyap, Vipul" <VKASHYAP1@partners.org>, chris mungall <cjm@fruitfly.org>, semantic-web@w3.org, w3c semweb hcls <public-semweb-lifesci@w3.org>
Message-ID: <u8xkhiihx.fsf@newcastle.ac.uk>

>>>>> "CO" == Chimezie Ogbuji <ogbujic@bio.ri.ccf.org> writes:

  >> ABox is more complex than TBox, although I believe the difference
  >> is not that profound (i.e., they are both really complex). For a DL
  >> as expressive as that which OWL is based on, the complexities are
  >> always really bad. In other words, no reasoner can ever guarantee
  >> to scale well in all circumstances.

  CO> Once again: pure production/rule-oriented systems *are* built to
  CO> scale well in *all* circumstances (this is the primary advantage
  CO> they have over DL reasoners - i.e., reasoners tuned specifically
  CO> to DL semantics).  This distinction is critical: not every
  CO> reasoner is the same and this is the reason why there is
  CO> interest in considerations of using translations to datalog and
  CO> other logic programming systems (per Ian Horrocks' suggestion
  CO> below):

Well, as I am speaking at the limit of my knowledge, I cannot be sure
about this, but I strongly suspect that what you say is wrong.

Any computational system can only be guaranteed to work well in all
circumstances if it is of very low expressivity. If a system
implements expressivity equivalent to the Turing machine/lambda
calculus, then no such guarantees are ever possible, nor can you
determine algorithmically which code will perform well and which will
not -- this follows from the undecidability of the halting problem.

Part of the problem with DL reasoners and their scalability is,
indeed, their relative immaturity. But part of the problem is that
this is just the way the universe is built. Ain't much that can be
done about this.

  >> Another interesting approach that has only recently been
  >> presented by Motik et al. is to translate a DL terminology into a
  >> set of disjunctive datalog rules, and to use an efficient datalog
  >> engine to deal with large numbers of ground facts. This idea has
  >> been implemented in the Kaon2 system, early results with which
  >> have been quite encouraging (see
  >> http://kaon2.semanticweb.org/). It can deal with expressive
  >> languages (such as OWL), but it seems to work best in
  >> data-centric applications, i.e., where the terminology is not too
  >> large and complex.
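
The "terminology into rules" idea described above can be illustrated
with a toy sketch: a subclass axiom C ⊑ D becomes the Horn rule
D(x) :- C(x), and the ground facts are the instance data, which a
forward chainer then saturates. The classes and individuals below are
hypothetical, and this is only a sketch of the general technique, not
the actual KAON2 translation:

```python
# Toy "TBox -> Horn rules" translation with naive forward chaining.
# (Hypothetical classes/individuals; a sketch, not KAON2's algorithm.)

# TBox as rules: (head, body) encodes body(x) -> head(x)
rules = [
    ("Animal", "Mammal"),   # Mammal ⊑ Animal
    ("Mammal", "Dog"),      # Dog ⊑ Mammal
]

# ABox as ground facts: (Class, individual)
facts = {("Dog", "fido"), ("Bird", "tweety")}

# Forward-chain to a fixed point: apply every rule until no new
# fact is derived. The work scales with the ground data, while the
# rule set itself stays small.
changed = True
while changed:
    changed = False
    for head, body in rules:
        for cls, ind in list(facts):
            if cls == body and (head, ind) not in facts:
                facts.add((head, ind))
                changed = True

# fido's membership in Animal is entailed via Dog ⊑ Mammal ⊑ Animal
assert ("Mammal", "fido") in facts
assert ("Animal", "fido") in facts
```

Note that this naive loop already hints at the trade-off under
discussion: the rules are few and cheap, but the derived fact set --
and hence memory -- grows with the data.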

  CO> I'd go a step further and suggest that even large terminologies
  CO> aren't a problem for such systems as their primary bottleneck is
  CO> memory (very cheap) and the complexity of the rule set. The set
  CO> of Horn-like rules that express DL semantics is *very* small.

Memory is not cheap if the requirements scale non-polynomially. 
Besides, what is the point of suggesting that large terminologies 
are not a problem? Why not try it, and report the results?

Received on Monday, 18 September 2006 10:40:17 UTC