Re: Performance issues with OWL Reasoners => subclass vs instance-of

> Well, as I am speaking at the limit of my knowledge I cannot be sure
> about this, but I strongly suspect that what you say is wrong.
>
> Any computational system can only be guaranteed to work well in all
> circumstances if it is of very low expressivity. If a system
> implements expressivity equivalent to Turing/Lambda calculus, then no
> such guarantees are ever possible, nor can you determine
> algorithmically which code will perform well and which not.
>
> Part of the problem with DL reasoners and their scalability is,
> indeed, their relative immaturity. But, part of the problem is that
> this is just the way the universe is built. Ain't much that can be
> done about this.

I disagree; my point is that the universe you speak of is framed by a 
specific reasoning algorithm.  But your point is taken (below) that 
experimentation and results are what is needed.  The reality is that the 
worlds of production systems and DL/FOL reasoning are somewhat isolated 
from each other, and each could benefit greatly from the other.

>  >> Another interesting approach that has only recently been
>  >> presented by Motik et al is to translate a DL terminology into a
>  >> set of disjunctive datalog rules, and to use an efficient datalog
>  >> engine to deal with large numbers of ground facts. This idea has
>  >> been implemented in the Kaon2 system, early results with which
>  >> have been quite encouraging (see
>  >> http://kaon2.semanticweb.org/). It can deal with expressive
>  >> languages (such as OWL), but it seems to work best in
>  >> data-centric applications, i.e., where the terminology is not too
>  >> large and complex.
>
>  CO> I'd go a step further and suggest that even large terminologies
>  CO> aren't a problem for such systems, as their primary bottlenecks are
>  CO> memory (very cheap) and the complexity of the rule set. The set
>  CO> of horn-like rules that express DL semantics is *very* small.
>
>
> Memory is not cheap if the requirements scale non-polynomially.
> Besides, what is the point of suggesting that large terminologies
> are not a problem? Why not try it, and report the results?

I plan to.  I simply don't think it is useful to assume that the tableau 
calculus represents the known limits of DL reasoning.
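To make the point concrete, here is a minimal sketch of the kind of 
evaluation described above: a couple of DL-style axioms expressed as 
horn-like rules and run to fixpoint over ground facts by naive forward 
chaining.  The class and property names are made-up examples, and this 
is not Kaon2's actual translation, just the general shape of the idea:

    # An illustrative sketch (not Kaon2's algorithm) of evaluating
    # horn-like rules over ground facts by naive forward chaining.
    # All class/property names here are hypothetical examples.

    # Facts are (predicate, subject) or (predicate, subject, object).
    facts = {
        ("Aorta", "a1"),          # a1 rdf:type Aorta
        ("partOf", "a1", "h1"),   # a1 partOf h1
        ("Heart", "h1"),          # h1 rdf:type Heart
    }

    # Rule 1 encodes the subclass axiom  Aorta subClassOf Artery:
    #   Artery(x) :- Aorta(x)
    # Rule 2 encodes a simple composite condition:
    #   CardiovascularPart(x) :- partOf(x, y), Heart(y)
    def apply_rules(facts):
        derived = set()
        for f in facts:
            if f[0] == "Aorta":
                derived.add(("Artery", f[1]))
        for f in facts:
            if f[0] == "partOf" and ("Heart", f[2]) in facts:
                derived.add(("CardiovascularPart", f[1]))
        return derived

    # Naive forward chaining: apply the rules until a fixpoint is
    # reached, i.e., until no new ground facts can be derived.
    while True:
        new = apply_rules(facts) - facts
        if not new:
            break
        facts |= new

    print(sorted(facts))

The rule set stays fixed and small no matter how many ground facts are 
loaded; the number of derived facts is what grows, which is why memory, 
rather than the calculus itself, becomes the practical bottleneck in 
such systems.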

Chimezie Ogbuji
Lead Systems Analyst
Thoracic and Cardiovascular Surgery
Cleveland Clinic Foundation
9500 Euclid Avenue/ W26
Cleveland, Ohio 44195
Office: (216)444-8593
ogbujic@ccf.org

Received on Tuesday, 19 September 2006 11:14:10 UTC