Performance analysis (was Re: Reflexivity and antisymmetry use cases?)

On 18 Jan 2007, at 10:41, Ian Horrocks wrote:
[snip]
> Unfortunately, while incredibly useful, worst case complexity  
> analysis is still a relatively coarse grained tool, and doesn't  
> tell us anything about the likely behaviour of reasoners on KBs  
> that use a potentially problematic combination of constructors, but  
> in an "apparently harmless" way - in fact it is interesting to note  
> that most implemented algorithms actually have much higher worst  
> case complexities than the known worst case for the underlying  
> problem. On the other hand, we do have a reasonably good (empirical  
> and theoretical) understanding of what constitutes a "dangerous" KB  
> - i.e., one for which reasoning is likely to be hard (we know,  
> e.g., that problems almost invariably arise when GCIs cannot be  
> fully absorbed, or when the KB is not very "modular"); there is no  
> reason why tools could not include this kind of analysis (recent  
> work by Cuenca Grau and Kazakov provides, e.g., a very nice way  
> to determine how "modular" a KB is) and so at least warn users  
> as/when their KB becomes "dangerous" - in fact I expect this sort of  
> functionality to be added to tools in the near future.
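
To unpack the absorption point for anyone who hasn't met it (the axiom
shapes below are invented, in Manchester-ish syntax): absorption
rewrites a GCI so that its left-hand side is an atomic concept, which
lets the reasoner apply it lazily instead of internalising it as a
disjunction added to every node. Very roughly:

	A and C SubClassOf D     -- absorbable: A SubClassOf ((not C) or D)
	(r some C) SubClassOf D  -- no atomic conjunct on the left, so plain
	                            concept absorption fails and every node pays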

I'll add that the state of knowledge about scalability is advancing  
*very* rapidly, esp. wrt ABoxes, but also in general. The insight  
behind the EL family (*very* roughly, universal quantification is  
dangerous) underlies a lot of interesting work (see, e.g.,  
<http://iswc2006.semanticweb.org/items/Kershenbaum2006qo.pdf>). (Also  
consider modularity, new insights from the resolution work,  
incremental reasoning, etc. etc.)
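
To make the EL insight concrete (names invented again): EL keeps
conjunction and existential restriction but gives up value
restrictions, among other things, and that's what buys its
polynomial-time classification. Manchester-ishly:

	Parent EquivalentTo Person and (hasChild some Person)  -- inside EL
	Parent SubClassOf hasChild only Person                 -- "only" (universal
	                                                          quantification)
	                                                          steps outside EL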

I was really happy to see your post, Michael, because I just started  
some work on building "performance profiling" tools and services for  
OWL ontologies. Some of these are relative to a particular reasoner;  
some are just for helping work out what's going on (e.g., isolating  
hotspots).
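
To give a flavour of the hotspot side, here's a minimal sketch in
Python - classify() and all the other names are mine, purely
illustrative; swap in a real classification call to Pellet, FaCT++, or
whatever you use. It times classification with each axiom left out in
turn and ranks axioms by how much their removal saves:

import time

def classify(axioms):
    # Stand-in for a real reasoner call; the dummy work below just
    # makes the sketch runnable. With it, the timings are noise -
    # the point is the shape of the loop.
    sum(hash(ax) for ax in axioms)

def timed(f, *args):
    start = time.perf_counter()
    f(*args)
    return time.perf_counter() - start

def hotspots(axioms, top=3):
    # Leave one axiom out at a time; axioms whose removal saves the
    # most classification time are the likely hotspots.
    base = timed(classify, axioms)
    savings = []
    for i, ax in enumerate(axioms):
        rest = axioms[:i] + axioms[i + 1:]
        savings.append((base - timed(classify, rest), ax))
    savings.sort(key=lambda pair: pair[0], reverse=True)
    return savings[:top]

if __name__ == "__main__":
    kb = ["A SubClassOf B",
          "B SubClassOf C",
          "(r some C) SubClassOf D"]
    for saving, ax in hotspots(kb):
        print("%+.6fs  %s" % (saving, ax))

A real tool wants something smarter than leave-one-out (module-based
slicing, caching across runs, etc.), but even this crude loop often
points a finger.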

In general, the goal is to support better transparency, as well as  
good communication between modellers and the system managers/ 
implementors.

One should recall that "even" relational databases often need  
extensive tuning and a good understanding of the technology to  
produce acceptable performance in many production settings. I'm  
hopeful that, for OWL, we can demystify what's going on and reach a  
level of sophistication that makes deploying ontologies much less  
hit-or-miss.

When such profiling tools are combined with impact analysis tools:
	<http://clarkparsia.com/weblog/2006/06/18/eswc-best-paper-award/>

I think it will become *much* easier to tune ontologies to particular  
situations. In general, I prefer to leave it to the *user* to decide  
what entailments are critical (thus, how much incompleteness they can  
tolerate), since, after all, they know their application best!  
(Approximations, for example, can be built into the reasoner, or done  
upon the ontology. In either case, you need a good idea of what's  
happening.) I'm hoping the performance analysis tools and services I'm  
working on (with Taowei Wang) will help you get that good idea.
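
On the "done upon the ontology" side, the crudest possible sketch
(again toy Python over string-serialised axioms, with invented names):
drop every axiom whose syntax falls outside an EL-ish fragment before
handing the rest to the reasoner. Since we only *remove* axioms,
whatever the reasoner then entails is still entailed by the full
ontology - sound but incomplete, which is exactly the trade-off the
user should get to see and sign off on:

# Keep axioms that stay inside an EL-ish fragment; drop the rest.
# Removing axioms can only remove entailments, so the approximation
# is sound but incomplete.
BANNED = {"only", "not", "max", "min", "exactly"}

def el_approximation(axioms):
    kept, dropped = [], []
    for ax in axioms:
        bucket = dropped if BANNED & set(ax.split()) else kept
        bucket.append(ax)
    return kept, dropped

if __name__ == "__main__":
    kb = ["Parent EquivalentTo Person and (hasChild some Person)",
          "Parent SubClassOf hasChild only Person"]
    kept, dropped = el_approximation(kb)
    print("kept:   ", kept)
    print("dropped:", dropped)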

Cheers,
Bijan.
