Re: Question on DL negation

On Mar 6, 2007, at 9:41 PM, Michael Schneider wrote:

> Hi, Bijan!
>
> It took me some time to work through your last answer. :) And there  
> are still open points, which I do not completely understand.  
> Perhaps, you can help me again. First this one:
>
>> In addition to Uli's wise words,
>
> Hm, did I overlook a mail? Who is Uli, and what were his words of  
> wisdom? :)

Uli Sattler. For some reason her email seems to not have made it  
through. Sigh.

Can we settle on a list for this? :)

>> I, as usual, recommend the  description logic complexity navigator:
>> 	http://www.cs.man.ac.uk/~ezolin/logic/complexity.html
> > You can see by playing around with combinations of the role
> > constructors the effects on complexity.
>
> Thanks a lot, I really did not know this site. This is really an  
> expert tool. I am far from understanding everything there, but at  
> least I should now be able to answer my original question myself  
> by checking the respective checkboxes on this form.
>
>   1) hitting the "OWL1.1" button in the RBox cell
>
> Ok, as expected, "the ABox consistency problem" is "decidable".
>
>   2) checking "role intersection", "role union" and "role complement"
>      in the "Role constructors" cell
>
> Now, for every combination of those three role constructors, I get  
> a complexity of "NExpTime-hard". Probably bad from a complexity point  
> of view, but at least still decidable, right? The question is what  
> this result means for practical ontologies, because all those  
> complexity classifications only regard worst-case scenarios.
>
> BTW1: Why is the complexity classification for the combination  
> "OWL1.1 plus role constructors" more specific ("NExpTime-hard")  
> than for OWL1.1 alone (just "decidable")?

I think that's a bug. The SROIQ bit is new and it's not properly  
overriding the role constructor check boxes. Uli would know, but it  
might be the case that having those other constructors will interact  
with the (restricted) role composition in SROIQ. But maybe not.

> OWL1.1 is a sublanguage of the OWL1.1+constructors language, so  
> OWL1.1 should have "NExpTime-hard" as an upper bound.

No, just a bug. OWL 1.1's exact complexity is unknown.

> BTW2: There is a "role chain" entry in the "Role constructors"  
> cell. Shouldn't it get checked when I press the "OWL1.1"  
> button? That's one of the new features of OWL1.1, AFAIK. Currently,  
> this checkbox stays unchecked. When I check it manually, the  
> complexity again changes to "NExpTime-hard".

I call out to Evgeny :)

> BTW3: I cannot see a "disjointness" feature, either for concepts  
> or for roles.

Yes, the navigator starts from ALC and tends to stick to logics with  
fairly robust negation. It would be nice to update it to handle,  
e.g., the various tractable fragments:
	<http://owl1_1.cs.manchester.ac.uk/tractable.html>

> Doesn't the addition of disjointness add significantly to the  
> complexity of a DL?

Well, disjointness is *weaker* than full negation. For example, in  
OWL, A disjointWith B is just syntactic sugar for A subclassof  
complementOf(B).
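
Spelled out in DL notation (just unpacking that bit of sugar):

	A \sqsubseteq \neg B \quad\equiv\quad A \sqcap B \sqsubseteq \bot
	\qquad\text{(in FOL: } \forall x.\, A(x) \rightarrow \neg B(x)\text{)}

The negation only shows up on the right-hand side of a subsumption  
between named classes; full negation lets \neg B occur anywhere,  
e.g. inside \exists r.\neg B, which is where the real expressive  
power (and cost) comes from.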

> I thought that at least it would, when adding concept disjointness  
> to OWL-Lite.

OWL Lite is SHIF (even though it lacks many useful constructors).  
That is, you can simulate arbitrary negation in OWL Lite. Thus adding  
explicit disjointness doesn't increase the complexity.
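
(One standard trick, sketched: to make A and B disjoint with only OWL  
Lite vocabulary, take a fresh auxiliary property p and assert

	A \sqsubseteq\; \leq 0\; p \qquad\text{(maxCardinality 0 on } p\text{)}
	B \sqsubseteq\; \geq 1\; p \qquad\text{(minCardinality 1 on } p\text{)}

Anything in both A and B would need both zero and at least one  
p-successor, so A \sqcap B \sqsubseteq \bot follows without ever  
writing a complementOf.)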

Jeremy Carroll had a tool that would encode certain OWL DL ontologies  
(that are SHIF) as OWL Lite. The resulting ontologies were horrible  
to reason with :)

(Much of that is a bit weird if we're talking about logics complete  
for the same worst-case complexity class, since there will be a  
polynomial reduction anyway. But OWL Lite is weird for several reasons.)

Practically speaking, stating the disjointness between concepts can  
*help* performance of reasoning. For example, for query answering in  
the LUBM, Evren Sirin found that for some queries, adding a simple  
disjointness between...er....I think professors and students, would  
hugely improve performance. Why? Because the query was asking  
something like "everyone who's taught a class last semester". Now the  
domain of taught was faculty (and profs were a subclass), but without  
disjointness, we have to *show* that each student didn't teach a  
class explicitly. If they are disjoint, then once we know x is a  
student, we know she can't be part of an answer to that query and we  
can stop.
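
In rough DL terms (with the names from the story above, so take them  
as illustrative rather than the exact LUBM vocabulary):

	\exists \mathit{taught}.\top \sqsubseteq \mathit{Faculty} \qquad\text{(domain axiom)}
	\mathit{Student} \sqcap \mathit{Faculty} \sqsubseteq \bot \qquad\text{(the added disjointness)}

With the second axiom, Student(x) immediately rules out Faculty(x),  
and hence any taught-successor for x, so x drops out of the candidate  
answers with no per-individual proof work.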

Similarly, in a life sciences ontology, two black box debugging  
techniques died when trying to diagnose an unsatisfiable class.  
That's because they *delete* axioms, trying to find a minimal subset  
of the ontology that makes the class unsatisfiable. Some subsets were  
*harder* (i.e., both Pellet and FaCT++ would run out of memory) to  
reason with than the whole!

So, a feature which can up the worst case complexity (negation) can  
make things easier in certain contexts.

> Or can disjointness be expressed in terms of the other mentioned  
> features? At least, I do not see how this would be possible for  
> /role/ disjointness with only the features of OWL1.0.

Yes. Role disjointness is not in OWL 1.0.

>> Remember that all this is just (a fragment of) FOL (well, except  
>> if  you add transitive closure per se), so all the constructors  
>> are just  normal propositional (for the most part) connectives on  
>> binary  predicates.
>> Expressive role constructors are associated with propositional   
>> dynamic logic (and converse propositional dynamic logic).
>
> I haven't dealt with "dynamic logic" so far, so I am not able to  
> understand this. But I am going to read about it, when I find the  
> time (I see Wikipedia has an article).

Think of a program as a state transition system. We can model states  
as individuals. Propositions true in a state are the classes that  
individual is a member of. Transitions are roles. When you have  
"regular expressions on roles",i.e., the role constructors, you are  
able to express various control constructs.
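
Roughly, the classic (Schild-style) correspondence reads:

	\langle\alpha\rangle\varphi \;\longleftrightarrow\; \exists R_\alpha.C_\varphi
	\qquad [\alpha]\varphi \;\longleftrightarrow\; \forall R_\alpha.C_\varphi
	\alpha;\beta \;\longleftrightarrow\; R_\alpha \circ R_\beta
	\qquad \alpha\cup\beta \;\longleftrightarrow\; R_\alpha \sqcup R_\beta
	\qquad \alpha^{*} \;\longleftrightarrow\; \text{transitive-reflexive closure of } R_\alpha

So "sequence", "choice", and "loop" on programs are exactly  
composition, union, and transitive closure on roles, which is why  
expressive role constructors get studied through the PDL lens.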

>> It's also instructive to see how arbitrary concept negation is   
>> difficult. You can see in the tractable fragments document that  
>> most  of them allow concept (i.e., restricted) disjointness:
>> 	<http://www.cs.man.ac.uk/~ezolin/logic/complexity.html>
>
> Is this the correct link? This is the same as above (the DL  
> complexity tool), but you said something about a "tractable  
> fragments document".

No. Cut and paste failed me:

		<http://owl1_1.cs.manchester.ac.uk/tractable.html>

>> Lots of recent work (e.g., on modularity, the EL family, and ABox   
>> summarization) suggests strongly that unrestricted universal   
>> quantification and negation make things difficult. If you can  
>> control  them in a number of ways (either by analysis or by  
>> linguistic  restrictions) you can get better behavior.
>
> I do not understand this last paragraph.

Traditionally, in description logics, people have started with a very  
limited logic that included universal quantification (allValuesFrom).  
Coming from semantic nets and frames, this seemed a reasonable choice  
(though one can justify an existential reading as well). However, it  
doesn't take much added to these limited logics to push the worst  
case complexity up to EXPTIME (i.e., to ALC). Thus the class of "very  
expressive description logics" was born. Ian Horrocks, in his thesis  
and in the FaCT reasoner, showed that such logics could be reasonable  
to work with in real (and large) ontologies. So this is the mid 90s  
to early 2000s. In the past few years, folks at the University of  
Dresden asked, "heck, what happens if, instead of starting with  
universal quantification, we start with existential? Some ontologists  
think most use of universal quant is a mistake anyway". Surprisingly,  
it turns out that that family of logics, the EL family:
	<http://lat.inf.tu-dresden.de/systems/cel/>
is robustly polynomial for key  inference services. That is, you can  
add extremely powerful features (like GCIs and restricted nominals)  
without problems.
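
If it helps to see why this stays polynomial, here is a toy sketch of  
the completion-rule approach behind the EL family (much simplified:  
normalized axioms only, no bottom, no role chains; all names are made  
up for illustration):

# Toy EL subsumption via completion rules (Baader/Brandt/Lutz style).
# Normalized TBox axioms only:
#   ("sub", A, B)            A ⊑ B
#   ("conj", A1, A2, B)      A1 ⊓ A2 ⊑ B
#   ("exists_rhs", A, r, B)  A ⊑ ∃r.B
#   ("exists_lhs", r, A, B)  ∃r.A ⊑ B

def classify(concepts, tbox):
    S = {A: {A, "TOP"} for A in concepts}   # S[A] = known subsumers of A
    R = {}                                  # R[r] = known pairs (A, B) with A ⊑ ∃r.B
    changed = True
    while changed:                          # saturate to a fixpoint; only polynomially many additions
        changed = False
        for ax in tbox:
            if ax[0] == "sub":
                _, A1, B = ax
                for A in concepts:
                    if A1 in S[A] and B not in S[A]:
                        S[A].add(B); changed = True
            elif ax[0] == "conj":
                _, A1, A2, B = ax
                for A in concepts:
                    if A1 in S[A] and A2 in S[A] and B not in S[A]:
                        S[A].add(B); changed = True
            elif ax[0] == "exists_rhs":
                _, A1, r, B = ax
                for A in concepts:
                    if A1 in S[A] and (A, B) not in R.setdefault(r, set()):
                        R[r].add((A, B)); changed = True
            elif ax[0] == "exists_lhs":
                _, r, A1, B = ax
                for (A, D) in R.get(r, set()).copy():
                    if A1 in S[D] and B not in S[A]:
                        S[A].add(B); changed = True
    return S

# Professor ⊑ ∃teaches.Course and ∃teaches.Course ⊑ Teacher entail Professor ⊑ Teacher:
concepts = ["Professor", "Course", "Teacher"]
tbox = [("exists_rhs", "Professor", "teaches", "Course"),
        ("exists_lhs", "teaches", "Course", "Teacher")]
print("Teacher" in classify(concepts, tbox)["Professor"])   # True

No negation and no universals means the sets S(A) only ever grow, so  
this naive saturation terminates quickly; add either feature back and  
that simple story breaks down.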

In the past year or two, researchers at IBM have been working on  
techniques for scalable ABox reasoning using database technology  
(they weren't the first). They observed, roughly, that if you can  
isolate the effect of universal quantification, you can isolate the  
reasoning on individuals. (They have other tricks too, fwiw.) See:
	<http://iswc2006.semanticweb.org/items/Kershenbaum2006qo.pdf>
	<http://www.research.ibm.com/iaa/techReport.pdf>

Finally, in work started by Bernardo Grau and myself at Maryland, and  
carried on by Bernardo and others at Manchester, it turns out that if  
you are careful in your use of negation (among other things) you can  
extract semantically sound parts of an ontology and reason (with  
respect to certain signatures and problems) with just the fragment. See:
	<http://www.cs.man.ac.uk/~bcg/Publications.html>
esp.
	<http://www.cs.man.ac.uk/%7Ebcg/KR0603CuencaB.pdf>
	<http://www.cs.man.ac.uk/%7Ebcg/IJCAI-GrauB936.pdf>
	<http://www.cs.man.ac.uk/~bcg/www433-cuenca.pdf>

(Check the references. It owes a lot to the work of Lutz, Kutz,  
Zakharyaschev, and Wolter, among others.)

Hope this helps.

> Anyway, thanks for citing the above really cool tool. I probably  
> will play around with it again. Hopefully, this will keep me from  
> reading a whole boring book on description logics! ;-)

The DL handbook is, IMHO, very readable. The first chapter is a nice  
(if a bit rosy) history. Chapter 2 is excellent and most accessible.  
Ian's chapter on optimization is really good for understanding how  
tableau reasoners work.

Cheers,
Bijan.
