
RE: [OWLWG-COMMENT] ISSUE-55 (owl:class)

From: Michael Schneider <schneid@fzi.de>
Date: Tue, 11 Dec 2007 14:57:16 +0100
Message-ID: <0EF30CAA69519C4CB91D01481AEA06A05A5995@judith.fzi.de>
To: "Alan Ruttenberg" <alanruttenberg@gmail.com>
Cc: "Owl Dev" <public-owl-dev@w3.org>, <hendler@cs.rpi.edu>, <boris.motik@comlab.ox.ac.uk>, <pfps@research.bell-labs.com>, <ian.horrocks@comlab.ox.ac.uk>, <dlm@ksl.stanford.edu>, <hans.teijgeler@quicknet.nl>

Hi, Alan!

>-----Original Message-----
>From: Alan Ruttenberg [mailto:alanruttenberg@gmail.com] 
>Sent: Monday, December 10, 2007 7:24 PM
>To: Michael Schneider
>Cc: Owl Dev; hendler@cs.rpi.edu; boris.motik@comlab.ox.ac.uk; 
>pfps@research.bell-labs.com; ian.horrocks@comlab.ox.ac.uk; 
>dlm@ksl.stanford.edu; hans.teijgeler@quicknet.nl
>Subject: Re: [OWLWG-COMMENT] ISSUE-55 (owl:class)
>Here's my understanding of the situation (if I've got it wrong  
>somewhere, please correct me).
>On Dec 8, 2007, at 3:20 PM, Michael Schneider wrote:
>> But, AFAICS, this would only become a real problem, if in this  
>> ontology some class is used as an individual (metamodelling).
>Or if the class has instances that are literals.


>> But in such a case, even after changing rdfs:Class to owl:Class,  
>> the resulting ontology would still be an OWL-Full ontology: There  
>> would, for example, be an 'rdf:type' triple with some class being  
>> at the individual position, or a class with an object or 
>data property
>> attached.
>The type triple is inferred in OWL Full - it doesn't have to be  
>asserted.

True. In OWL-Full, if I have some class <C>, then I have the implicitly
entailed type triple

  <C> rdf:type owl:Thing

because in OWL-Full owl:Thing equals rdfs:Resource, which is the whole RDF
universe containing everything, including all classes.

But please see my discussion below!

>> The OWL-DL reasoner would refuse to work in such a situation, of  
>> course.
>Because the triple would sometimes need to be inferred by the  
>reasoner itself, the DL reasoner can't detect the situation in all  
>cases. Strictly speaking, it can only detect the case where it  
>certainly shouldn't work.
>> So it looks to me that this recommendation is safe.
>I would say, no. However it might be ok if the user was warned, or  
>made an explicit declaration to that effect.
>> Or to summarize these recommendations in a simple rule of thumb:  
>> Assume 'rdfs:Class' in RDFS ontologies, assume 'owl:Class' in OWL  
>> ontologies.
>How do you tell the difference between an RDFS ontology and an OWL  
>ontology?

Practically, by checking whether there is an OWL ontology header (at least
OWL editors should add one automatically), or whether other OWL vocabulary
is used.

Formally, it is not possible to tell the difference: For instance, if I take
an arbitrary OWL ontology and feed its RDF serialization into an RDFS
reasoner, the RDFS reasoner will happily accept this ontology. An RDFS
reasoner works on /every/ RDF graph. The result, though, would certainly not
be as /expected/, because the RDFS reasoner only takes the RDFS vocabulary
and the respective semantic conditions into account. An OWL axiom like

  <C> owl:equivalentClass <D>

is just an ordinary RDF triple for an RDFS reasoner. From this, all the RDFS
reasoner can deduce are existential statements like

  _:x owl:equivalentClass <D>   # "There exists something with the given
                                #  relationship to <D>"
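This existential weakening can be sketched in a few lines of plain Python
(a hypothetical illustration only: triples are bare `(s, p, o)` tuples, not
terms of a real RDF library, and only single-triple generalizations are
shown):

```python
# Hypothetical sketch: simple entailment by existential generalization.
# From a ground triple, every graph obtained by replacing one or both of
# its names with blank nodes is entailed. Triples are plain string tuples.

def existential_generalizations(triple):
    """Yield the generalizations of a single ground triple obtained by
    replacing its subject and/or object with blank nodes."""
    s, p, o = triple
    yield ("_:x", p, o)        # "something stands in relation p to o"
    yield (s, p, "_:y")        # "s stands in relation p to something"
    yield ("_:x", p, "_:y")    # "something stands in relation p to something"

# The one-triple example graph from above:
triple = ("<C>", "owl:equivalentClass", "<D>")
for generalized in existential_generalizations(triple):
    print(generalized)
```

Note that the predicate `owl:equivalentClass` is carried along as an opaque
name: the sketch, like an RDFS reasoner, attaches no special meaning to it.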

>I think what might "work" is commonly called "duck typing", as in, if  
>it walks like a duck and quacks like a duck, then....


>The application in this case would be to look for an *explicit*  
>mention of something that might be only *inferred* in an OWL Full  
>ontology. Absent the explicit mention, you might assume that the  
>author did not intend for such statements to be inferred  
>either. This would be a change from the current semantics, and  
>possibly a reasonable one, depending, IMO, on how the OWL Full  
>advocates voted.

Looking for explicit information and making assumptions about the intentions
of the ontology author is probably the only possibility. Imagine that one
morning you find the following (RDF-serialized) ontology on your desk

  { <C> owl:equivalentClass <D> }

together with a note asking for a list of all entailments from this
ontology. Assume further that you don't have the slightest idea who the
author of this ontology was.

So what, then, are the entailments? This inherently depends on which
ontology language the ontology belongs to. So which ontology language does
it belong to? This is completely unclear! It might be RDFS, in which case
the entailments would just be the existential statements mentioned above.
But the ontology uses OWL vocabulary, so perhaps it is really OWL. But
which dialect of OWL? Is it OWL-DL? Then you would get the following

  some OWL-DL entailments:
   * <C> a owl:Class
   * <D> a owl:Class
   * <C> rdfs:subClassOf <D>
   * <D> rdfs:subClassOf <C> 
   * <D> owl:equivalentClass <C>

But it might also be OWL-Full. Then you would receive all the existential
statements from RDFS entailment, the above OWL-DL entailments, and
additionally (as argued above):

  some additional OWL-Full entailments:
  * <C> a owl:Thing 
  * <D> a owl:Thing

... and more.
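To make the difference between the two regimes concrete, here is a
hypothetical Python sketch of the rules discussed above. It is an
illustration, not a reasoner: triples are plain tuples, and only the two
rules relevant to this one-triple example are implemented.

```python
# Hypothetical sketch: the entailment rules relevant to the example graph
# { <C> owl:equivalentClass <D> }, under OWL-DL and OWL-Full readings.
# Triples are plain (s, p, o) string tuples.

def owl_dl_entailments(graph):
    """The OWL-DL consequences of owl:equivalentClass triples:
    both sides are classes, mutually subsumed, and the relation
    is symmetric."""
    inferred = set()
    for s, p, o in graph:
        if p == "owl:equivalentClass":
            inferred |= {
                (s, "rdf:type", "owl:Class"),
                (o, "rdf:type", "owl:Class"),
                (s, "rdfs:subClassOf", o),
                (o, "rdfs:subClassOf", s),
                (o, "owl:equivalentClass", s),   # symmetry
            }
    return inferred

def owl_full_entailments(graph):
    """OWL-Full additionally types every resource -- classes included --
    as owl:Thing, since owl:Thing equals rdfs:Resource there."""
    inferred = owl_dl_entailments(graph)
    for s, p, o in graph:
        inferred.add((s, "rdf:type", "owl:Thing"))
        if not o.startswith('"'):                # skip literals
            inferred.add((o, "rdf:type", "owl:Thing"))
    return inferred

graph = {("<C>", "owl:equivalentClass", "<D>")}
```

Running both functions on the example graph yields exactly the two
entailment lists above, with the owl:Thing typings appearing only in the
OWL-Full set.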

The fact is that without knowing what the original author of the ontology
/intended/, you will not be able to safely answer the request for a list of
all entailments. Only the author knows which ontology language his ontology
belongs to. If you cannot directly ask the author, then for an
RDF-serialized ontology there will always be at least both RDFS and OWL-Full
as perfectly possible options. And RDFS and OWL-Full have very different
semantics, so the sets of entailments will always differ significantly.

So what can you do if you do not have the authoritative answer of the
ontology's author himself? All you can do is apply heuristics. And on what
information should these heuristics be based, other than what is explicitly
visible in the given ontology? I believe that this is the only possibility.
Trying to do a language classification based on implicit entailments is not
possible, because the implicit entailments themselves inherently depend on
the ontology language. This looks like a chicken-and-egg problem.
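A vocabulary-based heuristic of the kind sketched here might look as
follows (a hypothetical illustration: it decides purely by explicit
vocabulary, triples again being plain tuples):

```python
# Hypothetical duck-typing heuristic: classify a graph as OWL if it
# *explicitly* uses OWL vocabulary -- an ontology header or any
# owl:-prefixed term -- and otherwise assume RDFS.

def guess_language(graph):
    for s, p, o in graph:
        if p == "rdf:type" and o == "owl:Ontology":
            return "OWL"      # explicit ontology header
        if any(term.startswith("owl:") for term in (s, p, o)):
            return "OWL"      # other OWL vocabulary in use
    return "RDFS"
```

Note that this decides only from what is asserted; an ontology whose
author *meant* OWL-Full but who used no OWL names would be classified as
RDFS, which is exactly the chicken-and-egg limitation described above.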



Dipl.-Inform. Michael Schneider
FZI Forschungszentrum Informatik Karlsruhe
Abtl. Information Process Engineering (IPE)
Tel  : +49-721-9654-726
Fax  : +49-721-9654-727
Email: Michael.Schneider@fzi.de
Web  : http://www.fzi.de/ipe/eng/mitarbeiter.php?id=555

FZI Forschungszentrum Informatik an der Universität Karlsruhe
Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe
Tel.: +49-721-9654-0, Fax: +49-721-9654-959
Stiftung des bürgerlichen Rechts
Az: 14-0563.1 Regierungspräsidium Karlsruhe
Vorstand: Rüdiger Dillmann, Michael Flor, Jivka Ovtcharova, Rudi Studer
Vorsitzender des Kuratoriums: Ministerialdirigent Günther Leßnerkraus
Received on Tuesday, 11 December 2007 13:57:29 UTC
