
Re: Ontologies for RDF structures, not just atoms

From: Rinke Hoekstra <hoekstra@uva.nl>
Date: Wed, 21 Apr 2010 09:12:05 +0200
Cc: Semantic Web <semantic-web@w3.org>, Katasonov Artem <Artem.Katasonov@vtt.fi>
Message-Id: <73B274E7-1534-4C2B-BD02-1DCBFF620FD1@uva.nl>
To: Holger Knublauch <holger@knublauch.com>

FWIW I'd like to point out that there are particular reasons why these things cannot be expressed in OWL 2 DL (several things *can* be expressed in OWL 2 Full). 

If you don't care for the open-world assumption anyway, or fail to see the usefulness of bounded expressivity for the purpose of decidability, then that's fine. You can either go for a closed-world solution (cf. SPARQL/SPIN/integrity constraints) or hop one level higher to OWL 2 Full. 

The only warning I'd like to give is that mixing open-world with closed-world reasoning can give unexpected results: 

*) a DL reasoner will know more about your individuals than a SPARQL processor will, as it can reason with individuals that are not explicitly asserted in the ABox (which are invisible to a SPARQL processor).
*) the combination of DL with SPARQL is a bit harder to check for consistency (DL entailments lead to SPARQL 'rules' firing, which may warrant additional DL entailments, and so on and so forth).
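A minimal illustration of the first point, in Turtle (ex: is a made-up namespace, standard owl:/rdfs: prefixes assumed):

```turtle
# ex: is a hypothetical namespace; owl:/rdfs: prefixes as usual.
ex:Person rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty ex:hasParent ;
    owl:someValuesFrom ex:Person
] .

ex:john a ex:Person .

# A DL reasoner concludes that ex:john has *some* ex:hasParent that
# is an ex:Person (an anonymous individual). But since no ex:hasParent
# triple is asserted, the pattern { ex:john ex:hasParent ?p } matches
# nothing for a plain SPARQL processor.
```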

I don't know to what extent these issues are addressed by SPIN or the Pellet implementation of integrity constraints.

And again, all this may or may not be a problem depending on your situation. 


On 21 Apr 2010, at 04:26, Holger Knublauch wrote:

> Artem,
> what you basically ask for is a generic mechanism to match arbitrary RDF graph patterns to trigger something (behavior or inferences). I was surprised to hear that most people on this list suggest bending OWL to meet this requirement, using artificial intermediate concepts (e.g. as reified relationships), while what you are most likely talking about is simply a rule language. SPARQL (CONSTRUCT) is a very good candidate for this, as it allows you to match almost any RDF graph pattern in the WHERE clause. I have recently blogged about the strengths of SPARQL compared to the limitations of OWL 2:
> http://composing-the-semantic-web.blogspot.com/2010/04/where-owl-fails.html
> In the article above, I suggest the use of SPARQL (in particular the SPIN framework [1]), which provides a structured way of maintaining SPARQL-based rules and constraints, going beyond the various ad hoc frameworks that people also use in practice. The SPIN user community seems to be growing rapidly, and we now have a lot of practical evidence for its general usefulness.
> Please also have a look at SPIN's user-defined functions and property functions (magic properties), which make it easy to define on-the-fly inference rules. As an alternative to "physically" inferring new triples (as Bernard suggests), this would allow you to make those inferences on demand only, leaving a much smaller memory footprint and fewer maintenance issues (such as change management) to worry about.
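> As a rough sketch (consult spinrdf.org for the authoritative vocabulary), a SPIN rule attached to a class looks something like this, using a hypothetical ex: namespace:

```turtle
# Rough sketch of a SPIN rule attached to a class; see spinrdf.org
# for the exact vocabulary. ex: is a hypothetical namespace, and
# ?this is SPIN's convention for "the current instance of the class".
ex:Man
  spin:rule [
    a sp:Construct ;
    sp:text """
      CONSTRUCT { ?this ex:partnerOf ?y }
      WHERE     { ?this ex:dates ?y }
    """
  ] .
```

> A SPIN-aware engine would then apply the rule to every instance of ex:Man, without the rule living outside the ontology.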
> Regards,
> Holger
> [1] http://spinrdf.org 
> On Apr 21, 2010, at 2:23 AM, Katasonov Artem wrote:
>> Hi Pierre-Antoine and Bernard.
>> Thank you both a lot for your answers.
>> To clarify, I was referring as unnatural and unscalable to adding 'reified' views explicitly to the RDF dataset, in order to survive with an OWL reasoner alone.
>> What you both propose is to have a generic rule engine as the reasoner (e.g. based on SPARQL CONSTRUCT) instead of OWL. In fact, this is exactly what I do in my work – I use SPARQL patterns as “definitions” of classes instead of OWL restrictions, etc.
>> Bernard, you give a rule for a heterosexual couple. What if I want to define the generic couple, and then the heterosexual couple as a subclass of it? Two independent SPARQL rules? I mean that the ability to have hierarchies of classes, as in OWL, is still nice.
>> Pierre-Antoine, you wrote “but not all ontology languages would be able to express it (I don't think OWL is)”. To rephrase my original question,
>> are there ANY ontology languages (or at least efforts towards) that are able to do that?
>> Artem
>> From: Bernard Vatant [mailto:bernard.vatant@mondeca.com] 
>> Sent: 20 April 2010 11:26
>> To: Pierre-Antoine Champin
>> Cc: Katasonov Artem; semantic-web@w3.org
>> Subject: Re: Ontologies for RDF structures, not just atoms
>> Hi
>> To follow-up with Pierre-Antoine
>> Indeed, "reifying" relationships as facts is a common practice, both natural and scalable, and you do it whenever you want to add extra information to the relationship (such as date, quality, contract, whatever). 
>> If OWL does not allow you to assert logical links between relationships and facts, SPARQL CONSTRUCT can do the trick.
>> Suppose you have so far represented simple binary relationships "ex:dates" and want to transform them into facts of type "ex:Couple". You can use the following, even qualifying the couple along the way by inferring its type from the gender of the members :)
>> CONSTRUCT { ?c  a          ex:Couple .
>>             ?c  ex:member  ?x .
>>             ?c  ex:member  ?y .
>>             ?c  ex:type    ex:Heterosexual . }
>> WHERE     { ?x  a         ex:Man .
>>             ?y  a         ex:Woman .
>>             ?x  ex:dates  ?y . }
>> Actually SPARQL CONSTRUCT is quite handy to translate one type of representation into another.
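>> For instance, a sketch of the reverse translation, from reified ex:Couple facts back to the binary ex:dates relation (same hypothetical ex: namespace; note that ex:member does not record who dates whom, so the relation comes out in both directions):

```sparql
# Sketch of the reverse mapping, using the same hypothetical ex:
# namespace. ex:member is undirected, so each couple yields
# ex:dates triples in both directions.
CONSTRUCT { ?x ex:dates ?y }
WHERE {
  ?c a ex:Couple ;
     ex:member ?x ;
     ex:member ?y .
  FILTER (?x != ?y)
}
```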
>> Bernard
>> 2010/4/20 Pierre-Antoine Champin <swlists-040405@champin.net>
>> Hi,
>> On 19/04/2010 13:41, Katasonov Artem wrote:
>>> (...)
>>> OWL is all about defining concepts corresponding to classes of
>>> resources: e.g. “Person”, “Woman”, “Husband”,  “Mother”, etc. But
>>> what about a concept like “Couple”? I bet you understand what I mean
>>> by this word, and this means that this concept is a part of an
>>> ontology we share. The meaning of “couple” is very simple: there is
>>> John, there is Mary, and there is some ex:dates link between them.
>>> But OWL is useless for defining this concept - of course, unless we
>>> introduce a couple as a resource, define a couple of properties to
>>> link John and Mary to this resource, and add these all explicitly to
>>> the linked data space - which, I believe, is unnatural and not
>>> feasible at scale.
>> I find it neither unnatural nor unscalable, really :)
>> If you need to consider the Couple as a concept, then you have to
>> 'reify' it, literally 'make it a thing', i.e. a resource. Of course, you
>> would only do that *when* you need to consider the concept behind a
>> relation (or, more generally, a structure, as you call it).
>> As a matter of fact, this is exactly what you do in English. If you only
>> need to state the fact that "John is dating Mary", you don't even need
>> the notion of a couple. That notion becomes necessary if you want to
>> state more things about their relation: "They have been a couple for two
>> years; they are a very happy couple.".
>> The good thing with English is that you can easily swap from the
>> 'relation' view ("dates") to the 'reified' view ("couple"). In an
>> ontology, you have to *commit* (as in "ontological commitment") to a
>> particular representation, which really depends on the needs of your
>> application. You can also accept both representations, and add inference
>> rules that would state the equivalence between them:
>>   there is a ?couple involving ?john and ?mary
>>   if and only if
>>   ?john dates ?mary
>> but not all ontology languages would be able to express it (I don't
>> think OWL is).
>> Hope this helps. For more material about that, I recommend reading the
>> related section in Linked Data Patterns [1]. Funny to see the example is
>> almost the same :)
>> pa
>> [1] http://patterns.dataincubator.org/book/qualified-relation.html
>>> In other words, I speak about an ontology of concepts that correspond
>>> to RDF fragments (structures, sub-graphs) rather than just RDF
>>> atoms.
>>> In the past few years, I found myself designing and using some ad-hoc
>>> frameworks for such ontologies in two different projects and,
>>> therefore, contexts. One was about interpretation of RDF-encoded
>>> mental structures (beliefs, intentions) and communications (speech
>>> acts) in multi-agent systems. The other, current, is about
>>> interpretation of RDF-encoded software models. In both cases, the
>>> problem, in a nutshell, is the following: there is an RDF graph for
>>> something and we want to detect if a certain “situation” occurs in it
>>> – to draw some conclusions based on that. It is like answering the
>>> question of whether there are any couples in a group of people. As
>>> the "situations" of interest are many, we want to have a formal ontology
>>> of those. In other words, I believe that the applications where the
>>> problem occurs can be rather widespread.
>>> I plan to work on this further, maybe to generalize on the approaches
>>> I used before.
>>> I wanted to ask for any tips about related previous research,
>>> discussions, or postponed issues in standard-setting groups. Is anyone
>>> familiar with anything like this? Please also comment on the problem
>>> as such. Thanks in advance.
>>> Artem Katasonov, VTT Technical Research Center, Finland
>> -- 
>> Bernard Vatant
>> Senior Consultant
>> Vocabulary & Data Engineering
>> Tel:       +33 (0) 971 488 459
>> Mail:     bernard.vatant@mondeca.com
>> ----------------------------------------------------
>> Mondeca
>> 3, cité Nollez 75018 Paris France
>> Web:    http://www.mondeca.com
>> Blog:    http://mondeca.wordpress.com
>> ----------------------------------------------------

Dr Rinke Hoekstra

AI Department         |   Leibniz Center for Law    
Faculty of Sciences   |   Faculty of Law            
Vrije Universiteit    |   Universiteit van Amsterdam
De Boelelaan 1081a    |   Kloveniersburgwal 48      
1081 HV Amsterdam     |   1012 CX  Amsterdam        
+31-(0)20-5987752     |   +31-(0)20-5253497         
hoekstra@few.vu.nl    |   hoekstra@uva.nl           

Homepage: http://www.few.vu.nl/~hoekstra
Received on Wednesday, 21 April 2010 07:12:37 UTC

This archive was generated by hypermail 2.4.0 : Tuesday, 5 July 2022 08:45:17 UTC