- From: Aldo Gangemi <a.gangemi@istc.cnr.it>
- Date: Thu, 22 Apr 2004 22:15:49 +0200
- To: Natasha Noy <noy@SMI.Stanford.EDU>, rector@cs.man.ac.uk
- Cc: public-swbp-wg@w3.org, Nicola Guarino <guarino@loa-cnr.it>, welty@us.ibm.com
Dear Natasha, Alan and all,
I like the ongoing discussion, and especially the great integration work
by Natasha.
Moreover, Alan (as usual) has pointed to a couple of fundamental
issues in the discussion, e.g. the implied "prototype" notion.
After paying my dues, I should say that, as it stands, the emerging
pattern is pragmatic rather than ontological. Let me explain why, and
suggest a possible way forward.
Why not (yet) an ODP?
a) my suggestion for what an ODP should contain is: a description of
one or more solutions to one problem of content representation in
an ontology
b) I call a problem of content representation in an ontology a
"generic use case" for ontology design
c) a generic use case for ontology design is about a certain intended
meaning one wants to express in an ontology. As Alan puts it: "What
are you trying to say?"
d) my suggestion amounts to saying that, in case alternative solutions
are included in a pattern, the intended meaning should be preserved
across them, despite different constructs (and semantics)
e) in Natasha's first draft, each solution has a different (formal)
semantics, and this is ok, but
f) no two solutions convey the same intended meaning
g) in fact, Alan's analysis reveals different generic use cases for
ontology design.
Now for a possible way forward:
We could say that a pragmatic design pattern is a set of ontology
design patterns having a common starting generic use case, but not
necessarily preserving the original intended meaning.
In other words, a pragmatic design pattern (with alternatives) tries
to compare different interpretations.
On the other hand, an ontology design pattern provides implementations
of the same interpretation: any alternative solutions it contains
should preserve that interpretation, even when the constructs change.
But where is the interpretation? It is in the core classes and
relations chosen to express the solution.
And finally, an example of what I am suggesting.
Why different interpretations? Let's try to formulate the approaches
in terms of core classes (rdf:type will be used with inheritance,
i.e. including indirect types). I will only assume that animals are
kinds of (physical) objects and that animal images are images (not very
strong assumptions, I hope ;)). I use a compact notation in order to
show the differences intuitively. Individuals are quoted. The parts
that differ between the approaches are preceded by stars. The OWL-DL
version of my Approach 4 is added at the end:
Lion image pragmatic design pattern
Approach 1
"LionImage" rdf:type Image
LionClass rdfs:subClassOf Object
*** "LionImage" dc:subject LionClass ;;;a Class
Approach 2a
"LionImage" rdf:type Image
LionClass rdfs:subClassOf Object
*** "LionSubject" rdf:type Object ;;;(since LionSubject rdf:type Lion)
*** "LionImage" dc:subject "LionSubject" ;;;an Object
Approach 2b
"LionImage" rdf:type Image
LionClass rdfs:subClassOf Object
*** "LionSubject" rdf:type Subject
*** "LionSubject" rdfs:isDefinedBy LionClass
*** "LionImage" dc:subject "LionSubject" ;;;a Subject
Approach 3
"LionImage" rdf:type Image
LionClass rdfs:subClassOf Object
*** "LionSubject" rdf:type Object
*** "LionSubject" parentSubject "MammalSubject"
*** "LionImage" dc:subject "LionSubject" ;;;an Object
Approach 4 (mine, trying to describe the "prototype" notion on
OWL-DL. "a" is a local namespace - see below)
"LionImage" rdf:type Image
LionClass rdfs:subClassOf Object
*** "LionSkolem" rdf:type Lion
*** "Prototype" rdf:type a:Role
*** "LionSkolem" a:plays "Prototype"
*** "LionImage" a:subject "LionSkolem" ;;;an Object
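For readers who prefer triple syntax, Approach 4 can also be sketched
in N3/Turtle. This is a hypothetical rendering of the compact notation
above, assuming the "a:" namespace declared in the OWL-DL version at
the end:

```turtle
# Hypothetical N3/Turtle rendering of Approach 4 (same names as above;
# "a:" is the local namespace from the OWL-DL version at the end)
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix a:   <http://ontology.ip.rm.cnr.it/ontologies/subjects#> .

a:LionSkolem  rdf:type  a:Lion .        # the skolemized anonymous lion
a:Prototype   rdf:type  a:Role .        # the role it plays
a:LionSkolem  a:plays   a:Prototype .
a:LionImage   a:subject a:LionSkolem .  # the image depicts that lion
```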
Approach 4 skolemizes (as Alan suggests) the poor anonymous lion
shown in the picture, but also restricts that lion to be a
"prototype" of its class. I will not even try to explain here how a
meaningful and consistent treatment of roles can be generated in
this way (I refer to my recent KR and WWW papers; email me for
references). The OWL-DL abstract syntax for Approach 4 is included
below, but first let me recap what I have shown.
The different approaches "actually say" different things, and I don't
accept that the original modeller was so ambiguous! Loosely speaking,
these are the different facts depicted by the different approaches:
Approach 1: a picture represents the class of all lions
Approach 2a: a picture represents a certain guy "LionSubject", who is a lion
Approach 2b: a picture represents a certain guy "LionSubject", who is
a "subject", and is "defined by" the class of all lions
Approach 3: a picture represents a certain guy "LionSubject", who is
a lion and has a parent subject "MammalSubject" (who is a mammal)
Approach 4: a picture represents some lion chosen as a prototype of its class
After this layman's description of the approaches, I think that
Approach 4 is the only one that a mentally healthy person could
conceive, if only some ontology tool or methodology gave her the
chance to express it :).
But this is not my main point. What counts most is that the
different approaches are not *better* or *worse*: they are just
*about different facts*.
But to evaluate the difference, I had to understand the difference
between objects, images, subjects, roles, etc. If we start from
those, we can generate one "layman fact", and the possible
alternatives should conform to it. For example, alternative
constructs that can all be consistently interpreted as the fact
"a picture represents some lion chosen as a prototype of its class"
could be included in the same design pattern for "pictures representing
the prototype of some class".
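To make this concrete, here is a sketch of two such alternative
constructs, in the same OWL-DL abstract syntax used below. Both are
meant to preserve the fact "a picture represents some lion chosen as
a prototype of its class"; the `//` lines are annotations (not part
of the abstract syntax), and `a:lion_skolem` is a hypothetical name
I introduce for illustration:

```
// Alternative A: the lion stays anonymous, captured by an
// existential restriction on the subject of the image
Individual(a:Lion_image$101
  type(restriction(a:subject
    someValuesFrom(intersectionOf(
      a:Lion
      restriction(a:plays someValuesFrom(oneOf(a:prototype))))))))

// Alternative B: the lion is skolemized as a named individual
// (a:lion_skolem is a hypothetical name)
Individual(a:lion_skolem
  type(a:Lion)
  value(a:plays a:prototype))
Individual(a:Lion_image$101
  value(a:subject a:lion_skolem))
```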
Ciao
Aldo
_____________________________
OWL-DL for the Approach 4
_____________________________
Namespace(rdf = <http://www.w3.org/1999/02/22-rdf-syntax-ns#>)
Namespace(xsd = <http://www.w3.org/2001/XMLSchema#>)
Namespace(rdfs = <http://www.w3.org/2000/01/rdf-schema#>)
Namespace(owl = <http://www.w3.org/2002/07/owl#>)
Namespace(a = <http://ontology.ip.rm.cnr.it/ontologies/subjects#>)
Ontology( <http://212.34.219.175/subjects.owl.rdf>
ObjectProperty(a:plays)
ObjectProperty(a:subject)
Class(a:Animal_image partial
a:Image)
Class(a:Lion partial
a:Animal)
Individual(a:Lion_image$101
  type(restriction(a:subject
    someValuesFrom(intersectionOf(
      restriction(a:plays someValuesFrom(oneOf(a:prototype)))
      a:Lion))))
  type(a:Animal_image))
Individual(a:prototype
type(a:Role))
)
--
*;*;*;*;*;*;*;*;*;*;*;*;*;*;*;*;*;*;*;*;*;*
Aldo Gangemi
Research Scientist
Laboratory for Applied Ontology, ISTC-CNR
Institute of Cognitive Sciences and Technologies
(Laboratorio di Ontologia Applicata,
Istituto di Scienze e Tecnologie della Cognizione,
Consiglio Nazionale delle Ricerche)
Viale Marx 15, 00137
Roma Italy
+3906.86090249
+3906.824737 (fax)
mailto:a.gangemi@istc.cnr.it
mailto:gangemi@acm.org
http://www.loa-cnr.it
Received on Thursday, 22 April 2004 16:16:04 UTC