Re: blog: semantic dissonance in uniprot

On 24 Mar 2009, at 15:17, eric neumann wrote:

> Bijan,
>
> I have a (possibly) naive question, but one that comes up in the  
> context of a digital record/rep of the protein :
>
> Are OWL ontologies supposed to be applied to only digital  
> representations of real world things,

No. Indeed, that's toward the "working against OWL" end of the
spectrum.

> or do some believe they actually can be applied to the real-world  
> things "even when no record of the object exists in the digital  
> space"?

That's the more common understanding. OWL ontologies are interpreted  
into mathematical structures (e.g., sets and sets of pairs and  
elements of those sets) which, ideally, should be isomorphic to the  
domain you are trying to model.

That is, among the models of your ontology, it would be nice if one  
of the models interpreted Individuals as individual(ish) things in  
your domain. So, in an ontology about Persons, "bijan" should be  
mappable into me (or to a corresponding element in an isomorphic  
structure).
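
To make that concrete, here is a toy sketch of one such intended
interpretation (the domain elements are just labels I'm making up):

	Domain:       Delta = { bijan-the-human, eric-the-human, record-42, ... }
	Individuals:  ("bijan")^I  =  bijan-the-human
	Classes:      (Person)^I   =  { bijan-the-human, eric-the-human }

The interpretation function ^I is free to send a name to a
flesh-and-blood person; nothing in the machinery restricts the domain
to digital things.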

In the world there are digital things (like programs, records, etc.)
which can (sometimes) be modeled as individuals. I can model these
side by side with the physical objects those digital objects
represent (or are created by). This is the case when I'm doing entity
reconciliation, for example, since the person and the two records
about her are numerically distinct (i.e., all differentFrom each
other).
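
In OWL, a rough sketch of that reconciliation situation might look
like this (all of the names, :alice, :record_17, :describes, and so
on, are invented for illustration):

	@prefix :    <http://example.org/recon#> .
	@prefix owl: <http://www.w3.org/2002/07/owl#> .

	:Person          a owl:Class .
	:CustomerRecord  a owl:Class .

	:alice      a :Person .            # the person herself
	:record_17  a :CustomerRecord .    # one record about her
	:record_42  a :CustomerRecord .    # another record about her

	:record_17  :describes  :alice .
	:record_42  :describes  :alice .

	# the person and the two records are pairwise distinct
	[] a owl:AllDifferent ;
	   owl:distinctMembers ( :alice :record_17 :record_42 ) .

With the AllDifferent in place, no reasoner may collapse the records
into each other or into the person, which is exactly the distinctness
the reconciliation task relies on.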

Other times, I don't make the distinction because, for my purposes,  
the record is a sufficient proxy for the entity and keeping them  
distinct would complicate things too horribly.

[snip]
> In addition, I also don't see references to any object being  
> fundamentally different to a digital record (sans descriptive  
> triples perhaps)... can someone provide me with a counter example?


An ontology is, of course, itself (in our case) a computational  
artifact. And if we don't add enough assertions to distinguish  
between a representation and an object represented, then we can't  
distinguish them using our ontology.
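
For example (a sketch only; the class and property names are mine,
not anyone's actual modeling):

	@prefix :    <http://example.org/ex#> .
	@prefix owl: <http://www.w3.org/2002/07/owl#> .

	:Protein         a owl:Class .
	:DatabaseRecord  a owl:Class ;
	    owl:disjointWith :Protein .    # a record is never a protein

	:p53_record  a :DatabaseRecord .   # the record
	:p53         a :Protein .          # the protein it describes
	:p53_record  :describes  :p53 .

	# Without the disjointness (or an explicit differentFrom), there
	# are models in which :p53_record and :p53 denote the same element.

Adding the disjointness axiom is one way of ruling those conflating
interpretations out.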

Methodologically, it's sometimes helpful to view ontology
construction as the process of removing unintended models. Each
additional axiom (hopefully) constrains the interpretations enough
that a few more of them cease to be models.

In the ideal limit, we have all and only intended models as models of  
our ontology. That's what's known as a *verified* ontology.  
Furthermore, it'd be nice if our intended models were isomorphic to  
"the world" (under some conceptualization).

Often the ideal limit isn't reasonable or feasible or helpful.  
Frictionless pulleys are pedagogically useful, after all.

These slides contain some discussion of this:
	http://www.cs.man.ac.uk/~bparsia/2009/comp60462/semantics-and-services/

Hope it helps.

Cheers,
Bijan.

Received on Tuesday, 24 March 2009 17:38:48 UTC