Re: rdf inclusion

Drew,

Thanks for pointing out the terminology issue. When talking in "Web
circles," I have a habit of using the term "definition" in a much looser
way than understood by logicians and other KR people. I do this because
people from other communities seem to grasp the term better than if I
say "axiom" or "description." I usually hope that those who think of
"definition" in the technical sense, realize that is not what I mean,
but I should always be careful to point this out, just to avoid
confusion.

Also, I think your two points about importing ontologies being different
from pointing to a web page are well-stated and I whole-heartedly agree
with you.
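
To make your point (1) concrete, here is a minimal sketch (not from
your message; all class and individual names are made up) of how a
disjointness axiom in an imported ontology lets an agent flag a type
error that the bare dataset cannot reveal on its own:

```python
# Toy illustration: a dataset plus an ontology lets an agent detect
# type errors; the dataset alone does not. Names are hypothetical.

facts = {("Sally", "Vegetable"), ("Sally", "Animal")}

# "Ontology": pairs of classes declared to be disjoint.
disjoint = {frozenset({"Vegetable", "Animal"})}

def inconsistencies(facts, disjoint):
    """Return individuals asserted to belong to two disjoint classes."""
    errors = []
    for individual in {i for i, _ in facts}:
        classes = {c for i, c in facts if i == individual}
        for pair in disjoint:
            if pair <= classes:
                errors.append((individual, tuple(sorted(pair))))
    return errors

print(inconsistencies(facts, disjoint))   # with the ontology: Sally is flagged
print(inconsistencies(facts, set()))      # without it: the same facts pass silently
```

With the axiom, the agent flags Sally; drop the ontology and the very
same facts look fine, which is exactly the "facts without Peano's
axioms" situation you describe.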

Jeff

Drew McDermott wrote:
> 
>    [Jeff Heflin]
>    My personal opinion is that if you're using an ontology language, every
>    term you use must be defined in some ontology (even if only to say that
>    it is a class or property).
> 
> I agree with the sentiment, but please let's not use the word
> "definition" the way you and Dan B. are using it.  Ontologies express
> relationships among terms, but they almost never define them.  People
> colloquially speak as though a statement like "living things are
> partitioned into vegetables and animals" is a definition of,
> say, "vegetable" and "animal," because it is, in some sense, a
> "declaration" of these symbols, and in the computer world it is
> usually the case that declaring something is necessary and sufficient
> to define it.  But (as I know you know) in a KR language that is not
> the case.  In fact, as Pat Hayes has argued, it is hard to say what the
> *logical* difference is between the "partition" statement above and
> the seemingly humbler statement that "Sally is a vegetable."
> 
> But you're completely correct that importing an ontology is different
> from pointing to a web page or even a set of assertions.  At the risk
> of repeating what you said, here are the two key reasons why:
> 
> 1. The purpose of an ontology is to allow agents to draw conclusions,
>    and in particular to detect inconsistencies (e.g., type errors) in
>    datasets.  A dataset without its associated ontology has no
>    rationale that I can see.  If I give you a set of facts about
>    numbers, but I view Peano's axioms as an option that you can take
>    or leave, then the set of facts could be taken to be saying
>    anything at all about any topic at all; what in the world is the
>    point?
> 
> 2. Ontologies are *small* and *internally consistent.*  I think the
>    argument about importing assumes that once I start following
>    pointers to the web I could wind up anywhere.  That may be true in
>    general, but it's not true for ontologies.  Whoever designed an
>    imported ontology didn't just throw together some stuff they found
>    on Google.  They had to think through many tough questions, and at
>    every stage slight changes from the design decisions taken would
>    have introduced subtle gaps or inconsistencies.  Chances are the
>    designers had to backtrack several times as these pitfalls were
>    encountered.
> 
> Point (2) has as a consequence that if my dataset imports Ont-1, I can
> be confident that all the ontologies Ont-1 imports are, in the
> designers' minds, *coherent pieces of Ont-1.*  If they weren't, the
> designers would have imported something else or built what they
> needed.  Furthermore, the chain of imports is likely to be shallow;
> and if Ont-1 imports Ont-2 and Ont-3, following the import links from
> Ont-2 and Ont-3 is likely to get you to a common ancestor more
> frequently than chance would predict.  Coherent theories just don't
> look like balls of string.
> 
>                                              -- Drew McDermott

Received on Thursday, 25 April 2002 16:01:30 UTC