Semantic commitment (was RE: rdf inclusion)

> [Pat Hayes]
> But wait a minute.  What does it even mean for your ontology to say 
> what my reasoning engine can or cannot do? Of course I CAN add 
> triples from one graph without adding triples from another. All that 
> any ontology can do is to express some propositional content.

Exactly. Said another way, an ontology proposes a vocabulary to express
(semantic) information about real-world things from a *very* particular
point of view.

Of course, the semantic grounding of an ontology will enable an inference
engine to correlate the descriptions in this ontology and determine how
pertinent it is to use alongside the information the engine has already
collected (committed?).

But is that a reason not to look for mechanisms that enable those engines
to take shortcuts in inference? Even when designing small ontologies, I
noticed the need to break them into small modular parts. The main reason
is better reuse. (I think of ontologies as implementations; there are
"outside" terms you could see as the interface of the ontology.)

Suppose my engine asserts that it may have faith in an ontology O. If O
says explicitly that definitions of the terms used to define it may be
found in a particular place (some other ontology), and that those
definitions are consistent with the way the terms are used in O, this is
perfect.

I agree that the use of such a mechanism *must not* be mandatory. It is a
supplementary way to link ontologies. I expect semantic *understanding* by
grounding decontextualization to be only *one* of the algorithms we will use.

> What 
> another engine does with that content can be reasonably expected to 
> conform to the semantics of the language, but that's about all. If 
> the engine decides to ignore some of what you say, that's it's 
> business, not yours. Ignoring part of any RDF graph is perfectly 
> valid considered as an inference, after all: an RDF graph entails all 
> its subgraphs.

> [Pat Hayes - from another mail]
> There is no way to explicitly agree or disagree with another piece of RDF.

So you noticed it too. This is a problem. We should have a way to do so.
RDF and ontologies let us exchange the What. And we need to say
what we are *sure* of, to set it apart from what we are *not* sure of
(from the point of view of a particular engine).
A logic Web language would let us exchange the How, and the Proof layer
the Why.

> I think this entire discussion is in a dream world. First, there are 
> no clear notions of definition to appeal to.

I thought the goal of such a mailing list was to clarify (if not define)
such issues. Sorry if I'm wrong.

> Second, no ontology can restrain the actions of a remote inference engine.

But we could gain in immediate interoperability. As TimBL (I don't have
the reference, sorry) and some other folks have said, we must expect
different kinds of reasoning engines with different logic capabilities, so
you must understand that not all engines will do semantic
decontextualization. I could even say that, IMHO, the first applications
won't...

You may see it as an inference shortcut, if you prefer. And your
inference engine may or may not take this information into account to draw
conclusions. As you said, we can't force it to. But that is not what we
want either. This information is also a statement, so you can unassert it.

> Third, why would one want things to be different?

(Explanations above.) Is this Stop Energy? Forward motion needs this kind
of mechanism, and I'm still convinced it should find a core definition in
RDF itself, perhaps at first as a basic, generalized semantics (I mean
also between instance documents).

Just a last word: I changed the topic name, as I find the term *inclusion*
ill-chosen. I propose to call it *semantic commitment*, and I hope this
will make it easier to define. Jeff Hefflin, why not write a (dictionary
based :-0) RFC? I may help too.

Didier Villevalois.
didier@phpapp.org

Received on Tuesday, 30 April 2002 05:07:54 UTC