Re: Cross-ontologies reasoning

On Dec 17, 2003, at 3:52 PM, ewallace@cme.nist.gov wrote:

> Bijan Parsia wrote:
>> On Dec 17, 2003, at 1:24 PM, Francis McCabe wrote:
>>
>>> The 'problem' I was referring to was that of automatically mapping
>>> one ontology (written I assume by person or persons A) to another
>>> (written by persons B).
>>>
>>> People have asserted that there exist automatic tools for doing that.
>>> And I was pointing out some corner cases.
>>
>> For the record, I don't believe that I, personally, made such an
>> assertion. Nor did I intend to. I didn't read anyone else in this
>> thread as doing so.
>
> I believe someone from NI made such an assertion.
[snip]

Well, I see Jack's message:
	http://lists.w3.org/Archives/Public/public-sws-ig/2003Dec/0023.html

Followed by my message:
	http://lists.w3.org/Archives/Public/public-sws-ig/2003Dec/0024.html

Followed by Frank's admonishment:
	http://lists.w3.org/Archives/Public/public-sws-ig/2003Dec/0025.html

And, I'm sorry, I don't see either Jack or me saying anything about 
automatically mapping entire ontologies into each other. We did 
respond about what sorts of reasoning one might try "cross ontologies".

I didn't originally read Frank's reply as insinuating that either Jack 
or I was making such silly claims.

There does seem to be some difference between "reasoning across 
ontologies" and "mapping between ontologies". But whatever.

Ok, I do see some loose talk at the end of Jack's message, basically:

 >  You do it by establishing axioms that express equivalencies,
 > sub-class, or other relationships between the two ontologies (or many
 > more ontologies) and use a mechanism such as owl:import to provide a
 > linkage.  If you have an inferencing technology, then you can maintain
 > logical consistency across these relationships.  "Closeness" is a
 > matter of interpretation and can be influenced somewhat by the form of
 > the "bridge" axioms expressed.  If ontologies are far apart -- ie
 > different concepts -- the logic processor would not infer that they
 > represent the same or similar things.

Eh. I don't see that he's said anything remotely problematic here. Add 
some axioms. Smush together. The reasoner will tell you 1) whether the 
result is consistent, and 2) some inferred equivalences and subclass 
relationships (i.e., it will classify the result). But the axioms are 
all shlooped in by people.
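
To make that concrete, here is a minimal sketch in Turtle of the kind 
of bridge Jack is describing. The ontology URIs and class names are 
invented for illustration, and note that the property Jack calls 
"owl:import" is actually owl:imports:

    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix A:    <http://example.org/ontologyA#> .
    @prefix B:    <http://example.org/ontologyB#> .

    # A third "bridge" ontology imports both source ontologies...
    <http://example.org/bridge>
        a owl:Ontology ;
        owl:imports <http://example.org/ontologyA> ,
                    <http://example.org/ontologyB> .

    # ...and asserts the correspondences by hand. Nothing below is
    # discovered automatically; a person wrote each axiom.
    A:Person      owl:equivalentClass B:Human .
    A:GradStudent rdfs:subClassOf     B:Human .

Load the bridge ontology into a DL reasoner (Racer, FaCT, etc.) and it 
will check that the merged result is consistent and classify it, 
possibly turning up further equivalences and subclassings along the 
way. What it will not do is write those two bridge axioms for you.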

My pointers were, basically, to other sorts of axioms (and 
relationships supported by inference mechanisms) floating around (to 
simplify a bit).

Cheers,
Bijan Parsia.
