

From: Yanosy John-QJY000 <jyanosy@motorola.com>
Date: Mon, 28 Oct 2002 16:33:50 -0600
Message-ID: <0B0A39652BB0D411BCCF00508B9512EC06AA361E@tx14exm05.ftw.mot.com>
To: "'www-rdf-interest@w3.org'" <www-rdf-interest@w3.org>
Are there not certain elements of knowledge that could indicate some aspects
of trust for computer implementations, after an initial phase in which a
human evaluated the dependent ontologies that were linked, imported, etc.,
into a new ontology? My concern here is only with determining the integrity
of these ontologies.

1. It may be reasonable to assume that if there have been no changes in
related ontologies since the original evaluation, then the level of trust has
not changed.

2. If there was a change in a related ontology, the type of change may
indicate its seriousness:

	a) It might be useful to know whether additional expressions were
added. Would this not indicate that any original queries to a KB based on
this ontology would still return the same results, and thus that an
application using this ontology would not need to be changed?

	b) It might be useful to know whether any well-formed expression in
the original ontology was modified in any way. This is more serious and
probably indicates that an application using a KB would have to be evaluated
and retested.

	c) It might be useful to know whether any expressions were deleted.
Again, this is serious and would indicate that an application would have to
be evaluated and retested.
	d) It might be useful to know whether its set of related ontologies
was modified and whether any of them had been changed in any way. The answer
depends on the form of change in the related ontology.
	e) It might be useful to know which expressions were changed; it
might then be possible for a system to evaluate whether an application has
any queries that would be impacted by the change. I do not know whether this
could be done at runtime or would require off-line evaluation, once an alert
occurs, with the original tool used to create the integrated ontology. The
results may indicate no impact on an application and thus no need to change
or retest it.
	f) There must be other changes in an ontology using OWL that do not
necessarily change the results of queries by an application.
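The coarse classification in items a) through c) above can be sketched in a
few lines. This is a minimal illustration, assuming each ontology version is
modeled as a set of (subject, predicate, object) triples; the function name
and severity labels are my own, not from any standard API.

```python
def classify_change(old_triples, new_triples):
    """Return a coarse severity label for the difference between two versions."""
    added = new_triples - old_triples
    removed = old_triples - new_triples
    if not added and not removed:
        return "unchanged"               # point 1: trust level presumably intact
    if added and not removed:
        return "additions only"          # item a): existing queries likely unaffected
    return "deletions or modifications"  # items b)/c): application must be re-evaluated

old = {("ex:Car", "rdfs:subClassOf", "ex:Vehicle")}
new = old | {("ex:Truck", "rdfs:subClassOf", "ex:Vehicle")}
print(classify_change(old, new))  # additions only
```

A real system would, of course, also have to account for entailed (not just
asserted) statements, which is where item e)'s query-impact analysis comes in.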

3. A simple unique signature could be created for each ontology at its
original creation time and incorporated as part of its metadata. Any time a
change occurs within an ontology, the signatures could be compared.
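The signature idea in point 3 can be sketched as a digest over a canonical,
sorted serialization of the ontology's statements, so the signature is stable
regardless of statement order. SHA-256 is an illustrative choice of hash, and
the triple-set representation is an assumption, as above.

```python
import hashlib

def ontology_signature(triples):
    """Hash a canonical, order-independent serialization of the triples."""
    canonical = "\n".join(sorted(" ".join(t) for t in triples))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

v1 = {("ex:Car", "rdfs:subClassOf", "ex:Vehicle")}
v2 = v1 | {("ex:Truck", "rdfs:subClassOf", "ex:Vehicle")}
print(ontology_signature(v1) == ontology_signature(v1))  # True: no change detected
print(ontology_signature(v1) == ontology_signature(v2))  # False: change detected
```

Note that blank nodes would complicate canonicalization in practice; a
production scheme would need a proper canonical form for them.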

I think this is somewhat similar to the problem the software component
industry faced in managing the use of different versions of software
components. I believe the solution was to incorporate into software platforms
a form of registration system that provided management functions for software
components, especially alerting functions that indicated when a newer
component might be replaced by an older version, and that offered choices for
selection among different versions of software components.
Also, in computer software, are signatures not used in virus detection
techniques to identify and classify viruses, as well as to check the
integrity of installed software?

I suspect that integrity checks would be a good start, to at least enable
automatic detection of changes in ontologies after an initial human
evaluation phase.

Nothing new here, just a recognition that although this problem is much more
complex, we should not ignore the minimal things that can be done to enable
improvements.

I am not sure whether these require only metadata for an ontology, or
specific language elements within the ontology language itself.

Best Regards,
John Yanosy Jr.
Fellow of the Technical Staff
5555 N. Beach St., Ft. Worth, TX 76137-2794
Tel: 1-817-245-6665
Fax: 1-817-245-6580
2-Way Pager: 1-800-SKYTEL2, PIN:2456665
Received on Monday, 28 October 2002 17:34:42 UTC
