Re: OWL 2.0 ...

Two personal comments (I'm not speaking for JPL, Caltech or NASA).

1) a "single" definition of OWL.

Recently, I found some problems w/ the OWL API:

http://lists.w3.org/Archives/Public/public-webont-comments/2005May/0005.html

One suggestion would be to define OWL in a "minimalist" fashion,
where a single "source" definition of OWL would be transformed into
various artifacts such as the OWL API's lexer, parser, renderer,
internal model, etc.
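
As a rough sketch of what I mean (plain Python; the keyword table
and the toy surface syntax below are made up for illustration, not
the OWL API's actual grammar), a single declarative table can drive
both a lexer and a renderer, so the two artifacts cannot drift
apart:

    import re

    # One declarative "source" definition: a table of keywords.
    KEYWORDS = {
        "CLASS":       "Class",
        "SUBCLASS_OF": "SubClassOf",
        "OBJECT_PROP": "ObjectProperty",
    }

    # Artifact 1: a lexer derived from the table.
    TOKEN_RE = re.compile(
        "|".join(f"(?P<{name}>{re.escape(text)})"
                 for name, text in KEYWORDS.items())
        + r"|(?P<IDENT>\w+)|(?P<WS>\s+)")

    def lex(src):
        return [(m.lastgroup, m.group())
                for m in TOKEN_RE.finditer(src)
                if m.lastgroup != "WS"]

    # Artifact 2: a renderer derived from the same table.
    def render(tokens):
        return " ".join(KEYWORDS.get(kind, text)
                        for kind, text in tokens)

    assert render(lex("Class Person SubClassOf Agent")) == \
           "Class Person SubClassOf Agent"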

2) some way to talk about OWL w.r.t. expressiveness, consistency,
inconsistency, reasoning "problems" & "solutions"...

The "Handbook of Description Logics" has an interesting passage
about role-value chains where the author gives an intuitive sense about 
the significant increase in complexity
such chains induce in a description logic that supports them vs. another
logic that doesn't.

The author treats checking a role-value chain as the analogue of
traversing an edge, corresponding to the role, from the instance
that has the role to the node that corresponds to the role value
for that instance. Intuitively, a tree (without forward/cross
edges) has simpler complexity than a graph that isn't a tree: a
tree has at most one role-value-chain path between any pair of
nodes, whereas a pair of nodes in a general graph can have multiple
such paths, thereby increasing the complexity of checking logical
properties along node-to-node path connections.
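
To make the tree-vs-graph intuition concrete, here is a toy sketch
(plain Python; the graphs and node names are made up, and this is
not how any actual reasoner is implemented) that counts the simple
paths a checker would have to examine between two nodes:

    # Count the simple (cycle-free) paths between two nodes -- the
    # graph-walking analogue of checking a role-value chain.
    def count_paths(graph, src, dst, seen=frozenset()):
        if src == dst:
            return 1
        seen = seen | {src}
        return sum(count_paths(graph, nxt, dst, seen)
                   for nxt in graph.get(src, ()) if nxt not in seen)

    # A tree: at most one path between any pair of nodes.
    tree = {"a": ["b", "c"], "b": ["d"]}
    print(count_paths(tree, "a", "d"))   # 1

    # One extra cross edge (c -> b) already doubles the paths to check.
    graph = {"a": ["b", "c"], "b": ["d"], "c": ["b"]}
    print(count_paths(graph, "a", "d"))  # 2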

This is a very common situation in engineering models today, whose
semantics are anything but howlish (e.g., UML, XSD, ...). Does that
mean it's hopeless to use OWL for engineering models? I don't think
so, but it is far from clear how one might go about structuring an
ontology for a lambda engineering modeling language of a vanilla
engineering domain without producing an ontology doodle that will
blow up in our faces when we plug it into a gizmo reasoner.

Fortunately, some folks have thought about these issues clearly
enough to write various recommendations about them (e.g.,
"classes-as-values"); however, some of the recommendations involve
solutions that are "outside" OWL, in the sense that there is not
(yet) a definition of what it means to reason over an ontology plus
some of its annotations. If we wanted to use OWL to actually
describe the recommendations, the issues defined, etc., then we
would effectively confront the same issues that show up in other
areas (e.g., business process modeling).
At some level, it is an expressiveness issue w/ the language; at
another level, it is an architecture issue of orchestrating
multiple ontologies & their rules. It could be something else too.
Regardless, it's clear that there are difficult challenges, and it
seems to me OWL should allow us to reflect on our own challenges.
The issue isn't much different from, say, matching two concepts
from different ontologies.
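
For instance, one of the "classes-as-values" workarounds is to
demote the class-valued link to an annotation property so the
ontology stays in OWL DL. A quick sketch with rdflib (the namespace
and terms below are made up for illustration) shows why that puts
the link "outside" OWL:

    from rdflib import Graph, Namespace, RDF
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/")  # hypothetical namespace

    g = Graph()
    g.add((EX.Lion, RDF.type, OWL.Class))
    g.add((EX.LionBook, RDF.type, EX.Book))
    # Using an annotation property keeps the ontology in OWL DL, but
    # a DL reasoner is then free to ignore this triple entirely:
    # there is no definition of reasoning over the ontology + its
    # annotations.
    g.add((EX.subject, RDF.type, OWL.AnnotationProperty))
    g.add((EX.LionBook, EX.subject, EX.Lion))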

3) modularity and ontology "calculus"

In part, the above comments point to language (what we say) and
architecture (how we organize what we say), with reasoning in
between. If OWL were a programming language, then you could look at
the modular aspects of the language. A language with good
modularity greatly simplifies the abstraction process required to
use the language for building blocks that we can in turn assemble
into various architectures, with the confidence that the properties
of the architecture can be analyzed from the assembly of its parts.
In this sense, I believe OWL could be improved w.r.t. modularity,
and a calculus of OWL modules (whatever that is) would greatly help
with current issues (e.g., mapping, merging, refactoring, ...).
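
To illustrate, here is a toy sketch (plain Python; the axiom
representation and module contents are made up) of the most naive
fragment such a calculus might have, with modules as sets of axioms
and merge/diff as operators:

    # Toy model: a module is just a set of axioms (triples here).
    shape = frozenset({("Wheel", "subClassOf", "Part"),
                       ("Car", "hasPart", "Wheel")})
    material = frozenset({("Wheel", "madeOf", "Rubber")})

    def merge(*modules):
        # Assembling modules: the naive operator is plain union.
        return frozenset().union(*modules)

    def diff(m1, m2):
        # Refactoring aid: the axioms m1 adds on top of m2.
        return m1 - m2

    car = merge(shape, material)
    assert diff(car, shape) == material
    # The hard part -- what union does to entailments, consistency,
    # and mappings between concepts -- is what a real calculus of
    # modules would have to pin down.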


-- Nicolas.

Received on Friday, 27 May 2005 00:24:09 UTC