Why do we need reasoning services?

Dear All,

At the f2f I was left with the worrying impression that some doubted
the need to (be able to) provide reasoning services for OWL. If this
is the case, I hope that closer consideration of the Semantic Web
context will help to change minds:

Why do we need reasoning services? A key use of ontologies on the
semantic web will be to provide a vocabulary of terms whose meaning is
well defined - at least w.r.t. other terms - and can be used to
facilitate communication/cooperation between "automated processes".

As an example, let us consider the furniture ontology from one of the
use cases. One could imagine that an application of this ontology
would be to enable process P1 entrusted with the task of buying a
"late Georgian table" to determine if the "British table made in 1790"
being offered by process P2 really does meet the specification. In
order to do this, P1 needs to reason about the meaning of the various
terms w.r.t. the furniture ontology (which is providing the "shared
understanding" that allows the two processes to interact).

For this to work, we need at least two things:

1. A well-defined semantics. There needs to be some absolute measure
as to whether the table P2 is offering really does meet P1's
requirements. Without such a measure, there is no way even to specify
(never mind build) software that can perform the reasoning task, and
no way to resolve disputes about meaning or the results of reasoning
processes.

2. We need to be able to build "effective" reasoners. Ideally, we
would like them always to give the correct answer. The closest we can
get to this in practice is to design the language so that it is
possible to build sound and complete reasoners that have good "typical
case" performance (it is only possible to build reasoners with good
worst case performance for languages that are VERY weak). At the very
least, reasoners should know when they were not able answer, i.e., be
able to differentiate between a no and a don't know (which incomplete
reasoners are not always able to do - when such a reasoner answers NO,
there may be no way to determine if this really means a provable NO or
simply a failure to prove YES).
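
As a sketch of what "knowing when you were not able to answer" might
look like at the interface level (the type and names below are purely
illustrative, not a proposal for an API):

    from enum import Enum

    class Answer(Enum):
        YES = "provably yes"
        NO = "provably no"
        UNKNOWN = "could not prove either way"

    # A sound and complete reasoner returns only YES or NO. A sound but
    # incomplete reasoner should return UNKNOWN when it fails to find a
    # proof, rather than silently conflating that failure with NO.
    def act_on(answer: Answer) -> str:
        if answer is Answer.UNKNOWN:
            return "defer or ask another reasoner; do not treat as NO"
        return answer.value

    print(act_on(Answer.UNKNOWN))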

A crucial point w.r.t. semantic web applications is that the main
users/consumers of ontologies will be automated processes, and will
not have a human being's ability to apply a sanity filter to the
information on which they base their decisions/actions. Moreover,
experience suggests that users are much less tolerant of errors made
by machines than they are of human error (possibly because machine
error, while less frequent, can often be more catastrophic).

It is also worth pointing out that even incompleteness can be a
serious problem in this context. E.g., if process P1 takes some
action, or passes on some result to a process P2 based on a belief
that a resource r is not of type C, then this action/information will
be "unsound" in the case where an incomplete reasoner failed to detect
the fact that r really is of type C. After combining/chaining the
results of several incomplete reasoners, we may end up with little
more than random noise.
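
A back-of-the-envelope sketch of this degradation (a toy model in
Python; the 90% "completeness" figure and the chain length are invented
purely for illustration):

    import random

    def incomplete_reasoner(truth: bool, completeness: float) -> bool:
        # Sound: never answers YES when the truth is NO.
        # Incomplete: sometimes fails to prove YES when the truth is YES,
        # and the caller (wrongly) reads that failure as NO.
        return truth and random.random() < completeness

    def chain(steps: int, completeness: float) -> bool:
        result = True          # the fact really does hold at every step
        for _ in range(steps):
            result = incomplete_reasoner(result, completeness)
        return result

    # Ten chained steps at 90% completeness: only about 0.9**10 = 35% of
    # runs still reach the right conclusion.
    trials = 10_000
    print(sum(chain(10, 0.9) for _ in range(trials)) / trials)

Even when each individual reasoner misses only 10% of provable answers,
only about a third of ten-step chains reach the right conclusion.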

Ian

Received on Saturday, 19 January 2002 06:53:21 UTC