Re: Why do we need reasoning services?

I would like to second Ian's email and also add to it.

1 - there is a mention of needing well defined semantics in order to
determine if something meets a specification.  I agree with this and would
take it further: even if one is only concerned with "coming close" to the
specifications of a request, it is important to know how close we got and
what portion(s) of the specification we did not meet.  Arguably this is
part of the dissatisfaction with search engines when they return responses
- you know the responses all matched and in some sense are close, but you
do not know how they differ and in what aspect the closeness occurred.
While an ordered list of answers provides some information, you
typically do not know how the relevance ranking was determined.  A well
defined semantics gives us the option to retrieve exact matches, and
also to relax certain constraints on a query and retrieve the resulting
matches, thus giving the user precise information about which answers
came "close" to being an exact match and in what way.
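As a hypothetical sketch of the kind of constraint relaxation a well
defined semantics enables (the furniture items, field names, and
relaxation strategy below are all illustrative assumptions, not any
real ontology or query API):

```python
from itertools import combinations

# All item data, field names, and the relaxation order below are
# illustrative assumptions, not part of any real ontology or API.
EXACT = {"origin": "Britain", "period": "late Georgian", "kind": "table"}

CATALOG = [
    {"id": "t1", "origin": "Britain", "period": "late Georgian", "kind": "table"},
    {"id": "t2", "origin": "Britain", "period": "Victorian", "kind": "table"},
    {"id": "t3", "origin": "France", "period": "late Georgian", "kind": "table"},
]

def matches(item, query):
    """True iff the item satisfies every constraint in the query."""
    return all(item.get(k) == v for k, v in query.items())

def relaxed_answers(catalog, query):
    """Return (item id, relaxed constraints) pairs, exact matches first.

    Because each constraint has a definite meaning, we can report
    exactly which ones had to be dropped for every "close" answer.
    """
    results, seen = [], set()
    keys = list(query)
    # Try dropping progressively larger subsets of constraints.
    for n in range(len(keys) + 1):
        for dropped in combinations(keys, n):
            sub = {k: v for k, v in query.items() if k not in dropped}
            for item in catalog:
                if item["id"] not in seen and matches(item, sub):
                    seen.add(item["id"])
                    results.append((item["id"], set(dropped)))
    return results

for item_id, dropped in relaxed_answers(CATALOG, EXACT):
    print(item_id, sorted(dropped) or "exact match")
# Prints:
# t1 exact match
# t3 ['origin']
# t2 ['period']
```

Unlike an opaque relevance ranking, each "close" answer here comes with
an exact account of which part of the specification it failed to meet.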

2 - another reason to support reasoning services is so that an
application can not only reach a conclusion but also defend or justify
that conclusion.
I have been spending time lately with analysts trying to understand how
we can make systems more useful to them, and their consistent number
one criterion is understanding the results of a system - thus
understanding when they can truly trust the system to come to the same
conclusion they would.
When an underlying system has well defined semantics, the foundation
for providing justifications for the implicit facts (not just the
explicit facts) is available.
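A minimal sketch of how such justifications for implicit facts can be
recorded, assuming a toy forward-chaining rule engine (the facts, rule
names, and representation are all illustrative, not a real reasoner):

```python
# Toy forward-chaining rule engine that records, for each derived
# (implicit) fact, the rule and premises that justify it.  The facts
# and rules are illustrative assumptions, not a real reasoner.

facts = {"Table(t1)", "MadeIn(t1, 1790)"}
justifications = {f: ("asserted", []) for f in facts}

# Rules as (name, premises, conclusion).
rules = [
    ("R1", ["Table(t1)", "MadeIn(t1, 1790)"], "GeorgianTable(t1)"),
    ("R2", ["GeorgianTable(t1)"], "Antique(t1)"),
]

# Fire rules to a fixpoint, remembering why each new fact holds.
changed = True
while changed:
    changed = False
    for name, premises, conclusion in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts.add(conclusion)
            justifications[conclusion] = (name, premises)
            changed = True

def explain(fact, depth=0):
    """Print the derivation of a fact back to the asserted premises."""
    rule, premises = justifications[fact]
    print("  " * depth + f"{fact}  [{rule}]")
    for p in premises:
        explain(p, depth + 1)

explain("Antique(t1)")
# Prints:
# Antique(t1)  [R2]
#   GeorgianTable(t1)  [R1]
#     Table(t1)  [asserted]
#     MadeIn(t1, 1790)  [asserted]
```

The derivation tree is exactly the kind of defense an analyst can
inspect to decide whether to trust the system's conclusion.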

deborah

Ian Horrocks wrote:

> Dear All,
>
> At the f2f I was left with the worrying impression that some doubted
> the need to (be able to) provide reasoning services for OWL. If this
> is the case, I hope that closer consideration of the Semantic Web
> context will help to change minds:
>
> Why do we need reasoning services? A key use of ontologies on the
> semantic web will be to provide a vocabulary of terms whose meaning is
> well defined - at least w.r.t. other terms - and can be used to
> facilitate communication/cooperation between "automated processes".
>
> As an example, let us consider the furniture ontology from one of the
> use cases. One could imagine that an application of this ontology
> would be to enable process P1 entrusted with the task of buying a
> "late Georgian table" to determine if the "British table made in 1790"
> being offered by process P2 really does meet the specification. In
> order to do this, P1 needs to reason about the meaning of the various
> terms w.r.t. the furniture ontology (which is providing the "shared
> understanding" that allows the two processes to interact).
>
> For this to work, we need at least two things:
>
> 1. A well defined semantics. There needs to be some absolute measure
> as to whether the table P2 is offering really does meet P1's
> requirements. Without such a measure, there is no way even to specify
> (never mind build) software that can perform the reasoning task, and
> no way to resolve disputes about meaning or the results of reasoning
> processes.
>
> 2. We need to be able to build "effective" reasoners. Ideally, we
> would like them always to give the correct answer. The closest we can
> get to this in practice is to design the language so that it is
> possible to build sound and complete reasoners that have good "typical
> case" performance (it is only possible to build reasoners with good
> worst case performance for languages that are VERY weak). At the very
> least, reasoners should know when they were not able to answer, i.e., be
> able to differentiate between a no and a don't know (which incomplete
> reasoners are not always able to do - when such a reasoner answers NO,
> there may be no way to determine if this really means a provable NO or
> simply a failure to prove YES).
>
> A crucial point w.r.t. semantic web applications is that the main
> users/consumers of ontologies will be automated processes, and will
> not have a human being's ability to apply a sanity filter to the
> information on which they base their decisions/actions. Moreover,
> experience suggests that users are much less tolerant of errors made
> by machines than they are of human error (possibly because machine
> error, while less frequent, can often be more catastrophic).
>
> It is also worth pointing out that even incompleteness can be a
> serious problem in this context. E.g., if process P1 takes some
> action, or passes on some result to a process P2 based on the fact
> that a resource r is not of type C, then this action/information will
> be "unsound" in the case where an incomplete reasoner failed to detect
> the fact that r really is of type C. After combining/chaining the
> results of several incomplete reasoners, we may end up with little
> more than random noise.
>
> Ian

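Ian's point about differentiating a provable NO from a mere failure to
prove YES can be sketched with a three-valued answer type (a toy
illustration over assumed data; no real reasoner API is implied):

```python
from enum import Enum

class Answer(Enum):
    YES = "yes"          # provably entailed
    NO = "no"            # provably not entailed
    UNKNOWN = "unknown"  # the reasoner could not decide

def is_instance_of(known_types, known_non_types, resource, concept):
    """Toy check over explicitly asserted (non-)memberships.

    An incomplete reasoner must be honest and return UNKNOWN rather
    than collapsing "failed to prove YES" into NO.
    """
    if (resource, concept) in known_types:
        return Answer.YES
    if (resource, concept) in known_non_types:
        return Answer.NO
    return Answer.UNKNOWN

types = {("r1", "Table")}
non_types = {("r2", "Table")}
print(is_instance_of(types, non_types, "r1", "Table"))  # Answer.YES
print(is_instance_of(types, non_types, "r2", "Table"))  # Answer.NO
print(is_instance_of(types, non_types, "r3", "Table"))  # Answer.UNKNOWN
```

A process chaining such answers can then propagate UNKNOWN instead of
silently turning it into a NO and producing the "random noise" Ian
describes.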
--
 Deborah L. McGuinness
 Knowledge Systems Laboratory
 Gates Computer Science Building, 2A Room 241
 Stanford University, Stanford, CA 94305-9020
 email: dlm@ksl.stanford.edu
 URL: http://ksl.stanford.edu/people/dlm
 (voice) 650 723 9770    (stanford fax) 650 725 5850   (computer fax) 801 705 0941

Received on Sunday, 20 January 2002 12:39:38 UTC