Re: AW: (SeWeb) KAON - KArlsruhe ONtology and Semantic Web Infrastructure

Date: Wed, 9 Oct 2002 12:46:43 -0400 (EDT)
From: sticklen@islnotes.cse.msu.edu
To: seth@robustai.net
Cc: kaw@swi.psy.uva.nl, Alexander Maedche <Maedche@fzi.de>,
         seweb-list@cs.vu.nl, "John F. Sowa" <sowa@bestweb.net>,
         www-rdf-interest@w3.org, www-rdf-logic@w3.org, www-webont-wg@w3.org,
         flair@cis.ohio-state.edu

Seth et al,

I have been following this interchange with a lot of interest. I am not
part of your community directly, but there are strong resonances here
with what happened back in the horse-and-buggy days of the 80s, when
more semantics-based approaches to knowledge-based systems first
appeared. Those approaches evolved after the initial successes of
rule-based systems. One of the pushes for that evolution was that
rule-based approaches on very large knowledge bases forced
"programmers" to build implicit control structure into the rule bases.
Early task-specific approaches like KADS or Generic Tasks (GT) put
constraints on the job of capturing expertise and embedding it in a
computer system. The goal was to raise the level from working as a
"programmer" free to take any path to "get the job done" ... to working
as a system implementer within the framework provided. The key point
was that the framework came with a number of conceptual (you might call
them semantic) constraints. Those constraints defined the general
nature of the task at hand, and it was up to the system implementer to
match the task to the framework for implementation.
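To make that concrete, here is a minimal sketch (mine, not from KADS or
GT; all names hypothetical) of what such a task-specific framework
might look like for one classic generic task, establish-refine
classification. The framework owns the control structure; the system
implementer only supplies domain knowledge that fits its slots:

    # Hypothetical generic-task skeleton: the framework fixes the
    # establish-refine control structure; the implementer supplies only
    # domain knowledge (concept names, establish tests, refinements).

    class Classifier:
        def __init__(self, name, establish, children=()):
            self.name = name            # concept in the hierarchy
            self.establish = establish  # domain test: data -> bool
            self.children = children    # refinements of this concept

        def classify(self, data):
            """Framework-owned control: establish, then refine."""
            if not self.establish(data):
                return []
            matches = [self.name]
            for child in self.children:
                matches.extend(child.classify(data))
            return matches

    # The implementer's whole job is to fill the slots, e.g. for a
    # toy diagnosis task:
    pump_fault = Classifier("pump-fault", lambda d: d["pressure"] < 30)
    fuel_fault = Classifier("fuel-system-fault",
                            lambda d: d["flow"] < 5,
                            children=[pump_fault])

    print(fuel_fault.classify({"flow": 2, "pressure": 10}))
    # -> ['fuel-system-fault', 'pump-fault']

The constraint is visible in the sketch: the implementer never writes
control flow, only the knowledge the control flow consumes.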

In the end the approach seems to have fizzled. The major reason (my
view) is that the frameworks, and the semantic constraints they tied to
problem-solving types, were too rigid. There was a realization of this,
and a move to provide flexibility by having multiple problem solvers
(of possibly different types) work in concert - but it came too late.
The people who had to build real systems in the world got fed up with
the constraints, and they - and the field - just moved on. My view is
that this was throwing out the baby with the bath water.

So why is this resonant with what you wrote? Because in your last sentence 
you wrote...

          ...and programmers can have a structure to share
          that is independent of all of those quibbles.

The problem (again, just my view) is that the tack you lay out reduces
building smart systems to a programming activity in what is arguably a
general-purpose language - in concept, a Turing machine. As a very
clever programmer you can of course do anything computable in such a
framework. And the work, in addition to producing a real artifact that
works in the world, might shed light on how to improve the
general-purpose, Turing-capable framework. But what else is learned? In
particular, what else is learned about how problem solving by real
people works? What I would hope is that, over time, enough becomes
known about the structures that support real-world problem solving in
its own terms that, by using those structures as implementation
vehicles, we can capture increasingly complex problem solving in the
vocabulary that is natural to the problem solving being modeled.

The difference is like what our brethren in computational modeling have
at their disposal. A finite element model may be indispensable for
capturing a lot about a solid artifact, but it sheds no light on the
underlying processes, because its building blocks are the
stock-in-trade of finite element models, not the physical building
blocks of an auto fender, or a wing section, or whatever is being
modeled. A simulation model of a high-performance aircraft, by
contrast, has building blocks that include the fuel system, the nav
system, and so on. In the world of computational modeling of problem
solving, unconstrained programming in a Turing-complete environment is
analogous to modeling a solid in finite elements. I think that where
capturing expertise eventually has to go is to an environment closer to
the systems-simulation environments.
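A toy illustration of that contrast (all names and numbers are mine,
purely hypothetical): a simulation whose building blocks are the
domain's own components, so the model speaks the vocabulary of the
artifact rather than the vocabulary of a generic numerical method:

    # Hypothetical component-level simulation: the building blocks are
    # the fuel system and the nav system, not generic mesh elements.

    class FuelSystem:
        def __init__(self, fuel_kg):
            self.fuel_kg = fuel_kg

        def step(self, burn_rate_kg_s, dt_s):
            self.fuel_kg = max(0.0, self.fuel_kg - burn_rate_kg_s * dt_s)

    class NavSystem:
        def __init__(self):
            self.position_m = 0.0

        def step(self, speed_m_s, dt_s):
            self.position_m += speed_m_s * dt_s

    class Aircraft:
        """The model's vocabulary is the artifact's: fuel, nav, ..."""
        def __init__(self):
            self.fuel = FuelSystem(fuel_kg=5000.0)
            self.nav = NavSystem()

        def step(self, dt_s):
            self.fuel.step(burn_rate_kg_s=1.2, dt_s=dt_s)
            self.nav.step(speed_m_s=250.0, dt_s=dt_s)

    ac = Aircraft()
    for _ in range(60):        # one simulated minute in 1 s steps
        ac.step(dt_s=1.0)
    print(ac.fuel.fuel_kg, ac.nav.position_m)   # 4928.0 15000.0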

This may all seem like an argument from a decade ago, well past its
time. The thing is, if the community abandons its tasks out of
frustration or difficulty, then the field does not really learn or
progress in capability...

Just my few cents' worth on all this....
   Jon Sticklen
    Mich State Uni






Seth Russell <seth@robustai.net>

10/08/2002 12:16 PM
Please respond to seth

         To:        Alexander Maedche <Maedche@fzi.de>
         cc:        "John F. Sowa" <sowa@bestweb.net>, 
www-rdf-logic@w3.org, www-rdf-interest@w3.org, www-webont-wg@w3.org, 
seweb-list@cs.vu.nl, kaw@swi.psy.uva.nl
         Subject:        Re: AW: (SeWeb) KAON - KArlsruhe ONtology and 
Semantic Web Infrastructure


Alexander Maedche wrote:

 >With respect to ontology editors, we
 >were confronted with the problem that
 >each ontology modeling tool implements
 >its own "specific data model", typically
 >focusing on a specific representation
 >paradigm. As a result, it is impossible
 >to simply take a specific tool and use
 >it as a frontend for some specific
 >backend software; the only thing that
 >works is to provide import/export
 >facilities. In our case we provide an
 >import tool for Protege-based ontologies,
 >and for RDFS ontologies in general.
 >
That is certainly true, however lamentable.  Lamentable because our
tools do not seem able to share higher-level programming resources,
simply because of the plethora of data models designers are allowed to
choose from.  Lisp was great: it gave us a common data model, and we
were able to share many programming resources.  But Lisp did not seem
to put enough restraints on the many ways we can represent knowledge;
we still have too many choices if we want our tools to share methods.
I think there is a structure that does have enough restraints for our
purposes: labeled directed graphs.  I've been playing around with these
for some time [1], and they seem to work.  Note that attempting to
integrate the data model with a semantic model theory will just give us
more choices, more bickering between designers, and get the logicians
involved too.  So ... why not just not do that?  There can be very
little bickering about what a labeled directed graph is.  The semantic
model theories naturally factor into the vocabulary of the arc labels;
logicians can continue to bicker about which logics apply to which
classes of arc labels, and programmers can have a structure to share
that is independent of all of those quibbles.

[1] http://robustai.net/mentography/Mentography.html
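To make the proposal concrete, here is a minimal sketch of a labeled
directed graph as a bare set of (source, label, target) arcs -
essentially the triple model underneath RDF. The vocabulary and the
query helper are illustrative assumptions of mine, not taken from the
cited page:

    # A labeled directed graph is just a set of (source, label, target)
    # arcs.  The data model stays this simple; all of the semantics
    # lives in the vocabulary of the arc labels.

    g = {
        ("Socrates", "rdf:type",        "Human"),
        ("Human",    "rdfs:subClassOf", "Mortal"),
    }

    def arcs(g, source=None, label=None, target=None):
        """Match arcs against a pattern; None acts as a wildcard."""
        return {(s, l, t) for (s, l, t) in g
                if source in (None, s)
                and label in (None, l)
                and target in (None, t)}

    print(arcs(g, label="rdf:type"))
    # -> {('Socrates', 'rdf:type', 'Human')}

Designers can bicker over what the labels mean, but the structure the
programmers share is only the set of arcs.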

Seth Russell


