- From: Bill de hÓra <dehora@eircom.net>
- Date: Fri, 11 Apr 2003 10:21:10 +0100
- To: Patrick.Stickler@nokia.com
- CC: pfps@research.bell-labs.com, www-rdf-interest@w3.org
Patrick.Stickler@nokia.com wrote:

> If researchers want to grapple with the problem of ambiguity of
> denotation in knowledge bases, fine, but ambiguity should not be
> considered an acceptable characteristic of the SW architecture and
> should be guarded against, flagged, and fixed always.

Certainly, but it is unwise to expect it can be eradicated.

> Ambiguity of denotation on the SW will *always* be detrimental.

But to think it won't be there, or can simply be architected out of
existence, is simpleminded positivism.

> Ambiguity of URI denotation on the SW will happen, but will always
> be bad.

My objection to your opinion is this: a semantic web system that can't
or won't deal with ambiguity, and prefers a prescriptive rather than a
descriptive approach, is consigned to being a toy system. IMO, history
bears this objection out. Insisting that the entire domain of the
semantic web be a toy isn't going to have a happy ending. I hope at
least we can learn *something* from 50 years of AI research. I feel an
'eight fallacies of the semantic web' is needed.

> But there are different ways of dealing with it. One can presume
> unambiguous denotation and when one gets undesirable/unreliable
> results from a SW reasoner, one can identify the source of the
> problem and either correct it or exclude that source from one's
> reasoning due to being unreliable/untrusted.

It's good that we're recognizing the need to keep people in the
reasoning loop to deal with the ambiguity. So I wonder whether we have
any idea if housekeeping these knowledge bases is a viable task.
Should we expect a rehash of the expert systems debacle? Or perhaps
Forbus and de Kleer will become a deserved best seller if they port
their code to Perl ;)

> The SW agent itself need not have any other presumption but that
> URIs have unambiguous denotation.

That makes total sense. The problem with the current architecture is
that it has no layer in its cake for distribution of denotations.
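As an aside, the "identify the source and exclude it" strategy quoted
above can be sketched in a few lines. This is a minimal sketch of my
own, not anything from the thread: it assumes triples are tracked per
source, and flags any source asserting a second, conflicting value for
a property we treat as single-valued. The property name and source
URIs are invented.

```python
# Sketch: flag sources that assert conflicting values for a
# single-valued ("functional") property, so downstream reasoning can
# exclude them as unreliable/untrusted.

FUNCTIONAL = {"ex:birthDate"}  # assumed: at most one value per subject

def conflicting_sources(triples_by_source, functional=FUNCTIONAL):
    """Return the set of sources involved in a functional-property conflict."""
    seen = {}        # (subject, predicate) -> (object, source that asserted it)
    suspects = set()
    for source, triples in triples_by_source.items():
        for s, p, o in triples:
            if p not in functional:
                continue
            if (s, p) in seen and seen[(s, p)][0] != o:
                # Two sources disagree; with no further evidence we
                # cannot tell which is wrong, so flag both.
                suspects.add(source)
                suspects.add(seen[(s, p)][1])
            else:
                seen.setdefault((s, p), (o, source))
    return suspects

data = {
    "http://a.example/people": [("ex:Alice", "ex:birthDate", "1970-01-01")],
    "http://b.example/people": [("ex:Alice", "ex:birthDate", "1980-05-05")],
}
bad = conflicting_sources(data)
trusted = {src: t for src, t in data.items() if src not in bad}
```

Note the human stays in the loop: the function only flags sources; the
decision to correct or exclude them is left to whoever runs it.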
There's RDF on the web, magic happens, and out comes a graph. This
unfortunately involves URIQA, which without architectural guidance is
a patch. I think that speaks quite badly of the architecture, rather
than of your program, which I'm really looking forward to.

> I guess time will tell which position is correct.

We don't need to wait to find that out - being correct is irrelevant.
Utility is king in a complex system. It's evident that inference
engines are inadequate for complex environments - they require filters
to make sense of the world. Layered architectures derived from
robotics, insects and search engines are much more useful than
inference engines and theorem provers, or handwaving about
architecture. As a matter of fact, the scruffy approach is already
winning the argument, without resorting to five-year plans - any
ability to do logical inference with symbols is simply a useful
optimization to a system predicated on statistical processing. And the
chances are that most of the things people will want to do with the
semantic web will tolerate a level of ambiguity that makes building
the damn thing cost-effective. Anyone who needs more precision will
have to pay for it; that's how engineering works.

> Then I think that you and I will not be using the same semantic web.

Yes, you'll be using a subset.

Bill de hÓra
--
Sorry, I don't know the word "sorry". - SHRDLU
Received on Friday, 11 April 2003 05:22:38 UTC