- From: Drew McDermott <drew.mcdermott@yale.edu>
- Date: Tue, 27 May 2003 15:34:26 -0400 (EDT)
- To: www-ws@w3.org
[Alan Wexelblat]
Date: Fri, 23 May 2003 19:11:48 -0400 (EDT)
...
I'm not a Cyc expert, but my understanding is that Cyc replaces the notion of universal and existential relationships/truth with the notion of a scope (called a context) within which things are asserted and reasoned about. By using contexts you explicitly bring in notions such as "under normal circumstances."
...
I don't see how to model this kind of thing with a DAML-class language.

The problem is not with the language, but with the reasoning system that the language seems to imply (description logic, or perhaps some other traditional formal deduction system). Deciding which Cyc context to be in requires some kind of _nonmonotonic_ reasoning system, in which a conclusion C may be drawn from premises P, but not from P plus new information.

An example of a nonmonotonic inference arises when C = "I now own 1000 widgets" and P = {"I paid Like-New Widgets $1,000,000", "Like-New Widgets contracted to deliver 1000 widgets to me if I paid $1,000,000"}. The new information might be "Like-New Widgets is under a restraining order because of fraudulent trade practices."

The Semantic Web community tends to deprecate nonmonotonic reasoning. You often hear of how the Semantic Web will provide formal proofs of assertions such as "Like-New Widgets is contractually obligated to deliver 1000 widgets to me," as indeed it could in the example. Unfortunately, many conclusions, such as "They will actually deliver the widgets," seem just as necessary for many purposes, and are manifestly not susceptible of formal proof.

I think nonmonotonicity is unavoidable. It need not take the form of one of the nonmonotonic logical frameworks. Probabilistic reasoning (such as the Bayes-net example I hand-waved about) is inherently nonmonotonic, in the sense that learning new information can cause the subjective probability of an assertion to go up or down.
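The nonmonotonicity of probabilistic reasoning can be made concrete with a few lines of arithmetic on the widget example. A minimal sketch follows; all of the numbers are illustrative assumptions (nothing in the original exchange supplies probabilities). The point is only the shape of the inference: evidence of payment supports the conclusion "they will deliver," and the later discovery of a restraining order drives that subjective probability back down.

```python
# Nonmonotonic behavior of a probabilistic model (illustrative numbers only).
# Conclusion C = "the vendor delivers the 1000 widgets", given that we paid.

# Assumed prior that the vendor is under a restraining order.
p_order = 0.01

# Assumed delivery probabilities, conditioned on the restraining order.
p_deliver_given_no_order = 0.95
p_deliver_given_order = 0.05

# Before the new information arrives: marginalize over the unknown order.
p_deliver = (p_deliver_given_no_order * (1 - p_order)
             + p_deliver_given_order * p_order)

# After learning "Like-New Widgets is under a restraining order":
# the uncertainty collapses, and the probability of delivery drops.
p_deliver_after = p_deliver_given_order

print(f"P(delivery | paid)              = {p_deliver:.3f}")
print(f"P(delivery | paid, order)       = {p_deliver_after:.3f}")
```

Notice that no assertion is ever retracted as false; the new premise simply moves the degree of belief, which is exactly the sense in which the reasoning is nonmonotonic.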
[me]
> The issue is not whether we can build a perfect system, only whether we
> can build a cost-effective system.

[Alan]
True. But to estimate the cost for such a thing would require having nontrivial additional technologies and socio-legal structures in place. If the Semantic Web depends on the existence of such things, where are they going to come from? If it does not, who will use it without them?

The hope is that there is a substantial "SW problem" that is not AI-complete, i.e., that there are many useful inferences that are doable with existing technologies, and that when a false conclusion is drawn we can go in and fix it by hand before too much damage is done. I think this is a reasonable hope. Many practitioners summarize it by saying that "The SW is not AI." A better slogan would be "The SW is sober, unpretentious AI; and by the way, there are a lot of practical reasoning techniques discovered by AI researchers that work quite well in circumscribed domains." Unfortunately, this doesn't fit on a lapel button.

A pessimist might agree that this is technically feasible, but still express fear of the liability lawsuits that will ensue if a spectacular blunder is committed by SW software. Perhaps this is what you meant by "socio-legal structures." Be optimistic; if the W3C pays off enough Republican legislators, we can tort-reform this problem away.

                                             -- Drew McDermott
Received on Tuesday, 27 May 2003 15:34:29 UTC