
random thoughts on web logic

From: pat hayes <phayes@ai.uwf.edu>
Date: Fri, 22 Sep 2000 20:26:39 -0500
Message-Id: <v04210114b5f1abace935@[205.160.76.86]>
To: www-rdf-logic@w3.org
(The following message arose from a discussion at the end of the DAML 
kick-off meeting between Tim Berners-Lee, Dan Connolly, Drew 
McDermott and myself. Tim and Dan were arguing that 'web logic' must 
be monotonic, while Drew was arguing that nonmonotonic reasoning was 
most suitable.
I am posting this to rdf-logic at Jim Hendler's suggestion as a way 
to prompt some discussions/get some issues on the table/etc. 
Responses/comments/disagreements welcome.)

-Pat Hayes
-------------------------------------
Tim and Dan, greetings.

Re. our discussion at the end of the DAML meeting. I now think that 
both you AND Drew were right about nonmonotonicity. For the security 
in B2B transactions which W3C wants, you need a nonmonotonic LOGIC 
whose proofs, once checked, stay checked. But Drew was also right in 
that there is no way that you or anyone else can manage without 
making nonmonotonic INFERENCES, and that indeed people will make 
those inferences from your monotonic conclusions whether you like it 
or not (and if tested in court they will probably stand up under case 
law.) Fortunately there is a way around this apparent impasse, since 
one can represent the nonmonotonicity as what might be called 
fungible assumptions. For example, consider the inference from 'P 
with 99% probability' to 'P'. This is a nonmonotonic inference, since 
it isn't deductively valid, and if I were to add the extra assumption 
'not P' I would have a consistent extension of my original 
assumptions which refuted the conclusion. If I expected the logic to 
make that inference for me, it would have to be a nonmonotonic logic. 
However we can add an extra assumption: (if (P with 99% probability) 
then P) - call this Q - and then the conclusion of P follows 
monotonically, since now adding the negation causes an inconsistency 
in our assumption set. Claims like Q are like inference fuses which 
are put there in order to break when an inconsistency arises. Another 
example, well-known in AI, is due to John McCarthy, where one puts a 
special precondition in an action description of the form '(not 
(Unusual ?s))', where ?s is the state to which the action is applied. 
Then if the predicted outcome doesn't in fact happen, one can 
(monotonically) conclude that the state must have been unusual. In 
order to do useful planning one has then to make a blanket assumption 
that states aren't unusual, and this is usually thought of as making 
the entire system into a nonmonotonic logic, but it could just as 
easily be made explicit and classified as an inference fuse. All the 
actual *reasoning* involved is monotonic (until the fuse burns).
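To make the fuse idea concrete, here is a toy sketch in Python (a made-up propositional mini-KB, not any proposed web format): the nonmonotonic jump from 'P with 99% probability' to P is replaced by the explicit bridging assumption Q, and adding the defeater no longer silently retracts P but instead makes the assumption set inconsistent.

```python
# Hypothetical toy KB. "prob_P_99" stands for 'P with 99% probability',
# "Q" for the fuse assumption (if (P with 99% probability) then P),
# "not_P" for the negation of P.

def entails_P(assumptions):
    """Forward-chain over the toy KB; return (P derived?, consistent?)."""
    facts = set(assumptions)
    # The fuse Q licenses the step from prob_P_99 to P monotonically.
    if "prob_P_99" in facts and "Q" in facts:
        facts.add("P")
    consistent = not ("P" in facts and "not_P" in facts)
    return "P" in facts, consistent

# With the fuse in place, P follows monotonically:
assert entails_P({"prob_P_99", "Q"}) == (True, True)

# Adding the defeater does not retract P; it burns the fuse,
# i.e. the assumption set becomes inconsistent:
assert entails_P({"prob_P_99", "Q", "not_P"}) == (True, False)
```

The point of the sketch is just that the *reasoner* stays classical; all the nonmonotonicity lives in which fuse assumptions you are prepared to add or withdraw.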

Drew's point can now be phrased by saying that people will invoke 
inference fuses all the time in order to keep their conclusions clean 
and not cluttered up with hundreds of qualifications, but they will 
be willing to agree, when things go wrong, that they were making 
slightly rash assumptions and be willing to backpedal. People are 
like that. Your point is that the logic itself must be explicit about 
what assumptions it is making, and that a conclusion of A from B, 
once checked, must stay sound in the future. OK, so your logic needs 
to have a way to indicate which of its assumptions are the fuses. 
This might be phrasable as a matter of 'trust' and commitment. If I 
send a proof to you with some of its assumptions marked as 'I vouch 
for this', and you act on the conclusion and get screwed, and it 
turns out that it was a non-vouched-for assumption that went wrong, 
then caveat emptor logicum; but if it was one that I had warranted, 
then it's my fault. But this requires that you separate *assertion* 
(which might be glossed as 'I believe this in good faith' or 'I'm 
telling you that I take this to be true') from *commitment*, which 
is more like : I guarantee that this is so and take responsibility 
for it. Only a rash person will warrant his inference fuses.

BTW, I've thought of a few more complications about 'taking 
responsibility'. For example consider three wise agents A, B and C, 
and suppose they all talk the same language, and A asserts that (P 
foo) and B asserts that (not (P baz)) and C asserts that A#foo = 
B#baz. One of them must be wrong. If A and B were left to themselves 
they could infer that (not (A#foo = B#baz)). In fact any two of this 
trio, if left to themselves, could conclude that the third one was 
wrong. (One can get a similar effect using disjunction and negation.) 
What is one to make of this? Seems to me that in a case like this, A 
and B have a certain claim to priority, since they make no reference 
to C, while C is making a claim about names that 'belong' to A and B. 
(One can hardly blame A and B if this other crazy C guy insists on 
getting their names confused, when they are capable of proving him 
wrong on the basis of their own assumptions, right?) But this line of 
reasoning assumes that an agent 'owns' the names it uses, in some 
sense which I'd like to try to get clear.
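The symmetry of the three-way clash is easy to check mechanically. A toy sketch (my own encoding, nothing standard): represent the three assertions as tuples and test joint satisfiability; every pair is consistent, but the trio is not, so each pair can pin the contradiction on the third.

```python
from itertools import combinations

# A: (P foo), B: (not (P baz)), C: A#foo = B#baz
A = ("P", "foo", True)
B = ("P", "baz", False)
C = ("eq", "foo", "baz")

def consistent(claims):
    """Trivial satisfiability check for this one scenario: under the
    identity foo = baz, P(foo) and not-P(baz) contradict each other."""
    claims = set(claims)
    if C in claims and A in claims and B in claims:
        return False
    return True

assert not consistent([A, B, C])
# Any two of the three agents, left to themselves, are consistent:
assert all(consistent(pair) for pair in combinations([A, B, C], 2))
```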

Here's another thought about DAML 0.5. The URI chains can have loops, 
e.g. if A uses the name B#foo to define baz and B uses the name A#baz 
to define foo. Does this bother you? I think it might actually be an 
opportunity to define a useful notion of 'ownership' of a set of 
names (by a linked group of mutually referring agents). Think of it 
as a kind of referential handshake: A and B agree that baz and foo 
are mutually connected in meaning. The only snag I can think of is 
this situation arising accidentally, without A and B being 'aware' of 
it, since the loops can get arbitrarily long and therefore 
arbitrarily difficult to detect. However it would be fairly easy to 
check that a particular collection of names was loop-free. This is 
awfully reminiscent of the problems of garbage collection, and maybe 
one would need a kind of global web-crawling process to be searching 
for referential 'grounding'. A grounded proof would be one in which 
every name used was warranted to have a secure grounding, where a 
grounding of a name is a definitional chain which ends in a warranted 
source. Websites could exist whose sole function is to be such a 
source, i.e. they are securely maintained by agencies responsible for 
the meaning of certain public names. They wouldn't need to actually 
maintain the definitions, only provide the warranted reference to the 
places where the (pieces of the) definitions are to be found. If 
those in turn refer back to the secure namesource site, this mutual 
'handshake' reference provides both the warrant and the meaning, and 
keeps both of them secure, and provides a way to refer any queries to 
the source of the warrant.
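The grounding check itself is just graph search, much as the garbage-collection analogy suggests. A sketch under invented names (the definition graph and the source set below are illustrative, not any real namespace): a name is grounded if every definitional chain from it reaches a warranted source, and a loop that never touches a source is ungrounded.

```python
def grounded(name, defs, sources, seen=None):
    """Depth-first search over the definitional-reference graph.
    defs maps a name to the names its definition uses; sources is the
    set of warranted namesource names."""
    if name in sources:
        return True
    seen = seen or set()
    if name in seen:          # closed a loop without reaching a source
        return False
    deps = defs.get(name)
    if not deps:              # dangling name: no definition, no warrant
        return False
    return all(grounded(d, defs, sources, seen | {name}) for d in deps)

# Illustrative graph: A#baz and B#foo form the mutual 'handshake' loop
# from the text; A#qux chains to a warranted source NS#root.
defs = {"A#baz": ["B#foo"], "B#foo": ["A#baz"], "A#qux": ["NS#root"]}
sources = {"NS#root"}
assert grounded("A#qux", defs, sources)
assert not grounded("A#baz", defs, sources)
```

Note the loop case: the pure A#baz/B#foo handshake is detected (via the `seen` set) rather than looping forever, and counts as ungrounded unless some link in the chain reaches a warranted source, which matches the definition of grounding above.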

Pat Hayes
---------------------------------------------------------------------
IHMC					(850)434 8903   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola,  FL 32501			(850)202 4440   fax
phayes@ai.uwf.edu 
http://www.coginst.uwf.edu/~phayes
Received on Friday, 22 September 2000 21:24:13 UTC
