- From: pat hayes <phayes@ai.uwf.edu>
- Date: Fri, 13 Apr 2001 11:26:12 -0500
- To: "Seth Russell" <seth@robustai.net>
- Cc: www-rdf-logic@w3.org
>From: "pat hayes" <phayes@ai.uwf.edu>
>
> > You may have to give more axioms, yes, of course. But that just means
> > saying a rather larger P, or asserting a whole lot of P's. That
> > doesn't change anything.
>
>Ok.
>
> >The more you say, the more you restrain the
> > world; that is the very nature of asserting propositions. (That is
> > surely WHY we ever assert anything: to express that the world is
> > constrained in some way.)
>
>Well it is quite a different thing to {restrain the world}, than it is to
>{express that the world is constrained}. The difference is between talk and
>action. You seem to have used both terms in your paragraph above somewhat
>interchangeably, and I don't know to which you refer.
OK, fair enough. I was being careless here as I didn't realize that
this was a significant point for you. I meant to be talking about
{express that the world is constrained}. Of course, simply asserting
a sentence only makes a CLAIM about the world; it doesn't actually DO
anything in the sense of acting in that world. But I took this for
granted, since we are here talking about languages for making
assertions and making claims (RDF and DAML+OIL and N3).
>Thing is that when
>you do logic with pencil and paper in relationship to a professor, you are
>usually doing only the latter; yet if you do it with a keyboard and a
>powerful computer you can be doing both or either. One hopes, of course,
>that one is not just flapping one's lips here ... so methinks we need to at
>least provide for the former case ... especially if we're talking to the US
>Defense Department, who might just attach the firing of an ICBM to somebody's
>assertions.
All the above seems to be rhetoric, and I'm not quite sure what you
are actually saying. Of course there are systems (both mechanical and
mammalian, by the way) which act on the basis of (the content of)
what you say to them. When dealing with such things, you need to be
careful about what you say to them, since the content of what you say
might cause them to do things that you might come to regret. All
true. I thought we were talking about this content, and how to
specify it as accurately as possible. All this talk of ICBMs seems to
underscore that necessity rather than argue against it, right?
> > However, you don't need to specify any operations. I'm not sure quite
> > what you mean by that word in this context.
>
>You can't get the computer to do anything without activating its operating
>instructions. If you never activate those instructions, you are just doing
>paper and pencil logic.
Yes, but that is just a platitude. The instructions might be drawing
inferences or setting off a missile. What matters is not that
instructions get activated, but what the operational effects are on
the world.
For the record, I do not think of 'logic' as anything particularly
restricted to pencil and paper. Machines have been using logics for
many years now.
> > Making an assertion does restrain the world, in that it makes some
> > claims about the way the world is. But that doesn't 'close' the world
> > in any useful sense. In fact it is notoriously difficult to close the
> > world using logic: it is impossible to restrict the world to finite
> > things, or to the natural numbers, etc.
>
>Let me see if I can restate that paragraph from my point of view.
>
>Making an assertion to a model does not restrain the world; rather it just
>restrains that closed world model of the world. Only when our model can
>behave in the world, can such assertions indirectly restrain the world. So
>the computer's behavior *is* the interpretation. Otherwise it's not only
>notoriously difficult, but is actually impossible, to restrict the world to
>anything whatsoever just by using logic. Or to say that more arrogantly:
>Logic would be irrelevant.
That is not a restatement of my paragraph.
> > Truth arises from a relationship between expressions and
> > interpretations, and has nothing to do with processes.
>
>It's the interpretation part that always seems, to me, to get lost inside
>the professor's head. Precisely what does it mean, again?
I'm getting bored explaining basic logical terminology to people. It
has nothing to do with heads. The point is: what do you claim about
the world when you assert a sentence? Not that asserting something
MAKES it true; but suppose your claim were correct and your sentence
were true, what difference would that make to how the world is? If
someone believed you, what could they figure out about the world from
this new belief that you had provided for them? Model theory (logical
semantics) is just a mathematical technique for giving precise
answers to questions like that. Until one has some kind of answer,
making assertions really is pointless, since there is no way to
relate the assertion (which is just a piece of text, or maybe
hypertext) to the world in which actions need to get done.
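To make that concrete, here is a minimal sketch in Python (the toy
domain, names, and predicates are invented purely for illustration,
not taken from the discussion above): an interpretation supplies a
domain of individuals, a denotation for each name, and an extension
for each predicate, and a ground atomic sentence is true in it exactly
when the denoted individuals stand in the denoted relation. Nothing in
it acts on anything; it only pins down what such a sentence claims.

# A toy interpretation: a domain of individuals, a denotation for each
# name, and an extension (a set of tuples) for each predicate.
interpretation = {
    "domain": {"fido", "felix"},
    "names": {"Fido": "fido", "Felix": "felix"},
    "predicates": {"Dog": {("fido",)}, "Cat": {("felix",)}},
}

def is_true(sentence, interp):
    """True iff a ground atom like ("Dog", "Fido") holds in interp."""
    predicate, *args = sentence
    denotations = tuple(interp["names"][a] for a in args)
    return denotations in interp["predicates"].get(predicate, set())

# Asserting ("Dog", "Fido") claims the world is one in which this comes
# out true; the assertion itself does not make it true, and nothing
# here "acts" on the world.
print(is_true(("Dog", "Fido"), interpretation))   # True
print(is_true(("Cat", "Fido"), interpretation))   # False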
> But in
>today's world we can (and do) substitute the computer for the professor's
>head; and can arrive at more tangible results which actually do restrain the
>world in a definite way.
This particular professor has been putting logic into computers, and
getting tangible results, probably since before you were born.
(Sorry, I couldn't resist it.)
> If I say P to the computer, the computer does Q;
> the computer's action is a real restraint on the world.
>
>Isn't that a more tangible anchor of this notion of interpretation?
Yes and no.
No, because it is too simple as stated, in that it mixes up
assertions (P) with actions (Q). (And not all computer actions are
real restraints.)
But yes, in a deeper sense, since we probably do need to relate
logical meaning to actions in some larger scheme of things. I'm not
saying that relating sentences to actions is easy or not worth doing.
In fact I think it is an exciting research area that needs some new
ideas badly, and would welcome any ideas you or anyone else might
have. And certainly it would be useful to find a way of relating the
content of a proposition to the effects of actions which in some
sense use, or are based on, or are caused by, an event of
understanding that proposition. But (1) we do need to keep our
terminology straight, and (2) I don't think this is going to be easy,
and the current 'semantic web' effort is just one experiment in an
ongoing research effort here. For a start, what counts as an 'action'
in this discussion? Is drawing a private conclusion an action? Is
downloading a file an action? Is sending a single byte, or a single
packet, an action? And so on.
>Where have I gone wrong?
I am not qualified to answer that question.
Pat Hayes
---------------------------------------------------------------------
IHMC (850)434 8903 home
40 South Alcaniz St. (850)202 4416 office
Pensacola, FL 32501 (850)202 4440 fax
phayes@ai.uwf.edu
http://www.coginst.uwf.edu/~phayes