Re: Bound variables

   Drew, I am puzzled as to what you are talking about.

No doubt you're not the only one.

   Can you expand a bit on what you mean by a "scenario" here? (Let's
   avoid the word "situation", which already has at least three distinct
   technical AI/logical meanings, none of which I think you mean.
   Correct me if I'm wrong.)

I'll go with "scenario" too.  

   >Perhaps this is all wrong, but my main purpose in introducing it is to
   >explain why we need to describe scenarios.

   It would help to know what they were before trying to describe them.

A scenario is a type of state of affairs, as opposed to a particular
state of affairs.  I don't mean to suggest that it's the state at a
particular time; perhaps "type of true proposition" is more accurate.
For example, a scenario might be "There's a war, and afterwards an
economic depression"; or, "A disaster happened, and several people had
a premonition that such a thing was going to happen."  (A scenario
doesn't even have to refer to a possible state of affairs, although
logically impossible states will give us the usual problems.)

   >...
   >The obvious way to represent scenarios is with lambda-expressions,
   >which bind variables that then take part in descriptions.

   In the usual meaning of lambda-expressions, they denote functions 
   (from whatever their variables are interpreted to denote, to the 
   value of the body when the variables are so interpreted.) Should we 
   infer that a scenario is a kind of function? (From what to what?)

Because a scenario is a *type* of state of affairs, we require a
*description* of a set of individuals that take part in it.  A
description is a function from n-tuples to propositions.  Hence a
scenario has type T1 x T2 x ... x Tn -> propositions.  The expression

(lambda (x y) (war x) & (econo-depression y) & (immediately-precedes x y))

is a function from Object x Object -> prop, i.e., a scenario involving
two objects such that one is a war, the other is a depression, and
the first immediately precedes the second.  (It's probably a good
idea to refine the
types, but that's an orthogonal issue.)
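
To pin this typing down, here is a rough Haskell sketch; all the
names (Prop, Object, warThenDepression, and so on) are illustrative
stubs I'm making up on the spot, not a proposed notation.  Prop is
left abstract for now; one way to cash it out comes up below.

data Prop                          -- propositions, abstract for now
data Object                        -- domain objects, abstract

-- Stubbed primitives, assumed purely for illustration:
war, econoDepression :: Object -> Prop
war             = undefined        -- "x is a war"
econoDepression = undefined        -- "y is an economic depression"

immediatelyPrecedes :: Object -> Object -> Prop
immediatelyPrecedes = undefined

propAnd :: Prop -> Prop -> Prop    -- "&" on propositions, stubbed here
propAnd = undefined

-- The scenario itself, with type Object x Object -> Prop:
warThenDepression :: Object -> Object -> Prop
warThenDepression x y =
  war x `propAnd` econoDepression y `propAnd` immediatelyPrecedes x y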

   >For
   >instance, if I'm trying to explain what makes cars go, the scenario
   >might be
   >
   >(lambda (x y)
   >   (is-a x car) & (is-a y human)
   >   & (able y (cause (move x)))
   >   & (method y (move x)
   >             (push (lambda (z) (is-a z pedal)
   >                               & (part z x accelerator)))))

   That seems to be a binary function to a truth value, i.e., a relation.

We can't quite have that, because we're dealing in hypothetical
entities.  So reinterpret "&" (and other connectives) to operate on
propositions and return a proposition as value.  (If you wish, treat a
proposition as a function from possible worlds to truth values.)  
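
In the same sketch style, that reading makes the lifted connectives
one-liners (World and the names below are again just placeholders):

type World = Int                   -- any type of world-index would do
type Prop  = World -> Bool         -- a proposition holds at some worlds

propAnd, propOr :: Prop -> Prop -> Prop
propAnd p q = \w -> p w && q w     -- "&" computed pointwise at each world
propOr  p q = \w -> p w || q w

propNot :: Prop -> Prop
propNot p = \w -> not (p w)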

   However it also seems to say that the way to make the car go is to 
   push a function, which suggests that you don't have the usual 
   semantics of lambda-expressions in mind. (Or else 'push' is some kind 
   of higher-order functional (?))

That was sloppy.  I should have used the locution (do-for-some p a),
where p is a description of an object (T -> prop), and a is a
description of an action (T -> action).  So the "push" part should
have read

   (do-for-some (lambda (z) (is-a z pedal)
                            & (part z x accelerator))
                (lambda (z) (push z)))
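
Building on the possible-worlds sketch above, do-for-some types out
roughly as follows; doForSome, push, and the two predicates are
stubs of my own, standing in for whatever the real theory supplies.

data Action                        -- actions, left abstract
data Thing                         -- the T in (T -> prop), (T -> action)

-- Given a description of an object and a description of an action,
-- produce the action of doing it to some object so described:
doForSome :: (Thing -> Prop) -> (Thing -> Action) -> Action
doForSome = undefined              -- stub; only the type matters here

push :: Thing -> Action
push = undefined                   -- stub

isAPedal :: Thing -> Prop          -- collapses (is-a z pedal)
isAPedal = undefined

acceleratorOf :: Thing -> Thing -> Prop  -- collapses (part z x accelerator)
acceleratorOf = undefined

-- The corrected fragment, with the car x in scope:
pushAccelerator :: Thing -> Action
pushAccelerator x =
  doForSome (\z -> isAPedal z `propAnd` acceleratorOf z x)
            (\z -> push z)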

I hope this clarifies things.

The only remaining question was about the "metaness" of one of my
examples.  I just meant that the example talks about ontologies as
objects, in the midst of a discussion of ontologies as notation
systems. 

                                             -- Drew

Received on Thursday, 2 November 2000 11:24:15 UTC