# Re: Bound variables

From: pat hayes <phayes@ai.uwf.edu>
Date: Wed, 1 Nov 2000 14:45:27 -0600
Message-Id: <v0421010cb626258f0888@[205.160.76.86]>
To: Drew McDermott <drew.mcdermott@yale.edu>

```Drew, I am puzzled as to what you are talking about.

.....
>Suppose I am trying to describe a theory as an object.  (This is part
>of our "homework" assignment.)  For instance, I might want to
>formalize "Einstein was looking for a unified field theory."  This is
>perhaps too ambitious, but part of our assignment is to describe our
>projects, and many projects have, among other things, the goal of
>finding a "theory to explain X."  What sort of object is X in this
>sentence?  We might at this point start cataloguing the sorts of
>things that can be explained.
>....

>However, I think this is going in the wrong direction.  Typically when
>you try to explain something you don't yet know exactly what it is you
>want to explain.  So there is a sense of "explain" in which the X in
>"explain X" is a "situation" or "scenario," and to explain it is to
>answer questions about it.  You explain it to degree P if the
>probability that you can answer a random question on a topic related
>to the scenario is P.

Can you expand a bit on what you mean by a "scenario" here? (Let's
avoid the word "situation" which already has at least three distinct
technical AI/logical meanings, none of which I think you mean.
Correct me if I'm wrong.)

>Perhaps this is all wrong, but my main purpose in introducing it is to
>explain why we need to describe scenarios.

It would help to know what they were before trying to describe them.

>...
>The obvious way to represent scenarios is with lambda-expressions,
>which bind variables that then take part in descriptions.

In the usual meaning of lambda-expressions, they denote functions
(from whatever their variables are interpreted to denote, to the
value of the body when the variables are so interpreted). Should we
infer that a scenario is a kind of function? (From what to what?)

>For
>instance, if I'm trying to explain what makes cars go, the scenario
>might be
>
>(lambda (x y)
>   (is-a x car) & (is-a y human)
>   & (able y (cause (move x)))
>   & (method y (move x)
>             (push (lambda (z) (is-a z pedal)
>                               & (part z x accelerator)))))

That seems to be a binary function to a truth value, i.e. a relation.
However it also seems to say that the way to make the car go is to
push a function, which suggests that you don't have the usual
semantics of lambda-expressions in mind. (Or else 'push' is some kind
of higher-order functional (?))

>If I'm trying to explain the relationships between two ontologies that
>cover the same ground (as we in fact are), the scenario is
>
>(lambda (ont1 ont2 con)
>   (is-a ont1 ontology)
>   & (is-a ont2 ontology)
>   & (is-a con context)
>   & (often  (lambda (e1 e2)
>                   (expression e1 ont1)
>                   & (expression e2 ont2))
>             (lambda (e1 e2)
>                (meaning e1 con) = (meaning e2 con))))
>
>(often s1 s2) is a generalized quantifier that means
>"It's not unusual for objects satisfying s1 to satisfy s2 as well."
>
>I hope the "metaness" of this example doesn't bother people.

I have no idea what metaness you are talking about, which may be the
source of my puzzlement.

>If it
>does, rephrase the whole discussion in terms of cars and pedals

???

Pat

---------------------------------------------------------------------
IHMC					(850)434 8903   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola,  FL 32501			(850)202 4440   fax
phayes@ai.uwf.edu
http://www.coginst.uwf.edu/~phayes
```
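Hayes's reading of McDermott's lambda-expression — that under the usual semantics it denotes a binary function from individuals to truth values, i.e. the characteristic function of a relation — can be sketched concretely. The fact base and predicate definitions below are invented purely for illustration; they are not part of either correspondent's formalism, and `able` is stubbed out rather than axiomatized.

```python
# A minimal sketch of the "lambda as relation" reading: the two-argument
# lambda-expression denotes the characteristic function of a binary relation.

# Toy fact base standing in for the (is-a x car) / (is-a y human) assertions.
# These individuals are hypothetical, chosen only for the example.
FACTS = {("herbie", "car"), ("drew", "human")}

def is_a(x, kind):
    """Toy predicate: true iff the fact base records that x is a kind."""
    return (x, kind) in FACTS

def able(y, action):
    """Toy predicate, stubbed as always true; a real theory would axiomatize it."""
    return True

# The scenario, read the usual way: a function from a pair of individuals
# to a truth value -- i.e. a relation represented by its characteristic function.
scenario = lambda x, y: (is_a(x, "car")
                         and is_a(y, "human")
                         and able(y, ("move", x)))

print(scenario("herbie", "drew"))   # True: this pair stands in the relation
print(scenario("drew", "herbie"))   # False: arguments reversed
```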
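McDermott's gloss of `(often s1 s2)` — "it's not unusual for objects satisfying s1 to satisfy s2 as well" — suggests a proportion-based generalized quantifier. The sketch below is one possible reading, not the original's definition: the finite domain, the restriction to pairs, and the 0.5 threshold are all assumptions introduced here for illustration.

```python
from itertools import product

def often(s1, s2, domain, threshold=0.5):
    """One reading of (often s1 s2): among pairs from the domain satisfying
    s1, at least `threshold` of them also satisfy s2. Threshold and finite
    domain are assumptions of this sketch."""
    satisfying = [(a, b) for a, b in product(domain, repeat=2) if s1(a, b)]
    if not satisfying:
        return False  # vacuous case: nothing satisfies the restrictor s1
    hits = sum(1 for a, b in satisfying if s2(a, b))
    return hits / len(satisfying) >= threshold

# Toy example over the integers 1..6: of the 15 ordered pairs with a < b,
# only 6 have an even sum (both odd or both even), so 6/15 = 0.4 < 0.5.
domain = range(1, 7)
print(often(lambda a, b: a < b,
            lambda a, b: (a + b) % 2 == 0,
            domain))   # False
```

On this reading `often` is a relation between two predicates (McDermott's s1 and s2), which is why both of its arguments in his ontology example are themselves lambda-expressions.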
Received on Wednesday, 1 November 2000 15:41:59 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 22:45:35 UTC