Bound variables

From: Drew McDermott <drew.mcdermott@yale.edu>
Date: Mon, 30 Oct 2000 12:15:29 -0500 (EST)
Message-Id: <200010301715.MAA10590@mr1.its.yale.edu>
To: www-rdf-logic@w3.org
CC: drew.mcdermott@yale.edu

In trying to work with DAML, I realize that there are a lot of things
I don't know about the language that go beyond the "wheel part-of
car" type of examples.

Suppose I am trying to describe a theory as an object.  (This is part
of our "homework" assignment.)  For instance, I might want to
formalize "Einstein was looking for a unified field theory."  This is
perhaps too ambitious, but part of our assignment is to describe our
projects, and many projects have, among other things, the goal of
finding a "theory to explain X."  What sort of object is X in this
sentence?  We might at this point start cataloguing the sorts of
things that can be explained.

(Digression: Here are some things that one might try to explain:

Event tokens: The French Revolution

Event types: Why lemmings jump off cliffs

Nonevents: Why communism didn't do well in the U.S.

Properties: Why the sky is blue.

Structures: Coalitions in Israeli politics

Games: The evolution of the rules of American football.)

However, I think this is going in the wrong direction.  Typically when
you try to explain something you don't yet know exactly what it is you
want to explain.  So there is a sense of "explain" in which the X in
"explain X" is a "situation" or "scenario," and to explain it is to
answer questions about it.  You explain it to degree P if the
probability that you can answer a random question on a topic related
to the scenario is P.
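Read over a finite pool of candidate questions, this definition is just a frequency.  Here is a minimal Python sketch; the question pool and the `can_answer` predicate are illustrative assumptions, not anything from DAML:

```python
def degree_of_explanation(questions, can_answer):
    """Fraction of topic-related questions the explainer can answer.

    This is the definition above with "random question" read as
    uniform over a finite question set: you explain the scenario to
    degree P if that fraction is P.
    """
    qs = list(questions)
    if not qs:
        return 0.0
    return sum(1 for q in qs if can_answer(q)) / len(qs)
```

So an explainer who can handle two of four related questions explains the scenario to degree 0.5.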

Perhaps this is all wrong, but my main purpose in introducing it is to
explain why we need to describe scenarios.  You may find it easier to
make up another reason.  (E.g.: an intelligence analyst may want to
formalize the report of an agent.  The report is not necessarily
believed, but may have a lot of internal structure.)

The obvious way to represent scenarios is with lambda-expressions,
which bind variables that then take part in descriptions.  For
instance, if I'm trying to explain what makes cars go, the scenario
might be 

(lambda (x y)
   (is-a x car) & (is-a y human)
   & (able y (cause (move x)))
   & (method y (move x)
             (push (lambda (z) (is-a z pedal) 
                               & (part z x accelerator)))))
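Operationally, the lambda-expression is a predicate over bindings for x and y.  A rough Python sketch against a toy fact base (the constants, the fact-tuple encoding, and the collapsing of the method/push conjunct into an existential over z are all my illustrative assumptions, not DAML):

```python
# Toy fact base: each fact is a tuple in a set.
FACTS = {
    ("is-a", "car1", "car"),
    ("is-a", "ann", "human"),
    ("able", "ann", ("cause", ("move", "car1"))),
    ("is-a", "pedal1", "pedal"),
    ("part", "pedal1", "car1", "accelerator"),
}

def constants(facts):
    """Every atomic symbol occurring in argument position."""
    return {a for f in facts for a in f[1:] if isinstance(a, str)}

def car_scenario(x, y, facts=FACTS):
    """The lambda-expression above, read as a predicate on bindings.

    The inner (lambda (z) ...) argument of push is simplified to an
    existential check: some z is a pedal and the accelerator part of x.
    """
    return (
        ("is-a", x, "car") in facts
        and ("is-a", y, "human") in facts
        and ("able", y, ("cause", ("move", x))) in facts
        and any(("is-a", z, "pedal") in facts
                and ("part", z, x, "accelerator") in facts
                for z in constants(facts))
    )
```

With the toy facts, `car_scenario("car1", "ann")` holds and `car_scenario("ann", "car1")` does not, which is the sense in which the lambda-expression picks out the bindings that instantiate the scenario.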

If I'm trying to explain the relationships between two ontologies that
cover the same ground (as we in fact are), the scenario is

(lambda (ont1 ont2 con)
   (is-a ont1 ontology)
   & (is-a ont2 ontology)
   & (is-a con context)
   & (often  (lambda (e1 e2)
                   (expression e1 ont1)
                   & (expression e2 ont2))
             (lambda (e1 e2)
                (meaning e1 con) = (meaning e2 con))))

(often s1 s2) is a generalized quantifier that means 
"It's not unusual for objects satisfying s1 to satisfy s2 as well."
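Over a finite domain, one possible reading of "not unusual" is a frequency threshold: among the objects (or tuples of objects) satisfying s1, the proportion also satisfying s2 is at least some cutoff.  A minimal sketch, where the 0.5 threshold is an illustrative assumption (the quantifier is deliberately vague about it):

```python
def often(domain, s1, s2, threshold=0.5):
    """Generalized quantifier: among the elements of `domain`
    satisfying s1, the proportion also satisfying s2 is at least
    `threshold`.  Elements may be tuples, matching the two-variable
    lambdas in the ontology example.
    """
    sat1 = [x for x in domain if s1(x)]
    if not sat1:
        return False
    return sum(1 for x in sat1 if s2(x)) / len(sat1) >= threshold
```

For example, `often(range(10), lambda n: n % 2 == 0, lambda n: n < 8)` holds, since four of the five even numbers below 10 are below 8.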

I hope the "metaness" of this example doesn't bother people.  If it
does, rephrase the whole discussion in terms of cars and pedals
instead of ontologies and meanings.  The question is, How do we
express these things in RDF/DAML?  (Answers I will refuse to accept
are those that involve expressing them using quasi-quotation.)

                                             -- Drew
Received on Monday, 30 October 2000 12:15:36 UTC