Re: Meaning

   Date: Tue, 10 Jun 2003 20:24:50 +0200 (CEST)
   From: Stanislaw Ambroszkiewicz <sambrosz@ipipan.waw.pl>
   Reply-To: Stanislaw Ambroszkiewicz <sambrosz@ipipan.waw.pl>
   Cc: www-ws@w3.org, sambrosz@ipipan.waw.pl

Sorry to take so long to respond.  I was out for a few days.  However,
I think the issue of "how meaning works" in the semantic web is
extremely important, and one that constantly trips people up.

   Drew McDermott <drew.mcdermott@yale.edu>

     >  If "owns" really has the meaning it has in natural 
     >  language, then X already knows the meaning before he 
     >  starts dabbling in protocols and plans. He or the 
     >  committee can issue information about how the word
     >  translates into different natural languages, 
     >  clarifications of important borderline cases, and so forth.

   [Stanislaw Ambroszkiewicz]
   To avoid any reliance on natural language and
   its meaning, let me translate the story into the world of robots.
   Robot X (being at initial situation sIn) performed action A 
   and then perceived (via its sensor, say a camera) the situation 
   sOut. Situations sIn and sOut are images represented 
   as arrays of pixels in the robot's memory. X had a goal to
   achieve, say G, represented as a collection of situations.

   Suppose that the robot had a built-in routine for performing 
   data abstraction on the basis of its experience.  
   For simplicity, assume that the actions have deterministic 
   effects. After performing the action A several times at 
   different initial situations, the robot was able to compute 
   a common pattern P for the initial situations that led
   to G after X performed the action A.
   The pattern may be represented as the string P(?sIn, X, A),
   describing which initial situations ?sIn lead to the
   effect G after robot X performs action A.

   Then, the robot can also abstract from A and from X. 
   That is, the robot can compute a class of actions that,
   once performed, lead to the same goal, and so on.

   If there is a common syntax where the pattern P can be 
   expressed as a formula, the robot can publish it and speak 
   to other robots in terms of this relation.  
   However, what about the meaning of P(?sIn, ?x, ?a)?
   How can the meaning of this formula be published? 

I don't quite understand the scenario.  By "pattern" here I assume you
mean that there is some formula P, expressed using predicates in the
"common syntax."  It might be a conjunction of atomic formulas, or
something arbitrarily complex.

Alternatively, "pattern" might be "sensory data pattern," or something
like that.

Either way, I don't think there's really a problem.  In the second
case, the sensory data pattern might be useful only to robots with
sensors similar to the one that learned P, but the key in both cases
is that all the robots, or agents, share a common vocabulary.
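
To make the first reading concrete, here is a minimal sketch (in Python,
with invented facts and names) of the kind of "data abstraction" routine
the scenario assumes, treating situations as sets of ground facts drawn
from the shared vocabulary:

    # Hypothetical sketch: learn a common pattern P from experience.
    # Situations are sets of facts in a shared vocabulary; the learned
    # pattern is just the conjunction of facts common to every initial
    # situation from which action A led to the goal G.

    def learn_pattern(episodes, action, goal):
        """episodes: (initial_situation, action, resulting_situation) triples,
        where situations are sets of facts.  Returns the facts shared by all
        initial situations from which `action` reached `goal`, or None."""
        successful = [s_in for (s_in, a, s_out) in episodes
                      if a == action and goal <= s_out]
        if not successful:
            return None
        pattern = set(successful[0])
        for s_in in successful[1:]:
            pattern &= s_in      # keep only facts true in every successful start
        return pattern

    # Toy example with invented facts:
    episodes = [
        ({"door_open", "battery_high", "at_room1"}, "A", {"at_room2"}),
        ({"door_open", "battery_low",  "at_room1"}, "A", {"at_room2"}),
    ]
    print(learn_pattern(episodes, "A", {"at_room2"}))
    # -> {'door_open', 'at_room1'}  (set order may vary)

Publishing the pattern then amounts to publishing that conjunction of
facts, which is intelligible to another agent exactly to the extent that
it already uses the same vocabulary.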

It might sound like agents could never acquire new concepts, since
everything they talk about is expressed in a static language.  I
agree: the language has to evolve.  To take a really simple case,
suppose a new microprocessor chip comes on the market, and we add to
the electronic-parts ontology a new term, "MPC8765A," to refer to it.
Now one agent can offer to buy 100 copies of the MPC8765A from another
agent.  What does the symbol "MPC8765A" mean?  It refers to a type of
chip.  Why does it mean that?  Because there is an appropriate chain
of causal links between that chip, the people who invented the term
for it, and the people who revised the ontology to include the term.
Do the agents buying and selling the chips know what "MPC8765A" means
or how it means that?  No.  Of course, a human purchasing agent
usually doesn't know the full meanings of the terms he or she uses.
He or she defers to the experts, a phenomenon first discussed by the
philosopher Hilary Putnam.  (I can never remember the names of
flowers, so there's a sense in which I don't know the meaning of
"marigold" and "iris."  But I know they refer to flowers, and I know
where to find an expert who does know the meanings.)  In the case of
computerized purchasing agents, they're presumably not even capable of
discussing what the terms mean; that's not their job.
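
To make the point concrete, here is a small sketch (in Python; the URI,
message fields, and agent names are invented placeholders) of how a
purchasing agent can use the new term purely as a shared identifier:

    # Hypothetical sketch: agents trade in terms of an ontology identifier.
    # Neither side reasons about what an MPC8765A *is*; both simply defer
    # to the revised electronic-parts ontology for the term.

    PARTS = "http://example.org/electronic-parts#"   # placeholder namespace

    purchase_offer = {
        "performative": "offer-to-buy",
        "item": PARTS + "MPC8765A",                  # the newly added term
        "quantity": 100,
        "buyer": "purchasing-agent-17",
    }

    def can_accept(offer, stock):
        """The selling agent checks only that it recognizes the term and has
        enough stock; the term's meaning never enters the computation."""
        return stock.get(offer["item"], 0) >= offer["quantity"]

    print(can_accept(purchase_offer, {PARTS + "MPC8765A": 250}))   # True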

   ...

      From: Drew McDermott, Wed, May 21 2003 Subject: Meaning: 
	"... The formal specification answers essentially all
	questions about the meanings. ... "

   [Stanislaw Ambroszkiewicz]
   Where is the meaning in a formal theory? 
   It is only syntax, i.e., a naming convention and some
   rules for transforming one string into another.

Right.  I meant to say it answers all interesting (and answerable)
questions, such as, Could P be true of an object and Q be false?

   You may say that, according to Alfred Tarski, a formal 
   semantics can be constructed for this theory. But this 
   semantics is only a translation from one formal theory into 
   another one. 

That's not true, but I don't want to get drawn into a discussion of
that issue.  Instead, I'd like to point out the sense in which a
Tarskian model can specify meanings of things, and where its
limitations lie.

Normally when we construct Tarskian interpretations, the goal is to
propose a semantics for a small subset of the symbols used in the
theory.  For example, around 1960 Kripke (and independently Hintikka)
proposed a semantic framework for modal logics in which "necessarily
P" was true if it was true in all possible worlds related to this
world by an "accessibility relation" R.  The properties of R were
correlated with different variants of modal logic.

But notice that in the semantics for a formal theory where you can say
"Necessarily (there are 9 planets in the solar system)" nothing at all
is said about the meanings of "planet" or "the solar system."  All
Kripke semantics does is explain the meaning of "necessarily," just as
Tarski's original semantics explained the meaning of "exists."  
Tarski's framework is extremely useful for working out the meanings of
such "logical" symbols.  It doesn't help at all for explaining the
meanings of non-logical symbols like "planet."
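
For what it's worth, the truth condition at issue is easy to state and
even to execute.  Here is a minimal sketch (in Python, over an invented
two-world model) of the Kripke clause for "necessarily"; notice that the
atomic symbols get their truth values by stipulation, which is exactly the
sense in which nothing is said about what "planet" means:

    # Minimal sketch of the Kripke truth condition: "necessarily P" holds
    # at world w iff P holds at every world R-accessible from w.  Atoms are
    # true at a world only because the valuation says so; their meaning is
    # stipulated, not explained.

    def holds(formula, world, accessible, valuation):
        """formula: an atom (string) or the pair ("necessarily", subformula).
        accessible: dict mapping each world to the set of worlds it sees (R).
        valuation: dict mapping each world to the set of atoms true there."""
        if isinstance(formula, str):                     # atomic symbol
            return formula in valuation[world]
        op, sub = formula
        if op == "necessarily":
            return all(holds(sub, w2, accessible, valuation)
                       for w2 in accessible[world])
        raise ValueError("unknown operator: " + op)

    # Invented model: two worlds, each accessible from itself and the other.
    R = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}
    V = {"w1": {"p"}, "w2": {"p"}}
    print(holds(("necessarily", "p"), "w1", R, V))   # True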

   According to another Polish logician, Jerzy Los, 
   the meaning of a language comes from describing decision making 
   and the action executions that correspond to those decisions. 
   Hence, a formal language is necessary; however, its meaning 
   should be related primarily to the action executions rather 
   than to axioms. Still, axioms are important; it is much 
   easier to operate on axioms using formal reasoning 
   techniques than to operate on the original meaning. 
   Nevertheless, the reference to the original meaning should 
   be of some importance, especially in the case of so-called 
   machine-readable semantics in an open and heterogeneous 
   environment, e.g., the Web. 

   Why shouldn't we even be trying to solve the problem of 
   how words get their meanings? 
   It is my job (as a researcher) to try!

That's okay with me.  The question is what we mean by "so-called
machine-readable semantics," and the answer is: >> There is no such
thing, but it is not necessary. <<  

I think this issue causes such grief because of the following problem:
Computer scientists are comfortable discussing data structures and
protocols.  So they are all happy building web services right up to
the WSDL or even BPEL level.  The problem is that to go one step
further you have to start using formal languages with symbols like
"owns."  (So that, for instance, eBay can advertise the fact that
if you follow such-and-such a protocol regarding object X, you will
own X.)  This freaks non-AI people out, because none of the techniques
they are familiar with for describing meanings will work for the case
of "owns."  So progress stalls while people debate what
"machine-readable semantics" would look like, even though it looks
like an infinite recursion from the get-go.
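
Concretely, such an advertisement might boil down to an axiom shaped
roughly like this (the predicate and protocol names are invented, purely
for illustration):

    \forall a, x.\;
      \mathit{completed}(a, \mathit{checkoutProtocol}(x))
        \rightarrow \mathit{owns}(a, x)

The symbol "owns" simply occurs in the formula.  No further
"machine-readable semantics" for it is on offer, and none is needed.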

AI people went through this debate on formally specifying meaning two
decades ago, and have learned that it's just a distraction.  There are
buzz phrases people still use, such as "procedural semantics," but
they are basically irrelevant to whether I can put the symbol "owns"
in an axiom and have it mean actual ownership.  I would claim it means
ownership for reasons similar to the reasons why "MPC8765A" means a
certain type of chip (see above), but I could be completely wrong, and
it wouldn't make any difference.

The real issue is that the tools required to manipulate declarative
languages with terms like "owns" are different from the tools required
to manipulate WSDL.  For example, instead of an automatic
SOAP generator you might want a planning algorithm that can find a
plan for coming to own something.  (E.g., the algorithm discussed in 
Drew McDermott, "Estimated-Regression Planning for Interactions with
Web Services," Proc. AI Planning Systems Conference, 2002.)
Getting these tools to work is much harder than writing SOAP
generators.  But the problems have to do with getting search spaces
right and keeping them small, not with the meanings of terms.  Just
relax and you'll see: the meaning problem is going to recede into the
background. 
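
To give a flavor of the kind of tool in question, here is a toy sketch
(in Python; it is emphatically not the estimated-regression planner cited
above, and the service steps are invented) of planning over STRIPS-style
descriptions of web-service steps, one of whose effects is the "owns"
fact the axioms talk about:

    # Hypothetical sketch: a tiny iterative-deepening planner over made-up
    # service steps whose final effect is an "owns" fact.  Real planners
    # are about keeping this search tractable; the meaning of "owns" plays
    # no computational role at all.

    from collections import namedtuple

    Action = namedtuple("Action", ["name", "pre", "add", "delete"])

    ACTIONS = [
        Action("register", set(),           {"registered"},  set()),
        Action("bid",      {"registered"},  {"high_bidder"}, set()),
        Action("checkout", {"high_bidder"}, {"owns_X"},      {"high_bidder"}),
    ]

    def search(state, goal, actions, depth):
        """Depth-limited forward search; returns a list of action names or None."""
        if goal <= state:
            return []
        if depth == 0:
            return None
        for act in actions:
            if act.pre <= state:
                rest = search((state - act.delete) | act.add,
                              goal, actions, depth - 1)
                if rest is not None:
                    return [act.name] + rest
        return None

    def plan(state, goal, actions, max_depth=6):
        """Iterative deepening, so a shortest plan is found first."""
        for limit in range(max_depth + 1):
            result = search(state, goal, actions, limit)
            if result is not None:
                return result
        return None

    print(plan(set(), {"owns_X"}, ACTIONS))
    # -> ['register', 'bid', 'checkout']

The hard part is making this kind of search behave on realistic state
spaces, not settling what "owns_X" means.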

-- 
                                             -- Drew McDermott

Received on Friday, 20 June 2003 14:08:09 UTC