Re: Stipulative Ontologies

Just cherry-picking a couple of replies to Pat.

I think this thread is a non-starter, personally.

On Apr 6, 2004, at 8:38 PM, Pat Hayes wrote:
[snip]
>> Meanings happen to intelligent agents when they encounter a symbol and
>> know what the intention of the sender of it was, by some sort of prior
>> agreement, and so interpret that symbol correctly.
>
> Agents can never do that.  Human beings can't do that: that is a 
> fantasy. Think about it: how could the prior agreements ever get off 
> the ground? You have to use language to come to an agreement in the 
> first place.  Exactly what goes on in NL understanding by people is 
> not fully understood, to put it mildly, but it's way more subtle and 
> complicated than this simple Wittgensteinian picture.

*THAT* is Wittgensteinian?! Surely not!

Still too simple. Gricean accounts quickly get way more complicated 
than this. And any convention-based account is going to have to spend 
quite some time eliding the "some sort of prior agreement" so that it 
is neither prior nor an agreement :)

And meanings don't *happen* to agents. Most agents. *My* agents, anyway.

[snip]
>>  and it has nothing to do
>> with english or natural language or even OWL for that matter.  If I 
>> send you
>> a Java class file, coded to carry out my intentions on your machine, 
>> and you
>> have a Java virtual machine, which can interpret it, then my meaning 
>> has
>> been transmitted.  There is nothing mysterious about that.
>
> The behavior of your code has been transmitted.

While I agree that my intentions, or most of my intentions, haven't 
been transmitted (how do we pick out which ones?), I don't know that 
the behavior has been transmitted. Frankly, only the *code* has been 
transmitted. That code bears a startling number of relations to the 
behavior of a computer, most of which are fairly underdetermined, 
certainly by the author's *understanding* (much less intentions).
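
To make that concrete, here's a minimal, purely illustrative Java 
sketch (my own example, nothing from Pat or the original poster): the 
identical class file, transmitted byte-for-byte and run unmodified, 
can print different things on different machines, since its observable 
behavior depends on the receiving environment and not just on what was 
sent.

// Illustrative only: the same compiled .class file behaves
// differently depending on the receiving machine's JVM, OS,
// and configuration.
public class UnderdeterminedBehavior {
    public static void main(String[] args) {
        // Each of these values is fixed by the *receiver's*
        // environment, not by anything the sender transmitted:
        System.out.println("OS:       " + System.getProperty("os.name"));
        System.out.println("JVM:      " + System.getProperty("java.vm.name"));
        System.out.println("Encoding: " + System.getProperty("file.encoding"));
        System.out.println("Cores:    " + Runtime.getRuntime().availableProcessors());
    }
}

So even in the Java case, what transmission fixes is the code; the 
behavior is a joint product of the code and the receiving machine.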
[snip]

Cheers,
Bijan Parsia.
