Re: homonym URIs (Re: What if an URI also is a URL)

John Black wrote:
Finishing my response. Sorry the structure is now messed up.
>
> Ian Davis wrote:
>
>> On 13/06/2007 14:10, John Black wrote:
>>>
>>> Please forgive me, Ian, I'm going to hijack this for myself.
>>>
>> By all means :)
>>
>> I wrote:
>>
>>>> Out of interest how do you attach the English word "Venus" to the 
>>>> physical body that you are referring to?
>>
>> To which part of your reply was:
>>
>>> When I write the word Venus, I do it expecting you, my readers, to have 
>>> had similar experiences to me. I expect you studied the planets in first 
>>> grade, have access to Wikipedia, can clearly see the sky, that some 
>>> trusted elder spoke the word "Venus" and pointed your attention to a 
>>> bright light in the sky, etc., etc. Also, if I know that there may be 
>>> confusion, because the word can be ambiguous, I may add to the dance 
>>> with a little jig, as in, "I think Venus, the planet, is wonderful." If 
>>> I have already established the context, however, I may count on you to 
>>> disambiguate it yourself. In a report about the planets of our solar 
>>> system, I expect you to infer yourself that I mean Venus to refer to the 
>>> planet, not the tennis player.
>>
>> This is what I expect and it makes its point very well. My question was 
>> somewhat rhetorical to test whether I understood the debate adequately. I 
>> think I do now.
>>
>>
>>>
>>> Restoring some of Pat's remarks, "The only way out of this is to 
>>> somewhere appeal to a use of the symbolic names - in this case, the IRIs 
>>> or URIrefs - outside the formalism itself, a use that somehow 'anchors' 
>>> or 'grounds' them to the real world they are supposed to refer to."
>>
>> This is the bit I don't understand. I'm not a logician nor a philosopher 
>> so I'm applying the little common sense I have to this problem. It seems 
>> to me that there is no difference here between the symbol 
>> "http://example.com/venus" and the symbol "venus" (being the word I would 
>> utter in conversation). Neither can be attached to the physical object 
>> that they refer to. However they can both be understood by relating them 
>> to other symbols.
>
> This is the discussion I had hoped for, Ian. Thanks for your thoughtful - 
> and thought provoking - remarks. I don't know the answers, this is what I 
> am currently investigating. Also, I have to make a living, so a full reply 
> may take a while (a few days). But just for starters here is a thought, 
> thanks partly to some things I read in John Dewey's "Experience and 
> Nature":
>
> Understanding symbols only by relating them to other symbols has been 
> tried before, and it failed. First, before empirical science matured, 
> academics tried to explain things purely in terms of symbols. Symbol 
> systems defined symbols in terms of more symbols, sometimes producing 
> marvelous edifices that were fairly useless in practice. It was 
> revolutionary when early scientists said you had to start with 
> measurements and observations, and proceed by experiment to establish the 
> basic layer of symbols, relationships, and laws. Later, during the early 
> days of AI, something 
> similar happened. Brilliantly conceived systems of automated reasoning 
> were created and built. But here too, systems were based primarily on 
> high-level symbols. Symbolic logic, symbolic computation, and symbolic 
> reasoning running on powerful high-speed computers were expected to 
> produce systems of great generality and power.
>
> But they didn't. Many people, John Searle, for example, now argue that the 
> problem is that in the only systems known to be intelligent, humans, 
> symbols are at the top of a huge base of primary experience, all the 
> blooming, buzzing confusion of being in the world of sights, sounds, 
> tastes, emotions, movements, and actions. These human symbols gain their 
> power through their connection to this primary experience. For one thing 
> it relates them back to the world from which they came. This makes them 
> more effective symbols. The poor results of academics and early AI 
> researchers might have been due to believing that you could sort of pluck 
> the cherries of symbols off the deep-rooted tree of experience and use 
> them as you pleased, in this disconnected state. In fact, it was thought 
> that you could gain even more power and generality by freeing yourself of 
> the restrictions of human experiences. The problem was that these systems 
> proved rigid and blind whenever the problems they confronted strayed from 
> the narrow tests, prototypes, and goals used during their design and 
> creation.
>
> Paradoxically, it is now thought that the flexibility, robustness, and 
> real power of symbolic computation, especially in areas requiring common 
> sense, are due in fact to this grounding of symbols in experience.
>
> So, to be brief :) I disagree that we communicate about Venus by just 
> relating symbols to other symbols.
>
> John Black
> www.kashori.com
>
>
>> In my naive view of human cognition, I imagine that you and I can 
>> negotiate a shared meaning of a symbol by relating it to many other 
>> symbols, to the degree that really only a single thing can be the 
>> referent. My conceptual graph of relations between symbols is isomorphic 
>> to yours so the thing at the heart of that graph must be the same as 
>> yours.

I don't think a shared meaning or an isomorphic conceptual graph is 
required. Sometimes our purpose involves each of us focusing our respective 
attentions on the same particular, individual thing. To do this, we must 
each already have had some individual experience of associating that symbol 
with that individual. However, the way each of us individually makes that 
association may not be the same. So there is no need for a 'shared 
meaning' - we just both need to be able to make the association in some 
effective way. Furthermore, we both need to know that the other has formed 
this same association. In fact, we need to have common knowledge of the 
association between the symbol and the individual. That is, I need to know 
that you know the association. And I need to know that you know that I know 
the association, etc.

>> Clearly this is easier when you are able to indicate the referent of a 
>> symbol by pointing to it, e.g. an apple on a tree. However in the 
>> semantic web we only have a single sense with which to relate things 
>> together. It's like sharing knowledge with someone when neither of you 
>> can see, taste, feel or move.

Right, sharing the experience of indicating a thing and speaking the name at 
the same time can be a rapid road to common knowledge of an association 
between a symbol and a thing.

But what is this 'single sense' that we do have? And who do you mean by 
'we'? There are human agents involved in the semantic web and there are 
machine agents involved.

>>
>>> What he is calling "...outside the formalism itself...", I am referring 
>>> to as shared experiences, following John Dewey, and which Herbert Clark 
>>> calls common ground, Kripke calls a name baptism, and Searle refers to 
>>> as the background.
>>
>> OK, this is new stuff for me, but it seems to me that it's like pointing 
>> to the apple in my example above.
>>
>>
>>> So the big, big question, IMHO, for the semantic web is this. What can 
>>> be done to mimic, in some minimal, but sufficient way, using existing 
>>> web technologies, in a way that machines can utilize if possible, the 
>>> grounding of URIs in something outside the formalism of RDF/OWL/etc.?
>>
>> Isn't that where humans come in? Your RDF states that a particular URI, 
>> when dereferenced using HTTP, provides a depiction of an apple. I try 
>> that, see an apple and conclude that I now understand the meaning of your 
>> URI (it may take a while if I don't understand your property).

Why do humans come in here? Also, according to what I hear, if HTTP 
operating on the URI returns the picture, then the URI names the image, not 
the apple depicted. Your RDF names the apple itself with a different URI. 
But I think I see what you are driving at. Associating an image with a URI 
that names the thing the image depicts is a type of grounding the symbol in 
the visual world.
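In RDF terms the two names can be kept apart explicitly. A minimal sketch, 
with triples as plain tuples (the example.com URIs are invented for 
illustration; foaf:depiction is the real FOAF property relating a thing to 
an image of it):

```python
# Toy sketch of the depiction relationship: triples as plain tuples
# (subject, predicate, object). The example.com URIs are hypothetical;
# foaf:depiction is the real FOAF property relating a thing to an image.

FOAF_DEPICTION = "http://xmlns.com/foaf/0.1/depiction"

apple = "http://example.com/apple"        # names the apple itself
picture = "http://example.com/apple.jpg"  # names the image document

graph = [
    (apple, FOAF_DEPICTION, picture),
]

# HTTP GET on `picture` returns the image bytes; the triple records that
# the image depicts the thing named by `apple` - two different URIs.
for s, p, o in graph:
    print(s, "is depicted by", o)
```

The point is simply that the apple and its picture get distinct URIs, with 
a property linking them.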

>> The following metaphor came to me while writing the above about having 
>> only a single sense. I wonder if it helps illuminate why I think the 
>> semantic _web_ is different to the world of semantics that came before:
>>
>>
>> Imagine you and a stranger can only communicate through the use of 
>> coloured pebbles that you may place in various arrangements. Suppose this 
>> stranger arranges a crimson pebble between a yellow pebble and a blue 
>> one. Next they place another crimson pebble next to the blue one and a 
>> white one next to the new crimson one. Then they place a brown pebble 
>> next to the blue one and a grey one beside that. Finally they place a 
>> green pebble and a black one with yet another crimson pebble between 
>> them.
>>
>> You start to see a pattern forming. Somehow the crimson pebbles link the 
>> other pebbles together and you suspect it represents some common 
>> relationship. However, since you can only communicate in coloured pebbles 
>> you can never relate that to the rest of the world.
>>
>> Now, suppose you learn that you can turn the pebbles over. You turn the 
>> yellow pebble over and discover a picture of a young Elvis painted on it. 
>> You turn the white one over and discover a picture of an older Elvis. You 
>> turn the blue pebble over and there's nothing on the other side. The same 
>> is true for the crimson pebble. When you turn the grey pebble over you 
>> see the number 1935 inscribed on it.
>>
>> Now you turn to the second arrangement of pebbles. Under the green one is 
>> a picture of Bill Haley. The crimson one is blank, as is the black one.
>>
>> Perhaps you might infer that the crimson pebble somehow denotes "picture", 
>> although with this limited evidence there are many other possible 
>> explanations. The more arrangements of crimson pebbles touching pebbles 
>> with pictures on the back you see, the more confidence you might gain 
>> that the crimson pebble denotes "picture". Even more so if you have no 
>> contradictory evidence. At some point you may even infer that the blue 
>> pebble denotes Elvis Presley.

I've drawn this on paper, but a graphic on your blog would help, especially 
an interactive one :)

>> Turning the pebbles over is grounding it in the real world; in your human 
>> experience.
>>
>> In the Semantic Web the equivalent of turning the pebbles over is 
>> dereferencing a URI. For HTTP URIs, we perform a GET. A human can do it 
>> to discover what a URI denotes and by looking at lots of these patterns 
>> they can gain confidence in their interpretations of the URIs that denote 
>> the relationships between these things.

I accept this. I like it. And yes, I think it's possible that minimally 
intelligent agents could gradually evolve a 'pebble' language and use it to 
communicate in this way. And more generally, with pebbles equivalent to 
URIs, turning them over equivalent to dereferencing them with HTTP, crimson 
to properties, other colors to subjects, objects, or bnodes, and graphs of 
pebbles to sets of triples linked by properties, you thus show that such 
agents could evolve a language out of RDF, using the information returned 
by HTTP from each URI to ground it in the world. Whether this is true or 
not could even 
be investigated empirically. You could produce data showing just how many 
arrangements were needed to gain a certain level of confidence and so on. Of 
course, you would need to clarify how all this is used by two or more 
communicating agents to send and receive messages. The Talking Heads 
experiments carried out by Luc Steels in Paris do something very much like 
this.
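The confidence-building process could even be simulated directly. A minimal 
sketch, with the pebble data and the counting rule invented for 
illustration: pebbles stand for URIs, 'turning one over' for dereferencing, 
and each arrangement for a triple:

```python
from collections import Counter

# Each 'arrangement' is a triple (subject, predicate, object); the agent
# does not know in advance which pebble plays the property role.
arrangements = [
    ("yellow", "crimson", "blue"),   # young-Elvis picture  -> Elvis
    ("blue",   "crimson", "white"),  # Elvis -> older-Elvis picture
    ("blue",   "brown",   "grey"),   # Elvis -> 1935
    ("green",  "crimson", "black"),  # Bill-Haley picture -> Haley
]

# 'Turning a pebble over' (dereferencing it) reveals what it shows, if
# anything; blank pebbles are simply absent from this mapping.
dereference = {
    "yellow": "picture:young Elvis",
    "white":  "picture:older Elvis",
    "green":  "picture:Bill Haley",
    "grey":   "1935",
}

# Count how often each linking pebble touches a picture-bearing pebble:
# repeated co-occurrence builds confidence that the linking pebble
# denotes the "picture of" relationship.
evidence = Counter()
for s, p, o in arrangements:
    if (dereference.get(s, "").startswith("picture:")
            or dereference.get(o, "").startswith("picture:")):
        evidence[p] += 1

print(evidence.most_common(1))  # prints [('crimson', 3)]
```

Each new arrangement raises the count for crimson while brown stays at 
zero, which is one crude way to model the growing confidence Ian describes.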

>>
>>
>> That's how I see the Semantic Web.

And it is not far from the way I see it. And this is why I'm so interested 
in semantic web technologies. It does seem that the technology is very, 
very close. That is why I spend so much of my time on it, thinking, writing, 
exchanging emails - all without compensation :) I am a proponent.

Still, what you are describing - gradually evolving an interpretation of 
the denotation of a URI by repeated exposure to the information it 
indicates through HTTP operations - is quite different from assertions that 
suggest you can practically just download the denotation of that URI off 
the web, as if it were a zip archive: denotation-of-URI.tar.gz.

Similarly, when Tim responds to calls for an official standard, or a widely 
accepted de facto standard, for establishing the denotation of a URI for 
any particular utterance of it by saying it was already done for the 
RDF/OWL standards according to the long, expensive, slow dance of the W3C 
standards development process, 
http://www.w3.org/2005/10/Process-20051014/tr.html#Reports, he either 
misses the point or is playing with us. As far as I can tell, there is no 
equivalent of the process that was used to establish the denotation of the 
RDF/OWL URIs, available to the rest of the world as it invents new 
vocabularies. On the other hand, the process used by dbpedia.org may be the 
beginnings of a widely accepted de facto standard.

In my mind, it's a question of quality and of avoiding hype. It is often 
stated that you should 'put some useful information about the URI at the 
location it references'. Then later it is claimed that that information 
provides the meaning, or denotation, of the URI. One of my points is that 
there is a discrepancy between those two, between 'useful information' and 
'denotation'. I would like to see those claims stopped, or better yet, to 
help make them true.

John


>>
>> Ian

Received on Friday, 15 June 2007 12:07:50 UTC