RE: How does RDF/OWL formalism relate to meanings?

>  > From: Thomas B. Passin
>>
>>  John Black wrote:
>>
>>  > ... You haven't mentioned resources, URIs, or the identification of
>>  > the former by the latter.  Is that involved?  So far, it seems that
>>  > you only need the main URI to hang the owl:FunctionalProperty
>>  > elements on, since the meaning is actually created by the addition
>>  > of functional properties.  Turning the question around, how does the
>>  > URI/Resources formalism relate to these meanings?  It would seem
>>  > that the main URI was almost irrelevant; you could substitute one
>>  > for another and, due to the functional properties, the meaning would
>>  > be the same.  Is this right?  How about the notion of the strength
>>  > of one of my employees?  Are there functional properties I can use
>>  > to get a URI to communicate to my audience that one of my employees
>>  > is really strong?  How do I make a URI stand for my notion of
>>  > strength, so that I can be sure my audience gets it?
>>
>>  It seems to me that you are conflating several different things -
>>
>>  1) Conveying something to a human audience.
>>  2) The machine-understandable "meaning" of some block of RDF/OWL/whatever.
>>  3) The "meaning" of URIs, and
>>  4) The "meaning" of a resource.
>
>Well, yes, they do seem to have become conflated, and I am trying to
>tease them apart.

Actually they are (carefully) not conflated in the specs.  Maybe you 
should start with the specs and say what it is about them that isn't 
adequate for you.

>
>>  These are not totally unrelated, but they are not the same, either.
>>  Pat Hayes (for one) has been telling you this.
>
>Tell me about it, Bijan and Peter too.
>
>>  For human consumption, there are several primary ways to convey the
>>  intended "meaning" of some particular URI - remembering that a URI
>>  stands in some way for a "resource", which in turn may be retrievable
>>  over a network or not, and may be abstract or not.
>
>As I was asking Dan, why do I need to remember that a URI stands
>in some way (what way?) for a "resource"?  How does that relate
>to the meaning of the URI?

I don't think it is very useful to speak of "the meaning" of a URI. 
Suppose I say: URIs (by themselves) just don't have meanings.  I 
would be willing to stand up and defend that position in a formal 
debate if required.  (If the TAG is reading this, I acknowledge that 
they might *identify* something...)  Now, does that answer your 
question?

>>  The most straightforward way is to attach NL labels to the things in
>>  the RDF graph.  People read the labels and get some sense out of them.
>>  They see and read the relationships and get some sense out of them.
>>  Note that this process is very different from a process of precise
>>  communication between machines or autonomous agents.
>>
>>  Another thing that is done is to have the URI for a resource point to
>>  some readable text about the intended "meaning" (or at least identity)
>>  of the URI.  There is some ongoing debate about how to make that work,
>>  bearing in mind that the URI in question may or may not intend to
>>  identify the actual web page that would be retrieved by dereferencing
>>  it.
>>
>>  Next up the ladder would be to arrange for RDF to be received upon
>>  dereferencing the URI of a resource.  The RDF (possibly with some OWL,
>>  which of course is also RDF) would be more precise than text, but this
>>  may or may not help a person grasp the ideas better.
>>
>>  Another step is to identify a resource by identifying properties -
>>  that might be an inverse functional property, but not necessarily.
>>  This is like saying "the man standing next to my car" - this does not
>>  describe an inverse functional property but may nevertheless be all
>>  that is needed.  Typically, several such properties would be used
>>  together to help identify the resource (which may represent a concept
>>  or a term in a vocabulary) in question.
>>
>>  Remember that, to computers, it is all bits, that is to say, symbols.
>>  So the whole ball of wax - Semantic Web-wise - is pretty much a bunch
>>  of abstract symbols and rules for their manipulation.  That is like
>>  Newton's laws, for example.  They can be studied in an abstract
>>  mathematical way as symbol manipulation, or they can be studied from a
>>  point of view grounded in the motion of real things that have mass.
>>  The only way you are going to get a computer so grounded is to have
>>  its results and its inputs somehow interact appropriately with the
>>  real world.  And getting that to happen is something outside of the
>>  realm of abstract symbol manipulation.
>>
>>  So you want your "audience" to grasp your notion of "strength"?  Then
>>  tell them what it is: tell them that the URI
>>  http://www.example.org/ontology/strength has a label "John Black's
>>  notion of Strength".  Point them to some online text that explains it.
>>  Just don't expect an RDF processor to magically know to make all those
>>  inferences that you hope your audience would.
>>
>>  What about OWL?  Well, with neither RDF nor OWL can you say that "no
>>  man can have an arm strength of more than 500 kg", just to make a
>>  crude example here.  But with OWL, you can restrict - in a standard
>>  way that an OWL processor would honor - the allowed values of
>>  properties (to some degree), including the "John Black's Strength"
>>  property.  That may accomplish enough of what you want to be worth
>>  doing.
>>
>>  I realize that this post was long.  Does it start to address some of
>>  your questions?
>
>Yes.  Thanks for taking the time to answer.  Unfortunately it answers
>them in a way that is unsatisfactory.  For my applications, I need to
>send messages from people to machines, from machines to people, and
>from machines to other machines.

OK so far.

>  And at each step, I need for the
>people or machines to get my notion of strength, for they may need to
>make decisions based on it.

At this point a miracle occurs.  Machines CANNOT "get" human-level 
meanings of words.  Sorry, but it's just a fact.  I've been working 
in AI now for over 30 years, and I wish sincerely that they could, 
but I know they can't, and aren't likely to be able to in the 
foreseeable future (IMO: others are more optimistic).  However, on 
the bright side, they can 'get' some of it, for some purposes, and do 
useful things with the little glimmerings of it that they can get; 
and by virtue of being able to run with this little amount of 
knowledge at remarkable speeds over remarkable amounts of data, they 
can do things with it that humans couldn't possibly do.
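
For concreteness, here is a minimal sketch, in Python with rdflib, of 
the sort of 'glimmering' I have in mind.  Every URI in it except 
Tom's http://www.example.org/ontology/strength example is invented 
for illustration.  The machine below has no idea what strength is, 
but it can still mechanically apply one RDFS rule to triples that 
mention it, and draw a conclusion a human might find useful.

# Minimal sketch, not anyone's real system: one pass of the rdfs:subPropertyOf
# rule, applied by a machine with no comprehension of "strength".
# Assumes Python with the rdflib package; employee42 and physicalAttribute
# are invented URIs.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://www.example.org/ontology/")

g = Graph()
g.add((EX.strength, RDFS.subPropertyOf, EX.physicalAttribute))   # schema triple
g.add((EX.employee42, EX.strength, Literal("can lift 150 kg")))  # data triple

# Rule: if ?p rdfs:subPropertyOf ?q and ?s ?p ?o, then ?s ?q ?o.
# Pure symbol manipulation; "strength" is just an opaque token here.
inferred = []
for p, _, q in list(g.triples((None, RDFS.subPropertyOf, None))):
    for s, _, o in list(g.triples((None, p, None))):
        inferred.append((s, q, o))
for triple in inferred:
    g.add(triple)

# The machine now "knows", in its pale formal sense, one more thing:
assert (EX.employee42, EX.physicalAttribute, Literal("can lift 150 kg")) in g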

>  All of your methods of interpreting my
>URI/RDF/OWL for the meaning involve resorting to natural language. 
>And due to the machine to machine links, this just won't work.

Don't rush to this conclusion.  The basic point is that the machine 
(or the human, for that matter) does not need to UNDERSTAND the 
meaning in order to pass it on to someone, or something, that does. 
All that is required is that the passer-on doesn't do anything that 
will change any meanings.  Consider: your picture above (before the 
miracle) could be applied to the current non-semantic Web, where 
people write HTML with NL text in it, the machines do things to it, 
and then humans read the text on their screens.  The machines don't 
understand the text, but they can transmit it and even process it in 
certain ways without either understanding it or breaking the meaning 
that it carries from humans to humans.  Meaning isn't something that 
needs to be understood in order to be preserved in a message.

OK, what we can do with SW technology, hopefully, is to have the 
machines do some smarter things than this based on a certain level of 
comprehension of the SW markup.  They can draw conclusions in ways 
that are potentially very useful, according to rules defined relative 
to agreed-on formal semantics.  But the stuff they manipulate might 
well have other 'meanings' for the humans (or indeed other software, 
for that matter) at either end, and we don't have to insist that this 
other meaning, if it exists, is accessible to the SW engines.  All we 
need to know is that the SW engines don't break it: and to do that, 
all we need to know is that this meaning, whatever it is, is 
*preserved* under SW inference.

So if your URI means "strength" to you, then as long as the 
inferencing process doesn't break that URI, it can still mean 
"strength" to someone who speaks your language and reads it.  If the 
formal ontologies that manipulate the OWL containing that URI are all 
*compatible* with your human-level comprehension of that URI, 
everything is fine.  They don't have to actually be up to the task of 
grokking all of your meaning.  So the human audience at the far end 
can look up your URI and find your essay about the true nature of 
strength, and be enlightened as to your true meaning; and the poor 
dumb machines can access any OWL stuff that they can find that uses 
your URI and draw some mundane low-level conclusions which only 
reflect a pale shadow of the full meaning.  But as long as the OWL 
isn't actually *incompatible* with your intended meaning, everyone 
should be happy.
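
A rough sketch of that last scenario, again in Python with rdflib. 
The essay URL, the rdfs:domain axiom, and the data triple are all 
invented for illustration; the label is Tom's example from above. 
The machine draws one mundane conclusion from the formal markup, and 
the human-facing label and pointer on the URI come through inference 
untouched, so a reader at the far end can still follow them to the 
essay and get the full meaning.

# Hedged sketch, not a real system: the essay URL, the domain axiom, and the
# data triple are invented.  Assumes Python with the rdflib package.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://www.example.org/ontology/")

g = Graph()
# Human-facing: a label and a pointer to John's essay on strength.
g.add((EX.strength, RDFS.label, Literal("John Black's notion of Strength")))
g.add((EX.strength, RDFS.seeAlso,
       URIRef("http://www.example.org/essays/strength.html")))
# Machine-facing: one formal axiom and one data triple.
g.add((EX.strength, RDFS.domain, EX.Person))
g.add((EX.employee42, EX.strength, Literal("remarkable")))

# One pass of the rdfs:domain rule: if ?p rdfs:domain ?C and ?s ?p ?o,
# then ?s rdf:type ?C.  A mundane, low-level conclusion.
inferred = []
for p, _, c in list(g.triples((None, RDFS.domain, None))):
    for s, _, _o in list(g.triples((None, p, None))):
        inferred.append((s, RDF.type, c))
for triple in inferred:
    g.add(triple)

assert (EX.employee42, RDF.type, EX.Person) in g   # the machine's pale shadow
assert (EX.strength, RDFS.seeAlso,                 # the human's pointer, intact
        URIRef("http://www.example.org/essays/strength.html")) in g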

Now, a parenthetical remark about the above.  I've used the 'meaning' 
word here rather loosely, which is dangerous.  It's very important to 
realize that the message doesn't 'carry' the meaning.  All it does is 
send enough stuff so that a sufficiently elaborate representation of 
the meaning can be constructed in the head of the recipient.  Every 
native speaker carries around inside their own head the meanings of 
all the words of their native language.  When we talk to one another 
we don't transmit *meanings*: we transmit *words*.  We already have 
the meanings, or the words would convey nothing.  So as long as the 
words are transmitted undamaged, the meaning will be conveyed to 
anyone or anything competent to read it.  You don't need a 
meaning-pipe to send the meanings through.
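
For what it's worth, the same point in a few lines of rdflib, with 
invented data: what travels between parties is just the symbols, and 
if they arrive undamaged, whatever meaning the sender and the reader 
attach to them has not been lost either.

# Tiny sketch, invented data: serialize a graph, "send" the text, parse it
# back, and check that nothing was lost in transit.  Assumes rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.compare import isomorphic
from rdflib.namespace import RDFS

EX = Namespace("http://www.example.org/ontology/")

original = Graph()
original.add((EX.strength, RDFS.label, Literal("John Black's notion of Strength")))
original.add((EX.employee42, EX.strength, Literal("remarkable")))

wire_text = original.serialize(format="turtle")   # the "words" that get sent

received = Graph()
received.parse(data=wire_text, format="turtle")

# Equal up to blank-node renaming: the symbols arrived undamaged, so whatever
# meaning sender and reader attach to them is conveyed - no meaning-pipe needed.
assert isomorphic(original, received)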

Pat Hayes

>Perhaps you could translate each of the above methods into something
>that a machine could do.
>
>John Black
>
>>
>>  Cheers,
>>
>>  Tom P
>>
>>


-- 
---------------------------------------------------------------------
IHMC	(850)434 8903 or (650)494 3973   home
40 South Alcaniz St.	(850)202 4416   office
Pensacola			(850)202 4440   fax
FL 32501			(850)291 0667    cell
phayes@ihmc.us       http://www.ihmc.us/users/phayes

Received on Thursday, 8 April 2004 13:53:16 UTC