- From: Pat Hayes <phayes@ihmc.us>
- Date: Tue, 6 Apr 2004 19:38:24 -0500
- To: "John Black" <JohnBlack@deltek.com>
- Cc: <public-sw-meaning@w3c.org>
>> From: Pat Hayes [mailto:phayes@ihmc.us]
>> Sent: Tuesday, April 06, 2004 1:22 PM
>> To: John Black
>>
>> >> From: Bijan Parsia [mailto:bparsia@isr.umd.edu]
>> >> Sent: Monday, April 05, 2004 9:02 PM
>> >>
>> >> On Apr 5, 2004, at 8:24 PM, John Black wrote:
>> >>
>> >> > A consideration, relevant to these discussions, is the degree of
>> >> > control desired by the author over the meaning of a semantic web
>> >> > document.
>> >>
>> >> And the degree of control possible.
>> >>
>> >> Plus, the degree of control *other* people have over *my* document,
>> >> if I use "their" URIs.
>> >
>> > Control over meaning is possible and frequent. For example, """In the
>> > following triple, let x = "JohnBlack"; let y = "is"; and let z = "correct".
>> > Here is the triple: :x :y :z. """ My meaning for x, y, and z is clear,
>> > unambiguous, and undisputable due to the stipulative definitions I provided.
>>
>> But you didn't, and it isn't.
>
> Yes I did, and I have. I said my meaning for x, y, and z is clear.

Clear to who, or to what? OWL and RDF are intended for use by software
reasoning engines, not by people. Typical software reasoning engines (1)
don't understand English and (2) are about as intelligent as a hamster.
And in any case, you are wrong. You claimed that your assignment is
unambiguous and undisputable. There is no such thing as an unambiguous
and undisputable assignment of meaning in natural language.

> I did not say I had defined "correct". It is you who has overreacted to
> the english meanings of the strings, all I said was z = "correct".

Perhaps I misunderstood. Did you mean that the 'meanings' assigned were
those literal quoted character strings? If so, your triple is not legal
RDF. If you use URIrefs (as you must for at least the property) then the
meaning has to be the intended denotation of the URIref, and its
associated property extension, which is probably too large to list (and
may well be infinite).

>> What exactly do you mean by "correct", for example? (Do you have a
>> correctness ontology somewhere?) And is there only one John Black on
>> the entire planet? So which of them are you referring to? etc.
>>
>> More generally, setting out to define OWL meanings by reference to
>> English words is hopeless. It is impossible to capture the full
>> meaning of any English word in any piece of OWL; but on the other
>> hand, the precision of meaning that can be achieved by OWL exceeds
>> that of the most careful English statement. Formalisms like OWL treat
>> URI meanings like very thin needles, whereas English words are like
>> wide, flexible plastic sheets. They are just incomparable.
>
> But how then does one thread this tiny, little needle?

Good question. By writing axioms, is the standard answer (read 'ontology'
for 'axiom'). But there are other ways. If there is some fixed system of
referential constructs one can anchor to those for some purposes (SS
numbers for US citizens, numerals for numbers), and the Web itself
provides several such. But this only gets to referents, not meanings. Of
course one can 'ground' the terminology in various ways, in some cases to
English words or Wordnet synsets: but this kind of grounding is always
approximate and vague. To see some of the complications, take a look at
the number of distinct meanings that CYC needs to assign to a single
English word. My favorite is the 15 or so senses of 'cover' that it has
been obliged to distinguish (cover with hair, cover with a blanket, cover
with paint, cover with an umbrella...).
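To make the 'writing axioms' answer above concrete, here is a rough
sketch in N3 of what separating two of those senses might look like. The
URIs and axioms are invented purely for illustration; they are not taken
from CYC or from any published ontology, and a real ontology would need
to say a great deal more:

    # Example namespace, invented for this sketch only.
    @prefix ex:   <http://example.org/senses#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # One URI per sense of the English word 'cover'.
    ex:coversWithBlanket a owl:ObjectProperty ;
        rdfs:domain ex:Blanket ;
        rdfs:range  ex:PhysicalObject .

    ex:coversWithPaint a owl:ObjectProperty ;
        rdfs:domain ex:LayerOfPaint ;
        rdfs:range  ex:Surface .

    # The axioms, not the English gloss, keep the senses apart.
    ex:Blanket a owl:Class .
    ex:LayerOfPaint a owl:Class .
    ex:Blanket owl:disjointWith ex:LayerOfPaint .

Each sense gets its own URI, and only the axioms attached to that URI
constrain what it can mean; the English word 'cover' does no work at all
once the URIs exist.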
It's obvious what 'in' means, right? Well, no, it's not: linguists have
managed to distinguish between 10 and 100 distinct meanings, and if you
include metaphorical uses this goes up one or two orders of magnitude.
Just about every English word is multiply ambiguous when viewed from an
ontological-precision perspective.

> The way you are describing it there is almost no way to get meaning
> into the system at all. I'm not talking about the meaning *preserving*
> inferential machinery. What I'm talking about is how do you put the
> meaning into the URIs and how do you get it out.

I really don't understand what you are talking about here. Meaning isn't
a kind of liquid: it doesn't get put in or taken out. You don't put
meaning into URIs. They are just labels to hang assertions on.

> How does one encode a URI with meaning so that it can be used at all?
> And how does one decode it on the other end?

You don't decode it. You draw conclusions about it, and maybe use it in
GET protocols.

> Obviously, you can't.

Right.

> Meanings happen to intelligent agents when they encounter a symbol and
> know what the intention of the sender of it was, by some sort of prior
> agreement, and so interpret that symbol correctly.

Agents can never do that. Human beings can't do that: that is a fantasy.
Think about it: how could the prior agreements ever get off the ground?
You have to use language to come to an agreement in the first place.
Exactly what goes on in NL understanding by people is not fully
understood, to put it mildly, but it's way more subtle and complicated
than this simple Wittgensteinian picture. But in any case, never mind NL
for the moment: have you ever looked at the code of a software agent? Is
it even remotely plausible to say that things like this can know what
human intentions are? I speak as an AI enthusiast, but with a realistic
vision of what we can achieve.

> And this hardly seems incomparable to natural language. What else in
> the universe could be more similar?
>
>> > It would be silly for you to come along and say, """I dispute that
>> > what you meant by x was "JohnBlack", In fact, what you meant by x
>> > was "BijanParsia"! """ Of course, the truth value of the overall
>> > triple is debatable, in either case, but *my* meaning is under *my*
>> > control.
>>
>> Your intended meaning may indeed be under your control (though I bet I
>> can produce doubt in your mind by a bit of Socratic questioning), but
>> your intentions are kind of irrelevant since you can't transmit them
>> over a Web. What you can transmit is OWL, and the meaning of THAT is
>> determined by the OWL model theory. That's all that you have. Now try
>> doing some stipulation.
>
> No, I use OWL to transmit my intentions,

Dream on. OWL doesn't communicate intentions.

> and it has nothing to do with english or natural language or even OWL
> for that matter. If I send you a Java class file, coded to carry out my
> intentions on your machine, and you have a Java virtual machine, which
> can interpret it, then my meaning has been transmitted. There is
> nothing mysterious about that.

The behavior of your code has been transmitted. If all you ever want to
communicate about is the behavior of code, then yes, you can do that. But
meanings are typically not computable things. Ontologies are not
typically about domains that satisfy the second recursion theorem.

> When I use OWL, my meaning is NOT determined by the model theory any
> more than the behavior of my Java class is determined by the Java
> language specification.

But surely the behavior of your class is determined, in part, by the Java
spec. If the spec had been written differently then your code would work
differently, if run on a correct interpreter. Right now, according to the
specs, all the meaning you will ever be able to transmit using OWL over a
network is determined by the OWL model theory. YOUR meaning may be
anything at all, but that's in your head. I have no access to that
meaning. All I can see is the OWL that you send me, and all I know about
it is what the specs tell me. Go read the specs and see what they say
about meaning.
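To see what 'determined by the model theory' amounts to in practice, here
is a toy example in N3; the URIs are, again, made up purely for
illustration:

    @prefix ex:   <http://example.org/demo#> .
    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Suppose the OWL/RDF you send me asserts just these two triples:
    ex:JohnBlack rdf:type        ex:Editor .
    ex:Editor    rdfs:subClassOf ex:Person .

    # Then every interpretation that satisfies them also satisfies this
    # one, so any conforming RDFS reasoner is entitled to conclude it:
    ex:JohnBlack rdf:type ex:Person .

That last triple is licensed by the published semantics whether or not it
corresponds to anything in your head; it, and conclusions like it, are
the only 'meaning' a receiving engine ever gets to work with.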
<snip>

>> > Perhaps we could say authorship, and compare it to citation. I can
>> > quote your paper, but I should put it in quotes and include a citation
>> > so my readers can look it up and read for themselves what you meant.
>> > This respect for your meaning, by citing your entire paper, does not
>> > prevent me from using quotes from your paper. In fact, it makes it
>> > more valuable. I don't have to redo all your definitions, propositions,
>> > or arguments. I can use your conclusions and cite the rest.
>>
>> Look, NONE of these metaphors are relevant. The entire business of
>> handling formal ontologies and asking about their meaning is quite
>> unlike the business of communication between speakers of a natural
>> language.
>
> Yes, they are ALL relevant. You make working with ontologies sound like
> some mysterious black art.

Well, it's not black, but it is a technology with a track record and a
reasonably elaborated theoretical background. It's not just some
brand-new blogger's fantasy dreamt up a few months ago to make the Web
better.

> If using an ontology is that hard, what use is it?

I don't have time to explain. Go read a book.

> And again, what else in the universe is more similar?
>
>> If it's like anything, it would be more like a kind of primitive
>> telepathy but between intellects about the size of a dormouse.
>
> Right, and that is because the meaningful uses of intelligent agents is
> initiated by humans,

No, that is flat wrong. Humans write the code, but they don't initiate
the agent's actions. That is the entire point: to have software take
initiatives and act on our behalf.

> and in the end, it is still humans that interpret the final results for
> their meaning.

Sigh. I've met this attitude before. I'd suggest actually reading
something about AI and software agents before going further in this
debate. At the very least, don't assume that the 'meaning' for the agent
is what it would be for a human playing the part of the agent. And don't
assume that all that an agent can do is to report back to a human.

> In a way, it seems to me that all meaning is social. Who is ready to
> come out and say that their agent really "understands" the meaning of
> the RDF it parses? Isn't the truth still that all of our agents are far
> less capable than even a dormouse of interpreting the meaning of
> anything? So who here is doing the semantic work of the semantic web?

These dumb agents are. So we can't expect miracles. All the same, these
dumb things can get a lot of useful work done, if we design them
properly.

<snip>

>> You are here presuming a social setting in which the stipulative
>> definition has a certain role. But this social setting is not
>> applicable to web ontology traffic, so your point loses its force.
>
> Yes I am presuming a social setting, and rightly so, in my opinion, so
> my point doesn't lose it's force.
> I don't think we are at a point of letting our agents loose to go their
> own way, inventing OWL all by themselves.

I didn't say they would be inventing OWL. But the agents USING the OWL
will indeed be going their own way, "loose" as you so graphically put it.
The interesting aspects of meaning for the SW are those which are
utilized by the agents in these lonely milliseconds where they have to
decide what to infer and what to do, and they don't have time to wait for
us to tell them.

> We still give it meaning and we interpret the meaning so given it. (Us
> humans I mean).

No doubt humans will do that. They will also do things like cluck with
disapproval when they see pornography, things like that. But that is
largely irrelevant to the *operation* of the Sweb, since none of that
'social meaning' is visible to the software. So there's no point in
discussing it here. And what we do have to discuss isn't like this at
all. Think about it: if we follow your line here, the Sweb is just the
Web, but it has people at BOTH ends. People (who write the OWL) are the
web servers and other people (who interpret the outputs of the software
agents) are the web clients. If this is correct, the Sweb is less
technologically interesting than the old Web. What we are all trying to
build is something more sophisticated and newer, where software agents
and humans interact in new ways. It really isn't like a huge NL
conversation between equals.

>> It is always possible for one agent to disagree with another. The Web,
>> however, unlike English, provides a way to follow any name to its
>> source, which allows readers to determine which sources are more
>> definitive than others. If English were like this, we could all, just
>> by reading a word, know everything that its inventor wished us to
>> know about the meaning he had in mind. In this case, the entire
>> apparatus of stipulative definitions would be largely unnecessary:
>> rather than set out to attach a special meaning to an existing word,
>> one would just use a new word. English of course is not like this,
>> so we have a different way to share meanings which depends on
>> flexible social agreements which at times need to be explicitly
>> denied or over-ridden. However, to repeat, the semantic web is not
>> like a bunch of people talking in English.
>
> And I too will repeat, I can't think of anything that is more similar,
> in spite of how immensely far we have to go.

Well, you might respond to the substantive point made here about URIs
being a very special class of 'names' quite different from NL names. This
seems to me to be relevant independently of one's social-meaning
position.

Pat

>> >> Pat Hayes
>>
>> >> Cheers,
>> >> Bijan Parsia.
>>
>> > **If you really feel your position is being misrepresented, I
>> > apologize, and I will look for particular citations to refer to, or
>> > stand corrected.
>> >
>> > John Black.
>>
>> --
>> ---------------------------------------------------------------------
>> IHMC                     (850)434 8903 or (650)494 3973   home
>> 40 South Alcaniz St.     (850)202 4416   office
>> Pensacola                (850)202 4440   fax
>> FL 32501                 (850)291 0667   cell
>> phayes@ihmc.us           http://www.ihmc.us/users/phayes

--
---------------------------------------------------------------------
IHMC                     (850)434 8903 or (650)494 3973   home
40 South Alcaniz St.     (850)202 4416   office
Pensacola                (850)202 4440   fax
FL 32501                 (850)291 0667   cell
phayes@ihmc.us           http://www.ihmc.us/users/phayes
Received on Tuesday, 6 April 2004 20:38:27 UTC