- From: pat hayes <phayes@ai.uwf.edu>
- Date: Wed, 23 Oct 2002 13:29:30 -0500
- To: seth@robustai.net
- Cc: "www-rdf-comments@w3.org" <www-rdf-comments@w3.org>
>Graham Klyne wrote:
>
>>I agree that "no formal inference path" might include non-RDF
>>inferences, and that one might define 'B:oneOfThem' in such a way
>>that there is a formal inference.
>>
>>But, in this case, I think the use of English text in an
>>rdfs:comment to convey the intended meaning makes any formal
>>inference path rather unlikely.
>
>Hmmm ... it seems to me that the formal axioms for both rdfs and
>daml have always been expressed in English rdfs:comment(s) and
>English descriptions in specification documents. What's the
>difference between transcribing those into an axiom used in a formal
>computer inference and translating "This means the same as
>rdfs:subClassOf" into {B:oneOfThem daml:equivalentTo
>rdfs:subClassOf}?

There is no difference. The issue is, WHO is doing the translation
from English to a formalism? If the meaning is supplied in English on
the web, and if the semweb agents are to take account of that meaning,
then THEY must be able to read the English and provide the
translation. If it is part of a spec, then the readers will be the
human software developers. The difference is central.

>
>Behind my quibble is a very important major question. Will the
>culture of the semantic web embrace the idea that people can coin
>their own terms, defining them with formal languages based on
>previously defined RDF terms? Those new terms then become part of
>the language of the semantic web if they gain popular usage, just as
>words become part of our natural language's culture. The inference
>paths on those terms *are just as formal* as the inference paths on
>terms that are exclusively defined in the rdf, rdfs, daml, and owl
>namespaces; the only difference is that the latter are recommended by
>the W3C and the former are not.
>
>Is the W3C really in the business of recommending how we should
>reason? I think not.

No, of course not. You are confusing two completely different issues,
and the confusion is potentially dangerous.
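[A minimal sketch of the point under discussion, not part of the original exchange: once "This means the same as rdfs:subClassOf" is stated formally as {B:oneOfThem daml:equivalentTo rdfs:subClassOf}, a piece of software can draw the inference mechanically, with no English comprehension involved. This uses a toy triple store and hand-rolled rules, not any real RDF toolkit; the example class names (ex:Dog, etc.) are invented for illustration.]

```python
# Triples are (subject, predicate, object) tuples. Two toy rules:
# (1) if a predicate is formally equated with rdfs:subClassOf, any
#     triple using it entails the corresponding subClassOf triple;
# (2) rdfs:subClassOf is transitive.

SUBCLASS = "rdfs:subClassOf"
EQUIV = "daml:equivalentTo"

def entailments(triples):
    """Compute the closure of a triple set under the two rules."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        # Rule 1: {p daml:equivalentTo rdfs:subClassOf} means
        # (s, p, o) entails (s, rdfs:subClassOf, o).
        equated = {s for (s, p, o) in inferred
                   if p == EQUIV and o == SUBCLASS}
        for (s, p, o) in list(inferred):
            if p in equated and (s, SUBCLASS, o) not in inferred:
                inferred.add((s, SUBCLASS, o))
                changed = True
        # Rule 2: subClassOf transitivity.
        for (a, p1, b) in list(inferred):
            if p1 != SUBCLASS:
                continue
            for (b2, p2, c) in list(inferred):
                if p2 == SUBCLASS and b2 == b \
                        and (a, SUBCLASS, c) not in inferred:
                    inferred.add((a, SUBCLASS, c))
                    changed = True
    return inferred

graph = {
    ("B:oneOfThem", EQUIV, SUBCLASS),        # the coined term's formal definition
    ("ex:Dog", "B:oneOfThem", "ex:Mammal"),  # a triple using the coined term
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
}

closure = entailments(graph)
# The inference path through the coined term is exactly as formal as
# one through a W3C-defined term:
assert ("ex:Dog", SUBCLASS, "ex:Animal") in closure
```

Had the definition been given only in an rdfs:comment in English, no such closure computation would be possible for the software agent — which is the distinction Pat draws below.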
What the W3C is doing here is providing a framework whose primary
purpose is to enable just the kind of social 'trade' in meaning that
you want, in a very simple way. You seem to think that providing the
underlying framework is tantamount to Thought Control, and nothing
could be further from the truth.

But the essential technical point that you seem to fail to grasp is
that this framework is intended for use by *PIECES OF SOFTWARE*, not
by human beings. Of course the software is written by, and acts in the
name of, and to further the aims of, human beings: but the actual
detailed work of trading meanings and drawing conclusions on the
semantic web is intended to be done by software agents, not by people.

Human beings bring an incredible amount of mental machinery to bear on
the task of understanding the intended meanings expressed in the
utterances of other human beings, and much of this machinery has been
produced by evolution over hundreds of millions of years. Even then,
it takes around 15 years of constant training (which we call
childhood) to be really proficient at this task; and still, it may
depend crucially on built-in biological commonalities which underlie
all human languages (nobody really knows whether humans can learn
arbitrary languages). There isn't a hope in hell of our being able to
automate this kind of ability in software in any of our lifetimes, so
we cannot rely on it as a basic tool for the semantic web.

Pat

>http://robustai.net/papers/Monotonic_Reasoning_on_the_Semantic_Web.html
>
>Seth Russell

-- 
---------------------------------------------------------------------
IHMC                    (850)434 8903   home
40 South Alcaniz St.    (850)202 4416   office
Pensacola               (850)202 4440   fax
FL 32501                (850)291 0667   cell
phayes@ai.uwf.edu       http://www.coginst.uwf.edu/~phayes
Received on Wednesday, 23 October 2002 14:29:19 UTC