- From: Pat Hayes <phayes@ai.uwf.edu>
- Date: Tue, 16 Apr 2002 20:02:14 -0400
- To: Steven Gollery <sgollery@cadrc.calpoly.edu>
- Cc: www-rdf-logic@w3.org
>Drew,
>
>I'd like to try to put this in terms that don't involve eccentric 19th
>century British mathematicians and characters from Greek mythology

Spoilsport.

>Let me know if I've got it right, okay?
>
>At some point any system of logic and semantics must rely on the existence
>of something outside itself. In Charlie's case, the connection between
>Premise (as a concept) and Conclusion (also as a concept) exists, not in
>the ontology, but in the inferencing engine -- in fact, it's part of the
>definition of what an inference engine is. Rules about rules eventually
>come to an end, and then the inferencing engine has to rely on some
>"innate" built-in behavior to process them.
>
>The same thing is true of any attempt to define "meaning" in an ontology:
>it only works if we assume that whatever is processing the ontology
>(whether the processor is human or software) already possesses an
>underlying context for that meaning. Ontologies (and ontology definition
>languages) cannot be self-contained.
>
>Steven Gollery
>CADRC

Yes on rules, no on meanings. This is probably getting too philosophical
for rdf-logic, but since the topic has come up, and since the above
viewpoint is often expressed or assumed, it might be worth laying it to
rest.

There are two issues here, and we should keep them distinct. There is the
'rules are needed to draw conclusions from rules' fallacy, which Carroll
was having fun with. As you point out, that bottoms out in inference rules
(which are not axioms), or if you like, in working software (or, if you
like to think of that as itself consisting of syntactic rules, in the
underlying virtual machine; and if you keep on going down, then eventually
in hardware, and you have been passed from Charles Dodgson to Turing and
von Neumann). Then there is the 'formal rules are needed to specify
meanings' fallacy. That bottoms out in model theory.
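The point that rule-following bottoms out in working software can be made concrete with a toy forward chainer (a hypothetical sketch, not anything from this thread; the facts and rules are invented): the rules themselves are data, but the modus ponens step that applies them is ordinary code, not yet another rule in the data.

```python
# Toy forward-chaining engine. Rules are represented as data
# ("if these premises hold, conclude this"), but the act of *applying*
# a rule is not stated as a meta-rule: it is this loop. Illustrative
# example only; the facts and rules are invented.

rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is a man", "socrates is mortal"}, "socrates will die"),
]
facts = {"socrates is a man"}

changed = True
while changed:  # iterate to a fixpoint
    changed = False
    for premises, conclusion in rules:
        # Modus ponens lives here, built into the engine itself:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

No rule in the list says "apply the rules"; the regress stops at the loop, which is exactly the sense in which inference bottoms out in working software.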
Model theory - modern semantics - does NOT depend on some processor having
an underlying 'context' for the meaning. It does not depend on processors
at all: it is defined by a mathematically specified relationship between a
representation and some aspect of the world that determines its truth.

Now, that relationship might itself be formalized in a precise language,
but it is important not to get confused about the role of this semantic
meta-language. It does not itself *provide* the meanings of the first
language: rather, it *describes* the relationship between the first
language and the world, which itself encodes the meaning. The relationship
*is* the meaning, in a sense, and all the language of the model theory
does is provide a way to describe that meaning-defining relationship. The
difference is important. For example, it is sometimes said that because
model theory is couched in set-theoretical language, it therefore assumes
that meaning is set-theoretic: but this is a misunderstanding, like
arguing that because a bridge is described using differential equations it
must be made of mathematics.

Model theory makes some assumptions about the world, of course, in order
even to be stated (e.g. that the world contains individual things that
have properties), but those are not embodied in any particular 'context':
they are the assumptions that the formal language itself is predicated on,
the assumptions that are built into its very structure. If they were
false, then the formal language would be meaningless, and the semantic
theory agrees with that judgement.

Ever since Tarski wrote his seminal paper in 1938, semantics really has
been, in a strong sense, objective. Tarski rescued meaning from the
infinite regress of 'contexts' just as reliably as computers rescue
inference from Carroll's infinite regress of meta-rules.
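A Tarski-style interpretation can likewise be sketched in a few lines (an illustrative toy with invented predicate and individual names, not part of the thread): an interpretation is just a mathematical object pairing a domain with extensions for predicates, and satisfaction is determined by the interpretation and the formula alone, with no 'context' anywhere in the definition.

```python
# Minimal Tarski-style semantics for a toy language of monadic atoms,
# negation, and conjunction. The interpretation maps each predicate to
# its extension (the set of individuals it is true of); truth is a
# relation between a formula and an interpretation, nothing more.
# All names here are invented for illustration.

interpretation = {
    "domain": {"socrates", "plato", "fido"},
    "Man":    {"socrates", "plato"},
    "Dog":    {"fido"},
    "Mortal": {"socrates", "plato", "fido"},
}

def satisfies(interp, formula):
    """Formulas: ('atom', pred, individual), ('not', f), ('and', f, g)."""
    op = formula[0]
    if op == "atom":
        _, pred, individual = formula
        return individual in interp[pred]
    if op == "not":
        return not satisfies(interp, formula[1])
    if op == "and":
        return satisfies(interp, formula[1]) and satisfies(interp, formula[2])
    raise ValueError(f"unknown operator: {op}")

print(satisfies(interpretation, ("and",
                                 ("atom", "Man", "socrates"),
                                 ("atom", "Mortal", "socrates"))))  # True
```

The Python here is only describing the satisfaction relation, in the same way the set-theoretic meta-language does; the relation itself is a mathematical fact about the interpretation and the formula.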
Pat Hayes

>Drew McDermott wrote:
>
>> [Charlie Abela]
>> A question that has been haunting me these days is how, if
>> possible, to match within an ontology the premise and its
>> conclusion. Inferencing must play a role here, but still there has
>> to be some declared form of connection between the two.
>>
>> [me]
>> I don't understand this part of your e-mail. Can you elaborate?
>>
>> [Charlie]
>> I mean the following:
>>
>> Every rule will have a means of declaring its premises and
>> conclusion, as in the example listed earlier. Now assume that some
>> form of reasoner is going to be used. Given a premise (such as one
>> containing a triple), the reasoner must match it with a premise or
>> premises in a particular rule and infer its conclusion. How will
>> this inference come about? I am not sure how this process should
>> be handled. Should there be some property in a basic rules
>> ontology that connects a premise to a particular conclusion? Sort
>> of:
>>
>> If
>>   Premise A
>> Then
>>   Conclusion B
>>
>> And in the basic ontology it would be defined in some way that:
>> Premise leadsTo Conclusion. So the inference engine, upon being
>> given Premise A, will try to find a rule that matches this Premise
>> and then infer its Conclusion.
>>
>> Hope I am not making a mess out of this and have explained the
>> issue more clearly.
>>
>> [me]
>> It sounds like you have fallen prey to the "Achilles and Tortoise"
>> fallacy described by Lewis Carroll. Perhaps someone can post a
>> pointer to an online copy of it, you can read it, and Enlightenment
>> will settle over you.
>>
>> The fallacy is to suppose that because a rule says "From P conclude
>> Q," there must be another rule somewhere that says "If a rule says
>> 'From P conclude Q', and you have concluded P, then you must
>> conclude Q." An infinite regress suddenly yawns before us.
>>
>> The most remarkable example of the fallacy I ever came across was
>> in an otherwise good book about programmed cell death and other
>> biological wonders, where the author made the suggestion that
>> somewhere deep inside DNA there is a message saying "Reproduce!"
>>
>> -- Drew McDermott

---------------------------------------------------------------------
IHMC                                    (850) 434 8903   home
40 South Alcaniz St.                    (850) 202 4416   office
Pensacola, FL 32501                     (850) 202 4440   fax
phayes@ai.uwf.edu
http://www.coginst.uwf.edu/~phayes
Received on Tuesday, 16 April 2002 20:02:17 UTC