From: pat hayes <phayes@ai.uwf.edu>

Date: Mon, 9 Dec 2002 16:23:20 -0600

Message-Id: <p05111b1fba1a8eca965f@[10.0.100.86]>

To: Sandro Hawke <sandro@w3.org>

Cc: www-rdf-rules@w3.org, timbl@w3.org, pfps@research.bell-labs.com


>[ thread turning hard left, towards greater precision; hang on... ]

And this response is highly delayed, but I hope it will be helpful.

>
>> > A logical reasoner knowing one particular rule (which I think is
>> > "modus ponens": A, A->B |- B) will do the same thing given "p->q" as a
>> > rules engine would if it were given "p |- q". In fact, a reasoner
>> > using any other sound and sufficiently-complete set of inference rules
>> > would act the same given "p->q" as a rules engine would with "p |- q".
>>
>> This is definitely not true. A rules engine with p |- q will not conclude
>> anything from not q, whereas a complete reasoner for the propositional
>> calculus will conclude not a from not b and a implies b. [corrected]
>
>Yes, I understand this. My term "sufficiently-complete" was meant to
>indicate to readers familiar with modus tollens, resolution, and other
>common inference rules that I was aware there were such rules but
>thought they would cloud the issue more than clarify it.
>
>What I meant was that the logical reasoner would produce everything
>the rules engine would produce _and possibly more_. I'm sorry for not
>saying that.
>
>> > More generally, if you turn the "|-" into "->" in any collection of
>> > rules, and feed the rules to a reasoner using at least modus ponens,
>> > the reasoner will produce (eventually) the same results as the rules
>> > engine.
>>
>> Again, not true.
>
>But you would agree with this:
>
>   More generally, if you turn the "|-" into "->" in any collection of
>   rules, and feed the rules to a reasoner using at least modus ponens,
>   the reasoner will produce (eventually) all the results you would get
>   from a rules engine (and possibly more).

Probably, though it's hard to be exact, since the term 'rule engine' isn't
exactly defined. But this is not a surprising fact, since the 'rules' used
by a rule engine ARE logical implications.

It seems to me that this entire discussion has taken a bad turn somewhere
and gotten confused over a pun.
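[The p |- q versus p -> q point can be made concrete with a toy example.
This Python sketch is not from the original thread; the forward chainer and
the literal names are purely illustrative. A rules engine that uses p |- q
only one way around concludes nothing from "not q", while a reasoner that
also licenses the contrapositive of p -> q recovers the modus tollens step.]

```python
# Toy forward-chaining 'rules engine': each rule is a (premises, conclusion)
# pair, used strictly one way around, like p |- q.
def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["p"], "q")]  # the rule p |- q

# Given only "not q", the rules engine concludes nothing new:
print(sorted(forward_chain(rules, ["not q"])))   # ['not q']

# A complete reasoner for p -> q also licenses the contrapositive;
# adding it as an extra rule recovers the modus tollens inference:
contrapositive = [(["not q"], "not p")]
print(sorted(forward_chain(rules + contrapositive, ["not q"])))
# ['not p', 'not q']
```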
The term 'rule' in this context, ie as in 'rule system' and 'Rule-ML' and
so forth, is NOT the same notion as the term 'rule' used in logic to mean
'inference rule'. Inference rules define proof-construction processes (one
common formal characterization of an inference rule is as a function on
proofs); 'rules' in logic programming ARE logical implications, so of
course they tend to look rather like logical implications. (They get called
'rules' when they are used only one way around, or are limited to some
particular pattern of usage, and so correspond more closely to what is
traditionally thought of as a computational process; but a restricted-usage
implication is still an implication.)

The part of Prolog, or a Prolog-like rule engine, which corresponds to a
logical inference rule isn't the Horn clauses, but something deep in the
guts of the Prolog interpreter or rule engine: it's the machinery which
stitches the 'rules' together to form actual inferences, the part that is
typically compiled into hard code, the part that gets 'called' when you
actually run the rules. In the case of Prolog, for example, it's
essentially a depth-first backward-chaining unit-preference resolution
*inference rule* which is at the heart of the actual Prolog engine.

So your question at the start of this thread seems to me to be a non
sequitur: n3 is both a rules language and a logical language, because
'rules languages' ARE logical languages. That is the central insight of
logic programming. Remember Kowalski's old slogan, from the very beginning
of the LP movement: algorithm = logic + control. Prolog-style rules are
pieces of logic - implications - with some control information in effect
built into the rule engine; which, in turn, is why rules typically do not
*fully* capture the logical meaning, since the rule engines, by design,
will not perform the full gamut of inferences that would completely
characterize that meaning.
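[The "algorithm = logic + control" slogan can be sketched in a few lines.
This is an editorial illustration, not code from the thread: the rules below
are the logic (plain Horn implications), and the depth-first, goal-directed
search strategy is the control, in the spirit of a miniature Prolog.]

```python
# Minimal goal-directed (backward-chaining) prover for propositional Horn
# rules. The rules are pure implications; the depth-first recursion is the
# 'control' that a Prolog-like engine compiles into its guts.
def prove(goal, rules, facts, seen=frozenset()):
    if goal in facts:
        return True
    if goal in seen:  # loop check, so circular rules don't recurse forever
        return False
    # try every rule whose head is the goal, depth-first
    return any(all(prove(p, rules, facts, seen | {goal}) for p in body)
               for head, body in rules if head == goal)

rules = [("q", ["p"]), ("r", ["q"])]   # p -> q, q -> r, used backwards
facts = {"p"}
assert prove("r", rules, facts)        # goal-directed: chains r <- q <- p
assert not prove("s", rules, facts)    # nothing supports s
```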
(Eg, they might refuse to perform modus tollens reasoning, which would be
like running the rules 'backwards'. Some of them *only* do modus tollens
reasoning and refuse to run the implications forwards: Prolog is like that,
it's often called being 'goal-directed'. Mind you, if you phrase the whole
logic in terms of Gentzen sequents, then the forwards/backwards duality
just becomes a matter of which side of the sequent you see yourself as
working on.)

There are all kinds of interesting things to discuss here, like tweaking
the semantics of a rule system so that the rules *do* fully capture those
tweaked meanings; but none of this is helped by getting the notion of a
rule confused with an inference rule, or by invoking the deduction theorem
in a shallow justification of a mistaken analogy.

>
>?
>
>> > Let's apply this to n3. cwm is a rules engine; TimBL thinks of
>> > log:implies as constructing a rule. For example, he is happy to write
>> >
>> >    # n3 example 1
>> >    { :a :b :c }
>> >    log:implies
>> >    { { :d :e :f }
>> >      log:implies
>> >      { :g :h :i }
>> >    }.
>> >
>> > and describe that as one rule which, if triggered by the presence of
>> > ":a :b :c" will cause the addition of a second rule.

Right, that is exactly a 'controlled' implication: it interprets
log:implies in effect as 'logically implies, but only use modus ponens
reasoning on me'. You take logic and control it - restrict and direct the
logical search space - and you get something that looks and feels and acts
like an algorithm. But, to repeat, none of this has got anything much to do
with inference rules or the deduction theorem.

>> > That second
>> > rule, if triggered by the presence of ":d :e :f" will cause the
>> > addition of ":g :h :i".
>>
>> Here is one of the problems with N3. There is no indication in N3 of
>> whether log:implies is supposed to be material implication, a proof
>> rule, or some other logical connective (such as, for example, some sort
>> of relevance implication).
>
>That's exactly what I'm trying to settle here. The meta-question is
>how much we need to settle it, if the material conditional and a proof
>rule are indistinguishable in some large set of situations.

In the sense of 'rule' I think you want (the one in which RuleML and rule
engines are about rules) there is no fundamental difference. The use of
"rule" rather than "implication" is a signal that you are not planning to
use it in the context of a logically complete reasoner (or, possibly, that
you are using it relative to a more 'computational' semantics, such as
minimal-model semantics, relative to which it is complete; but then you
ought to think hard about whether you still want to call it the material
conditional, maybe).

>Or to put that differently, if Tim wants to think of log:implies as
>constructing proof rules, can I define it as the material conditional
>without actually causing any problems?

Yes, provided that you remember that either the reasoner is incomplete, or
else it's not exactly conventional material implication (because the
semantics have been altered). Either way, you can't expect to get ALL the
valid inferences which the material conditional might be thought to
support.

>
>> > On the other hand, if we treat log:implies as a logical implication
>> > operator, it's clear you can rewrite the sentence above as:
>> >
>> >    # n3 example 2
>> >    { :a :b :c. :d :e :f }
>> >    log:implies
>> >    { :g :h :i }
>> >
>> > which has the same final effect. (You might worry the inner rule
>> > will only be inferred in the rules-style, not in the logic-style.
>> > That's not really true; a reasoner using sufficiently-complete
>> > inference rules would also infer from example 2, given ":a :b :c" that
>> > "{:d :e :f} log:implies {:g :h :i}".

You are just playing with degrees of completeness here. A full logically
complete reasoner will be able to convert any pattern of conjunctions and
implications to any equivalent one.
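[The equivalence behind rewriting example 1 into example 2 is the classical
exportation law: a -> (d -> g) is logically equivalent to (a & d) -> g. This
editorial sketch, with purely illustrative variable names, checks it by
brute force over all truth assignments.]

```python
from itertools import product

# Material implication as a Boolean function.
def implies(x, y):
    return (not x) or y

# Check exportation: a -> (d -> g)  ==  (a and d) -> g, on all 8 rows.
for a, d, g in product([False, True], repeat=3):
    nested = implies(a, implies(d, g))   # shape of n3 example 1
    flat = implies(a and d, g)           # shape of n3 example 2
    assert nested == flat

print("a -> (d -> g) and (a & d) -> g agree on every assignment")
```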
Most machine systems avoid the rather silly combinatorial explosion this
would produce by putting them all into a normal form (such as, say, clausal
form), which is the logical equivalent of suppressing leading zeros. Most
of these variations can be settled by specifying an input form for
'problems' to be posed to the reasoner. Can you ask it whether one
implication or set of implications is equivalent to another (ie whether two
rule systems are equivalent)? Or can you only ask it, given a set of
implications, whether some basic assertion follows from that set (the usual
case, corresponding to 'running' the rules)?

>
>
>> Again this depends on what the meaning of log:implies is.
>
>I believe TimBL considers it to be a proof rule constructor, and I'm
>exploring whether it wouldn't be better considered to be the material
>conditional.

You are both right.

>I think he prefers the proof-rule approach as being more
>general (and probably also as more obvious to many prospective users
>and implementors; and perhaps as higher performance).

Those are poor reasons for that choice, but never mind.

>I now believe it's only more general when you use it indirectly, as a
>kind of meta-logic. Doing that is fine, but it doesn't mean that the
>rules-language is more expressive; rather it means it's enough less
>expressive that you find yourself wanting to use it as a meta-logic to
>define the stuff you really want.

Meta-logic??? Sigh. It is really tiresome, I have to say, watching y'all
scurrying around reinventing something that not only was invented about 60
years ago, but is one of the most elegant and thoroughly explanatory
accounts of language and meaning ever devised. Why are you reinventing
marbles when we have had wheels for so long? We KNOW what logic is like,
what its properties are, even how to implement it pretty well. It's simpler
than almost any other language invented by man (eg considerably simpler
than XML). Why not just USE it?? OK, end of howl.
Back to the real world.

>
>> > Is there any way to distinguish what cwm does from what a sound and
>> > complete FOL reasoner would do with the same inputs? I don't think
>> > so.
>>
>> I do not know how cwm works. However, my guess is that it is incomplete
>> with respect to the propositional calculus.
>
>Absolutely. So a sound and complete FOL reasoner would output all
>that cwm would output, and quite possibly more. As long as the
>language does not include negation, however, I doubt there would be
>much of interest in that additional output.

Well, that depends on what you find interesting. Here are some candidates:

1. Given a set of 'rules' and another set of 'rules', say whether or not
   they will give the same answers to all queries.
2. Given a set of rules and a query, say whether that query will not get
   answered by those rules.
3. Given a set of rules and another set of rules, and thinking of each of
   them as defining a transformation on expressions (ie you input the
   expression and run the rules and output the resulting expression(s),
   all conjoined together, or disjoined if you are running the rules
   backwards), do they define the same transformation on all expressions?
4. As 3, but for all expressions of a certain form?
5. As 3, but do they define inverse transformations, ie does
   exp --rules1--> exp' --rules2--> exp'' always have exp = exp'', or
   maybe exp entailed by exp''?

>I'd like to characterize that more precisely at some point.
>
>> > If one were using such a reasoner, one might be tempted to add
>> > negation to the language (because there's no additional cost), which
>> > could be interesting and useful. But that's a different issue.
>> >
>> > My conclusion is this: we can treat log:implies as either creating an
>> > inference rule ( |- ) or as the material conditional ( -> ); from the
>> > outside no one will be able to tell the difference.
>>
>> Not true, or, at least, not true without the presence of some other
>> machinery.
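[Candidate question 1 above can be made concrete in a tiny propositional
setting. This is an editorial sketch, not from the thread: it assumes a
fixed finite vocabulary of atoms so that brute-force comparison of closures
is decidable, and calls two rule sets 'equivalent' when forward chaining
from every possible input fact set yields the same closure.]

```python
from itertools import chain, combinations

# Forward-chaining closure of a set of (body, head) Horn rules over facts.
def closure(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return frozenset(facts)

# Two rule sets give the same answers to all queries iff they have the
# same closure on every subset of the (assumed finite) vocabulary.
def equivalent(rules1, rules2, atoms):
    subsets = chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1))
    return all(closure(rules1, s) == closure(rules2, s) for s in subsets)

atoms = ["p", "q", "r"]
r1 = [(["p"], "q"), (["q"], "r")]
r2 = [(["p"], "q"), (["q"], "r"), (["p"], "r")]  # adds a derivable shortcut
r3 = [(["p"], "r")]
assert equivalent(r1, r2, atoms)      # the shortcut rule is redundant
assert not equivalent(r1, r3, atoms)  # r3 never derives q
```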
>
>Let's say the inference rule data is going to be used as rules by a
>rules engine, and the material conditional data is going to be used as
>data by a rules engine using only modus ponens. Will you get the
>same output?

The term 'rules engine' isn't well-enough defined to answer this. You have
to get more precise; and when you do, the answer will probably be kind of
obvious.

>
>In other words
>
>   for all formulas x, e(mp + r(n), x) = e(mp, f(n) & x)
>
>where
>
>   n    -- an n3 document containing only log:implies rules
>   r(n) -- a set of rules produced from n by considering log:implies to
>           define rules
>   f(n) -- a logical formula produced from n by considering
>           log:implies to be the material conditional
>   mp   -- the rule modus ponens
>   res  -- the resolution rule (which is sound and complete for FOL)

Be careful. Resolution is complete for detecting unsatisfiability of sets
of clauses. It is not deductively complete (ie it does not satisfy the
deduction theorem: if A |= B it does not follow that you can derive B from
A by resolution. What you CAN do is derive null from the clausal form of
(A and not B), but that's not the same thing.) Also, you are here confusing
'rule' with 'inference rule', see above. For example, there isn't any rule
engine that can input resolution as a rule and apply it (what would it use
to apply it *with*? Prolog? C++?)

>   +  -- operator to conjoin two sets of rules
>   &  -- operator to conjoin two logical formulas

Different senses of 'conjoin'? (If not, why do you use different symbols?)

>   e(R, I) -- the set of formulas produced by some rules engine
>              using rule R on input data I. (Or perhaps I should
>              call this the closure of applying rules R on formula
>              I, to get away from performance and implementation
>              issues.)
>
>This is not exactly what I meant in the previous message; I'd
>forgotten the mp on the left.
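[The refutation-completeness point can be shown in miniature. This
editorial sketch (propositional only, not from the thread) demonstrates
that to show A |= B with resolution you do not derive B from A: you derive
the empty clause from the clausal form of (A and not B). Here A is
p & (p -> q) and B is q, giving clauses {p}, {~p, q}, {~q}.]

```python
# Naive propositional resolution: saturate the clause set, reporting
# whether the empty clause (ie 'null') is ever derived.
def resolve_all(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for lit in c1:
                    neg = lit[1:] if lit.startswith("~") else "~" + lit
                    if neg in c2:
                        resolvent = frozenset((c1 - {lit}) | (c2 - {neg}))
                        if not resolvent:
                            return True   # empty clause: unsatisfiable
                        new.add(resolvent)
        if new <= clauses:
            return False                  # saturated without deriving null
        clauses |= new

# Clausal form of A and not-B, where A = p & (p -> q) and B = q:
clauses = [frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})]
assert resolve_all(clauses)   # refutation found, so p & (p -> q) |= q
```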
>
>I also assert that using resolution and f(n), we would conclude everything
>we would conclude from using r(n), and possibly more:
>
>   for all formulas x, there exists formula y such that
>     e(r(n), x) & y = e(res, f(n) & x)
>
>(This gives me a fun idea for a cwm test: write resolution in n3, and
>see what things I can get cwm to prove. I bet I can get cwm to prove
>this isn't the way to implement a fast theorem prover. :-)

I will be very impressed if you can write resolution in n3. Start with the
unification algorithm; I will be impressed just with that.

Pat

-- 
---------------------------------------------------------------------
IHMC                    (850)434 8903   home
40 South Alcaniz St.    (850)202 4416   office
Pensacola               (850)202 4440   fax
FL 32501                (850)291 0667   cell
phayes@ai.uwf.edu       http://www.coginst.uwf.edu/~phayes
s.pam@ai.uwf.edu (for spam)

Received on Monday, 9 December 2002 17:24:00 UTC
