- From: William Van Woensel <william.vanwoensel@gmail.com>
- Date: Sat, 15 Dec 2018 16:20:02 -0400
- To: <public-n3-dev@w3.org>
- Message-ID: <002901d494b3$8fc890d0$af59b270$@gmail.com>
Hi Doerthe,

>> My remark relates to the fact that, technically, it would be possible to record why a particular rule is not firing (as you mention near the end). For instance, a particular premise was not matched, or there were no shared variables between particular premises. It would be quite trivial to do this in, e.g., a forward-chaining reasoner (using RETE). But I make a moot point, because it seems to be a bit out of scope of what we mean to do (?)

> Mmm, maybe I really didn't get your point. In your example you would then need to explicitly mark the rule you are wondering about (since I expect you to have many rules which do not fire here), and you should also know on which triple you want it to fire, since there will be many triples it doesn't fire on. But if you are already far enough to know which rule should fire on which triple, I doubt that you still need a proof; you need some kind of debugging mechanism instead which can show you what exactly happens in a rule. I would not see that as a proof. As proof I really only see the mathematical proof of my result. So, if there is no result, I also do not have this kind of proof.

For sure, perhaps the term "proof" is misplaced – call it a debugging mechanism, why-did-the-bloody-rule-not-fire, why? why??, ... This mechanism would help to answer what particular part of the rule failed, and why (e.g., no matching triples, no joins possible, ...). While debugging rules, I tend to start throwing out rule premises, making premises more general, ... until the rule works, and then work my way back to figure out which part failed. Having a mechanism in place to help with debugging rules would be pretty great.

William

From: Doerthe Arndt <doerthe.arndt@ugent.be>
Sent: December-14-18 12:28 PM
To: public-n3-dev@w3.org
Subject: Re: First meeting of our group

Hi William,

Proofs are there to explain derivations that were made, not derivations which were not made. As a very simple example, suppose you have the triple :my :nice :triple.
Now, you have a rule which you expect to be applicable to it, but that one has a typo: {?x :nic ?y} => {?x :nicer ?y}. So, you would expect the reasoner to come to the conclusion :my :nicer :triple., but this derivation will not happen because you wrote :nic instead of :nice. In a bigger setting, it would be difficult to find this particular problem. If you now ask your reasoner to give you proofs for all instances of ?x :nicer ?y (i.e., you give Cwm, EYE or whatever you use a goal like {?x :nicer ?y} => {?x :nicer ?y}), the result above will simply not be derived, and therefore there will be no proof. The reasoner will not give you a proof of everything it tried and failed at (we do not even know whether it tried to apply this rule; that depends on the reasoning algorithm: simple forward chaining would ignore this rule here, backward chaining would try to apply it), since proofs reflect valid steps of a calculus applied to your data set.

> My remark relates to the fact that, technically, it would be possible to record why a particular rule is not firing (as you mention near the end). For instance, a particular premise was not matched, or there were no shared variables between particular premises. It would be quite trivial to do this in, e.g., a forward-chaining reasoner (using RETE). But I make a moot point, because it seems to be a bit out of scope of what we mean to do (?)

Mmm, maybe I really didn't get your point. In your example you would then need to explicitly mark the rule you are wondering about (since I expect you to have many rules which do not fire here), and you should also know on which triple you want it to fire, since there will be many triples it doesn't fire on. But if you are already far enough to know which rule should fire on which triple, I doubt that you still need a proof; you need some kind of debugging mechanism instead which can show you what exactly happens in a rule. I would not see that as a proof.
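[Editorial note: the "why did the rule not fire" report discussed above can be sketched in a few lines. This is illustrative Python only, not how cwm, EYE, or a real RETE engine is implemented; the triple encoding and the `match`/`fire` helpers are invented for this sketch.]

```python
# Toy one-rule forward-chaining step that reports WHY a rule did not fire.
# Triples are plain 3-tuples; variables are strings starting with "?".

def match(pattern, triple, env):
    """Unify one triple pattern with a ground triple under bindings `env`."""
    env = dict(env)
    for p, v in zip(pattern, triple):
        if p.startswith("?"):
            if env.get(p, v) != v:
                return None          # conflicting variable binding
            env[p] = v
        elif p != v:
            return None              # constant mismatch
    return env

def fire(premises, conclusion, data):
    """Apply one rule; return (derived triples, why-it-failed or None)."""
    envs = [{}]
    for prem in premises:
        envs = [e2 for e in envs for t in data
                if (e2 := match(prem, t, e)) is not None]
        if not envs:
            # The "why did the rule not fire" report: name the failing premise.
            return [], f"premise {prem} matched no triple"
    return [tuple(e.get(x, x) for x in conclusion) for e in envs], None

data = {(":my", ":nice", ":triple")}

# The rule with the typo: {?x :nic ?y} => {?x :nicer ?y} -- nothing derived,
# but the report points at the premise that matched nothing.
print(fire([("?x", ":nic", "?y")], ("?x", ":nicer", "?y"), data))
# The corrected rule derives :my :nicer :triple.
print(fire([("?x", ":nice", "?y")], ("?x", ":nicer", "?y"), data))
```

Such a report is exactly the debugging artifact under discussion: it is not a proof of anything, but it names the part of the rule that failed.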
As proof I really only see the mathematical proof of my result. So, if there is no result, I also do not have this kind of proof.

Kind regards,
Doerthe

William

From: Doerthe Arndt <doerthe.arndt@ugent.be>
Sent: December-13-18 5:30 AM
To: William Van Woensel <william.vanwoensel@gmail.com>; public-n3-dev@w3.org
Subject: Re: First meeting of our group

> Proofs will also certainly help for debugging, but they rather help in cases where unexpected derivations happen.

+1!

> When I expect a result which I then do not get, there is also no proof to check what happened (no proof for something which is not true).

Why not? .. (aside from technical considerations)

Proofs are there to explain derivations that were made, not derivations which were not made. As a very simple example, suppose you have the triple :my :nice :triple. Now, you have a rule which you expect to be applicable to it, but that one has a typo: {?x :nic ?y} => {?x :nicer ?y}. So, you would expect the reasoner to come to the conclusion :my :nicer :triple., but this derivation will not happen because you wrote :nic instead of :nice. In a bigger setting, it would be difficult to find this particular problem. If you now ask your reasoner to give you proofs for all instances of ?x :nicer ?y (i.e., you give Cwm, EYE or whatever you use a goal like {?x :nicer ?y} => {?x :nicer ?y}), the result above will simply not be derived, and therefore there will be no proof. The reasoner will not give you a proof of everything it tried and failed at (we do not even know whether it tried to apply this rule; that depends on the reasoning algorithm: simple forward chaining would ignore this rule here, backward chaining would try to apply it), since proofs reflect valid steps of a calculus applied to your data set.
Kind regards,
Doerthe

W

From: Doerthe Arndt <doerthe.arndt@ugent.be>
Sent: December-12-18 11:09 AM
To: public-n3-dev@w3.org
Subject: Re: First meeting of our group

Dear Sandro, all,

> My own interest is in making it practical (even fun and easy) to base real applications on open-world data (that is, RDF or other KR data), so that we end up with a full data-sharing ecosystem. It has long seemed like rules were a promising approach: rather than having to code around all the possible forms the input data could take, we simply write the appropriate rules and let the system match them to the input data whenever/however possible. I've built a variety of systems like this, and in my experience, the promise has not worked out terribly well. Rules are very, very hard to debug. Hopefully things will improve some day.

I am used to Prolog and wonder whether the simple debugging predicates and modes of, for example, SWI-Prolog could help here. The problem with debugging in Prolog (and I guess in any other rule language as well) is that it depends on the reasoning algorithm. In Prolog this is resolution with the left-most selection rule. Since we know that, we can also easily (OK, debatable, but I think it is easy) see where reasoning got stuck in case of problems. But I do not think that we should completely fix the reasoning mechanism we expect for N3 (for example, we could be flexible with the selection rule and hope for different approaches); we should concentrate on the logic instead.

Proofs will also certainly help for debugging, but they rather help in cases where unexpected derivations happen. When I expect a result which I then do not get, there is also no proof to check what happened (no proof for something which is not true).
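[Editorial note: the "see where reasoning got stuck" idea above can be illustrated with a toy backward chainer that tries subgoals with a Prolog-like left-most strategy and logs every subgoal it attempts, as a crude stand-in for a Prolog trace. The encoding below is invented for this sketch and is not any real reasoner.]

```python
# Toy backward chainer that records every subgoal it tries, so a failed
# query at least shows where the search got stuck. Illustration only.

facts = {(":my", ":nice", ":triple")}
# One rule, written with the typo from the running example:
# {?x :nic ?y} => {?x :nicer ?y}, encoded as head predicate -> body predicates.
rules = {":nicer": [":nic"]}

def prove(goal, trace):
    """Try to prove the ground triple `goal`; append each subgoal tried."""
    trace.append(goal)
    if goal in facts:
        return True
    s, p, o = goal
    for body_pred in rules.get(p, []):       # left-most body subgoal first
        if prove((s, body_pred, o), trace):
            return True
    return False

trace = []
ok = prove((":my", ":nicer", ":triple"), trace)
print(ok)      # the goal is not provable
print(trace)   # the last subgoal tried, :my :nic :triple, is where it got stuck
```

This is the kind of information a forward-chaining proof output never contains, since a failed branch is not part of any proof.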
> N3, as implemented in cwm, was an attempt to attack the problem by one person (with some advice from a few others -- I don't mean to minimize DanC's role, but I think he'd agree the vision was Tim's) who had no experience with existing rule-based systems or familiarity with the literature, so the design ended up rather idiosyncratic. At the other end of the spectrum, RIF was an attempt to attack this problem by a committee which included experts in many different kinds of rule-based systems, so it ended up rather complicated and full of options. I joined Tim's group a few weeks after cwm started, in late 2000, and was staff contact for RIF from chartering <https://www.w3.org/2005/rules/wg/charter> through Recommendation <https://www.w3.org/TR/rif-overview/>. I might be the only person involved in both projects. I'd love to see something simple and elegant enough that it actually catches on. I highly doubt that will be classic N3, in all its self-referential glory (the "implies" operator is just an RDF predicate!), or RIF, in all its complexity.

I agree with both points and hope to find a practical solution here. I also think that we should consider the work already done in the past on N3 and RIF, so I am glad that you share your experience.

> If this group just wants to nail down N3, more or less as implemented in cwm (and perhaps EYE), then more power to it, I wish it well. I'll try to follow the progress from time to time. On the other hand, I rather hope it ends up evolving into a language which really fits the needs of application developers. That would be awesome.

I also think that the goal of the group should be to find a practical rule language which fits the needs of developers. Nevertheless, I think that we should not be too ambitious either, to be sure not to deviate from our goal of specifying a standard.
So, maybe we start with N3 as it is and "fix" what, according to the experience of the people in this group, needs to be present in a rule language? I hope that you can help us here with all the lessons learned from the two other groups.

>> * Together with the formalisation we should also discuss whether we provide a reference implementation together

> In general, in standards work, reference implementations are not a great idea. It's much better, if possible, to have multiple independent implementations and a shared test suite. Every substantive decision by the group -- that is, a decision that might affect whether some implementation conforms to the spec -- as well as every normative statement in the spec should be recorded as multiple tests added to the test suite, showing the correct way for reasoners to behave. For a rule language, positive-entailment and negative-entailment tests are most of what you probably want. Of course there are also positive-syntax and negative-syntax tests. See, for example, https://www.w3.org/TR/rif-test/

Thank you for your advice. So, let's simply drop the point of a joint reference implementation then, and write some test cases instead.
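[Editorial note: shared positive-/negative-entailment test cases of the kind suggested above could look roughly like this. The case format and the deliberately naive `closure` reasoner are invented for this sketch, loosely modelled on the idea behind the RIF test suite; a real suite would run the same cases against multiple independent implementations.]

```python
# Sketch of positive- and negative-entailment test cases for a rule
# language, using the thread's running example. Toy reasoner, not a spec.

def closure(data, rules):
    """Naive forward chaining to a fixpoint. Each rule (p, q) stands for
    the N3 rule {?x p ?y} => {?x q ?y}."""
    data = set(data)
    while True:
        new = {(s, q, o) for (p, q) in rules for (s, pp, o) in data if pp == p}
        if new <= data:
            return data
        data |= new

rules = [(":nice", ":nicer")]

# Each case: (input data, goal triple, expected entailment).
cases = [
    # Positive-entailment test: the rule fires on :my :nice :triple.
    ({(":my", ":nice", ":triple")}, (":my", ":nicer", ":triple"), True),
    # Negative-entailment test: with the :nic typo in the data, it must not.
    ({(":my", ":nic", ":triple")}, (":my", ":nicer", ":triple"), False),
]

for data, goal, expected in cases:
    assert (goal in closure(data, rules)) == expected
print(f"{len(cases)} entailment tests passed")
```

The point of the exercise is that every implementation runs the same cases; agreement on the test suite replaces agreement on one reference implementation.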
Regards,
Doerthe

--
Dörthe Arndt
Researcher Semantic Web
imec - Ghent University - IDLab | Faculty of Engineering and Architecture | Department of Electronics and Information Systems
Technologiepark-Zwijnaarde 19, 9052 Ghent, Belgium
t: +32 9 331 49 59 | e: doerthe.arndt@ugent.be
Received on Saturday, 15 December 2018 20:20:27 UTC