RE: First meeting of our group

Hi everyone,

My two cents about what would be needed for a useful rule language: everything Doerthe has proposed, together with options for closing the world. In my personal experience, the lack of this option (closure axioms exist in OWL but are rather limited) is a major bummer when using Semantic Web technology in general (but let's stick with rule languages).

I understand / assume that the choice to allow only an open-world assumption relates directly to the "Web" part, i.e., one cannot reasonably rule out that another Web data source does in fact have the missing statements.

But there are so many practical cases where a closed world would make much more sense, with its support for universal quantification and negation-as-failure. I'm not arguing for or against the open-world assumption (I'm sure it has its uses, such as inferring statements to make assumptions in the open world explicit), but rather for making constructs available that better allow one to close the world. In recent work related to clinical decision support, I had to resort to writing a "NOWA" class that implemented a closed world to support universal quantification and complementary classes (I used OWL2 RL to realize a subset of OWL2 DL). In that particular setting, where all data was locally available, these features made a lot of sense.
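To make the closed-world point concrete, here is a minimal sketch (illustrative names and data, not the actual clinical system) of negation-as-failure over a local triple set: a pattern is taken to be false exactly when no matching triple exists, which is only sound because we treat the local graph as complete.

```python
# A tiny local graph of (subject, predicate, object) triples.
# Treating this set as *complete* is the closed-world assumption.
graph = {
    ("task1", "a", "Task"),
    ("task2", "a", "Task"),
    ("task1", "hasState", "completed"),
}

def naf(pattern_fn):
    """Negation-as-failure: succeeds iff no triple in the local graph
    matches -- sound only because the graph is assumed complete."""
    return not any(pattern_fn(t) for t in graph)

# "Every task with no recorded state is pending" -- a rule that only
# makes sense once the world is closed.
pending = [
    s for (s, p, o) in graph
    if p == "a" and o == "Task"
    and naf(lambda t, s=s: t[0] == s and t[1] == "hasState")
]
print(pending)  # task2 has no hasState triple, so it is inferred pending
```

Under the open-world assumption the same query could conclude nothing, since the missing hasState statement might simply live elsewhere on the Web.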

In that system there was also a need to retract previous statements, which I similarly implemented using application code (making it a non-monotonic system, I suppose; I could also have used the “remove” and “drop” builtins provided by Apache Jena). In particular, it had to update certain statements once a condition was met (e.g., moving a task to a new state involved retracting its previous state and asserting its new state, which in this case boiled down to changing the statement object).
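The retract-then-assert pattern I mean can be sketched in a few lines (again with illustrative names): moving a task to a new state removes the old hasState triple and adds the new one, so the store is non-monotonic -- a later state of the graph is not a superset of an earlier one.

```python
# Start with one task in the "active" state.
graph = {("task1", "hasState", "active")}

def set_state(graph, task, new_state):
    # Retract any previous hasState statement for this task...
    graph -= {t for t in graph if t[0] == task and t[1] == "hasState"}
    # ...then assert the new one (changing the statement's object).
    graph.add((task, "hasState", new_state))

set_state(graph, "task1", "completed")
print(graph)
```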

Nevertheless, from my previous discussion with Doerthe:

  { ?c owl:intersectionOf ?l. @forAll :cl. ?l x:member :cl. :cl a ?t. ?y a ?t } => { ?y a ?c }.
 
  Your example would not work because it requires that whatever you find in the whole Semantic Web needs to be in the list ?l (@forAll :cl. ?l x:member :cl.). The reason for that is that the quantifier is in the antecedent of the rule. You basically say "if every cl is a member of l ... then" and not what you would like to have, "for every member cl of l".

  … What you can do is set a scope, saying something like "for all cl mentioned in a certain document"; this is something we can test.
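The intended scoped reading is easy to show procedurally (all names here are illustrative stand-ins for the N3 example): quantify only over the members of a known, finite list rather than over everything in the Web, which makes the universal check decidable by simple iteration.

```python
# The intersection list ?l, fully known and finite (the "scope").
members_of_l = ["Doctor", "Researcher"]

# The classes that ?y is known to belong to.
types_of_y = {"Doctor", "Researcher", "Person"}

# "?y a ?c" holds if ?y has *every* type in the intersection list --
# universal quantification whose scope is just the list.
y_in_intersection = all(cl in types_of_y for cl in members_of_l)
print(y_in_intersection)  # True
```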

It seems that at least RIF-PRD includes the construct of negation-as-failure <https://www.w3.org/TR/2013/NOTE-rif-ucr-20130205/> . 



William

-----Original Message-----
From: David Booth <david@dbooth.org> 
Sent: December-11-18 8:32 PM
To: Sandro Hawke <sandro@w3.org>; public-n3-dev@w3.org
Subject: Re: First meeting of our group

Hi Sandro,

Excellent background!  A question though . . .

On 12/11/18 1:08 PM, Sandro Hawke wrote:
> It has long seemed
> like rules were a promising approach: rather than having to code 
> around all the possible forms the input data could take, we simply 
> write the appropriate rules and let the system match them to the input 
> data whenever/however possible.  I've built a variety of systems like 
> this, and in my experience, the promise has not worked out terribly well.
> Rules are very, very hard to debug. 

On one hand, the complexity and difficulty of working with rule systems has been observed by many people, for a long time.  As Jesús Barrasa commented (slides 34 & 35): "No one likes rules engines --> horrible to debug / performance"
https://www.slideshare.net/neo4j/graphconnect-europe-2017-debunking-some-rdf-vs-property-graph-alternative-facts-neo4j

On the other hand, we all need to munge RDF data.  At the most fundamental level, an inference rule is merely a procedure that takes some RDF assertions and produces new RDF assertions (the entailments) -- in essence a function from RDF to RDF.  Many of us have been using a variety of RDF-to-RDF transformation techniques that fall well outside of traditional rules languages and rules engines, including SPARQL, ShEx and general-purpose programming languages such as Python, Java and JavaScript.  I -- and I assume others -- have used these alternative techniques for three reasons: (a) lack of a convenient, standard rules engine; (b) greater expressiveness (loops, for example) and control; and (c) the programmer/maintenance economy of using the same language for multiple tasks.

Because of the potentially fatal performance and complexity impact of rules, I think it is important to be able to carefully control exactly which inference rules are applied, and where and how.  This, I think, is a flaw of OWL reasoners.

Also, I think it may be wise to separate forward-chaining rules from backward-chaining rules, except in those (rare?) cases where a single rule definition can easily be run either way.  The reason for this is that I -- as a developer -- generally know exactly when/why I want to apply my rules, and if I am using an "alternative" rules language such as SPARQL or JavaScript then the implementation would be completely different for forward versus backward rules.
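The implementation gap between the two directions is easy to see with the same subclass rule sketched goal-directed (illustrative names only): forward chaining above derives all consequences eagerly, whereas backward chaining starts from one query and recurses through the rule.

```python
data = {("alice", "a", "Doctor"),
        ("Doctor", "subClassOf", "Person")}

def backward_is_a(x, c, triples, seen=None):
    """Goal-directed: is '?x a ?c' provable from the triples?"""
    seen = seen or set()
    if (x, "a", c) in triples:
        return True
    # Run the rule backwards: find c1 with c1 subClassOf c,
    # then try to prove the subgoal "x a c1".
    for (c1, p, c2) in triples:
        if p == "subClassOf" and c2 == c and c1 not in seen:
            if backward_is_a(x, c1, triples, seen | {c1}):
                return True
    return False

print(backward_is_a("alice", "Person", data))  # True
print(backward_is_a("alice", "Robot", data))   # False
```

Nothing in this goal-directed code is shared with a forward-chaining implementation, which is exactly why a rule written in SPARQL or JavaScript cannot simply be "run the other way".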

Returning now to my question: What is your take on whether/how we could achieve both the simplicity of a convenient rules language and the control and power of a general-purpose programming language?

Actually, I guess this question is both for you and anyone else who wants to address it.  :)

Thanks,
David Booth

Received on Wednesday, 12 December 2018 15:14:37 UTC