Re: [TED] Action-188, ISSUE: production rule systems have "difficulty" with recursive rules in RIF Core

> 
> > > > This would be a ridiculous and unjustified restriction.
> > > > 
> > > > The core is for exchange. There is no requirement for any concrete
> > > > system to properly include the core. (Don't confuse concrete systems with
> > > > RIF dialects.)
> > > 
> > > I'm not sure where the disagreement or misunderstanding here is.
> > > 
> > > My understanding fits with what Gary said, that RIF Core is a dialect
> > > and it's a part of every RIF dialect, so every rule engine using RIF
> > > must implement RIF Core.    
> > 
> > I think that this requirement makes no sense and, furthermore, is
> > meaningless.  Suppose people want to exchange aggregate-free subsets
> > of SQL 1992 through RIF.  Does it mean that RIF core should be limited
> > to relational algebra?  Or does it mean that we will kick them out
> > even though they can perfectly use RIF core to exchange their stuff
> > (preserving semantics etc.) we will somehow stop them until they
> > implement full RIF Core?
> >
> > (Note that different SQL vendors have various deviations from SQL 1992
> > (even though most of them claim to support it!), so such an exchange is not
> > completely out of question.)
> 
> I'm confused.  Why don't you propose an alternative conformance clause,
> one that makes more sense to you than my strawman, and we'll go from
> there.
> 
> Some guidance about writing conformance clauses (which I'll re-read now)
> is at http://www.w3.org/TR/qaframe-spec/ .
> 
>       -- Sandro

I think we may be talking/thinking about different things.

I am not concerned with conformance clauses right now, but rather with
defining what might be a reasonable set of features (for lack of a better
word) that should allow us to call something a core or a dialect extending
the core.

Here is what I think a dialect might be:

     1. A well-defined theory with a generic syntax and semantics.
     	By "generic" I mean an adaptation of one of the accepted
     	syntaxes for that theory.
	The semantics is preferably model-theoretic, but doesn't have to be
	(as discussed at the F2F in November).

     2. The theory must be maximal in the sense that no artificial
     	restrictions should be imposed. (I realize that "artificial"
	might be subjective.)

     3. The purpose of a dialect is to enable exchange between theories
     	that are roughly built upon the same semantics.
	By "same" semantics I do not mean that the theories are equivalent.
	For instance, Horn rules with recursion and without recursion
	are built on the same semantics.

	To enable exchange between system A and system B, a dialect, D,
	does not need to be a subset of both systems. In fact, it must be a
	superset. It is also possible that A and B are mapped to different
	parts of D. In this case, full exchange might not be possible.
	This will be detected when RIF mappings are applied.

	Requiring that a dialect D must be a subset of both A and B in
	order to enable exchange (as suggested in some of the emails)
	doesn't make sense to me. If A properly contains D then we *know*
	that A can't be exchanged through D. In that case, the author of an
	A-based ruleset should start looking for a bigger dialect, D'.
	Or, maybe, the author should identify a syntactic subset of A such
	that this subset maps into D proper.
	Therefore, existing systems should not determine what goes into a
	dialect. This should be determined by the formalizations underlying
	those theories. (And, as I said, formalizations must not be
	circumscribed unnecessarily.)
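To make the point about recursion in item 3 concrete, here is a toy sketch (all names and the representation are invented for illustration, not part of any RIF proposal): one naive forward-chaining procedure computes the least fixpoint for recursive and non-recursive Horn rules alike, which is what "built on the same semantics" amounts to.

```python
# Sketch: recursive and non-recursive Horn rules evaluated by the same
# least-fixpoint procedure.  Atoms are tuples; capitalized strings are
# variables.  Purely illustrative; the encoding is made up.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, fact, env):
    """Try to extend env so that pattern matches fact; None on failure."""
    if len(pattern) != len(fact):
        return None
    env = dict(env)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if env.get(p, f) != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def consequences(rules, facts):
    """All derivable atoms: the least fixpoint of the rules over the facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            envs = [{}]
            for atom in body:
                envs = [e2 for e in envs for f in known
                        if (e2 := match(atom, f, e)) is not None]
            for env in envs:
                derived = tuple(env.get(t, t) for t in head)
                if derived not in known:
                    known.add(derived)
                    changed = True
    return known

facts = [("parent", "ann", "bob"), ("parent", "bob", "cal"),
         ("sibling", "ann", "sue")]

# Non-recursive: aunt(Z,X) :- parent(Y,X), sibling(Y,Z).
nonrec = [(("aunt", "Z", "X"),
           [("parent", "Y", "X"), ("sibling", "Y", "Z")])]

# Recursive: anc(X,Y) :- parent(X,Y).  anc(X,Z) :- parent(X,Y), anc(Y,Z).
rec = [(("anc", "X", "Y"), [("parent", "X", "Y")]),
       (("anc", "X", "Z"), [("parent", "X", "Y"), ("anc", "Y", "Z")])]

print(("aunt", "sue", "bob") in consequences(nonrec, facts))  # True
print(("anc", "ann", "cal") in consequences(rec, facts))      # True
```

The engine makes no distinction between the two rule sets; only termination behavior and implementation strategy differ between systems, not the underlying semantics.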

Now, the core is supposed to be an intersection of all dialects. This
intersection should not be understood in the syntactic sense. The core
should contain the means to enable the definition of dialects, but it can't
be a syntactic subset of all of them. Its semantics should, however, map
into the dialects so that each dialect can be viewed as a semantic
extension of the core.

   a. Why the core cannot be an intersection. Consider RDFS as one dialect
      and standard first-order logic as another. (Assume RDFS is represented
      not as triples but by binary predicates, as David R wants.)
      What would be a syntactic intersection of these two? Since the
      standard syntax of first-order logic does not allow mixing predicates
      and constants, the syntactic intersection would be a language where only
      constants are allowed (no predicates!).

      If we represent RDF/S using triples and binary predicates (for
      subclassing, etc.) then the intersection will be a language where
      only a couple of ternary and binary predicates are allowed (and no
      rules!).

      I hope these arguments show that treating the core as an intersection
      is not a good idea.

   b. What does it mean that the core maps semantically into a dialect, D?
      Formally it means the following. 
      Let R be a set of rules (incl facts) in a dialect, D (which includes
      the core). Let D(R) denote the set of all atoms that are consequences
      of R in D according to D's semantics.

      Then we say that the core maps semantically into D iff there is a
      1-1 (but not necessarily onto) mapping M that maps every formula of
      the core to a formula in D such that for every set of rules (and
      facts) R in the core, M(C(R)) = D(M(R)).
      (By the above, C(R) means the set of all atomic consequences of R in
      the core and D(...) is the set of all atomic consequences in D.)

      This definition can obviously be used to define what it means for one
      dialect to semantically extend another.

      Note: in my definition I require M(C(R)) = D(M(R)), but maybe
      \subseteq
      instead of = is also reasonable -- need to see if somebody has good
      examples to justify \subseteq.
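The shape of this definition can be sketched in a toy, propositional setting (everything here is invented for illustration: the "core" and "dialect" are stand-ins, not actual RIF languages, and M is the trivial identity embedding, under which the equality M(C(R)) = D(M(R)) holds immediately):

```python
# Hypothetical sketch of the condition M(C(R)) = D(M(R)) for
# propositional Horn rules.  All names are made up for illustration.

def closure(rules, facts):
    """Atomic consequences of rules+facts under forward chaining."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

M = lambda x: x  # identity embedding of core formulas into the dialect D

# A core ruleset R plus facts.
R = [("q", ["p"]), ("r", ["q"])]
facts = {"p"}

C_of_R = closure(R, facts)                       # consequences in the core
D_of_MR = closure([(M(h), [M(b) for b in body])  # consequences of M(R) in D
                   for h, body in R],
                  {M(f) for f in facts})

print({M(a) for a in C_of_R} == D_of_MR)  # True: M(C(R)) = D(M(R))
```

A non-trivial M (say, translating core atoms into a dialect's triple notation) would exercise the definition more seriously; the point here is only the shape of the equation.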

So, how do we determine if something is a dialect, and what should the core
look like? The above are just a set of basic guidelines. People who propose
a dialect (or the core) should explain what the theory behind these
things is and (in the case of dialects) how they semantically extend the core
(and possibly other dialects).
Then we (RIFWG) discuss, criticize, amend, etc.

michael

> 
> > > 
> > > We'll need some normative Conformance text at some point, something a
> > > bit like:
> > >    http://www.w3.org/TR/owl-test/#consistencyChecker
> > > 
> > > We could say something like (as a rough first cut):
> > > 
> > >      A "RIF Core Rule Engine" is a rule engine which can perform sound
> > >      and complete reasoning on any rule set which can be encoded in one or
> > >      more RIF Core documents.  It must be able to answer all queries
> > >      against the deductive closure of the ruleset, where a query is
> > >      equivalent to a RIF Core antecedent, and to answer a query means to
> > >      provide every matching set of bindings to the variables in the
> > >      antecedent. 
> > > 
> > > At the moment, unless some new information comes along, I'm inclined to
> > > agree that we need to leave recursive Horn rules out of the core.
> > > 
> > > My understanding is that recursive Horn rules are also a problem for
> > > prolog.  As with rete systems, there are lots of clever and effective
> > > ways of dealing with this problem (I was once an enthusiastic XSB user),
> > > but my sense is that they are still kind of cutting edge instead of the
> > > kind of dirt simple we want in RIF Core.  With non-recursive rules, one
> > > can do the trivial mapping to prolog or rete rules and any halfway
> > > decent engine will be a sound and complete reasoner for RIF Core rules.
> > > I think that's what we want.
> > > 
> > > We could go another step back for RIF Core, all the way to datalog, but
> > > I think non-recursive terms are still quite useful (eg for defining
> > > uncle), so I'd rather not do that.
> > > 
> > >    -- Sandro
> > > 
> > 
> 
> 

Received on Sunday, 17 December 2006 17:11:57 UTC