
A Rule Interchange Format VS a Rule Language for Interoperability and other issues

From: Christian de Sainte Marie <csma@ilog.fr>
Date: Wed, 24 Aug 2005 14:01:36 +0200
Message-ID: <430C61A0.9070602@ilog.fr>
To: public-rule-workshop-discuss@w3.org

(Sorry for the long posting, and for the at least partially obvious 
content: I am trying to clarify my understanding as much as to advance 
the discussion. I hope this helps.)

Ideal interoperability is achieved when everybody uses the same 
language, which has to be an acceptable language for most purposes and 
the best possible one (or a very good one, at least) for some specific 
purposes, both from an expressiveness and from a computational point of 
view.
The problem is, of course, that the specific purposes for which the 
language must be very good differ depending on whom you ask, so that we 
end up having to specify a language that is the best possible one for 
all purposes.

This is a Grail-like quest for research. Even if a WG was able to 
specify a good enough all-purpose language in an acceptably short time, 
I do not believe that users and vendors would just switch to it and 
rewrite their legacy applications/rule bases/engines in a snap (one 
difference between an ontology language and a rule language, here, is 
the importance of legacy).

So, even if a reasonably good all-purpose language is a worthy goal, I 
do not see how it can be the practical solution to the users' immediate 
problem of having a vendor-neutral -- more precisely: engine- or rule 
management system-neutral -- way to store and share their rules 
(including publishing/retrieving them on the Web).

On the other hand, practical interoperability can be achieved, although 
in a more limited way, when applications can retrieve the rules that are 
of interest to them, in a language that they can translate into their 
own executable language.

Not all applications will be able to use all rules, but not all rules 
are of interest to all applications either. I do not think that we want 
to prescribe which (kind of) rules are of interest to which (kind of) 
applications, nor which applications can use which rules: if we wanted 
to do so, it would probably be simpler to define different standards 
specific to the different kinds of rules/applications/usages (one of the 
costs would be that it might preclude unforeseen, innovative usages; it 
would certainly fragment the Web, at least).

A practical objective thus seems to be to define an interlingua onto 
which everybody can map their own language, and from which everybody can 
map the rules that are of interest to them onto their own language. 
Rules producers and users will then have to decide for themselves what 
are their targets and/or what rules they want/need to use. The 
interlingua can help here with predefined profiles, but they should be 
optional (at least, this is how I understand the profiles in Sandro's 
draft charter).

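To make the interlingua idea concrete, here is a toy sketch (my own illustration, not any proposed RIF design): a rule is held in a neutral, engine-independent form, and small mappers render it into two imagined target syntaxes. All names and syntaxes here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    pred: str
    args: tuple  # variable names

@dataclass(frozen=True)
class Rule:
    head: Atom
    body: tuple  # conjunction of Atoms

def atom_str(a: Atom) -> str:
    return f"{a.pred}({', '.join(a.args)})"

def to_prolog(r: Rule) -> str:
    # Map the neutral form into a Prolog-like syntax: head :- body.
    return f"{atom_str(r.head)} :- {', '.join(map(atom_str, r.body))}."

def to_production(r: Rule) -> str:
    # Map the same neutral form into a production-rule style syntax.
    return f"if {' and '.join(map(atom_str, r.body))} then {atom_str(r.head)}"

rule = Rule(head=Atom("grandparent", ("X", "Z")),
            body=(Atom("parent", ("X", "Y")), Atom("parent", ("Y", "Z"))))
print(to_prolog(rule))      # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
print(to_production(rule))  # if parent(X, Y) and parent(Y, Z) then grandparent(X, Z)
```

The point is only the shape of the arrangement: producers map their language onto the neutral form, and each consumer maps from it onto its own executable language, so nobody needs to understand anybody else's native syntax.
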
I understand that this is what Sandro's draft charter aims at. I do not 
know whether a notation with the expressive power of full FOL with 
equality and a standard semantics (e.g. Tarski-like model theory) is the 
right basis for such an interlingua, but I imagine so, maybe naively, 
because most if not all rule languages derive from it (being, e.g., 
computationally efficient subsets, as Dieter puts it).

A nice thing is that going the RIF way (Rule Interchange Format) does 
not preclude tackling the more ambitious RLI (Rule Language for 
Interoperability) objective in the longer term: besides a short-term 
solution to a growing problem, the RIF would also provide a graceful 
adoption path for an RLI.

Now that we are all convinced that the RIF way is the practical way to 
go first (you are convinced, aren't you? :-), the nature of the issues 
changes slightly:
- FOL or not FOL: nobody says that full FOL with equality is 
computationally efficient, practical as a rule language or whatever. The 
actual question is: is there a language with a well-established 
semantics and enough expressive power that most practical rule 
languages can map onto it easily and completely enough? FOL seems like a 
natural candidate to me, but I do not really care (nor should anybody, 
since we are not talking about using it as a rule language, but only as 
the basis for an interchange format. I mean, nobody will have to see it 
or use it, only translate to and from it). If FOL is not a good 
candidate (as Jim at least hinted), I would be interested to understand 
why; and we would also need to find out what would be a good candidate;
- NAF, SNAF etc: As Michael points out, applications/engines/rule bases 
(the ones I am used to, at least) really rely on scoped negation as 
failure, even if the scope is implicit in most of the cases (it is also 
obvious, in most of the cases). So, the actual question is not so much 
whether NAF, SNAF or whatever is in scope, but rather: how should the 
RIF deal with the scope of negations, and with the consequences when 
that scope is not explicit, obvious, or even well-defined?
- Non-monotonicity: most applications/engines/rule bases that rely on 
SNAF also rely on monotonic inference/languages (well, I do not know 
about most; but some certainly do). It works because they actually rely 
on bounded monotonicity, only the bound is implicit (and, in most of the 
cases, obvious: a session, an inference cycle, whatever). So, the actual 
question, here again, is: how should the RIF deal with the scope of 
monotonicity and the consequences if it is not explicit or obvious?
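A toy illustration of the scoped-NAF point (my own sketch, not any engine's actual semantics): negation as failure where the scope of the failure test is an explicit, named fact base, rather than an implicit "whatever the engine happens to know". All the fact-base names are invented.

```python
# Hypothetical named scopes: each is an explicit fact base.
scopes = {
    "customers_2005": {("customer", "alice"), ("customer", "bob")},
    "blacklist":      {("blocked", "bob")},
}

def holds(fact, scope):
    return fact in scopes[scope]

def naf(fact, scope):
    # Scoped negation as failure: succeed iff `fact` is not derivable
    # *within this scope* -- what other scopes say is irrelevant.
    return not holds(fact, scope)

# Rule: eligible(X) if customer(X) in customers_2005
#       and NAF blocked(X), scoped to blacklist.
eligible = sorted(x for (p, x) in scopes["customers_2005"]
                  if p == "customer" and naf(("blocked", x), "blacklist"))
print(eligible)  # ['alice']
```

Once the scope is an explicit parameter like this, the question the RIF has to answer becomes visible in the code: what should `naf` mean when no scope is given, or when the scope cannot be well-defined?
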
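And a similar toy sketch of bounded monotonicity (again my own illustration): within one session, naive forward chaining only ever adds facts, so inference is monotonic; the bound is the session itself, after which working memory is discarded and conclusions need not persist.

```python
class Session:
    """One inference session; its fact set grows monotonically."""

    def __init__(self, facts):
        self.facts = set(facts)

    def run(self, rules):
        # Naive forward chaining to a fixed point: facts are only added,
        # never retracted, so each session is monotonic.
        changed = True
        while changed:
            changed = False
            for condition, conclusion in rules:
                if condition <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return self.facts

# Invented rule: a gold customer gets a discount.
rules = [({"gold_customer"}, "discount")]

s1 = Session({"gold_customer"})
print("discount" in s1.run(rules))  # True

# A later session starts fresh: the earlier conclusion does not carry
# over, so behaviour *across* sessions is non-monotonic even though
# each individual run is monotonic.
s2 = Session(set())
print("discount" in s2.run(rules))  # False
```

The implicit bound (here, the lifetime of a `Session` object) is exactly what the RIF would need to make explicit when rules move between engines.
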

On the other hand, it may be the case that this vision of an interlingua 
is misguided, naive, confused or whatever. If so, I may not be the only 
one who does not understand why exactly: it would really advance the 
discussion if somebody could explain it, I believe.

More issues or discussion items:
- RuleML: if RuleML is the right solution, let the WG decide that RuleML 
is the standard. If it is not, the WG must still inherit from that 
experience, so RuleML certainly has to be one of the major inputs and 
starting points. There are other efforts and RECs that must be looked at 
as well: Sandro's draft mentions some already (somebody mentioned MathML 
to me, yesterday. I do not know; I did not look at MathML);
- Non-inference uses of rules: one of the arguments against making 
computational efficiency a requirement for a RIF (at the expense of 
expressiveness) is that not all usages of rules require computational 
inference: editing, management, reading and use by people, etc. (all the 
use cases for what we could call "rules as documents"). There is no 
reason why a rule editor, a rules repository or a rule browser should be 
limited to one kind of rules for computational efficiency reasons, and 
the developers of such tools/applications would certainly object to 
multiple specialised interchange formats. (Then, of course, the specific 
rules/rulesets themselves must be computationally efficient if we want 
them to be exploitable by machines as well; but, in the practical case, 
developers target an application, itself based on a specific rule 
language: it seems to me that the application and the application's 
language should constrain the rules for efficiency reasons, not the RIF);
- OK, that's enough for this mail :-)


Received on Wednesday, 24 August 2005 12:00:27 UTC