
Re: An appeal

From: Sandro Hawke <sandro@w3.org>
Date: Wed, 06 Jul 2005 11:08:22 -0400
To: Anthony Finkelstein <anthony@systemwire.com>
Cc: public-rule-workshop-discuss@w3.org
Message-Id: <20050706150823.7B1E14F082@homer.w3.org>


> When I attended the workshop I understood its goals to be to discuss
> 
> - "rule languages for interoperability"
> 
> - "a language for sharing rules"
> 
> - "a standard rule framework"
> 
> I was motivated by the statement "rule systems from different 
> suppliers are rarely interoperable"
> 
> None of this implies a single rule language or indeed a single rule 
> metalanguage!
> 
> As a scientist I don't believe this is possible to achieve. As a rule 
> system vendor I don't believe this is good for technology or business.

Can you explain this?  The normal approach to establishing
interoperability, as I see it and in very broad terms, is to define
one language/format/protocol which includes the important features of
a key subset of existing pre-standard languages/formats/protocols.

In this arena, things are somewhat complicated by the range of what
constitutes the field.  It's probably not meaningful to have a
constraint solver and a Rete engine conform to the same specification.
Is that what you're getting at?

What are the apples-to-apples functions performed by rule software?
What bits of code could be put in interchangeable black boxes?

Right now, I see three main ones:

    * Inference.  Given some data and some rules, infer some more data
      which logically follows.  An inference engine (or deductive
      database) mostly fits behind a query interface, e.g. SPARQL, SQL,
      XQuery.  

    * Validation.  Use some rules to examine some data and see if it
      meets certain criteria.  If not, issue errors or warnings.

    * Service Execution.  Given some data, rules, and a set of
      executable operations, run the engine and have it invoke
      procedural code with parameters bound from the data.

These functions are closely related: given a basic engine mostly
geared toward doing inference (cf. Prolog) or service execution (cf.
Jess), one can implement all three functions without a lot more work.

Each area has its own typical language: "if condition then condition",
"if [not] condition then error", and "if condition then action", but
they obviously have a lot in common, too.
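To make that concrete, here is a toy sketch (illustrative only -- the
names and structure are made up, not any vendor's engine) of a single
naive forward-chaining loop that handles all three rule forms: "infer"
rules derive new facts, "validate" rules record errors, and "act"
rules invoke procedural code:

```python
def run(facts, rules, actions):
    """Naive forward chaining over a set of facts (tuples).

    rules: list of (kind, condition, consequent) where kind is one of
    "infer" (consequent builds a new fact), "validate" (consequent
    builds an error message), or "act" (consequent names a callable
    in the actions dict).
    """
    facts = set(facts)
    errors = []
    fired = set()          # (rule index, fact) pairs already fired
    changed = True
    while changed:
        changed = False
        for i, (kind, cond, cons) in enumerate(rules):
            for fact in list(facts):
                if (i, fact) in fired or not cond(fact):
                    continue
                fired.add((i, fact))
                if kind == "infer":
                    new = cons(fact)           # derive a new fact
                    if new not in facts:
                        facts.add(new)
                        changed = True
                elif kind == "validate":
                    errors.append(cons(fact))  # record an error
                elif kind == "act":
                    actions[cons](fact)        # invoke procedural code
    return facts, errors

# One rule of each kind, sharing the same condition/consequent shape:
rules = [
    ("infer",    lambda f: f[0] == "parent",
                 lambda f: ("ancestor", f[1], f[2])),
    ("validate", lambda f: f[0] == "age" and f[2] < 0,
                 lambda f: "negative age for " + f[1]),
    ("act",      lambda f: f[0] == "ancestor", "log"),
]
log = []
facts_out, errors = run({("parent", "alice", "bob"), ("age", "bob", -1)},
                        rules, {"log": log.append})
```

After running, facts_out contains the derived ("ancestor", "alice",
"bob"), errors holds the validation message, and the "log" action has
been invoked once -- the same loop serving all three functions.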

Do you see a big problem with establishing a standard language of "if
condition then condition/error/action" and defining conformant
implementations in terms of the above functions?  Maybe it's too hard
a problem; without standardizing on control strategies, the size of
practical rule sets will be rather more limited, and the standard won't
really work in practice.  But maybe that element can be addressed in a
modular, contained fashion?

Or were you thinking in a completely different direction?  :-)

     -- sandro
Received on Wednesday, 6 July 2005 15:08:27 GMT
