
Re: [RIF] backward vs forward chaining [was some other subject]

From: doug foxvog <doug.foxvog@deri.org>
Date: Thu, 02 Feb 2006 15:03:02 +0000
To: Francois Bry <bry@ifi.lmu.de>
Cc: public-rule-workshop-discuss@w3.org
Message-id: <43E21F26.1010709@deri.org>

Francois Bry wrote:
> Peter F. Patel-Schneider wrote:
>> From: Francois Bry <bry@ifi.lmu.de>
>> Date: Thu, 02 Feb 2006 08:51:58 +0100
>>> Peter F. Patel-Schneider wrote:
>>>> From: Michael Kifer <kifer@cs.sunysb.edu>
>>>> Date: Wed, 01 Feb 2006 15:03:06 -0500
>>>>> Francois Bry <bry@ifi.lmu.de> wrote:

>>>>>> I agree that RIF should have (1) a clear declarative semantics and 
>>>>>> (2) in addition support conveying *some* *limited* specifications 
>>>>>> of procedural semantics (eg backward chaining is intended because 
>>>>>> with forward chaining the considered rules would require to 
>>>>>> process all/too many nodes on the Web).

Do you mean the procedural semantics should be defined for each rule, or
that one procedural semantics (backward chaining) should be defined for
all rules?

>>>>> I disagree with that, especially with the statement that "with
>>>>> forward chaining the considered rules would require to process
>>>>> all/too many nodes on the Web".

>>>> +[some very large number]

Aren't we considering contexts here?  Parts of the web that are not in
the context in which the question is asked are not applicable for the
answer.  A context would include that of the included knowledge bases
and that of all (recursively) referenced ontologies as well as specified
other areas of the web.  Someone's knowledge base using my ontology
doesn't necessitate my having access to that knowledge base.

What forward chaining would mean is that forward rules fire when they
are asserted or when an outside ontology or knowledge base accesses an
ontology with a forward rule.  The conclusions are then available
whenever a question is asked.  Backward chaining requires the execution
of backward rules when the question is asked.
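That distinction can be sketched in a few lines of Python (an
illustrative sketch only, not RIF syntax; the page names and link graph
are invented):

```python
# Forward chaining for the "interesting" rules: conclusions are
# materialized at assert time, so a later query is just a lookup.
# (Illustrative sketch -- page names and links are invented.)

def forward_close(cs_pages, links):
    """Saturate:  interesting(A) :- computer-science(A).
                  interesting(A) :- link(A, B), interesting(B)."""
    interesting = set(cs_pages)              # base rule
    changed = True
    while changed:                           # fire rules to fixpoint
        changed = False
        for a, b in links:                   # link(A, B)
            if b in interesting and a not in interesting:
                interesting.add(a)
                changed = True
    return interesting

# "Assert time": compute the table once.
table = forward_close({"www.cs.example"},
                      [("www.example.ex", "www.mycom.com"),
                       ("www.mycom.com", "www.cs.example")])

# "Query time": just a membership test.
print("www.example.ex" in table)             # → True
```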

>>>> What makes "forward chaining" a particularly bad method, even if
>>>> you think of forward chaining as a way to perform saturation?
>>>> Forward chaining (including saturation, or not) works exceedingly
>>>> well in some situations (and exceedingly poorly in others).
>>>> Standard backward chaining has exactly the same characteristics, by
>>>> the way.  It all depends on the situation, and the parameters used
>>>> to control the chaining.


>>> My point was that forward chaining inherently requires to start
>>> computing from all web sites that might be involved.  Either those
>>> web sites are known and limited in number -- and this is fine -- or
>>> not -- and forward chaining is impracticable.

True, but backward chaining is also impracticable if the number of sites
to chain over is unknown or unlimited, for queries that are not answered
completely within a more limited context.

>>> -- 
>>> Francois

>> I don't follow this.  Why would anyone want to start with "all web
>> sites that might be involved"?  This seems to be a recipe for
>> disaster no matter what reasoning methodology is used.

>> In any case, what is it about "forward chaining" that requires this
>> universal reach more than any other evaluation methodology?  Couldn't
>> I just say the same thing about "backward chaining"?  What makes

>>     My point was that backward chaining inherently requires to start
>>     computing from all web sites that might be involved.

>> any less true than what you said?

>> Peter F. Patel-Schneider

> Think of rules specifying interesting web pages as follows,

> interesting(A) :- computer-science(A).
> interesting(A) :- link(A, B), interesting(B).

> meaning, a web page is interesting if it has Computer Science content
> or a link to an "interesting" page.

> Assume one asks whether www.example.ex is interesting. With backward 
> chaining, the evaluation starts with the query 
> "interesting(www.example.ex)" and propagates the bindings backwards,
> yielding the intermediate queries:

> link(www.example.ex, B), interesting(B)
> interesting(www.mycom.com) if A has a link to www.mycom.com
> etc.

Here, the system needs to know that "link" is an evaluatable predicate 
for its second argument and to bind B before asking "interesting(B)".
If the system searched the web to prove "link(www.example.ex, B)" or
searched for "interesting(B)" before proving "link(www.example.ex, B)"
you wouldn't have the efficiency noted.
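The goal-ordering point can be sketched as follows (Python, with
invented pages and links; link/2 is modeled as a lookup of a page's
outgoing links):

```python
# Backward chaining with the efficient goal order: resolve
# link(www.example.ex, B) first to bind B, then try interesting(B)
# only on those few bindings.  (Invented pages and links.)

LINKS = {"www.example.ex": ["www.mycom.com"],
         "www.mycom.com": ["www.cs.example"]}
CS_PAGES = {"www.cs.example"}

def interesting(page, seen=None):
    """interesting(A) :- computer-science(A).
       interesting(A) :- link(A, B), interesting(B)."""
    seen = seen if seen is not None else set()
    if page in seen:                  # guard against link cycles
        return False
    seen.add(page)
    if page in CS_PAGES:              # first rule
        return True
    # second rule: link/2 binds B before interesting(B) is tried
    return any(interesting(b, seen) for b in LINKS.get(page, []))

print(interesting("www.example.ex"))  # → True
```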

> With forward chaining, the evaluation starts with the query
> "computer-science(A)", i.e. checking all web pages for Computer
> Science content, then following the links from all web pages with CS
> content, etc.

Actually, this happens long before the query.  If the rules are forward,
then when the query is made the answer is just extracted from an already
generated set (effectively a table) of interesting pages.

And "all web pages" are not necessarily checked at assert time.  The
extent of the "computer-science" class/unary predicate is checked.  If
a forward rule exists for "computer-science(C)" that searches the whole
web and performs natural language processing on each page, then that
rule's being forward with that method is the problem, not the forward
rule that you gave.

Note that if the definition of interesting were that anything that links
to a "computer-science" page is interesting, then backward chaining
would not be efficient, because "link(B, www.example.ex)" cannot be
efficiently determined in a backward fashion.

   interesting(A) :- computer-science(A).
   interesting(A) :- link(B, A), interesting(B).
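To make the reversal concrete, here is the same kind of Python sketch
for the link(B, A) version (invented pages and links): forward chaining
still just propagates along each known link, while a backward query
would need the set of inbound links, which fetching the page itself
cannot supply.

```python
# Forward chaining with the reversed rule:
#   interesting(A) :- link(B, A), interesting(B).
# Propagation runs along each known link as it is asserted; no
# inbound-link search is ever needed.  (Invented pages and links.)

def forward_close_reversed(cs_pages, links):
    interesting = set(cs_pages)
    changed = True
    while changed:
        changed = False
        for b, a in links:            # link(B, A): B links to A
            if b in interesting and a not in interesting:
                interesting.add(a)
                changed = True
    return interesting

links = [("www.cs.example", "www.example.ex")]   # a CS page links here
print("www.example.ex" in
      forward_close_reversed({"www.cs.example"}, links))   # → True
```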

> Thus, backward or forward chaining make a difference, as far as the 
> number of pages accessed is concerned.

It does make a difference.  And which requires more page accesses
depends upon the rule.

> Francois

douglas foxvog	  doug.foxvog@deri.org	 +353 (91) 495 150
Digital Enterprise Research Institute (DERI)
National University of Ireland, Galway
Galway, Ireland				http://www.deri.ie
Received on Thursday, 2 February 2006 15:03:21 UTC
