
Re: Semantic Web Rule Language (SWRL) 0.5 released

From: Drew McDermott <drew.mcdermott@yale.edu>
Date: Sun, 30 Nov 2003 17:33:23 -0500 (EST)
Message-Id: <200311302233.hAUMXN023166@pantheon-po03.its.yale.edu>
To: www-rdf-rules@w3.org


   [Harold Boley]
   RuleML since 0.8 (http://www.ruleml.org/indtd0.8.html) 

I have some trouble understanding the RuleML web site.  There are so
many alternative syntaxes.  Is one of them, or are some of them,
"official"?  (This is an embarrassing admission for me to make, since
I'm listed on the web site as a participant.)

   [me]
   I am curious why explicit quantifiers are so conspicuously absent from
   SWRL.  . . .

   [Harold]
   Omitting all (top-level) universal quantifiers has often been used as one
   step for obtaining a clausal normal form in classical first-order logic, and
   has been a convention in many (Horn) logic programming languages (but this
   has nothing to do with the somewhat limited success of ISO Prolog as an LP
   standard: http://www.cs.unipr.it/~bagnara/Papers/Abstracts/ALPN99a).

   Besides LP, mathematics has had a similar habit for named function
   definitions, taken up by many equational/functional programming languages.
   So, for defining square(x) = x*x, Common Lisp uses (defun square (x) (* x x))
   rather than something like (def square (lambda (x) (* x x))), with the
   lambda binding construct playing a role similar to a universal quantifier.

But the argument list in Lisp 'defun' _is_ a quantifier; it
establishes a scope for variables as surely as 'lambda' does.
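That is, (defun square (x) (* x x)) can be read as the explicitly
quantified definition

```latex
\forall x .\; \mathit{square}(x) = x \cdot x
```

with the argument list (x) supplying exactly the binding that the
quantifier makes explicit.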

   Of course, there are essential uses of universal quantifiers nested into
   non-normalized formulas, 

I think the problem is the assumption that formulas can be
"normalized."  The normalization used in resolution theorem proving is
cool, but it's not some kind of general-purpose normal form that works
for all situations.
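For instance, clausal normal form eliminates an existential by
Skolemization:

```latex
\forall x\, \exists y\ p(x, y)
\quad\Longrightarrow\quad
p(x, f(x))
```

where f is a fresh Skolem function.  The result is equisatisfiable
with the original but not equivalent to it, which is fine for
refutation proving and loses information a general-purpose
representation may need.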

   Since, however, many formulas define rules and named functions,
   the above free-variable conventions are
   widespread. Should we still strive for uniformity between the
   top-level of formulas and nested parts for a practical language
   ....?

You miss my meaning.  I'm not arguing for abolishing free variables;
I'm arguing for allowing bound ones.

Free variables are fine, when understood as shorthand for the
explicitly quantified version.  You can say more, and more precisely,
with the explicit version, so why make it a second-class citizen?
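For example, the free-variable convention can express only the fully
universal reading; it has no way to state something like

```latex
\forall x\, \exists y\ p(x, y)
```

where the witness for the inner existential depends on the choice of x.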

The free-variable convention has too many subtle "gotchas."  Even in a
well-understood area like first-order theorem proving, I've seen
people get confused about why queries have free variables
that are existentially quantified, or why backward chaining through a
rule like p(?x,?y) & p(?y,?z) -> p(?x,?z) changes ?y from universal
to existential.
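To spell out the second case: with explicit quantifiers the rule reads

```latex
\forall x\, \forall y\, \forall z\
\bigl( p(x,y) \land p(y,z) \rightarrow p(x,z) \bigr)
```

and backward chaining from a goal p(a,c) produces the subgoal

```latex
\exists y\ \bigl( p(a,y) \land p(y,c) \bigr)
```

because y does not occur in the conclusion, so pulling its quantifier
into the antecedent turns the universal into an existential
(\forall y\,(A(y) \rightarrow B) is equivalent to
(\exists y\, A(y)) \rightarrow B when y is not free in B).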

                                             -- Drew

-- 
                                   -- Drew McDermott
                                      Yale Computer Science Department
Received on Sunday, 30 November 2003 17:33:25 UTC
