Re: Rules WG -- draft charter

On Mon, 10 Nov 2003 10:45:31 -0500 Peter F. Patel-Schneider wrote:
>I have a number of comments on the draft rules WG charter, revision
>1.26.
... of http://www.w3.org/2003/10/swre578

> I worry about the tying together of rules and justifications.
> Justifications can be quite difficult to deal with, except under certain
> circumstances, even if limited to rule systems.  In particular, checking
> whether the application of a non-monotonic rule is justified can be
> intractable or even undecidable.

Justifications are key to using Semantic Web technology in Web Services,
from what I can tell. They're key to trust.

"Systems need to be able to explain their answers if they are to be trusted and
reused" argued McGuinness at ISWC, and I agree.

When I have argued against non-monotonic rules, I have been
told that checking them is straightforward, provided references
to the knowledge base and such are explicit. If that's not
the case, I'm not sure what we would gain by standardizing
non-monotonic rules.
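To make concrete the kind of check I mean, here's a toy sketch (the
representation and names are mine, not from any charter draft): when a
negation-as-failure premise is scoped to an explicitly enumerated
knowledge base, justifying it reduces to a membership test.

```python
# Sketch: when a non-monotonic rule names its knowledge base
# explicitly and that base is a finite, enumerated set of facts,
# checking that a negation-as-failure premise held is just a
# membership test -- no search, no undecidability.

def naf_justified(kb, absent_fact):
    """A scoped premise 'P is not in kb' is justified exactly
    when P is not among the enumerated facts of kb."""
    return absent_fact not in kb

kb = {("socrates", "type", "Human")}

# Rule: if X is Human and kb does not assert X is Immortal,
# conclude X is Mortal. The NAF premise is checkable directly:
assert naf_justified(kb, ("socrates", "type", "Immortal"))
assert not naf_justified(kb, ("socrates", "type", "Human"))
```

The trouble starts when the scope is implicit ("anything derivable
from anywhere"), which is where the intractability you mention comes in.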


> I also worry about the condition that justifications are only to be used to
> justify how rules were used to reach a conclusion.

Only? Perhaps "the justification language will allow the
expression of the knowledge of how certain rules were used
to reach a conclusion" is misleading.


>  Even in an RDFS plus
> rules setting, there is more going on than just rule activity.  If this is
> the case, how will the RDFS inferences be recorded?

To cover RDF simple entailment,
I'd expect a justification step for existential introduction
and one for separation... or perhaps combine them into
an "RDF simple entailment" step.
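A rough sketch of what checking such a step could look like (function
names and the blank-node convention are mine): a graph H is simply
entailed by G when H's blank nodes can be mapped onto terms of G so
that every triple of H becomes a triple of G, and the mapping itself
serves as the justification.

```python
# Sketch: checking an "RDF simple entailment" justification step.
# Graphs are sets of triples; blank nodes are written as strings
# starting with "_:".

from itertools import product

def terms(graph):
    return {t for triple in graph for t in triple}

def apply_map(triple, m):
    return tuple(m.get(t, t) for t in triple)

def simply_entails(g, h):
    """Return a blank-node mapping witnessing that g simply
    entails h, or None if no such mapping exists."""
    bnodes = sorted({t for t in terms(h) if t.startswith("_:")})
    for image in product(sorted(terms(g)), repeat=len(bnodes)):
        m = dict(zip(bnodes, image))
        if all(apply_map(t, m) in g for t in h):
            return m  # the mapping is the justification
    return None

g = {("bob", "knows", "alice")}
h = {("bob", "knows", "_:x")}  # "bob knows someone"
assert simply_entails(g, h) == {"_:x": "alice"}
```

Existential introduction and separation fall out as the special cases
where the mapping hits one blank node, or where h is a subgraph of g.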

A resolution or generalized modus ponens step, sketched in the charter,
seems like a good idea. Perhaps paramodulation for dealing with equality,
if the WG decides to add that sort of expressiveness.

>  The situation is much
> more complex in an OWL plus rule setting.

Again, that argues for keeping the expressiveness down toward
Horn rules.


> The document appears to me to be internally inconsistent, particularly with
> respect to justifications.  In Section 1 it states that justifications are
> to be used to ``allow the expression of the knowledge of how certain
> rules were used to reach a conclusion''.  In Section 2.1 it states that the
> working group is supposed to address ``a language for the exchange of
> proofs'', which is a much more ambitious scope than recording rule traces.

I expect that whatever inferences are licensed, the proof language
will include a way to justify them.

> The document appears to require that rules be equivalent to material
> implications, which is not their treatment in many rule systems.  In
> particular, in some rule formulations that allow disjunctive information
> rules with false consequents are not equivalent to classical negation.

I don't follow. Perhaps you could elaborate with an example?

> The document again brings up the desire to put everything in RDF, both
> syntactically and semantically.

Where?

>  The Web Ontology Working Group spent a lot
> of time wrestling with a similar mandate before ultimately settling on a
> partial solution that only works because of several characteristics of OWL,
> which would not be shared by a rules language.  I foresee that a similar
> mandate for rules would end up causing a similar delay for the rules
> working group.

Yes, integration takes time.


> It is not the case that proof checking is ``computationally much easier
> than theorem proving''.

Yes, it is, in at least a few interesting cases:

  This makes the AF logic unsuitable for use in traditional distributed
  authentication frameworks in which the server is given a set of credentials
  and must decide whether to grant access. This problem can be avoided in
  the server by making it the client's responsibility to generate proofs.
  The server must now only check that the proof is valid -- this is not
  difficult even in an undecidable logic.

  
[Bauer2002] Lujo Bauer, Michael A. Schneider, and Edward W. Felten.
A General and Flexible Access-Control System for the Web. In
Proceedings of the 11th USENIX Security Symposium, August 2002.
Among the Proof-Carrying Authorization papers at
http://www.cs.princeton.edu/sip/projects/pca/


"Proofs found by programs are always questionable. Our approach to this
problem is to have the theorem prover construct a detailed proof object
and have a very simple program (written in a high-level language) check
that the proof object is correct. The proof checking program is simple
enough that it can be scrutinized by humans, and formal
verification is probably feasible. [...] "
  -- http://www-unix.mcs.anl.gov/~mccune/papers/robbins/
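To make the asymmetry concrete, here's a toy checker in the same
spirit (the proof representation is mine, purely for illustration):
a proof is a list of steps, each either an axiom or a modus ponens
application citing earlier steps, and checking it is one linear pass,
however hard the proof was to find.

```python
# Sketch: proof *checking* is a single pass over the proof object,
# even when proof *search* in the same logic is hard or undecidable.
# Formulas are strings; implications are ("->", ante, concl) tuples.

def check_proof(axioms, proof):
    """proof: list of steps, each ('axiom', formula) or
    ('mp', i, j, formula), where earlier step i proves the
    antecedent and earlier step j proves antecedent -> formula."""
    derived = []
    for step in proof:
        if step[0] == "axiom":
            if step[1] not in axioms:
                return False
            derived.append(step[1])
        elif step[0] == "mp":
            _, i, j, concl = step
            if i >= len(derived) or j >= len(derived):
                return False  # may only cite earlier steps
            if derived[j] != ("->", derived[i], concl):
                return False
            derived.append(concl)
        else:
            return False
    return True

axioms = {"p", ("->", "p", "q")}
proof = [("axiom", "p"),
         ("axiom", ("->", "p", "q")),
         ("mp", 0, 1, "q")]
assert check_proof(axioms, proof)
assert not check_proof(axioms, [("mp", 0, 0, "q")])
```

The server in the Bauer et al. design plays the role of check_proof;
the client does whatever search it takes to produce the proof object.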


>  In decidable logics the length of a proof may
> be similar to the number of computational steps required to prove a
> particular result.  This may lead to a PSPACE deduction procedure but proof
> checking requiring exponential space just to store the proof.

I can see a few options for cases like that...

  * don't do that
  * treat such steps as a built-in, like floating point multiplication


-- 
Dan Connolly, W3C http://www.w3.org/People/Connolly/

Received on Monday, 10 November 2003 17:28:51 UTC