proof representation noodling, cwm, euler

Points in random order:

* log:conclusion without log:includes seems to be
of very limited utility: you have to be very careful
to be sure the deductive closure is finite.
log:conclusionIncludes seems better; cwm can
implement it as a composition of log:conclusion and
log:includes, but documents last longer than code,
so our knowledge bases should move away from using
log:conclusion.
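
Roughly, that composition could be sketched as an N3
rule (this is just the intended reading, not how a
builtin would actually be evaluated):

{ ?F log:conclusion ?C.
  ?C log:includes ?G } => { ?F log:conclusionIncludes ?G }.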

I think Sandro made the analogous point about --think
without --filter. Euler has no way to offer such
an interface. The common interface they share is:
given a list of premises, find one or more
proofs of _these_ conclusions/conjectures.
I think you can use a degenerate { ?S ?P ?O }
to find all the simple factual conclusions.
I think cwm gives up in that case, but it could
probably be taught not to. Euler probably
doesn't go so fast in that case.
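
For example (a sketch; the file names are made up),
the degenerate query as a cwm filter file would be
something like:

# all.n3 -- degenerate filter: pass every matched triple through
{ ?S ?P ?O } => { ?S ?P ?O }.

run along the lines of
cwm premises.n3 rules.n3 --think --filter=all.n3
though, as noted above, cwm may balk at the wide-open
?P in the antecedent.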

Euler would need a "don't justify the conclusion;
just instantiate the variables and assert the result"
mode in order to work for circles-and-arrows,
travel tools, /TR/, and the like.

* The proof theory literature that I've been reading
treats inference rules as (computational) functions
that take proofs as arguments and return proofs
as results.

So for example andI takes a proof of A and a proof of B
and returns a proof of A /\ B.

They write these as "A : pfA" judgments, which you
can read as "pfA is a proof of A" (they also tell
you to read it as "pfA is of type A", though I find
that second reading almost superfluous).

  A : pfA
  B : pfB
  ========
  A /\ B : andI(pfA, pfB)

From that viewpoint, RDF simple entailment is
a function that takes a proof of A and returns
a proof of B whenever A log:includes B.

{ ?PFA :proves ?A.
  ?A log:includes ?B } <=>
  { [ is :rdfSimpleEntailment of ?PFA ] :proves ?B.
    ?PFA :proves ?A. }
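
And andI can be given the same shape, here sketched
with cwm's log:conjunction builtin (which merges a
list of formulas) standing in for /\; :andI and :proves
are just illustrative vocabulary, as above:

{ ?PFA :proves ?A.
  ?PFB :proves ?B.
  (?A ?B) log:conjunction ?AB } <=>
  { [ is :andI of (?PFA ?PFB) ] :proves ?AB.
    ?PFA :proves ?A.
    ?PFB :proves ?B. }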

* I'm starting to think our proof representation should use
this functional structure, though I can't say for sure
why. I haven't looked
at timbl's proof representation design closely
yet.


* The systems that work this way seem to be
stratified. I'm not sure if that's critical to
making it work at all or just an artifact
of design preferences.

* log:includes is one entailment relationship;
log:conclusionIncludes is another; each entailment
relationship has an analogous inference rule form.

{ ?PFA :proves ?A.
  ?A log:conclusionIncludes ?B } <=>
  { [ is :hornClauseResolution of ?PFA ] :proves ?B.
    ?PFA :proves ?A. }

* I think Jos's research exploits the
"proofs as programs" Curry-Howard isomorphism
and actually provides an efficient implementation of
derived inference rules once it has proved that
they follow from basic inference rules.

This looks like an extremely valuable mechanism
to support "diagonalization" that will be critical
to keeping proof sizes manageable.
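
To illustrate (the :twoStepResolution name is made up,
and this is only a sketch), a derived rule can collapse
two applications of the basic resolution rule into one
named proof constructor, so the proof term records a
single step:

{ ?PFA :proves ?A.
  ?A log:conclusionIncludes ?B.
  ?B log:conclusionIncludes ?C } <=>
  { [ is :twoStepResolution of ?PFA ] :proves ?C.
    ?PFA :proves ?A. }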

* datatype literals are functional terms. Hmm...

-- 
Dan Connolly, W3C http://www.w3.org/People/Connolly/
