Re: proof representation noodling, cwm, euler

[nice to see this while having a holiday and
making some primitive attempts at creating
http://sourceforge.net/projects/eulersharp
http://eulersharp.sourceforge.net/2003/03swap
...]

> Points in random order:
>
> * log:conclusion without log:includes seems to be
> of very limited utility. You have to be very careful
> to be sure the deductive closure is finite.
> log:conclusionIncludes seems better; cwm can
> implement it as a composition of conclusion and includes,
> but documents last longer than code, and our
> knowledge bases should move away from using
> log:conclusion.

Still don't grok log:conclusionIncludes, and we
haven't been using log:conclusion so far.
The implementation of cwm builtins is on my
todo stack, but...
OK, there are still weekends...
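
(If it is just the composition of conclusion and includes that you
describe, a minimal sketch of that reading would be
[[
@prefix log: <http://www.w3.org/2000/10/swap/log#>.
{ ?KB log:conclusion ?C. ?C log:includes ?G }
  => { ?KB log:conclusionIncludes ?G }.
]]
but that is just my guess at the intended meaning, not how cwm
would implement the builtin.)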

> I think Sandro made the analogous point about --think
> without --filter. Euler has no way to offer such
> an interface. The common interface they share is:
> given a list of premises, find one or more
> proofs of _these_ conclusions/conjectures.

Right, the _these_ have to be pointed out,
but we could think about an auto-generated
query goal which is the conjunction of
?S each_verb_occurring_in_the_KB ?O.
That could be worked out, I think.
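
For instance (hypothetical toy vocabulary, just to make the idea
concrete): if the KB only uses the verbs :b and :k, the generated
goal/filter would be something like
[[
@prefix : <http://example.org/test#>.
{ ?S1 :b ?O1. ?S2 :k ?O2 } => { ?S1 :b ?O1. ?S2 :k ?O2 }.
]]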

> I think you can use a degenerate { ?S ?P ?O }
> to find all the simple factual conclusions.
> I think cwm gives up in that case, but it could
> probably be taught not to. Euler probably
> doesn't go so fast in that case.

That's right, and right now it will even give up
immediately, as the initial goal verb is unbound...
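
(i.e. with the degenerate goal
[[
{ ?S ?P ?O } => { ?S ?P ?O }.
]]
there is nothing but variables, and the unbound verb ?P is what
makes us give up right now.)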

> Euler would need a "don't justify the conclusion;
> just instantiate the variables and assert the result"
> mode in order to work for circles-and-arrows,
> travel tools, /TR/, and the like,

That's another very good point: to leave out
the argument premises in such cases and just
give the conclusions. (I had some testcases
so far which could use that, such as finding
all ?S = ?O, which we need in our prepare method
to do the substitution of equals for equals, but
right now it's hacked into the code itself.)
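
As a query that would just be (a minimal sketch, with = as the N3
equivalence shorthand)
[[
{ ?S = ?O } => { ?S = ?O }.
]]
where we only want the instantiated conclusions, not the justifying
premises.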

> * The proof theory literature that I've been reading
> treats inference rules as (computational) functions
> that take proofs as arguments and return proofs
> as results.
> So for example andI takes a proof of A and a proof of B
> and returns a proof of A /\ B:
>
> They write these using x: T notation, which you
> can read as "T is a proof of x" (they also tell
> you to read it as "x is of type T" though I find
> that almost superfluous).
>
>   A: pfA
>   B: pfB
>   ========
>   A /\ B : andI(pfA, pfB)
>
> From that viewpoint, rdf simple entailment is
> a function that takes a proof of A and returns
> a proof of B whenever A log:includes B.
>
> { ?PFA :proves ?A.
>   ?A log:includes ?B } <=>
>   { [ is :rdfSimpleEntailment of ?PFA ] :proves ?B.
>     ?PFA :proves ?A. }
>
> * I'm starting to think our proof representation should use
> this functional structure, though I can't say for sure
> why. I haven't looked
> at timbl's proof representation design closely
> yet.
>
>
> * The systems that work this way seem to be
> stratified. I'm not sure if that's critical to
> making it work at all or just an artifact
> of design preferences.
>
> * log:includes is one entailment relationship;
> log:conclusionIncludes is another; each entailment
> relationship has an analogous inference rule form.
>
> { ?PFA :proves ?A.
>   ?A log:conclusionIncludes ?B } <=>
>   { [ is :hornClauseResolution of ?PFA ] :proves ?B.
>     ?PFA :proves ?A. }

So far we haven't been working with an explicit
proof vocabulary, as we were fine with =
for the bindings and => for the sequent
[[
The so-called proof thing is a SOUND ARGUMENT:
  An ARGUMENT is a pair of things:
    a set of sentences, the PREMISES;
    a sentence, the CONCLUSION.
  An argument is VALID if and only if
    it is necessary that if all its premises are true,
    its conclusion is true.
  An argument is SOUND if and only if
    it is valid and
    all its premises are true.
  A sound argument can be the premise of another sound argument.
]]
and for examples/testcases see
http://eulersharp.sourceforge.net/2003/03swap/etc5-proof.n3
The thing is also that one can query again
with the proof as the query (as a kind of
proof validation or as a continuation); see
http://www.agfa.com/w3c/euler/graph.check.n3
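
So a single step in such an argument is just premises => conclusion,
for instance (a made-up toy case, not from the testcases)
[[
@prefix : <http://example.org/test#>.
{ :socrates a :Man.
  { ?x a :Man } => { ?x a :Mortal } }
  => { :socrates a :Mortal }.
]]
and that whole statement can in turn be given back as a query.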

> * I think Jos's research exploits the
> "proofs as programs" Curry-Howard isomorphism
> and actually provides an efficient implementation of
> derived inference rules once it has proved that
> they follow from basic inference rules.

It is definitely the case that proofs
correspond to stepwise procedures
which can be compiled into running code.
(I just wanted to be able to work on some
more testcases for that, cases we see
for some specific algorithms...)
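
A trivial made-up case of what I mean (toy vocabulary invented here):
[[
@prefix : <http://example.org/test#>.
# the basic rules
{ ?x :b ?y } => { ?x :k ?y }.
{ ?x :k ?y } => { ?x :m ?y }.
# the derived rule that follows from them
{ ?x :b ?y } => { ?x :m ?y }.
]]
once the derived rule has been proved from the two basic ones, it can
be asserted and run directly, without redoing the intermediate :k step
each time.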

> This looks like an extremely valuable mechanism
> to support "diagonalization" that will be critical
> to keeping proof sizes manageable.

Haven't made the connection to that "diagonalization" yet;
any pointer?

> * datatype literals are functional terms. Hmm...

I'm afraid we are again (mis)using forward paths
for that purpose in
http://www.agfa.com/w3c/euler/easterP.n3
and in our latest attempt at skolem functions in
[[
{?x :b ?y} => {?x :k (?x ?y).:sf1}.
{?x :b ?y} => {(?x ?y).:sf1 :m ?y}.
]]

in order to horn the rule {?x :b ?y} => {?x :k [ :m ?y]}.
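
(For instance, from a fact :a :b :c the two skolemized rules give,
mirroring the path syntax above,
[[
:a :b :c.
:a :k (:a :c).:sf1.
(:a :c).:sf1 :m :c.
]]
which says the same as the blank node [ :m :c ] in the instantiated
rule, with (:a :c).:sf1 as its skolem name.)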

> --
> Dan Connolly, W3C http://www.w3.org/People/Connolly/

--
Jos De Roo, AGFA http://www.agfa.com/w3c/jdroo/

Received on Tuesday, 25 March 2003 17:10:20 UTC