RE: RDF and OWL rules

Hi Geoff,

> > You are right; I made those rules by abstracting
> > from the testcases that Jeremy proposed at Xmas
> > time, and I abstracted away too many details...
> > (was more in a hurry to get the thing running).
>
> I know from experience that it's easy to do :-) One of the primary
> benefits of a combined effort to come up with these axioms is that it's
> often easier to see errors in someone else's work than your own. So the
> more eyes we have on them the better.

right, that's true

> > I took out :rule10r1 and :rule10c2 and simplified
> > :rule10c1 to a simpler case (for the moment)
> > See http://www.agfa.com/w3c/euler/owl-rules.n3
> > Also your remark of yesterday makes me think
> > further that we have to improve our
> > math:proofCount implementation wrt non-una
>
> Finding cardinality problems for arbitrary cardinalities gets tricky I
> think because of the set permutations involved. For example, you have
> three values (A,B,C) of a property that has max cardinality 2. You know
> that:
>
> A=B, A!=C
> OR
> A=C, A!=B
> OR
> A!=B, B=C
> OR
> A=B, A=C
>
> I wonder if an initial target of supporting lite-like cardinalities (0,
> 1, >1) might be a good idea for this reason.

indeed, we also like "first doing the simple things"
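for the simple cardinality-1 case, a rule in the
style of our owl-rules.n3 could look something
like this (just a sketch assuming the usual owl:
prefix, not the actual rule from that file):
  {?r owl:onProperty ?p. ?r owl:maxCardinality "1".
   ?x a ?r. ?x ?p ?u. ?x ?p ?v}
  => {?u owl:sameAs ?v}.
under non-una the two fillers are then simply
concluded to be the same resource instead of
being reported as a contradiction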

> BTW, have you been working towards a specific goal (in terms of what
> subset of owl you hope to axiomatize) or has your work been largely
> exploratory?

well, it's indeed more exploratory
(as with other kinds of testcases)

> I can't imagine we have much of a shot at completeness - so
> there will likely be entailments we can't prove and we could never pass
> a consistency test (though we could certainly find inconsistencies, we
> can never be sure we've found them all).

this is indeed the case, but it's not bad at all ;-)

> I think one of the biggest challenges will be the structural equivalence
> type rules. For example showing that:
>
> A subClassOf B
> A subClassOf C
>
> Entails
>
> A subClassOf _x
> _x intersectionOf _l
> _l first B
> _l rest _r
> _r first C
> _r rest nil
>
> As a specific case, it's straightforward.

well, even here we went that way, but
existentials in the conclusions bring
us to skolem functions (at least in our
case, where we want single-triple (Horn)
conclusions) and that's kind of awkward
in N3 triples - we do things like
  {?x :b ?y} => {?x :k (?x ?y).:sf1}.
  {?x :b ?y} => {(?x ?y).:sf1 :m ?y}.
and that works, but it doesn't have the
beauty of flat formulas such as those in
http://www.agfa.com/w3c/2002/10/medicad/op/
more specifically
http://www.agfa.com/w3c/2002/10/medicad/op/lldmP.n3
and it's up to the inference mechanism to
find out what it needs in order to come up
with nested, functional-term-like formulas
such as
http://www.agfa.com/w3c/2002/10/medicad/op/lldmE.n3
(not the other way around, I think)
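for the subClassOf/intersectionOf case you gave
above, that approach would give rules along these
lines (a sketch only; :sk1 just stands for a skolem
functor we'd make up, playing the role of the
existential _x):
  {?a rdfs:subClassOf ?b. ?a rdfs:subClassOf ?c}
  => {?a rdfs:subClassOf (?a ?b ?c).:sk1}.
  {?a rdfs:subClassOf ?b. ?a rdfs:subClassOf ?c}
  => {(?a ?b ?c).:sk1 owl:intersectionOf (?b ?c)}.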

> But the general problem seems
> tough to solve without generating infinitely many equivalent structures.
> In other words, how do we prove that it also entails:
>
> A subClassOf _y
> _y subClassOf _x
> _x intersectionOf _l
> _l first B
> _l rest _r
> _r first C
> _r rest nil
>
> We can go one way fine, i.e. if the premise is the intersection and the
> conclusion is the subClassOf, but the other way is difficult.

that's of course true too...
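for the easy direction itself, plain Horn rules do
the job, e.g. something like this (a sketch; the
:member helper is just made up here to walk the
rdf list):
  {?x owl:intersectionOf ?l. ?l :member ?c}
  => {?x rdfs:subClassOf ?c}.
  {?l rdf:first ?c} => {?l :member ?c}.
  {?l rdf:rest ?r. ?r :member ?c} => {?l :member ?c}.
together with the usual rdfs:subClassOf
transitivity rule that then also covers the A
in your example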

> Or am I missing something?

not that I know ;-)
but I've seen that an engine can come up with
nice (unexpected) combinations of clear evidence
somehow by "following it's nose" as DanC would say


-- ,
Jos De Roo, AGFA http://www.agfa.com/w3c/jdroo/
