- From: Paul Gearon <gearon@itee.uq.edu.au>
- Date: Wed, 6 Apr 2005 12:41:05 +1000
- To: www-rdf-logic@w3.org
Hi,

I've been having some difficulty understanding the use of OWL cardinality with the open world assumption, and I'd like some advice please.

I know that the open world assumption means that any unspecified statements are "unknown", and I interpret this to mean that it is possible for any unwritten statement to exist. (If I'm wrong here, please let me know, as the rest of this message is based on this assumption.)

For owl:minCardinality on a predicate there would seem to be 3 situations:

minCardinality of 0: This is trivially consistent and valid.

minCardinality of 1: This describes existence. Any statement with this predicate makes the model valid. However, if there are no statements with the predicate then the model is still consistent, as those statements could exist.

minCardinality > 1: If there are not enough statements using the predicate, then the model will still be consistent, because those statements could exist. In other words, there exists an interpretation which would make this true. The only case where this could fail to be consistent is if it is not legal to create the required statements. The only instance of this that I can think of is a predicate whose range is restricted in some way, for instance an owl:oneOf without enough members. However, that case would be a fault in the ontology, not in the data.

For validity, it may seem easy to conform if there are enough statements with the predicate. However, if any objects from these statements use owl:sameAs to declare that they are the same, the effective number of uses of the predicate is reduced, making the model invalid. The only way validity can be guaranteed is if enough of the objects are declared to be different from the others, via owl:differentFrom or owl:AllDifferent.

So for all 3 cases, the model is always consistent. Validity is guaranteed for a cardinality of 0, possible with a cardinality of 1, and difficult for a cardinality of more than 1.
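To make the owl:sameAs point concrete, here is a toy sketch (not an OWL reasoner, and all the resource names are made-up examples) of why counting the asserted objects of a predicate is not enough to establish a minCardinality restriction: sameAs assertions can collapse several objects into one individual.

```python
# Toy illustration: count the distinct individuals among the objects of a
# predicate, after merging any that are declared owl:sameAs.  Uses a small
# union-find over the sameAs pairs.  All names (ex:alice etc.) are
# hypothetical example data, not from any real ontology.

def distinct_objects(objects, same_as):
    """Count equivalence classes of `objects` under the sameAs pairs."""
    parent = {o: o for o in objects}

    def find(x):
        # Follow parent links to the representative, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in same_as:
        if a in parent and b in parent:
            parent[find(a)] = find(b)

    return len({find(o) for o in objects})

# Three objects are asserted for some property, but two are declared the same,
# so a minCardinality of 3 is not provably satisfied:
objects = ["ex:alice", "ex:bob", "ex:bob2"]
same_as = [("ex:bob", "ex:bob2")]
print(distinct_objects(objects, same_as))  # 2
```

Only pairwise owl:differentFrom assertions among the three objects would rule out such a merge and pin the count at 3.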
owl:maxCardinality is similar:

maxCardinality of 0: If the predicate is not used, then there is an interpretation in which the model is consistent. However, since statements using the predicate may exist, the model can't be valid.

maxCardinality >= 1: If the predicate is used fewer times than the maxCardinality, then this is consistent. However, there may be more statements, except when the range is restricted (eg. with owl:oneOf), which means that validity can rarely be proven. If the predicate is used more times than the maxCardinality, then this would appear to be inconsistent. However, it is possible for some of the objects to be declared the same as each other with owl:sameAs. This would reduce the effective number of times the predicate is used, possibly making it consistent again. The only way to guarantee inconsistency is if the objects are all declared different with owl:differentFrom or owl:AllDifferent.

So for maxCardinality of 0 the model will be invalid, and for maxCardinality of 1 or more validity is only provable in particular cases (and not in the general case). Consistency can be proven for a maxCardinality of 0, and inconsistency is very difficult to prove for a maxCardinality of 1 or more.

Is validity an interesting property in a real-world database? I would have thought that consistency would be more important, particularly since validity is rarely possible to prove. If validity *is* important, then maxCardinality has a problem, because the model can't be valid in the general case.

As for consistency, minCardinality is *always* consistent. maxCardinality is almost always consistent as well (the model needs to go to a lot of trouble with owl:differentFrom to be inconsistent).

If my interpretation here is correct, then these cardinality constraints would not appear to be as useful as they seem. It looks very much like these constraints were designed for a closed world assumption, not the open world.

Can someone enlighten me here please? TIA.

Regards,
Paul Gearon
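[The maxCardinality argument above can also be sketched as a toy check (again not a real reasoner; the ex: names are hypothetical): exceeding the cardinality is only inconsistent if the asserted objects cannot possibly co-refer, i.e. there is no way to map them onto at most max_card individuals without violating a differentFrom pair.]

```python
from itertools import product

# Toy sketch: can the asserted objects of a maxCardinality-restricted property
# denote at most `max_card` distinct individuals, given the owl:differentFrom
# assertions?  Brute-forces every assignment of objects to individuals, so it
# is only suitable for tiny examples.

def can_fit(objects, different_from, max_card):
    """True if some interpretation uses at most max_card individuals."""
    for assignment in product(range(max_card), repeat=len(objects)):
        slot = dict(zip(objects, assignment))
        # Every differentFrom pair must land on distinct individuals.
        if all(slot[a] != slot[b] for a, b in different_from):
            return True  # a consistent interpretation exists
    return False

objs = ["ex:o1", "ex:o2", "ex:o3"]

# Three objects on a maxCardinality-2 property: still consistent, since two
# of them might turn out to be the same individual...
print(can_fit(objs, [("ex:o1", "ex:o2")], 2))  # True

# ...but pairwise differentFrom on all three forces inconsistency:
all_diff = [("ex:o1", "ex:o2"), ("ex:o1", "ex:o3"), ("ex:o2", "ex:o3")]
print(can_fit(objs, all_diff, 2))  # False
```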
Received on Wednesday, 6 April 2005 05:04:48 UTC