Re: [XSCH/ALL] straw poll options

> jacco:
> [[
> I was not happy about the third option in Saturday's straw poll.
> I felt I was forced to vote on "leave it to the application", while I was
> under the impression that Jeff's proposal was something along the lines of:
> 
> "no owl entailment, but provide a well-defined a way that applications
> can use to 'map' 1.3 floats on 1.3 doubles etc".
> 
> In my opinion, this approach solves the formal OWL problems, the
> non-monotonic problem and the interoperability problems.
> ]]
> 
> While I think that Jeff will need to clarify the proposal, here is my
> analysis.

Let me clarify the two proposals.

1) Primitive equality: all XML Schema datatypes have disjoint value spaces.

2) Primitive equality extended with approximate mappings (earlier known as "leave it to the application"): this is more general than 1). All XML Schema datatypes still have disjoint value spaces, but in addition applications can specify approximate mappings, such as mapping "1.3"^^xsd:float to "1.3"^^xsd:double.

Note that in this case the values of "1.3"^^xsd:float and "1.3"^^xsd:double are still different, but the approximate mapping *enables* the use of the XPath eq operator, as in the following SPARQL query:

> SELECT  ?size
> WHERE   { eg:car eg:engineSizeInLitres ?size .
>           FILTER (?size = xsd:decimal("1.3") ) . }

Using 2), both "1.3"^^xsd:float and "1.3"^^xsd:double could be results of the above query, with the help of approximate mappings.
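
To make this concrete, here is a minimal Python sketch (my illustration, not text from the draft) of how an application might realise 2): the value spaces stay disjoint, and the approximate mapping is modelled on XPath's numeric type promotion (decimal to float or double, float to double), which is what the FILTER above relies on. The function names and the choice of promotion as the mapping are assumptions for the example.

from decimal import Decimal
import struct

def to_single(x):
    """Round to the nearest IEEE 754 single-precision (xsd:float) value."""
    return struct.unpack("f", struct.pack("f", float(x)))[0]

def value(lexical, datatype):
    """The value of a typed literal, tagged with its (disjoint) datatype."""
    if datatype == "xsd:float":
        return ("xsd:float", to_single(lexical))
    if datatype == "xsd:double":
        return ("xsd:double", float(lexical))
    if datatype == "xsd:decimal":
        return ("xsd:decimal", Decimal(lexical))
    raise ValueError(datatype)

def identical(a, b):
    """Proposal 1): equality holds only within a single primitive datatype."""
    return value(*a) == value(*b)

def approx_eq(a, b):
    """Proposal 2): an application-chosen approximate mapping, here XPath-style
    numeric promotion (decimal -> float/double, float -> double) before comparing."""
    (ta, va), (tb, vb) = value(*a), value(*b)
    order = ["xsd:decimal", "xsd:float", "xsd:double"]
    target = max(ta, tb, key=order.index)
    cast = {"xsd:decimal": Decimal, "xsd:float": to_single, "xsd:double": float}[target]
    return cast(va) == cast(vb)

f13 = ("1.3", "xsd:float")
d13 = ("1.3", "xsd:double")
c13 = ("1.3", "xsd:decimal")

print(identical(f13, d13))   # False: the two values are (and stay) different
print(approx_eq(c13, f13))   # True: decimal 1.3 promoted to float matches
print(approx_eq(c13, d13))   # True: decimal 1.3 promoted to double matches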

Another benefit of 2) is interoperability. Consider the scenario where one map ontology uses xsd:float as the range of mileage while another uses xsd:double. Using 1), the mileage values are all different. Using 2), approximate mappings allow applications to relate them and still do something useful.
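
As a hypothetical illustration of that scenario (the rounding rule below is just one possible approximate mapping an application could choose, not something the draft prescribes), the same lexical form "1.3" denotes different values in the two ontologies, yet the mapping can still relate them:

import struct

def single(x):
    """Nearest IEEE 754 single-precision value, i.e. the xsd:float reading."""
    return struct.unpack("f", struct.pack("f", x))[0]

float_mileage  = single(float("1.3"))   # value of "1.3"^^xsd:float
double_mileage = float("1.3")           # value of "1.3"^^xsd:double

print(float_mileage == double_mileage)                   # False under 1): all different
print(single(float_mileage) == single(double_mileage))   # True under 2)'s mapping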

We have briefly addressed how to formalise such approximate mappings in our draft; see:

http://www.w3.org/TR/swbp-xsch-datatypes/#sec-values-eq

Finally, approach 2) is not non-monotonic. Even if we map "1.3"^^xsd:float to "1.3"^^xsd:double, their interpretations remain different. It would only be non-monotonic if their interpretations became equal to each other after the mapping.

Jeff

--
Dr. Jeff Z. Pan (http://www.csd.abdn.ac.uk/~jpan/)
Department of Computing Science, The University of Aberdeen


> 
> 1) Some application scenarios will want to treat 1.3^^double in a very
> similar way to 1.3^^float. These application scenarios will almost
> certainly want to treat 1.299999999999999822^^decimal in a similar way
> to 1.299999999999999822^^double, and 1.2999999523^^decimal as similar to
> 1.2999999523^^float.
> Since 1.2999999523^^float is identical to 1.3^^float, and
> 1.299999999999999822^^double is identical to 1.3^^double, we are likely
> to want to treat 1.299999999999999822^^decimal in a similar way to
> 1.2999999523^^decimal, or we need to have "in a similar way" as not
> behaving as an equivalence relation.
> 
> 2) Some application scenarios will want to treat
> 1.299999999999999822^^decimal as different from 1.2999999523^^decimal
> 
> 
> 
> The solution using primitive base types moves the problem from the
> formal semantics, and the implementation of the formal semantics, to
> some part of the application layer, for example a SPARQL query. This
> can be used in a way that is not an equivalence relation, e.g. the query:
> 
> SELECT  ?size
> WHERE   { eg:car eg:engineSizeInLitres ?size .
>           FILTER (?size = xsd:decimal("1.3") ) . }
> 
> would treat the 1.3 in the query as a decimal, and then the rules for
> XPath eq would match both 1.3^^double and 1.3^^float, but not
> 1.299999999999999822^^decimal.
> But if xsd:float("1.3") were used as the value in the query, all three
> values would match.
> 
> In contrast,
> 
> SELECT  ?size
> WHERE   { eg:car eg:engineSizeInLitres "1.3"^^xsd:float . }
> 
> would only match the case where the object was a float with that value, 
> e.g. "1.2999999523"^^float.
> 
> Thus the primitive base type solution, combined with SPARQL, allows 
> applications to choose appropriate matching rules.
> 
> ===
> 
> My understanding is that the other proposal is to allow applications to 
> choose appropriate *semantics*. So that depending on an application 
> choice the empty graph may entail either:
> 
> (A)
> _:a owl:sameAs "1.3"^^xsd:float .
> _:a owl:sameAs "1.3"^^xsd:decimal .
> 
> or:
> 
> (B)
> _:a owl:sameAs "1.3"^^xsd:float .
> _:a owl:differentFrom "1.3"^^xsd:decimal .
>   (* see OWL DL note at end)
> 
> With this difference in semantics the overall application behaviour then 
> follows in the desired way, without any explicit application level code, 
> other than the choice of semantics to use.
> 
> Since from the two incompatible entailments (A) and (B) we can form 
> wider OWL constructs with incompatible interpretations, and this has 
> been posed as a property of the application rather than the data, I have 
> great difficulty in seeing how this does not risk interoperability failure.
> 
> I suppose we could have, say, a choice of two possible semantics, and 
> allow applications to choose between them, and warn RDF and OWL 
> publishers not to rely on the differences, but this seems quite a costly 
> solution to rounding problems. I think I was hearing a proposal that was 
> more open ended than that, and that simply because an application used 
> semantics entailing
> 
> (A)
> _:a owl:sameAs "1.3"^^xsd:float .
> _:a owl:sameAs "1.3"^^xsd:decimal .
> 
> we would not necessarily have the same application using semantics entailing
> 
> (C)
> _:a owl:sameAs "0"^^xsd:float .
> _:a owl:sameAs "0"^^xsd:decimal .
> 
> 
> In short, if the people who want application-defined semantics are 
> serious then a proposal in which there is a well-thought-out extensibility 
> point is necessary. Inevitably such an extensibility point will result 
> in non-monotonic behaviour in that the conclusions (A) and (B) above are 
>  mutually inconsistent.
> 
> 
> 
> 
> ======
> 
> * OWL DL
> 
> The examples are OWL Full. Similar OWL DL examples would concern the 
> disjointness or equivalence of hasValue restrictions, roughly
> 
> ObjectProperty(p)
> DisjointClasses(
>    restriction( p, hasValue("1.3"^^xsd:decimal ) )
>    restriction( p, hasValue("1.3"^^xsd:float ) )
>   )
> 
> 
> or
> 
> ObjectProperty(p)
> EquivalentClasses(
>    restriction( p, hasValue("1.3"^^xsd:decimal ) )
>    restriction( p, hasValue("1.3"^^xsd:float ) )
>   )
> 
> Either one or the other is necessarily true and the other necessarily false, 
> and my understanding is that the "application defined" position is that 
> applications can decide which. It is unclear if any restrictions are 
> made on the consistency of the application choices.
> 
> Jeremy
> 

Received on Monday, 28 November 2005 15:38:53 UTC