
Re: XML Schema datatypes

From: Bijan Parsia <bparsia@cs.man.ac.uk>
Date: Thu, 15 Nov 2007 09:48:06 +0000
Message-Id: <AAAF7E40-BA8B-47A0-9967-438E77BB530D@cs.man.ac.uk>
Cc: "Web Ontology Language (OWL) Working Group WG" <public-owl-wg@w3.org>
To: Carsten Lutz <clu@tcs.inf.tu-dresden.de>

[Just replying to this one... I think once boundedness per se is
admitted, it's a bit hard to categorically argue against types that
1) were in the previous version, 2) were implemented, and 3) meet
certain ideological and organizational purposes. See, e.g., Jeremy's
argument for including some of the odder XML Schema base string
types, which are definable from string + existing facets. More
inline.]

On Nov 15, 2007, at 7:36 AM, Carsten Lutz wrote:

> On Thu, 15 Nov 2007, Bijan Parsia wrote:
>>> *We* are *not* defining a schema language for (stored) data in the
>>> sense of XML Schema. So it is a valid question whether or not the  
>>> XML
>>> Schema datatypes are also good for our (different) purposes. I  
>>> believe
>>> they are not.
>> I think this is too strong. Sometimes users use OWL for *high  
>> level* conceptual modeling, but sometimes one is refining the  
>> conceptual model a bit based on some more concrete aspects. Why  
>> should they have to switch out of OWL just because they need to  
>> account for representation limits? What about modeling database  
>> schemas?
> I feel that deductions that actually *rely* on the boundedness or  
> fixed
> precision are (almost?) never desired.

Even if so, they may rely on disjointness of types. For example, if  
I'm modeling an attribute with a float value in one schema and an  
integer value in another, it's useful to get a clash if I claim the  
classes with these (functional) attributes are equivalent.
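A minimal sketch of that case in Turtle (the class and property names
are hypothetical; the clash depends on the float and integer value
spaces being disjoint):

```turtle
@prefix :     <http://example.org/schemas#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:hasValue a owl:DatatypeProperty , owl:FunctionalProperty .

# Schema 1 types the attribute as a float
:RecordA rdfs:subClassOf
    [ a owl:Restriction ;
      owl:onProperty :hasValue ;
      owl:someValuesFrom xsd:float ] .

# Schema 2 types the same attribute as an integer
:RecordB rdfs:subClassOf
    [ a owl:Restriction ;
      owl:onProperty :hasValue ;
      owl:someValuesFrom xsd:integer ] .

# Asserting equivalence forces the single :hasValue of any
# :RecordA instance into both value spaces at once; with the
# value spaces disjoint, both classes become unsatisfiable.
:RecordA owl:equivalentClass :RecordB .
```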

> If you disagree, it would be
> good to see a concrete example.

Another example, a little outside current practice, is dealing with  
the compatibility of keys.

> But if such inferences are undesired,
> why introduce bounded datatypes in the first place?

Even aside from reasoning issues, users often want to decorate their  
classes with types that can be usefully mapped, e.g., to data  
acquisition fields. (There, disjointness probably matters more than  
boundedness.) Similarly, if I'm trying to decide containment between  
two policies which contain numeric comparisons, it often does matter  
what the specific types are (not an artificial example, by the way).

Finally, as you point out, many of these are definable from the  
unbounded ones. Given that these are named and common and have  
institutional weight behind them, it's pretty easy to see why you  
would introduce them.
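To make "definable from the unbounded ones" concrete: a byte-sized
type can be spelled out from xsd:integer plus min/max facets, roughly
along these lines (datatype-restriction style; the name :ByteLike is
hypothetical):

```turtle
@prefix :     <http://example.org/types#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# A byte-sized integer, defined as a restriction of the
# unbounded xsd:integer with min/max facets
:ByteLike a rdfs:Datatype ;
    owl:equivalentClass [
        a rdfs:Datatype ;
        owl:onDatatype xsd:integer ;
        owl:withRestrictions ( [ xsd:minInclusive -128 ]
                               [ xsd:maxInclusive  127 ] )
    ] .
```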

I would suggest that, as with *all* funky issues in numerical  
methods, users need a lot of education in order to cope with  
things. Some of that education can be as simple as steering them to  
"more mathematically clean types", but workarounds are workarounds.  
If users have to maintain that "this integer is really, really a  
float", then that, too, is a burden on them.

>> And conceptual modeling isn't the only thing people do with OWL.  
>> People do information integration which can require details about  
>> the types (consider either ETL or distributed query). Or when  
>> reducing other formalisms, like policy languages such as WS-Policy  
>> or XACML, to OWL, it can matter that you have the actual datatypes  
>> involved.
>> Finally, people *do* use RDF and OWL directly to work with  
>> semi-structured data.
>> So, I see your argument as supporting adding some additional, more  
>> general types, to better support CM, not as an argument for  
>> eliminating various representationally specific numeric types.
>> As with numeric methods generally, the user must exercise care.
> I can live with this view. This is the least we should do, i.e.,  
> giving
> the users *the option* to work with unbounded datatypes.
>>> We are defining an ontology language with a declarative semantics.
>> I don't see that declarativity is an issue here.
> With a procedural approach, you usually say (in a unique way) how a
> value is computed. There, too, we may hit upon a "gap" in a bounded
> datatype, but then it is common to use rounding and this works just
> fine. With a declarative approach, there may easily be multiple ways
> to compute a value (Jeremy's Celsius-Fahrenheit cycle is only a very
> simple example). Trying to do rounding there breaks things (even in
> the unary case!).
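[To make the rounding point concrete, a quick numeric check (the
particular numbers are just for illustration):]

```python
# Sketch of how rounding breaks a declarative Celsius-Fahrenheit cycle.
def c_to_f(c):
    return c * 9 / 5 + 32

def f_to_c(f):
    return (f - 32) * 5 / 9

# With exact (unbounded) values, the cycle is the identity:
assert abs(f_to_c(c_to_f(37.0)) - 37.0) < 1e-9

# Forcing values into a bounded integer type by rounding breaks it:
f = 98
c = round(f_to_c(f))       # 36.66... rounds to 37
f_back = round(c_to_f(c))  # 98.6 rounds to 99, not 98
print(f, c, f_back)        # 98 37 99
```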
>>> and (ii) the bounded datatypes of XML schema.
>> Boundedness is an issue, but you can get that with integers and  
>> min and max. Which we have.
> If a user explicitly uses min and max in his modelling, he obviously
> believes that the boundedness is crucial for the modelling. Then, he
> should expect to have consequences that derive from that boundedness.
> I cannot see that this is in any contradiction to what I have  
> advocated.
>> I don't see that as a fix. And if the issue is merely boundedness  
>> then we'd have to chuck user defined datatypes on the integers.
> I disagree.

If the issue sufficient to remove a datatype is that it is  
*bounded*, then obviously all bounded types should go. Your more  
sophisticated version is that the particular boundedness stemming  
from some of the built-in types is undesirable from a modeling  
perspective *and* has counterintuitive results.
Received on Thursday, 15 November 2007 09:48:25 UTC
