Re: A possible structure of the datatype system for OWL 2 (related to ISSUE-126)

>>> On today's hardware, I would set this to be 64 bit integers or even
>>> 128 bit integers, and double precision float. Some machines don't
>>> really have single-float hardware, instead rounding from double  
>>> float.
>> I don't mind going up to 64 bit. 128 might be a bit too much (at  
>> least in Java -- a language in which many reasoners are implemented
>> -- you don't have native support for this).
> Need only invoke code based on it in case of an overflow exception.
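The fallback suggested above (do 64-bit arithmetic and switch to arbitrary precision only when it overflows) could be sketched in Java along these lines. This is just an illustration, not anything from the spec: `add` is a hypothetical helper, and `Math.addExact` (which throws `ArithmeticException` on overflow) stands in for whatever explicit overflow check an implementation would actually use:

```java
import java.math.BigInteger;

public class OverflowFallback {
    // Add two longs using fast 64-bit arithmetic; fall back to
    // BigInteger only when the 64-bit addition overflows.
    // Math.addExact throws ArithmeticException on overflow.
    static Number add(long a, long b) {
        try {
            return Math.addExact(a, b);
        } catch (ArithmeticException overflow) {
            return BigInteger.valueOf(a).add(BigInteger.valueOf(b));
        }
    }

    public static void main(String[] args) {
        System.out.println(add(1L, 2L));             // stays a long: 3
        System.out.println(add(Long.MAX_VALUE, 1L)); // falls back: 9223372036854775808
    }
}
```

The point is that the arbitrary-precision path costs nothing on the common case; the exception handler is entered only when a result actually exceeds 64 bits.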

This line of argument worries me: the fact that there are easy-to-use  
arbitrary-precision classes in Java doesn't seem like sufficient reason  
to extend the spec with features that I expect will be important to  
very, very few users.

I have encountered users who could make use of 64-bit integers  
(although not many). The only users I've ever encountered (or even  
heard about) who claim to need larger integers fall into two groups:

1. Mathematicians/cryptographers for whom 128-bit numbers aren't big  
enough either; i.e. they need arbitrarily-sized integers.
2. Programmers who use "integer" to mean "bit string". 128-bit  
bit-strings are very useful, but they don't require "integer" semantics  
and thus are probably better served by a "binary data" type.

I'd still argue that we should only require implementors to support  
32-bit constants since that covers all common usage. I'd say that  
128-bit integers are completely and utterly unreasonable, and I almost  
certainly wouldn't implement them (unless I also implemented  
arbitrary-precision integers).

I admit I probably would implement 64-bit integers, but I still think  
the spec is better off not requiring them.


Received on Thursday, 10 July 2008 10:13:02 UTC