
Re: A possible structure of the datatype system for OWL 2 (related to ISSUE-126)

From: Rob Shearer <rob.shearer@comlab.ox.ac.uk>
Date: Thu, 10 Jul 2008 14:57:00 +0100
Cc: "Boris Motik" <boris.motik@comlab.ox.ac.uk>, "'OWL Working Group WG'" <public-owl-wg@w3.org>
Message-Id: <DADCC165-03A9-4287-886D-5CAF09918C26@comlab.ox.ac.uk>
To: Alan Ruttenberg <alanruttenberg@gmail.com>

Lots of embedded processors only support 32-bit words (ints and floats).

The majority of desktop hardware in operation today does not work  
"natively" with 64-bit integers (although there's usually a hardware  
instruction so that you can perform the multiple instructions  
necessary to implement 64-bit arithmetic without killing your  
pipeline). This will change in time.
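The multi-instruction sequence mentioned above can be sketched roughly as follows. This is a hedged illustration, not anything from the thread: the type and function names are mine, and it shows 64-bit addition built purely from 32-bit operations, the way a 32-bit target would have to do it.

```c
#include <stdint.h>

/* Hypothetical sketch: a 64-bit integer held as two 32-bit words. */
typedef struct {
    uint32_t lo;  /* low  32 bits */
    uint32_t hi;  /* high 32 bits */
} u64_t;

/* 64-bit addition using only 32-bit arithmetic.
 * On real 32-bit hardware this is typically an add followed by an
 * add-with-carry instruction, which is the "hardware instruction"
 * that keeps the pipeline from stalling. */
static u64_t add64(u64_t a, u64_t b) {
    u64_t r;
    r.lo = a.lo + b.lo;              /* low word wraps modulo 2^32 */
    uint32_t carry = (r.lo < a.lo);  /* wrap detected => carry out */
    r.hi = a.hi + b.hi + carry;      /* high word absorbs the carry */
    return r;
}
```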

But I've never even seen a computer program written using a 128-bit  
integer primitive, and I expect that it's for the reasons I've cited:  
nobody needs it. 2^32 is a fairly big number. 2^64 is astronomically  
big. Nobody counts that high. Ever. Requiring implementations to support  
something that may hypothetically gain marginal use a decade from now  
doesn't seem reasonable to me.

>>> Some machines don't really have single float hardware, instead  
>>> rounding from double float.
>> I'm not sure that's relevant: all machines can mimic single float  
>> (i.e. the double hardware can do single rounding after every  
>> operation).
> Yes. I was suggesting there is little or no benefit to restricting  
> to single float.
>> I'd be more interested in hearing how big a user base double- 
>> precision floats really have. Are many scientific data sets encoded  
>> using doubles?
>> For the record, I wouldn't mind requiring double-precision floats  
>> but only 32-bit integers. Minimal implementations of such a spec  
>> could use a single homogeneous representation for numbers in that  
>> case.
> Is it that common that current machines have 32 bit integer but not  
> 64 bit integer arithmetic?
> I am more concerned about the float than the integer size,  
> notwithstanding my comments about 128 bit float. In that case I was  
> thinking about building for the future, and I expect that 64 and 128  
> bit integer arithmetic will be commonly available soon, if not  
> immediately.
> Perhaps worth poking around to see how often xsd:long is used, since  
> that would be the motivation for 64 bit.
> -Alan
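
The earlier point that double hardware can mimic single floats by rounding after every operation can be sketched like this. A hedged illustration (the function name is mine, not from the thread): each operation is computed in double and immediately rounded back to float, which for the basic operations on float inputs yields the same result as native single-precision arithmetic.

```c
/* Hypothetical sketch: single-precision addition emulated on double
 * hardware. The double result of adding two floats is exact, so the
 * final cast performs exactly one rounding to single precision, just
 * as native float hardware would. */
static float single_add(float a, float b) {
    double d = (double)a + (double)b;  /* exact in double */
    return (float)d;                   /* round once to single */
}
```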

Received on Thursday, 10 July 2008 13:57:37 UTC
