
Re: Comments on test cases

From: Pete Cordell <petexmldev@tech-know-ware.com>
Date: Fri, 15 Jun 2007 16:12:44 +0100
Message-ID: <004901c7af5f$abc6b9b0$5f00a8c0@Codalogic>
To: "Pete Cordell" <petexmldev@tech-know-ware.com>, "George Cowe" <gcowe@origoservices.com>
Cc: <public-xsd-databinding@w3.org>
----- Original Message -----
From: "Pete Cordell" <petexmldev@tech-know-ware.com>
> The only thing I have issue with is the float one. 
> 1267.43233765876583765E12 is also a legal literal for a float, but that 
> doesn't mean that it can be fully represented in a float variable, and 
> thus not round-trippable.

I've been doing some more tests of this...

If I store 1267.43233E12 in a float, write it to strings at both 7- and 
8-digit precision, and then read those strings back into floats (see 
attached code), I find that the value converted at 7-digit precision 
doesn't match the input, whereas the value converted at 8-digit precision 
does.

This points to 8 digits of precision being the way to go.

BUT if I build up all the strings 0.1, 0.2, 0.3, and so on up to 0.9999999, 
read each one into a float, and then print that float at 7 and 8 digits of 
precision, the 7-digit conversion produces the same text as the input every 
time, whereas the 8-digit conversion produces extraneous digits nearly 70% 
of the time.

For example, you get results like:
input    7-digit   8-digit
0.1      0.1       0.1
0.2      0.2       0.2
0.3      0.3       0.30000001
0.4      0.4       0.40000001
0.5      0.5       0.5
0.6      0.6       0.60000002
0.7      0.7       0.69999999
0.8      0.8       0.80000001
0.9      0.9       0.89999998
0.1      0.1       0.1
0.11     0.11      0.11
0.12     0.12      0.12
0.13     0.13      0.13
0.14     0.14      0.14
0.15     0.15      0.15000001
0.16     0.16      0.16
0.17     0.17      0.17
0.18     0.18      0.18000001

That says to me that 7 digits is the correct way to go.

So there's a judgement call to be made here.  Go with 8 digits and pass the 
W3C test but surprise your customers with strange results, or go with 7 
digits and fail the W3C test but give the results your customers probably 
expect.

Since customers who really care about this sort of precision have the 
option to map xs:float to double, I've decided to go with 7 digits of 
precision and fail the test.

This seems unfortunate to me.  The intent of the test seems to be about 
whether floating point numbers can be parsed and generated, not about corner 
cases in IEEE 754 binary to text conversion.  There are a vast number of 
values that could be used that would round-trip perfectly.  Why stick with a 
value that an editor created by dancing across his/her keyboard if it causes 
problems?

Regards,

Pete.
--
=============================================
Pete Cordell
Codalogic Ltd
for XML Schema to C++ data binding visit
 http://www.codalogic.com/lmx/
=============================================




Received on Friday, 15 June 2007 15:13:16 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 18 December 2010 18:20:37 GMT