On the performance of decimal and float encoding conversions

We have some concerns about the relative compression and
encoding/decoding performance of EXI compared with "standard" binary
representations of decimal and floating point numbers. While the
formats currently proposed in EXI seem optimal for compression, and
well adapted to converting between EXI and string (or decimal)
representations, there is a significant performance penalty when
converting from EXI to the "standard" binary representations used in
the C language:

- Where large integers are represented natively as arrays of machine
ints, the proposed EXI integer representation converts easily to that
internal form. For decimals, however (whose native representation adds
a scale factor), the EXI representation must first be converted to a
decimal representation so that its fractional digits can be reversed,
before being converted to the internal representation. This
intermediate conversion is expensive.
- For floating point numbers, the proposed EXI format must always be 
converted to a decimal representation before it can be converted to 
the native (typically IEEE 754) representation.

We therefore add our voice to the existing concerns regarding the EXI 
representation of decimal and floating point numbers. We would prefer 
at least an option to support decimals using a scale factor, and 
floats using the IEEE 754 representation.

Best regards

Antoine Mensch

Received on Monday, 28 September 2009 09:29:52 UTC