Reiterated minAbsoluteValue rant

Maybe someone can enlighten me, but I see no value in, and significant
complications from, the use of the minAbsoluteValue facet.  I believe the
complications are severe enough and the benefits so nebulous that the
facet should be removed to prevent its use.  The maxAbsoluteValue facet
does not concern me as much, but I do think it should be eliminated as
well, since the same goal can be accomplished with the existing
minInclusive and maxInclusive facets.

Here is a scenario:

- The schema author naively writes a minAbsoluteValue facet of
  1.40239846e-45 for a datatype because he wants to support applications
  that use IEEE single precision.
- An engineering program that writes XML and is built on a floating point
  datatype with greater precision must now check every value it writes
  against the schema's underflow constraint to prevent validation errors
  (see the sketch after this list).
- An engineering program that works in greater than single precision is
  now subject to larger truncation errors, since the document creator had
  to purposely lose precision.
- A single precision engineering application avoids an underflow when the
  double precision value typically generated by the conversion factor has
  already been truncated to single precision.
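
To make that burden concrete, here is a minimal Python sketch of the check
a higher-precision writer would be forced to perform before emitting each
value.  The emit_value helper and its flush-to-zero behaviour are my own
illustrative assumptions, not anything taken from a real schema processor:

    # Illustrative only: MIN_ABSOLUTE_VALUE and emit_value are hypothetical,
    # not part of any schema processor API.
    MIN_ABSOLUTE_VALUE = 1.40239846e-45  # the facet value from the scenario

    def emit_value(x: float) -> str:
        """Return the lexical form of x, flushing sub-facet magnitudes to
        zero so the resulting document still validates."""
        if x != 0.0 and abs(x) < MIN_ABSOLUTE_VALUE:
            # The writer deliberately loses precision purely to satisfy
            # the facet.
            x = 0.0
        return repr(x)

    print(emit_value(1e-50))   # "0.0"    -- forced truncation
    print(emit_value(2.5e-3))  # "0.0025" -- unaffected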

The fact is that a truncation underflow in a lower precision application is
not such an extraordinarily bad event that it justifies the additional
burden on applications that use higher precision.  What is so bad about
letting the lower precision application take a value of 1e-100 and truncate
it to zero if and when it encounters it?
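
For what that "bad event" actually looks like, here is a small Python
illustration; struct is used only to mimic storage in an IEEE 754 single
precision datatype, and nothing schema-specific is involved:

    import struct

    # Mimic storing the document value in an IEEE 754 single precision slot.
    value = 1e-100
    as_single = struct.unpack('<f', struct.pack('<f', value))[0]
    print(as_single)  # 0.0 -- the value quietly underflows to zero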

If the schema author wants to support single precision, he can use
minInclusive and maxInclusive constraints to keep values within the +/-1e38
range, and should tolerate underflows that may be generated by higher
precision applications.
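
A rough Python sketch of that alternative, with an assumed
in_declared_range helper and +/-1e38 standing in for the minInclusive and
maxInclusive bounds (the names and limits are illustrative, not schema
syntax):

    # Illustrative only: helper name and bounds are assumptions.
    MAX_SINGLE = 1e38  # roughly the top of the IEEE single precision range

    def in_declared_range(x: float) -> bool:
        """Accept anything inside [-1e38, +1e38]; tiny magnitudes are
        tolerated rather than rejected."""
        return -MAX_SINGLE <= x <= MAX_SINGLE

    print(in_declared_range(1e-100))  # True  -- underflow is tolerated
    print(in_declared_range(5e40))    # False -- genuinely out of range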

Received on Monday, 15 November 1999 15:48:57 UTC