Re: Floating point proposal from left field... [long]

Ashok wrote:

We have changed the spec substantially in this area.  The "real" datatype is
gone.  There are 2 new primitive datatypes corresponding to IEEE float and
double.  I think you will like this much better.  The 12/17 public draft
includes these changes.

I did previously respond to Ashok privately, but I wanted to make sure that I
also responded publicly.  I apologize to Ashok for making him sit through
this twice.

I am well aware of the introduction of the "double" and "float" datatypes
and the removal of the "real" datatype in the 12/17 draft.  I have tried as
persuasively as possible to explain (both in comments to this list and in
the HTMLHelp file I published) why I believe those changes are a very bad
thing.  The fact that Ashok thought that I liked (or would like) the "float"
and "double" primitive datatypes shows that I haven't been able to
effectively communicate my strong objections to this change.

-------------
Here is the pocket version of my feelings toward the 12/17 Datatypes Draft

1) "real" (a unlimited range and precision floating point) must come back.
2) I consider "double" and "float" harmful and would prefer they be removed,
but would grudingly tolerate them.
3) ISO 8601 truncated forms must go
4) recurringInstant must go
5) Enumerations should be reworked to handle the "international booleans"
problem
6) <and>, <or>, <nand>, <nor> and <conform> facets would very simply support
validations of types that include disjoint ranges or types that were a
namespace or one of an enumeration of special lexicals.
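
To make item 6 concrete, here is a purely hypothetical sketch of a disjoint
range expressed with an <or> facet.  None of these elements exist in any
draft; the syntax only loosely imitates the datatype definitions of the time,
and the <range> grouping element is invented for illustration.

<!-- Hypothetical only: neither <or> nor the <range> grouping exists in
     any draft.  This illustrates the kind of disjoint-range constraint
     item 6 is aimed at: an hour that is either 0-5 or 20-23. -->
<datatype name="offPeakHour">
  <basetype name="integer"/>
  <or>
    <range>
      <minInclusive>0</minInclusive>
      <maxInclusive>5</maxInclusive>
    </range>
    <range>
      <minInclusive>20</minInclusive>
      <maxInclusive>23</maxInclusive>
    </range>
  </or>
</datatype>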

The HTMLHelp files at http://www.software.aeat.com/xml/resources.htm were
designed to communicate both my objections and my proposed resolutions.  If
the HTMLHelp format is a problem, I could send the same material on request
as zipped HTML pages.  Please, please, please, please read them.

-------------


I had objections to the <minAbsoluteValue> facet.  Basically, I thought that
the receiving application should be responsible for responding to values
that underflow its specific floating point type, instead of forcing the
sending application to preemptively fudge numbers to avoid underflow.  I
thought it was a generally bad thing to do and argued for its removal to
prevent naive schema authors from using it when they thought providing as
much information as they could was a good thing.  However, if it stayed, I
could still avoid using minAbsoluteValue in my own schemas and lobby other
people to avoid it.  Other than that, I thought the "real" datatype in the
earlier versions was acceptable.
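
For readers who have not seen the earlier drafts, the usage I was arguing
against looked roughly like the following.  The syntax here is approximate
and the type name is made up; only the intent matters.

<!-- Approximate sketch of the earlier drafts' facet syntax.  The
     threshold is the smallest normalized IEEE single-precision value,
     so a sender would be forced to round or drop anything smaller,
     instead of the receiver deciding how to handle underflow. -->
<datatype name="measurement">
  <basetype name="real"/>
  <minAbsoluteValue>1.17549435E-38</minAbsoluteValue>
</datatype>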

I also believed that <minAbsoluteValue> foreshadowed a larger tendency to
bind the lexical representation to a specific implementation in the schema.
Binding to a specific implementation is something that needs to be done, but
it belongs either in a type-aware DOM or in the application.  The downside of
doing it in the schema is that it can become enormously complicated when your
application is running on a platform that does not support that type.

Unfortunately, losing "real" doesn't give me an option to avoid mimicking
IEEE's range, precision and rounding behavior.


I do understand that the previous message could have been confusing.  Its
points were basically:

1) bitsMantissa and bitsExponent are a bad thing because they are also an
attempt to allow a schema datatype to be bound to one and only one floating
point datatype.  The only difference was that the range of implementation
datatypes you could bind to was larger.

2) If a schema author really wanted to mimic the range and underflow
characteristics of a specific datatype, it could be done lexically and
without introducing any specific facets (a sketch follows).
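
As a sketch of point 2 (again, the syntax is approximate and the type name,
digit counts and regular expression are only illustrative, not a precise
IEEE mapping), the range and rough precision of an IEEE single could be
approximated on a "real" base with ordinary bounds facets and a lexical
pattern, with no float-specific facets at all:

<!-- Sketch only: mimicking IEEE single-precision range with plain
     bounds facets, and limiting precision lexically to roughly eight
     significant digits with a pattern.  The pattern assumes normalized
     scientific notation; a real schema would need more pattern work to
     cover other lexical forms and the underflow threshold. -->
<datatype name="singleLike">
  <basetype name="real"/>
  <minInclusive>-3.402823466E+38</minInclusive>
  <maxInclusive>3.402823466E+38</maxInclusive>
  <pattern>-?\d(\.\d{1,7})?(E[+-]?\d{1,2})?</pattern>
</datatype>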

Received on Saturday, 15 January 2000 12:38:07 UTC