
RE: Question about decimal arithmetic

From: Michael Rys <mrys@microsoft.com>
Date: Mon, 5 Apr 2004 11:16:50 -0700
Message-ID: <EB0A327048144442AFB15FCE18DC96C702859EDC@RED-MSG-31.redmond.corp.microsoft.com>
To: "Jeni Tennison" <jeni@jenitennison.com>, "Michael Kay" <mhk@mhk.me.uk>
Cc: <public-qt-comments@w3.org>, <jkenton@datapower.com>

I don't think deciding precision based on instance information is
acceptable. Many type systems have precision (and scale) as a type facet
that should be determined based on the arguments.

Also, I am not yet convinced that anything stricter than
implementation-defined will be acceptable given the wide range of
implementation platforms...

Best regards
Michael

> -----Original Message-----
> From: public-qt-comments-request@w3.org [mailto:public-qt-comments-
> request@w3.org] On Behalf Of Jeni Tennison
> Sent: Monday, April 05, 2004 2:20 AM
> To: Michael Kay
> Cc: public-qt-comments@w3.org; jkenton@datapower.com
> Subject: Re: Question about decimal arithmetic
> 
> 
> Hi Mike,
> 
> > I would like to offer the following suggestion for a more
> > interoperable definition of decimal division:
> >
> > If the types of $arg1 and $arg2 are xs:integer or xs:decimal, then
> > the result is of type xs:decimal. The precision of the result is
> > implementation-defined, but it must not be less than min((18,
> > max((p1, p2)), R)) digits, where p1 is the precision of $arg1, p2 is
> > the precision of $arg2, and R is the number of digits (possibly
> > infinite) required to represent the exact mathematical result.
> > "Precision" here means the total number of significant decimal
> > digits in the value; all digits are considered significant other
> > than leading zeros before the decimal point and trailing zeros after
> > the decimal point. If rounding is necessary, the value must be
> > rounded towards zero. Handling of overflow and underflow is defined
> > in section 6.2.
> 
> Using the precision of the two arguments to determine the precision of
> the result leads to results that I find strange. For example:
> 
>   1       div 3         =>  0.3
>   1000000 div 3000000   =>  0.3333333
>   0.00001 div 0.00003   =>  0.33333
> 
> It would make a lot more sense to me to just use min((18, R)), but any
> definition here is better than none. What cases were you thinking of
> that led you to suggest using the precision of the arguments to
> determine the precision of the result?
> 
> Cheers,
> 
> Jeni
> 
> ---
> Jeni Tennison
> http://www.jenitennison.com/
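[Editor's note: to make the behavior under discussion concrete, the proposed rule can be sketched with Python's decimal module. The helper names below are illustrative, not part of any draft text; precision is counted per the quoted definition, and rounding is towards zero as proposed. The sketch reproduces Jeni's three examples.]

```python
from decimal import Decimal, localcontext, ROUND_DOWN

def precision(lexical: str) -> int:
    """Significant digits per the proposal: every digit counts except
    leading zeros before the decimal point and trailing zeros after it."""
    digits = lexical.lstrip("+-")
    if "." in digits:
        whole, frac = digits.split(".")
        return len(whole.lstrip("0")) + len(frac.rstrip("0"))
    return max(1, len(digits.lstrip("0")))

def proposed_div(arg1: str, arg2: str) -> Decimal:
    """Decimal division with min((18, max((p1, p2)), R)) digits of
    precision.  Decimal already returns the exact quotient when it fits
    in fewer than ctx.prec digits, which covers the R term."""
    with localcontext() as ctx:
        ctx.prec = min(18, max(precision(arg1), precision(arg2)))
        ctx.rounding = ROUND_DOWN  # round towards zero, as proposed
        return Decimal(arg1) / Decimal(arg2)

print(proposed_div("1", "3"))              # 0.3
print(proposed_div("1000000", "3000000"))  # 0.3333333
print(proposed_div("0.00001", "0.00003"))  # 0.33333
```

The argument-dependent precision is visible here: the same mathematical quotient 1/3 comes back with 1, 7, or 5 digits depending only on how the operands were written, which is the oddity Jeni points out; her min((18, R)) alternative would yield 18 digits in all three cases.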
Received on Monday, 5 April 2004 14:17:25 UTC