From: Jeni Tennison <jeni@jenitennison.com>

Date: Mon, 5 Apr 2004 10:19:40 +0100

Message-ID: <675716596.20040405101940@jenitennison.com>

To: "Michael Kay" <mhk@mhk.me.uk>

Cc: public-qt-comments@w3.org, jkenton@datapower.com


Hi Mike,

> I would like to offer the following suggestion for a more
> interoperable definition of decimal division:
>
> If the types of $arg1 and $arg2 are xs:integer or xs:decimal, then
> the result is of type xs:decimal. The precision of the result is
> implementation-defined, but it must not be less than min((18,
> max((p1, p2)), R)) digits, where p1 is the precision of $arg1, p2 is
> the precision of $arg2, and R is the number of digits (possibly
> infinite) required to represent the exact mathematical result.
> "Precision" here means the total number of significant decimal
> digits in the value; all digits are considered significant other
> than leading zeros before the decimal point and trailing zeros after
> the decimal point. If rounding is necessary, the value must be
> rounded towards zero. Handling of overflow and underflow is defined
> in section 6.2.

Using the precision of the two arguments to determine the precision of the result leads to results that I find strange. For example:

  1 div 3             => 0.3
  1000000 div 3000000 => 0.3333333
  0.00001 div 0.00003 => 0.33333

It would make a lot more sense to me to just use min((18, R)), but any definition here is better than none. What cases were you thinking of that led you to suggest using the precision of the arguments to determine the precision of the result?

Cheers,

Jeni
---
Jeni Tennison
http://www.jenitennison.com/

Received on Monday, 5 April 2004 05:21:56 UTC
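[For readers following the thread: the behaviour of the proposed rule, and the three examples above, can be sketched with Python's decimal module. This is an illustrative sketch, not any implementation's actual code; the helper names `precision` and `div` are invented here, and the sketch ignores the overflow/underflow handling the proposal delegates to section 6.2.]

```python
from decimal import Decimal, Context, ROUND_DOWN

def precision(d: Decimal) -> int:
    """Significant digits under the proposed wording: all digits count
    except leading zeros before the decimal point and trailing zeros
    after it (so 0.00001 has precision 5, and 1000000 has precision 7)."""
    _, digits, exp = d.as_tuple()
    digits = list(digits)
    # Drop leading zeros before the decimal point.
    while len(digits) > 1 and digits[0] == 0:
        digits.pop(0)
    # Drop trailing zeros that fall after the decimal point.
    while exp < 0 and len(digits) > 1 and digits[-1] == 0:
        digits.pop()
        exp += 1
    # Zeros between the decimal point and the first stored digit
    # (e.g. 0.00001) are not excluded by the wording, so they count.
    return max(len(digits), -exp) if exp < 0 else len(digits)

def div(a: str, b: str) -> Decimal:
    """Decimal division at min((18, max((p1, p2)))) digits, rounding
    towards zero. R (the exact-result length) needs no explicit
    handling: an exact quotient shorter than the context precision
    comes back exact."""
    x, y = Decimal(a), Decimal(b)
    p = min(18, max(precision(x), precision(y)))
    ctx = Context(prec=p, rounding=ROUND_DOWN)
    return ctx.divide(x, y)

print(div("1", "3"))                  # 0.3
print(div("1000000", "3000000"))      # 0.3333333
print(div("0.00001", "0.00003"))      # 0.33333
```

Run against the three examples, the sketch reproduces exactly the results Jeni finds strange: the mathematically identical quotient 1/3 comes back at three different precisions depending only on how the operands happen to be written.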
