Digit required before/after decimal point?

Is ".1" allowed as a lexical representation of a decimal? How about
"1."? How about for double and float?

For decimal, the canonical representation says it excludes leading
zeros, which would suggest that the canonical representation of "0.1" is
".1".  Is that intended?

I think things work out OK if the lexical representation always requires
a digit after the decimal point.  This would be consistent with integer
not allowing a decimal point.

What is the canonical representation of the decimal "1.0"? Presumably
"1".  Saying that trailing zeros are prohibited is rather a roundabout
way of saying this.
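To make the canonicalization I have in mind concrete, here is a sketch in
Python (the function name and the exact rules are my own assumptions, not
anything taken from the spec) that strips leading integer zeros and
trailing fractional zeros:

```python
def canonical(lexical: str) -> str:
    """Hypothetical canonical form of a decimal: drop a leading '+',
    leading zeros in the integer part, and trailing zeros in the
    fraction; drop the point when the fraction becomes empty."""
    sign = '-' if lexical.startswith('-') else ''
    digits = lexical.lstrip('+-')
    intpart, _, frac = digits.partition('.')
    intpart = intpart.lstrip('0') or '0'   # keep a lone '0', never ''
    frac = frac.rstrip('0')
    return sign + intpart + ('.' + frac if frac else '')

print(canonical('1.0'))   # -> 1
print(canonical('0.1'))   # -> 0.1 (under this reading, the zero stays)
```

Under these rules the canonical form of "0.1" keeps its leading zero,
which avoids the ".1" oddity noted above.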

I would suggest things could be made much clearer, more precise and
unambiguous by having the spec include a regex for the lexical
representation and the canonical representation of each of the primitive
datatypes (perhaps as an annotation in the schema for datatype
definitions).
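For example, such annotations might look like the following (a sketch
only; these patterns are my guesses, assuming a digit is required both
before and after the decimal point, not what the spec actually says):

```python
import re

# Hypothetical lexical pattern for decimal: optional sign, one or more
# digits, optionally a point followed by one or more digits -- so both
# ".1" and "1." are excluded.
LEXICAL = re.compile(r'[+-]?[0-9]+(\.[0-9]+)?')

# Hypothetical canonical pattern: no '+', no leading zeros (other than
# a lone '0'), and no trailing zeros in the fraction.
CANONICAL = re.compile(r'-?(0|[1-9][0-9]*)(\.[0-9]*[1-9])?')

for s in ('.1', '1.', '0.1', '1.0', '007'):
    print(s, bool(LEXICAL.fullmatch(s)), bool(CANONICAL.fullmatch(s)))
```

A reader could then answer every question above mechanically, by testing
a string against the two patterns.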

James

Received on Wednesday, 10 January 2001 22:49:30 UTC