- From: <bugzilla@wiggum.w3.org>
- Date: Sun, 14 Jan 2007 19:15:30 +0000
- To: public-qt-comments@w3.org
http://www.w3.org/Bugs/Public/show_bug.cgi?id=4023

------- Comment #5 from mike@saxonica.com 2007-01-14 19:15 -------

The text for unsignedInt and similar types makes statements like (3.3.22.1):

    unsignedInt has a lexical representation consisting of a finite-length
    sequence of decimal digits (#x30-#x39). For example: 0, 1267896754, 100000

However, there is no pattern facet in the schema-for-schemas that enforces this constraint. This is true in both the first and second editions of XML Schema 1.0.

Given the clarification in the F+O spec that section 17.1.1 applies to this cast, we have the statement "The semantics of casting are identical to XML Schema validation", which means that if validating "+123" as an unsignedInt fails, then casting it will also fail. So it becomes a question of which takes precedence: the cited text in XML Schema Part 2 section 3.3.22.1, or the schema for schemas.

If you follow the rules for validating an instance against a schema, you reach Validation Rule: Datatype Valid in Schema Part 2 section 4.1.4, and this invokes validation of the lexical representation against the pattern facet, but not against any English text describing a built-in datatype. On the other hand, this interpretation would make no sense in the case of primitive datatypes, because the English text describing the lexical form of (say) xs:float is all we have, and it is clearly intended to be normative.

So I think this needs clarification from the Schema WG. Meanwhile, we can guess what answer we might get if we note that XML Schema 1.1 explicitly permits "+123" and "-0" in the lexical space of unsignedInt and similar types.
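[Editor's illustration, not part of the original comment: a minimal XQuery sketch of the cast whose outcome is in question. The literal "+123" comes from the comment above; everything else is illustrative. Whether this query raises FORG0001 or returns 123 is exactly what the two readings disagree on.]

    (: Under the prose in Schema Part 2 section 3.3.22.1, "+123" is outside
       the lexical space of xs:unsignedInt, so the cast fails with FORG0001.
       Under the schema-for-schemas, which defines no pattern facet for
       unsignedInt, the value is valid and the cast succeeds. :)
    let $v := "+123"
    return
      if ($v castable as xs:unsignedInt)
      then $v cast as xs:unsignedInt
      else "FORG0001: not castable"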
Received on Sunday, 14 January 2007 19:15:35 UTC