- From: Michael Kay <mike@saxonica.com>
- Date: Thu, 28 Oct 2004 09:31:20 +0100
- To: "'George Cristian Bina'" <george@oxygenxml.com>
- Cc: "'Farber, Saul \(ENV\)'" <Saul.Farber@state.ma.us>, <xmlschema-dev@w3.org>
> B:
> <choice minOccurs="1" maxOccurs="unbounded">
>   <element ref="test:basicBit" minOccurs="1" maxOccurs="1"/>
>   <element ref="test:restrictedBasicBit" minOccurs="1" maxOccurs="1"/>
> </choice>
>
> R:
> <choice minOccurs="1" maxOccurs="1">
>   <element ref="test:restrictedBasicBit" maxOccurs="unbounded"/>
> </choice>

You seem to have a point. The set of instances permitted by R is clearly a subset of those permitted by B, but the published algorithm doesn't seem to reflect this.

Saxon is actually using a different algorithm to test subsumption, based on analysing the state transition graphs of the two content models (see the Thompson/Tobin paper from XML Europe 2003). As the paper notes, "Not only is the [published] approach complex and difficult to understand, but it is also inaccurate, in that it both allows some definitions as restrictions which accept more than their base, and bars some definitions which accept less".

So it's one of those cases where an implementor has to choose between doing what the spec says and doing what it means. The introduction to the algorithm in the spec says: "The approach to defining a type by restricting another type definition set out here is designed to ensure that types defined in this way are guaranteed to be a subset of the type they restrict." So perhaps it's not entirely unreasonable for an implementation to use a different algorithm that better implements that intent!

Michael Kay
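
For what it's worth, here is a minimal Python sketch of that state-machine idea, not Saxon's actual code: it hand-encodes the two content models above as deterministic automata, B as (basicBit | restrictedBasicBit)+ and R as restrictedBasicBit+, and checks subsumption by exploring the product of the two state graphs. The encoding and all names here are illustrative assumptions, not anything taken from the spec or the paper.

    from collections import deque

    # Each content model is a small deterministic automaton:
    # (transitions, start_state, accepting_states), where transitions maps
    # (state, element_name) -> state and a missing entry means "not allowed".

    DEAD = "DEAD"  # sink reached when the restriction moves where the base cannot

    def subsumes(base, restriction):
        """Return True iff every element sequence accepted by `restriction`
        is also accepted by `base`, by walking the product of the two
        automata breadth-first."""
        b_trans, b_start, b_accept = base
        r_trans, r_start, r_accept = restriction
        seen = set()
        queue = deque([(r_start, b_start)])
        while queue:
            r, b = queue.popleft()
            if (r, b) in seen:
                continue
            seen.add((r, b))
            # The restriction may stop here; if the base cannot, R is not a subset.
            if r in r_accept and (b == DEAD or b not in b_accept):
                return False
            # Follow every element the restriction allows from this state.
            for (state, name), r_next in r_trans.items():
                if state == r:
                    b_next = DEAD if b == DEAD else b_trans.get((b, name), DEAD)
                    queue.append((r_next, b_next))
        return True

    # B: (basicBit | restrictedBasicBit)+   -- the base choice group above
    base = ({(0, "basicBit"): 1, (0, "restrictedBasicBit"): 1,
             (1, "basicBit"): 1, (1, "restrictedBasicBit"): 1},
            0, {1})

    # R: restrictedBasicBit+                -- the restricting choice group above
    restriction = ({(0, "restrictedBasicBit"): 1,
                    (1, "restrictedBasicBit"): 1},
                   0, {1})

    print(subsumes(base, restriction))  # True: R accepts a subset of B
    print(subsumes(restriction, base))  # False: B allows basicBit, R does not

The walk fails exactly when the restriction can reach an accepting state along a path the base cannot match, which is the sense in which R's instances are a subset of B's.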
Received on Thursday, 28 October 2004 08:31:26 UTC