
RE: I'd appreciate a second-look at this, just to double-check

From: Michael Kay <mike@saxonica.com>
Date: Thu, 28 Oct 2004 09:31:20 +0100
To: "'George Cristian Bina'" <george@oxygenxml.com>
Cc: "'Farber, Saul \(ENV\)'" <Saul.Farber@state.ma.us>, <xmlschema-dev@w3.org>
Message-Id: <E1CN5gq-0000a3-00@ukmail1.eechost.net>

> B:
>    <choice minOccurs="1" maxOccurs="unbounded">
>        <element ref="test:basicBit" minOccurs="1" maxOccurs="1"/>
>        <element ref="test:restrictedBasicBit" minOccurs="1" maxOccurs="1"/>
>    </choice>
> R:
>    <choice minOccurs="1" maxOccurs="1">
>        <element ref="test:restrictedBasicBit" maxOccurs="unbounded"/>
>    </choice>

You seem to have a point. The set of instances permitted by R is clearly a
subset of those permitted by B, but the published algorithm doesn't seem to
reflect this.
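To make the subset claim concrete, here is a quick brute-force check (my own illustration, not from the thread): writing "b" for test:basicBit and "r" for test:restrictedBasicBit, B's content model corresponds to the regular expression (b|r)+ and R's to r+, and every short element sequence that R accepts is also accepted by B.

```python
import re
from itertools import product

# Content models rendered as regular expressions over single-letter
# stand-ins for the element names (an illustration, not Saxon's method):
#   B: one or more of basicBit | restrictedBasicBit  ->  (b|r)+
#   R: one or more of restrictedBasicBit             ->  r+
base = re.compile(r"(?:b|r)+")
restriction = re.compile(r"r+")

# Exhaustively check all sequences up to a small bound: anything the
# restriction accepts must also be accepted by the base.
for n in range(1, 8):
    for seq in product("br", repeat=n):
        s = "".join(seq)
        if restriction.fullmatch(s):
            assert base.fullmatch(s), s
print("every sequence accepted by R (length <= 7) is accepted by B")
```

This is only a bounded check, of course, but for these two one-state-loop content models it matches the obvious language containment r+ ⊆ (b|r)+.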

Saxon is actually using a different algorithm to test subsumption, based on
analysing the state transition graphs of the two content models (see the
Thompson/Tobin paper from XML Europe 2003). As the paper notes, "Not only is
the [published] approach complex and difficult to understand, but it is also
inaccurate, in that it both allows some definitions as restrictions which
accept more than their base, and bars some definitions which accept less".
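For illustration, here is a minimal sketch (my own toy code, not Saxon's implementation) of a state-graph subsumption test in that spirit: model each content model as a deterministic automaton over the symbols "b" (test:basicBit) and "r" (test:restrictedBasicBit), then walk the product of the two automata looking for a reachable state pair where the restriction accepts but the base does not.

```python
def subsumes(base, restriction, alphabet):
    """True iff every sequence accepted by `restriction` is accepted by `base`.

    Each automaton is a triple (transitions, start, accepting) with total
    transitions: transitions[state][symbol] -> state. A sink state models
    rejection, so every state has a move on every symbol.
    """
    bt, bstart, bacc = base
    rt, rstart, racc = restriction
    seen = {(rstart, bstart)}
    stack = [(rstart, bstart)]
    while stack:
        r, b = stack.pop()
        if r in racc and b not in bacc:
            return False  # restriction accepts a sequence the base rejects
        for sym in alphabet:
            nxt = (rt[r][sym], bt[b][sym])
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True

SINK = "sink"
# B: (b|r)+  -- state 0 is the start, state 1 is accepting
B = ({0: {"b": 1, "r": 1}, 1: {"b": 1, "r": 1},
      SINK: {"b": SINK, "r": SINK}}, 0, {1})
# R: r+      -- a "b" anywhere sends R to the rejecting sink
R = ({0: {"b": SINK, "r": 1}, 1: {"b": SINK, "r": 1},
      SINK: {"b": SINK, "r": SINK}}, 0, {1})

print(subsumes(B, R, "br"))  # True: R is a valid restriction of B
print(subsumes(R, B, "br"))  # False: B accepts "b", which R rejects
```

Because the product automaton has finitely many state pairs, this check is exact rather than bounded, which is the appeal of the automaton-based approach over the published particle-by-particle rules.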

So it's one of those cases where an implementor has to choose between doing
what the spec says and doing what it means.

The introduction to the algorithm in the spec says: "The approach to
defining a type by restricting another type definition set out here is
designed to ensure that types defined in this way are guaranteed to be a
subset of the type they restrict." So perhaps it's not entirely unreasonable
for an implementation to use a different algorithm that better implements
that intent!

Michael Kay 
Received on Thursday, 28 October 2004 08:31:26 UTC
