Re: Choose your namespace (Was : Personal view)

  Hi Henrik,

On Mon, Jun 19, 2000 at 01:14:32PM -0700, Henrik Frystyk Nielsen wrote:
> The problem Daniel brings up is *not* a basic property of relative URIs
> but can happen in any decentralized system that supports indirection. It
> is inherently impossible to guarantee that the rule in section 5.3 about
> uniqueness of attributes is detected in all cases.
> 
> Take for example this slightly different version of Daniel's example:
> 
> -----------------
> <x xmlns:n1="http://www.example.org/a"
> xmlns:n2="http://www.example.com/a">
>   <test n1:y="1" n2:y="2"/>
> </x>
> -----------------
> 
> This looks like a completely valid example, but let's say that I go to
> "http://www.example.org/a" and it gives back a redirect to
> "http://www.example.com/a". This is the exact same problem that Daniel
> pointed out but in this scenario, it doesn't depend on the location of the
> document. Does this mean that my document suddenly is invalid or is it
> even something that we should expect to ever be detected? Clearly it
> isn't.

  Okay,

> Instead of using the uniqueness of attributes as a binary decision between
> whether a document is correct or not, we should instead note that there
> may be times that inconsistencies can happen and that yes, these are
> faults, but that these may not be detected.

 Just for clarification, my point is:

XML is a meta-language. Deciding whether a (set of) sequence(s) of bits can
be parsed and its content made available for further processing, according
to the rules defining the XML family of languages, is an algorithm. I
assert that this algorithm needs the following properties:

 1/ it should be "atomic" processing, requiring no further input
 2/ it should not depend on the way those bits were made available
 3/ it should be stable in time and space, with no variation

XML-1.0 follows those rules (though the XML-1.0 Rec may depend on the
encoding being provided by external means :-( ), i.e. the Well-Formedness
checks follow 1/ 2/ 3/.
Validation may require further processing, but the upper layer will still
be able to look at the information set.
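
As a minimal sketch of what 1/ and 2/ mean in practice (written here in
Python with the standard xml.etree module, purely for illustration and not
as anything normative): well-formedness is decided from the bytes alone, so
the very same bytes must parse the same way whether they come from memory
or from a file, with no further input.

-----------------
import tempfile
import xml.etree.ElementTree as ET

DOC = (b'<x xmlns:n1="http://www.example.org/a"\n'
       b'   xmlns:n2="http://www.example.com/a">\n'
       b'  <test n1:y="1" n2:y="2"/>\n'
       b'</x>')

# 1/ atomic: parsing the bytes requires no further input
root_from_memory = ET.fromstring(DOC)

# 2/ independent of delivery: the same bytes read back from a file
with tempfile.NamedTemporaryFile(suffix=".xml", delete=False) as f:
    f.write(DOC)
    path = f.name
root_from_file = ET.parse(path).getroot()

# 3/ stable: both parses yield the same result, no variation
assert ET.tostring(root_from_memory) == ET.tostring(root_from_file)
-----------------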

I want to make sure that, whatever change is made to the namespace spec,
the availability of an information set after processing with
(XML-1.0 + namespace) will follow an algorithm that complies with 1/ 2/ 3/.

I'm not even debating what happens if one dereferences a namespace
name; I want the XML processor to be able to tell me first whether my
document contains useful data or not, without such a dereference.
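
For illustration, here is the kind of check I mean, sketched in Python with
the standard xml.etree module (my sketch, not spec text): attribute
uniqueness is decided by comparing expanded names, i.e. (namespace name,
local name) pairs, as plain character strings, so Henrik's example is
accepted without a single URI ever being dereferenced.

-----------------
import xml.etree.ElementTree as ET

DOC = b'''<x xmlns:n1="http://www.example.org/a"
   xmlns:n2="http://www.example.com/a">
  <test n1:y="1" n2:y="2"/>
</x>'''

root = ET.fromstring(DOC)   # no network access happens here
test = root.find('test')

# The two attributes carry distinct expanded names, compared as strings:
#   {http://www.example.org/a}y  and  {http://www.example.com/a}y
# so there is no uniqueness violation for the processor to report.
print(sorted(test.attrib))
-----------------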

I would also like any changes made to the Namespace spec to guarantee 3/
as much as possible, even for documents designed for the previous version.

I really think that those 3 points must be met, otherwise (XML-1.0 + namespace)
will not be considered a safe (meta) language for encoding information.
People don't like to lose data because of inconsistencies that can sometimes
occur and the faults that may result; trying to avoid those inconsistencies
should guide the design.
  
To go back to the example:
 - I accept the risk of getting the same schemas when trying to
   dereference http://www.example.org/a and http://www.example.com/a
 - but I want this to happen after I have obtained an Infoset.
i.e. my (XML-1.0 + namespace) processor did not find an inconsistency in
the data earlier, but that inconsistency actually pertains to a higher
level. I could just as well get a 404 when trying to retrieve either of
those resources (or a zillion other possible errors that may arise in
the HTTP or IP stack  ;-)
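
If a higher layer then chooses to dereference the namespace names once the
Infoset exists, whatever it runs into (a redirect, a 404, a timeout) is
that layer's business, not a parse error. A rough sketch of that separate,
later step (the redirect from one URI to the other is Henrik's hypothetical
scenario, not the actual behaviour of those hosts):

-----------------
from urllib.request import urlopen
from urllib.error import URLError

for ns in ("http://www.example.org/a", "http://www.example.com/a"):
    try:
        # urlopen follows HTTP redirects transparently; geturl() shows
        # where the dereference finally landed
        with urlopen(ns) as resp:
            print(ns, "->", resp.geturl())
    except URLError as err:
        # a 404, a DNS failure, etc.: the Infoset obtained above is unaffected
        print(ns, "->", err)
-----------------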

  Again, this is just for clarification; I assume we agree on this.

Daniel 

-- 
Daniel.Veillard@w3.org | W3C, INRIA Rhone-Alpes  | Today's Bookmarks :
Tel : +33 476 615 257  | 655, avenue de l'Europe | Linux XML libxml WWW
Fax : +33 476 615 207  | 38330 Montbonnot FRANCE | Gnome rpm2html rpmfind
 http://www.w3.org/People/all#veillard%40w3.org  | RPM badminton Kaffe

Received on Tuesday, 20 June 2000 07:01:20 UTC