- From: John Boyer <boyerj@ca.ibm.com>
- Date: Tue, 3 Jan 2006 11:04:26 -0800
- To: Joseph Reagle <reagle@mit.edu>
- Cc: jose.kahan@w3.org, w3c-ietf-xmldsig@w3.org
- Message-ID: <OF704A9180.8C081185-ON882570EB.00637D9A-882570EB.0068CAA0@ca.ibm.com>
Hi Joseph,

>> ... volatility of namespaces.
> I have to agree with John on this point. Yet, unfortunately, this is the
> world that we live in...

It's certainly seeming that way, so I'm glad you agree. My feeling that c14n should be changed via an erratum amounts to recognizing that this is the way things are going to be, so we should align the spec with the world.

> we have had to make compromises, even issue erratum for mistakes, but when
> a change is purposeful and explicit like this (rather than a mistake or
> oversight on our part), I would argue for a new algorithm. John, granted
> that it is a messy world, what is the argument against the new algorithm?

Well, the world is messy whether we make a new algorithm or issue an erratum on c14n 1.0. In fact, I think proposing an erratum is really more aligned with saying: the world is not perfect, so let's just get on with it and let the existing algorithm do what it was intended to do.

As to whether this is a mistake or a purposeful change, the distinction seems illusory. Many have complained that we made a mistake and would like some kind of fix. That we did not know or believe it was a mistake is the same as any other kind of oversight, in that we did not know or believe it was a mistake ;-)

So, the proposal to simply issue an erratum is based on both process and technical factors:

1) Process. This type of issue is exactly what errata are designed for. People have misunderstandings that result in implementations that don't work correctly, so we issue errata to fix the problems with W3C Recommendations. They aren't set in stone. Witness, for example, the many errata related to attribute values in the core XML Recommendation.

2) Technical. The intent of c14n 1.0 is to be the default canonicalization method for XML 1.0.
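For concreteness, the simplest case where c14n 1.0 gets invoked implicitly is a same-document reference with no Transforms element at all. A sketch (the fragment identifier "#chapter1" and the digest/signature values are invented; element names and algorithm URIs are from XMLDSig and c14n 1.0):

```xml
<!-- A same-document reference with NO Transforms element.
     The dereferenced node-set must still be converted to an
     octet stream before digesting, and Canonical XML 1.0 is
     the default method applied for that implicit conversion.
     (URI="#chapter1" and the "..." values are made up.) -->
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod
        Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
    <SignatureMethod
        Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
    <Reference URI="#chapter1">
      <!-- No Transforms child here: the node-set to octet
           stream conversion falls back to c14n 1.0. -->
      <DigestMethod
          Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
      <DigestValue>...</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>...</SignatureValue>
</Signature>
```

Nothing in this markup asks for canonicalization of the referenced subset; it happens anyway, which is exactly why the default algorithm's behavior matters so much.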
Other algorithms can exist for special-purpose results, but you have to explicitly invoke them to get the special result, and you do so with an understanding of a special-purpose context. For example, the original motivation for creating e-c14n was to handle the special context of putting signed XML into a SOAP envelope without breaking the signature.

The problem is that xml:id is not supposed to be a 'special context'. A Recommendation has made it part of the core of XML 1.0. So, because c14n 1.0 is supposed to be the canonicalization method for XML 1.0, it is what we use by default for any nodeset-to-octet-stream conversion required during processing of the transform sequence. Such a conversion is as easy to create as making a same-document URI reference with *no* expressed transforms. Even in this case, it is possible to cause problems for inheritance of xml:id (e.g. if the referenced element contained an attribute assigned the type ID by a DTD). This may sound like a bit of an edge case, but with compound document formats it can actually happen more easily than one might at first think.

Hence, the author of a signature element that uses any document subsetting would also have to remember to fix our bug by manually invoking a new c14n in order to be sure the signature will work when applied to documents that use xml:id -- everywhere in a transform sequence that a nodeset-to-octet-stream conversion occurs. Unless, that is, we also issue DSig 1.1 to use C14N 1.1 by default. That's when accepting the messy world and issuing the erratum starts to look really good. :-)

Best regards,
John M. Boyer, Ph.D.
Senior Product Architect/Research Scientist
Workplace, Portal and Collaboration Software
IBM Victoria Software Lab
E-Mail: boyerj@ca.ibm.com
http://www.ibm.com/software/

Joseph Reagle <reagle@mit.edu>
12/17/2005 05:47 AM
To: John Boyer/CanWest/IBM@IBMCA
Cc: jose.kahan@w3.org, w3c-ietf-xmldsig@w3.org
Subject: Re: Canonical XML revision

On Thursday 15 December 2005 13:46, John Boyer wrote:
> 3) The W3C community seems to be interested in less rigor, not more.
> The thing that's really busted, IMO, is the volatility of namespaces,

I have to agree with John on this point. Yet, unfortunately, this is the world that we live in, and unless we can convince the world not to change the meaning of documents after the fact, we will continue living in it. (XML is guilty, but for that matter, so is Unicode!) However, since our concern is security, I would hope we do not contribute to the trend. Granted, we have had to make compromises, even issue errata for mistakes, but when a change is purposeful and explicit like this (rather than a mistake or oversight on our part), I would argue for a new algorithm. John, granted that it is a messy world, what is the argument against the new algorithm?
Received on Tuesday, 3 January 2006 19:04:57 UTC