- From: Thomas Roessler <tlr@w3.org>
- Date: Tue, 28 Oct 2008 19:36:51 +0100
- To: public-xmlsec@w3.org
- Cc: John Boyer <boyerj@ca.ibm.com>
Forwarding with permission. -- Thomas Roessler, W3C <tlr@w3.org>

Begin forwarded message:

> From: John Boyer <boyerj@ca.ibm.com>
> Date: 28 October 2008 18:23:26 CEST
> To: Thomas Roessler <tlr@w3.org>
> Cc: chairs@w3.org, w3c-ac-forum@w3.org
> Subject: Re: Heads-up on proposed breaking change to XML Signature
>
> Hi Thomas,
>
> Thanks for taking the time to make this notification. I generally agree that some simplifications would be helpful for performance and for making it easier to express signatures of higher security. As I'll explain below, a focus on selection alone would be too restrictive, i.e. too much simplification, in that it tends to substantially *reduce* the security value of the signatures.
>
> However, it is easy to maintain the highest possible security while focusing on "selection" for performance and simplicity. The trick is to understand what must be selected and why. Put simply, a document subset transform is more secure when it selects what must be subtracted from the document before the canonicalization and digest operations. The subtraction filter, then, is a precise and explicit expression of what part of the document may still be mutated by the rest of a business process that occurs after the signature is affixed. Any change to the document other than to the subtracted part will invalidate the signature, as desired.
>
> For example, suppose you have a document with the familiar "office use only" section. When a user signs the document, the document subset should be the entire document less the "office use only" section. This way, any change made to the document in any place except the "office use only" section would invalidate the signature. The purpose of a digital signature is to become invalid when any change is made, except those anticipated by the system. Thus, subtraction filtering is the best fit for a document subset signature.
>
> By comparison, if a document subset signature merely selects the portion of the document to be signed, then additions can be made not only to the "office use only" section but also to any other location in the document outside of the selected portions. It is entirely too easy to exploit the document semantics and inject unintended side effects.
>
> Regarding the issue of performance, I would point out that the original XPath filter should be replaced by XPath Filter 2, or something like it that is capable of subtraction. Whereas the original working group paid insufficient attention to performance as an interoperability criterion when it recommended the first XPath filter, the *fix* for that was the XPath Filter 2 Recommendation, which was *designed* for performance in the sense that high performance was considered to be a requirement. Specifically, interoperable implementations were expected to be capable of performing a subtraction filter on signature generation and validation operations over a 100K XML document in 1/4 second per operation on an 800 MHz computer. This time includes not just the subtraction filtering, but the entire transform-canonicalize-digest-digest-encrypt sequence.
>
> The XPath Filter 2 transform should therefore be seen as highly performant, and I would assert it would be difficult indeed to provide actual empirical evidence to the contrary. That being said, the XPath Filter 2 transform internally allows any number of set operations of subtraction, union and intersection.
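To make the subtraction idea concrete: under the XPath Filter 2.0 Recommendation that Boyer refers to, signing "the entire document less the office-use-only section" is a single subtract operation. A minimal sketch follows (the OfficeUseOnly element name is hypothetical; the transform URI and Filter attribute are those defined by XPath Filter 2.0):

    <!-- sign the whole document minus the section that may still change -->
    <Transform Algorithm="http://www.w3.org/2002/06/xmldsig-filter2">
      <dsig-xpath:XPath xmlns:dsig-xpath="http://www.w3.org/2002/06/xmldsig-filter2"
                        Filter="subtract">//OfficeUseOnly</dsig-xpath:XPath>
    </Transform>

Any later edit outside //OfficeUseOnly alters the digested content and invalidates the signature; edits confined to that section do not.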
> Given that XML Signature itself allows multiple References, it would seem that a single operation of subtraction or selection would be sufficient for all practical purposes.
>
> To be clear, IBM currently ships successful products that are based on XML Signature and that are particularly dependent, for security reasons, on the existence of subtraction filtering. However, in the design tools we offer for constructing document signatures, we simplify the interface down to a simpler set of operations, which you now appear to want to achieve at the markup level. This would make it easier for design tools to offer a visual/gestural experience that is close in concept to the actual markup that is generated, which in turn makes it easier for design tools to figure out what has happened in prior design experiences based on the markup. There are just a lot fewer unnecessary knobs and dials, so to speak, so this seems like a good direction for the working group to take, as long as it is clear, in case I haven't said it enough, that we need document subtraction.
>
> As a final point, I realize that this message indicates you are focusing on the transformation-digest-canonicalization sequence. My greatest disappointment at the conclusion of the original XML Signature group pertains to an aspect of canonicalization that the working group was unwilling to take on at the time due to the newness of the technologies needed to address it. But those technologies are now entrenched, so I would strongly urge the new working group to reconsider. The issue is that the current canonicalization algorithms make no use of DTD or Schema information that might be available about when whitespace is important and when it isn't. It is easy with today's technology to determine when an element's content model permits whitespace merely as a convenience versus when the element actually has a PCDATA or mixed content model. It would be a great service to the XML community to see a new canonicalizer that could detect and eliminate unnecessary whitespace from the canonical form.
>
> Best regards,
> John M. Boyer, Ph.D.
> STSM, Interactive Documents and Web 2.0 Applications
> Chair, W3C Forms Working Group
> Workplace, Portal and Collaboration Software
> IBM Victoria Software Lab
> E-Mail: boyerj@ca.ibm.com
>
> Blog: http://www.ibm.com/developerworks/blogs/page/JohnBoyer
> Blog RSS feed: http://www.ibm.com/developerworks/blogs/rss/JohnBoyer?flavor=rssdw
>
> From: Thomas Roessler <tlr@w3.org>
> To: w3c-ac-forum@w3.org, chairs@w3.org
> Date: 10/21/2008 08:21 AM
> Subject: Heads-up on proposed breaking change to XML Signature
>
> Dear Chairs and AC representatives,
>
> The XML Security Working Group had a productive meeting here in Cannes this Monday and Tuesday. In outlining its roadmap for a next version of XML Signature, the Working Group - as chartered - reviewed possibilities for simplifying the specification.
>
> One area that is promising for simplification is the Reference Processing Model's ability to specify an (almost arbitrary) list of transforms between node-sets and octet-streams.
>
> One option under consideration by the Working Group is to modify the Reference Processing Model in XML Signature 2.0 to consider only "selection", "canonicalization" and "digesting" transformations.
> The Working Group believes that there are significant security and performance advantages to making this change to the XML Signature structure, but it does constitute a restriction on behavior that is currently permitted by XML Signature 1.0. Specifically, this change would still permit signing of document subsets, but would prohibit transformations of arbitrary complexity (e.g. unconstrained XSLT) from being used within the XMLDSIG structure itself.
>
> We plan to outline this design idea in more detail in a Working Group Note that we anticipate putting out for broad comment.
>
> At this point, we would be grateful for early feedback about actual use of custom Transforms with XML Signature that might be affected by the change that is being considered.
>
> We're available in the hallways at TPAC to talk more about this, and appreciate early feedback.
>
> Regards,
>
> Frederick Hirsch (Nokia), Chair, XML Security WG;
> Thomas Roessler (W3C), Security Activity Lead.
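For concreteness, a Reference under the restricted "selection, canonicalization, digesting" model described above might look like the sketch below. The algorithm URIs are existing XML Signature 1.0 identifiers, and the selection expression is hypothetical; XML Signature 2.0 may well define its own selection syntax:

    <Reference URI="">
      <Transforms>
        <!-- selection: pick the subtree to be signed -->
        <Transform Algorithm="http://www.w3.org/2002/06/xmldsig-filter2">
          <dsig-xpath:XPath xmlns:dsig-xpath="http://www.w3.org/2002/06/xmldsig-filter2"
                            Filter="intersect">//PurchaseOrder</dsig-xpath:XPath>
        </Transform>
        <!-- canonicalization: serialize the selected nodes deterministically -->
        <Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      </Transforms>
      <!-- digesting -->
      <DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
      <DigestValue>...</DigestValue>
    </Reference>

Nothing beyond these three steps appears in the chain; an unconstrained XSLT transform, for instance, would have no place in such a Reference.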
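Boyer's closing point about schema-informed canonicalization can be illustrated the same way. In the sketch below, the PurchaseOrder element is assumed to have an element-only content model in its DTD or Schema, so the whitespace between its children is purely cosmetic; a hypothetical schema-aware canonicalizer (no such algorithm is currently specified) could drop that whitespace while preserving the character data inside the PCDATA elements:

    <!-- as authored: indentation whitespace carries no meaning here -->
    <PurchaseOrder>
      <Item>Widget</Item>
      <Quantity>3</Quantity>
    </PurchaseOrder>

    <!-- hypothetical schema-aware canonical form -->
    <PurchaseOrder><Item>Widget</Item><Quantity>3</Quantity></PurchaseOrder>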
Received on Tuesday, 28 October 2008 18:37:04 UTC