Re: Fwd: Heads-up on proposed breaking change to XML Signature

Hi John

I completely agree on document subtraction. It is required for many 
other use cases too - for example, the Enveloped Signature transform 
itself is a kind of document subtraction, and ebXML Messaging has 
subtraction requirements as well. Also, XPath Filter 2 is definitely 
faster than the original XPath Filter.

The main problem that I have seen is that there is too much flexibility 
in transforms. An XPath Filter 2.0 transform can include many intersect, 
subtract and union operations, and in fact there can be more than one 
XPath Filter transform in the transform chain. There is also no 
restriction on the order of transforms - e.g. an xpath -> c14n -> xpath 
chain is allowed even though it probably doesn't mean anything. With so 
much flexibility it is difficult to look at a signature and figure out 
what is really signed. One idea I have proposed is to limit selection 
to one inclusion XPath and one exclusion XPath, with exclusions 
overriding inclusions. See
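
To illustrate the flexibility problem: a single XPath Filter 2.0 
Transform may legally chain several set operations (the Algorithm URI 
and Filter attribute are from the Filter 2.0 spec; the paths themselves 
are made up for illustration):

```xml
<Transform Algorithm="http://www.w3.org/2002/06/xmldsig-filter2">
  <XPath xmlns="http://www.w3.org/2002/06/xmldsig-filter2"
         Filter="intersect">//PurchaseOrder</XPath>
  <XPath xmlns="http://www.w3.org/2002/06/xmldsig-filter2"
         Filter="subtract">//PurchaseOrder/OfficeUseOnly</XPath>
  <XPath xmlns="http://www.w3.org/2002/06/xmldsig-filter2"
         Filter="union">//Attachments</XPath>
</Transform>
```

And nothing stops a Reference from containing several such Transforms 
in a row, which is exactly why it is hard to read off what is signed.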

Also, the XPath Filter 2.0 intersect/subtract/union operations are 
described in terms of node-sets, and a node-set implies a DOM, 
requiring all the nodes to be loaded into memory at the same time - a 
big performance problem for large documents. I would like the new XML 
Signature spec to address streaming issues and not leave them 
completely up to each implementor. Most of the signature examples that 
I have seen only need very basic XPath of the form /a/b/c, which can be 
easily streamed; it would be helpful if XML Signature identified a 
streamable subset of XPath.
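
As a sketch of what I mean by streamable (just my own illustration in 
Python, not anything from the spec): a simple absolute path like /a/b/c 
can be evaluated with nothing more than a stack of open tags while the 
document is parsed, never materializing a DOM:

```python
import io
import xml.etree.ElementTree as ET

def stream_select(source, path):
    """Yield the text of elements matching a simple absolute path
    like /a/b/c, in a single streaming pass (no full DOM)."""
    steps = path.strip("/").split("/")
    stack = []  # tags of the currently open elements
    for event, elem in ET.iterparse(source, events=("start", "end")):
        if event == "start":
            stack.append(elem.tag)
        else:
            if stack == steps:
                yield elem.text
            stack.pop()
            elem.clear()  # discard the element's content to keep memory low

doc = io.StringIO("<a><b><c>one</c><c>two</c></b><x><c>skip</c></x></a>")
print(list(stream_select(doc, "/a/b/c")))  # ['one', 'two']
```

Anything in this simple-path subset could be evaluated this way; it is 
the predicates, reverse axes and // steps that force a DOM.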

Canonicalization is a very hot topic; there have been a lot of 
discussions in the group on it - especially around whitespace handling.


Thomas Roessler wrote:
> Forwarding with permission.
> -- 
> Thomas Roessler, W3C  <>
> Begin forwarded message:
>> From: John Boyer <>
>> Date: 28 October 2008 18:23:26 CEST
>> To: Thomas Roessler <>
>> Cc:,
>> Subject: Re: Heads-up on proposed breaking change to XML Signature
>> Hi Thomas,
>> Thanks for taking the time to make this notification.  I generally 
>> agree that some simplifications would be helpful for performance and 
>> for making it easier to express signatures of higher security.  As 
>> I'll explain below, the focus on selection would be too restrictive, 
>> i.e. too much simplification, in that it tends to substantially 
>> *reduce* the security value of the signatures.
>> However, it is easy to maintain the highest possible security while 
>> focusing for performance and simplicity on "selection".  The trick is 
>> to understand what must be selected and why.  Put simply, a document 
>> subset transform is more secure when it selects what must be 
>> subtracted from the document before the canonicalization and digest 
>> operations.  The subtraction filter, then, is a precise and explicit 
>> expression of what part of the document may still be mutated by the 
>> rest of a business process that occurs after the signature is 
>> affixed.  Any change to a document other than to the subtracted part 
>> will invalidate the signature as desired.
>> For example, suppose you have a document with the familiar "office 
>> use only" section. When a user signs the document, the document 
>> subset should be the entire document less the "office use only" 
>> section.  This way, any change made to the document in any place 
>> except the "office use only" section would invalidate the signature.  
>> The purpose of a digital signature is to become invalid when any 
>> change is made, except those anticipated by the system.  Thus, 
>> subtraction filtering is the best fit for a document subset signature.
>> By comparison, if a document subset signature merely selects the 
>> portion of the document to be signed, then additions can be made not 
>> only to the "office use only" section but also to any other location 
>> in the document that is outside of the selected portions of the 
>> document.  It is entirely too easy to exploit the document semantics 
>> and inject unintended side effects.
>> Regarding the issue of performance, I would point out that the 
>> original XPath filter should be replaced by XPath Filter 2, or 
>> something like it that is capable of subtraction.  Whereas the 
>> original working group paid insufficient attention to performance as 
>> an interoperability criterion in the original recommendation of the 
>> first XPath filter, the *fix* to that was the XPath Filter 2 
>> recommendation, which was *designed* for performance in the sense 
>> that high performance was considered to be a requirement.  
>> Specifically, interoperable implementations were expected to be 
>> capable of performing a subtraction filter on signature generation 
>> and validation operations over a 100K XML document in 1/4 second per 
>> operation on an 800 MHz computer.  This time includes not just the 
>> subtraction filtering, but the entire 
>> transform-canonicalize-digest-digest-encrypt sequence.
>> The XPath Filter 2 transform should therefore be seen as highly 
>> performant, and I would assert it would be difficult indeed to 
>> provide actual empirical evidence to the contrary.  That being said, 
>> the XPath Filter 2 transform internally allows any number of set 
>> operations of subtraction, union and intersection.  Given the fact 
>> that XML Signatures itself allows multiple References, it would seem 
>> that a single operation of subtraction or selection would be 
>> sufficient for all practical purposes.
>> To be clear, IBM currently ships successful products that are based 
>> on XML Signatures and that are particularly dependent for security 
>> reasons on the existence of subtraction filtering. However, in the 
>> design tools we offer for constructing document signatures, we 
>> simplify the interface down to a smaller set of operations, which you 
>> now appear to want to achieve at the markup level.  This would 
>> make it easier for design tools to offer a visual/gestural experience 
>> that is close in concept to the actual markup that is generated, 
>> which in turn makes it easier for design tools to figure out what has 
>> happened in prior design experiences based on the markup.  There's 
>> just a lot fewer unnecessary knobs and dials, so to speak, so this 
>> seems like a good direction for the working group to take.  As long 
>> as it is clear, in case I haven't said it enough, that we need 
>> document subtraction.
>> As a final point, I realize that this message indicates you are 
>> focusing on the transformation-digest-canonicalization sequence.  My 
>> greatest disappointment at the conclusion of the original XML 
>> signature group pertains to an aspect of canonicalization that the 
>> working group was unwilling to take on at the time due to the newness 
>> of the technologies needed to address it.  But those technologies are 
>> now entrenched, so I would strongly urge the new working group to 
>> reconsider.  The issue is that the current canonicalization 
>> algorithms make no use of DTD or Schema information that might be 
>> available about when whitespace is important and when it isn't.  It is 
>> easy with today's technology to determine when an element's content 
>> model permits whitespace merely as a convenience versus when the 
>> element actually has a PCData or mixed content model.  It would be a 
>> great service to the XML community to see a new canonicalizer that 
>> could detect and eliminate unnecessary whitespace from the canonical 
>> form.
>> Best regards,
>> John M. Boyer, Ph.D.
>> STSM, Interactive Documents and Web 2.0 Applications
>> Chair, W3C Forms Working Group
>> Workplace, Portal and Collaboration Software
>> IBM Victoria Software Lab
>> E-Mail:
>> Blog:
>> Blog RSS feed: 
>> From:
>> Thomas Roessler <>
>> To:
>> Date:
>> 10/21/2008 08:21 AM
>> Subject:
>> Heads-up on proposed breaking change to XML Signature
>> Dear Chairs and AC representatives,
>> The XML Security Working Group had a productive meeting here in Cannes
>> this Monday and Tuesday.  In outlining its roadmap for a next version
>> of XML Signature, the Working Group - as chartered - reviewed
>> possibilities for simplifying the specification.
>> One area that is promising for simplification is the Reference
>> Processing Model's ability to specify an (almost arbitrary) list of
>> transforms between node-sets and octet-streams.
>> One option under consideration by the Working Group is to modify the
>> Reference Processing Model in XML Signature 2.0 to consider only
>> "selection", "canonicalization" and "digesting" transformations.  The
>> Working Group believes that there are significant security and
>> performance advantages to making this change to the XML Signature
>> structure, but it does constitute a restriction on behavior that is
>> currently permitted by XML Signature 1.0.  Specifically, this change
>> would still permit signing of document subsets but would prohibit
>> transformations of arbitrary complexity (e.g. unconstrained XSLT) from
>> being used within the XMLDSIG structure itself.
>> We plan to outline this design idea in more detail in a Working Group
>> Note that we anticipate putting out for broad comment.
>> At this point, we would be grateful for early feed-back about actual
>> use of custom Transforms with XML Signature that might be affected by
>> the change that is being considered.
>> We're available in the hallways at TPAC to talk more about this, and
>> appreciate early feed-back.
>> Regards,
>> Frederick Hirsch (Nokia), Chair, XML Security WG;
>> Thomas Roessler (W3C), Security Activity Lead.

Received on Wednesday, 29 October 2008 21:50:29 UTC