
Re: Comments on July 8 2.0 signature draft

From: <pratik.datta@oracle.com>
Date: Mon, 31 Aug 2009 13:27:52 -0700
Message-ID: <4A9C3248.701@oracle.com>
To: Scott Cantor <cantor.2@osu.edu>
CC: "'XMLSec WG Public List'" <public-xmlsec@w3.org>

I wrote this spec assuming readers would be very familiar with 1.x and 
would look for what has changed in 2.0. But I see your point that this 
may not be the case. So it would be good to have a "Changes from 1.x to 
2.0" as a separate section, rather than sprinkling the changes 
throughout the document.

Link to the 2.0 spec: 

More comments below

On 8/26/2009 10:48 AM, Scott Cantor wrote:
> I started to write some material about why we needed to basically move all
> this new text into the old document and approach it that way instead of as a
> new document, but I'm starting to think that the result of that will be to
> confuse people and make it seem like you have to understand both to start
> with the new model. So I'm coming around to the idea of using a new 2.0 spec
> that formally references the original spec as "a valid but optional
> processing model" and layers a new processing model on top of it as the
> preferred mechanism, with the trigger being the new Transform to explicitly
> signal that.
> So, that being the case, I think we would want to say that kind of thing up
> front.
> But I would avoid quite so much language inline talking about the changes
> from 1.x, and either highlight them as some kind of HTML insert/panel/note,
> or move the text to a changes section (maybe with hyperlinks in various
> spots to the specific discussion in that section).
> Section 1:
> The third paragraph is where we're stating the relationship between the old
> and new work, and to get that right we have to decide on that relationship.
> Are we actually *deprecating* the old transforms and c14n algorithms? That
> implies intent to remove. Or are we discouraging their use, while not
> signaling that intent? Or is it more about conformance, and we intend to
> make only the new one MTI? We should decide all that soon, I think.
> Section 3.1.2:
> The Note seems insufficiently detailed. I assume we just want to use the
> text from 1.1.
> Section 3.2:
> Would soften step 1 in that KeyInfo may be omitted, so there are other ways
> to establish the signing key.
> Are steps 2 and 3 actually in the right order? Seems like at least in some
> cases, it will be cheaper to evaluate the Reference/Selection than do the
> signature operation. I know in the old model, specs that have tight
> Transform profiles always assume that the implementer will check out the
> Reference/Transform set first.
Steps 2 and 3 are in the order we had in the best practices. But now 
that we have separated the Selection from the Canonicalization, it 
would be cheaper to do the Reference/Selection check before the 
signature verification operation, especially when the signature 
verification uses asymmetric keys. So we can reverse steps 2 and 3.
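The reversed order can be sketched as follows. This is a minimal sketch, not the spec's processing model: `verify_signature_value` is a hypothetical stand-in that uses an HMAC in place of a real (and more expensive) asymmetric signature check.

```python
import hashlib
import hmac

def verify_signature_value(signed_info, signature_value, key):
    # Stand-in for the real signature operation; a 2.0 implementation
    # would verify an RSA/ECDSA signature over the canonicalized
    # SignedInfo here.
    expected = hmac.new(key, signed_info, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature_value)

def verify(signed_info, signature_value, references, key):
    # Reversed step order: check the cheap Reference digests first,
    # and only run the expensive signature operation if they all pass.
    for ref in references:
        if hashlib.sha256(ref["content"]).digest() != ref["digest"]:
            return False  # fail fast, no public-key crypto performed
    return verify_signature_value(signed_info, signature_value, key)
```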

> Section 3.2.1:
> Step 2 says C14N 2.0 is a must but not a normative MUST. Are we requiring
> that? If so, we need to make it normative as a function of this processing
> model and add text up front to clarify that the MUSTs apply if and only if
> the new model is being used, or something like that. But I don't think we
> want soft language inside the doc just to deal with the fact that older
> signatures will still permit other c14n methods.
The new transform model should make C14N 2.0 a normative MUST.
> Step 3 again assumes KeyInfo is present/used.
> Section 4.4.1:
> Same issue as above wrt requiring new c14n 2.0. Suggest text about which
> "named parameter sets" are MTI be in the c14n spec, not here.
> You have a reference to requiring c14n 1.0, obviously this should be 2.0.
Yes, this should be 2.0.
> Suggest redoing the paragraph about the security issues, but that's
> wordsmithing, not essential right now.
> The last sentence of the last paragraph needs to come out, I think, or maybe
> replace it with the point that by requiring this algorithm, the SignedInfo
> element is represented only as an XML subtree and not as text.
Yes, the last sentence "The following applies to algorithms which 
process XML as nodes or characters:" should be deleted. I had copied it 
over from the XML Signature 1.x spec - it is no longer applicable.
> Section 4.4.3:
> This is confusing because the Selection/etc. elements aren't actually here,
> but are inside a Transform. I wouldn't discuss them here, but would instead
> just say that in accordance with the new processing model:
> - URI MUST be present and refers to the source material over which the
> single 2.0 Transform will operate
> - Type MUST NOT appear
> - The Transforms element MUST be present and contain exactly one child
> element with the new Algorithm
> This assumes the way you do detached/non-XML sigs is to point at the content
> but then specify something in the Transform about it not being XML. If
> that's not the intent, adjust above as needed.
> Obviously some/much of the text about the URI attribute gets moved up here.
There is some complexity about the URI in 1.x.
Complexity 1:
The spec says that the URI can be omitted altogether, but only for one 
Reference, i.e. if there are multiple References, only one of them can 
have the URI missing. I don't know why this restriction exists. Maybe 
the assumption is that there is some kind of "default" URI, and 
implementations first do the dereferencing and then do the transforms; 
if two References had the default URI, then both would end up getting 
the same contents.

Do we want to retain this behavior in 2.0?  I am thinking that non-XML 
sigs may not use the URI at all; e.g. one could use something like a 
"DbConnectionString" and a "DatabaseRowID" to identify a row in a 
database. In that case there is nothing wrong with having multiple 
References missing the URI attribute.

Fundamentally, should 2.0 even define a "dereferencing" operation?  1.x 
assumed that the result of dereferencing a URI is an octet stream or an 
XPath node set. I think this is too limiting - for database rows a 
"rowset" is a more natural model. Rather, we replace dereferencing with 
Selection, and Selection uses the URI as one of its input parameters. 
So the implementation should first look at the Reference/Selection@type 
and then, based on the type, process the URI. If the type is "dbrow", 
it won't even look at the URI. If the type is "binary/fromURI" and 
there is a byteRange, it should dereference efficiently using some kind 
of sparse read mechanism.
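A type-first dispatch along these lines could look like the sketch below. The type names ("dbrow", "binary/fromURI") are the ones used informally above, and `dereference` is a hypothetical stand-in for real I/O; none of this is normative.

```python
def dereference(uri):
    # Stand-in for real I/O: a real implementation would fetch the
    # URI's content (ideally with a sparse read when a byteRange is
    # given).
    return b"0123456789"

def select(selection_type, uri=None, **params):
    """Look at Selection@type first; only consult the URI when the
    type actually calls for dereferencing."""
    if selection_type == "dbrow":
        # The URI is ignored entirely; the row is identified by other
        # Selection parameters.
        return ("db", params["connection_string"], params["row_id"])
    if selection_type == "binary/fromURI":
        data = dereference(uri)
        byte_range = params.get("byte_range")
        if byte_range:
            start, end = byte_range
            data = data[start:end]
        return ("octets", data)
    raise ValueError("unknown Selection type: %s" % selection_type)
```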

This is why I prefer to keep the text about the URI after the 
discussion of the Type attribute.

Complexity 2:
There is a difference between URI="" and URI='#xpointer(/)' with 
respect to how comment nodes are processed.  This is another instance 
of how dereferencing is intertwined with canonicalization.

> Section
> Again, I'd defer discussion of the new Transform to its own section and
> merely have text here constraining what MUST appear.
> I would move all the subtext about the new Transform down into a Transform
> section, and just have this section continue as before documenting the top
> level elements.
> Section
> As has been discussed, I believe URI needs to be removed here and left to
> the Reference level. I think it's less confusing that way, rather than more.
> Is the enveloped flag needed? Is it possible to assume that if you run
> across yourself in the tree while applying c14n that you have to exclude
> that? That seems to be self-evident, right? Is there a way to actually
> generate a valid signature that includes yourself?
I agree, there is no harm in assuming that we always want to exclude 
the signature itself.
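In other words, an implementation can simply strip the ds:Signature subtree before canonicalizing, with no explicit "enveloped" flag. A minimal sketch, using the stdlib ElementTree rather than a real C14N 2.0 implementation:

```python
import xml.etree.ElementTree as ET

DSIG_NS = "http://www.w3.org/2000/09/xmldsig#"

def strip_signature(doc_xml):
    """Remove every ds:Signature subtree before canonicalization,
    assuming the signature always excludes itself."""
    root = ET.fromstring(doc_xml)
    for parent in root.iter():
        for child in list(parent):
            if child.tag == "{%s}Signature" % DSIG_NS:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")
```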
> Section
> We also noted last call that it's critical to have much rigor about exactly
> what XPath subset is allowed here, or if it's not even true XPath then
> defining something in its place (ideally something drastically simpler that
> happens to be a subset syntactially maybe).
I would definitely want it to be true XPath, because XPath is very 
widely used, and if we deviate from it, implementors will have a hard 
time. The XPath spec expresses its grammar in 39 rules; I am thinking 
we can define another grammar using a subset of those rules.

> Since the rules for include/exclude are different, the text needs factoring
> on that line.
> Section
> Seems like this section should be turned into "Transform Processing Model"
> and have a step by step explanation of how to do that, rather than a focus
> on changes.
> -- Scott
Received on Monday, 31 August 2009 20:29:20 UTC