Re: Comments on July 8 2.0 signature draft

Regarding the following:

> we should not limit dereferencing to URI -> octets, rather have a  
> more generic Selection Parameters -> Object. We need to discuss this  
> further.


Section 3.2.1 in the current Signature 2.0 draft indicates that an  
object of various forms might be returned, and section 4.4.3.2,  
"Selection element", indicates it can be XML or octets.

http://www.w3.org/2008/xmlsec/Drafts/xmldsig-core-20/Overview.html#sec-Selection

Section 4.4.3.3, "URI Attribute", probably needs more clarification.

What more needs to be done here? Do we need an issue?
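
To seed that discussion, here is a rough sketch of what a generic  
"Selection Parameters -> Object" model could look like in markup. The  
namespace prefix, element and attribute names below are purely  
illustrative and are not taken from the draft:

  <dsig2:Selection Type="http://example.org/selection/dbrow">
    <dsig2:SelectionParameter Name="DbConnectionString" Value="..."/>
    <dsig2:SelectionParameter Name="DatabaseRowID" Value="..."/>
  </dsig2:Selection>

Here the selection result would be a database row set rather than  
octets or an XPath node set, which is the kind of generalization being  
proposed.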

regards, Frederick

Frederick Hirsch
Nokia



On Sep 15, 2009, at 6:17 PM, ext pratik.datta@oracle.com wrote:

> I have updated the spec with most of Scott's comments
> http://www.w3.org/2008/xmlsec/Drafts/xmldsig-core-20/Overview.html
>
> 	• Added the ByteRange parameters   ACTION-349
> 	• Most of Scott's comments  ACTION-361
> 		• Section 10 lists differences between 1.x and 2.0 with links
> 	• Put in many more sections from 1.1. However, there are three  
> sections still empty - KeyInfo, Algorithms (including transforms),  
> and Security
> 	• I have not added the streaming XPath subset to this document - we  
> can do it later on, after getting more comments
> 	• Need a section on extension points, and maybe some samples of  
> possible extensions
> 	• I still think we should not limit dereferencing to URI -> octets,  
> rather have a more generic Selection Parameters -> Object. We need  
> to discuss this further.
> Pratik
>
> On 8/31/2009 1:27 PM, pratik.datta@oracle.com wrote:
>>
>> Scott,
>>
>> I wrote this spec assuming readers would be very familiar with 1.x  
>> and would look for what has changed in 2.0. But I see your point  
>> that this may not be the case. So it would be good to have the  
>> changes from 1.x -> 2.0 as a separate section, rather than  
>> sprinkling them throughout the document.
>>
>> Link to the 2.0 spec: http://www.w3.org/2008/xmlsec/Drafts/xmldsig-core-20/Overview.html
>>
>> More comments below
>>
>>
>> On 8/26/2009 10:48 AM, Scott Cantor wrote:
>>> I started to write some material about why we needed to basically  
>>> move all
>>> this new text into the old document and approach it that way  
>>> instead of as a
>>> new document, but I'm starting to think that the result of that  
>>> will be to
>>> confuse people and make it seem like you have to understand both  
>>> to start
>>> with the new model. So I'm coming around to the idea of using a  
>>> new 2.0 spec
>>> that formally references the original spec as "a valid but optional
>>> processing model" and layers a new processing model on top of it  
>>> as the
>>> preferred mechanism, with the trigger being the new Transform to  
>>> explicitly
>>> signal that.
>>>
>>> So, that being the case, I think we would want to say that kind of  
>>> thing up
>>> front.
>>>
>>> But I would avoid quite so much language inline talking about the  
>>> changes
>>> from 1.x, and either highlight them as some kind of HTML insert/ 
>>> panel/note,
>>> or move the text to a changes section (maybe with hyperlinks in  
>>> various
>>> spots to the specific discussion in that section).
>>>
>>> Section 1:
>>>
>>> The third paragraph is where we're stating the relationship  
>>> between the old
>>> and new work, and to get that right we have to decide on that  
>>> relationship.
>>> Are we actually *deprecating* the old transforms and c14n  
>>> algorithms? That
>>> implies intent to remove. Or are we discouraging their use, while  
>>> not
>>> signaling that intent? Or is it more about conformance, and we  
>>> intend to
>>> make only the new one MTI? We should decide all that soon, I think.
>>>
>>> Section 3.1.2:
>>>
>>> The Note seems insufficiently detailed. I assume we just want to  
>>> use the
>>> text from 1.1.
>>>
>>> Section 3.2:
>>>
>>> Would soften step 1 in that KeyInfo may be omitted, so there are  
>>> other ways
>>> to establish the signing key.
>>>
>>> Are steps 2 and 3 actually in the right order? Seems like at least  
>>> in some
>>> cases, it will be cheaper to evaluate the Reference/Selection than  
>>> do the
>>> signature operation. I know in the old model, specs that have tight
>>> Transform profiles always assume that the implementer will check  
>>> out the
>>> Reference/Transform set first.
>>>
>>>
>> Steps 2 and 3 are in the order that we had in the best practices  
>> document. But now that we have separated out the Selection from the  
>> Canonicalization, it would be cheaper to do the Reference/Selection  
>> before the signature verification operation, especially if the  
>> signature verification uses asymmetric keys. So we can reverse  
>> steps 2 and 3.
>>
>>> Section 3.2.1:
>>>
>>> Step 2 says C14N 2.0 is a must but not a normative MUST. Are we  
>>> requiring
>>> that? If so, we need to make it normative as a function of this  
>>> processing
>>> model and add text up front to clarify that the MUSTs apply if and  
>>> only if
>>> the new model is being used, or something like that. But I don't  
>>> think we
>>> want soft language inside the doc just to deal with the fact that  
>>> older
>>> signatures will still permit other c14n methods.
>>>
>> The new transform model should make C14N 2.0 a normative MUST.
>>> Step 3 again assumes KeyInfo is present/used.
>>>
>>> Section 4.4.1:
>>>
>>> Same issue as above wrt requiring new c14n 2.0. Suggest text about  
>>> which
>>> "named parameter sets" are MTI be in the c14n spec, not here.
>>>
>>> You have a reference to requiring c14n 1.0; obviously this should  
>>> be 2.0.
>>>
>> Yes, this should be 2.0.
>>> Suggest redoing the paragraph about the security issues, but that's
>>> wordsmithing, not essential right now.
>>>
>>> The last sentence of the last paragraph needs to come out, I  
>>> think, or maybe
>>> replace it with the point that by requiring this algorithm, the  
>>> SignedInfo
>>> element is represented only as an XML subtree and not as text.
>>>
>>>
>> Yes, the last sentence "The following applies to algorithms which  
>> process XML as nodes or characters:" should be deleted. I had  
>> copied it over from the XML Signature 1.x spec - it is no longer  
>> applicable.
>>> Section 4.4.3:
>>>
>>> This is confusing because the Selection/etc. elements aren't  
>>> actually here,
>>> but are inside a Transform. I wouldn't discuss them here, but  
>>> would instead
>>> just say that in accordance with the new processing model:
>>>
>>> - URI MUST be present and refers to the source material over which  
>>> the
>>> single 2.0 Transform will operate
>>> - Type MUST NOT appear
>>> - The Transforms element MUST be present and contain exactly one  
>>> child
>>> element with the new Algorithm
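>>>
>>> Purely as an illustration of those constraints (the algorithm
>>> identifier and the dsig2 element names below are placeholders, not
>>> necessarily the draft's), a 2.0-style Reference would look
>>> something like:
>>>
>>>   <Reference URI="#some-id">
>>>     <Transforms>
>>>       <Transform Algorithm="[the new 2.0 transform algorithm URI]">
>>>         <dsig2:Selection ... />
>>>         <!-- canonicalization parameters, etc. -->
>>>       </Transform>
>>>     </Transforms>
>>>     <DigestMethod Algorithm="..."/>
>>>     <DigestValue>...</DigestValue>
>>>   </Reference>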
>>>
>>> This assumes the way you do detached/non-XML sigs is to point at  
>>> the content
>>> but then specify something in the Transform about it not being  
>>> XML. If
>>> that's not the intent, adjust above as needed.
>>>
>>> Obviously some/much of the text about the URI attribute gets moved  
>>> up here.
>>>
>> There is some complexity about the URI in 1.x - http://www.w3.org/TR/xmldsig-core/#sec-URI
>> Complexity 1:
>> The spec says that the URI can be omitted altogether, but only for  
>> one Reference, i.e. if there are multiple references then only one  
>> of them can have the URI missing. I don't know why this restriction  
>> exists. Maybe there is an assumption here that there is some kind  
>> of "default" URI, and implementations first do the dereferencing  
>> and then do the transforms. So if two references had the default  
>> URI, then both would end up getting the same contents.
>>
>> Do we want to retain this behavior in 2.0? I am thinking that  
>> non-XML sigs may not use the URI at all, e.g. one could use  
>> something like a "DbConnectionString" and a "DatabaseRowID" to  
>> identify a row in a database. In that case there is nothing wrong  
>> with having multiple references missing the URI attribute.
>>
>> Fundamentally, should 2.0 even define a "dereferencing" operation?  
>> 1.x assumed that the result of dereferencing a URI is an octet  
>> stream or an XPath node set. I think this is too limiting - for  
>> database rows a "rowset" is a more natural model. Rather, we  
>> replace dereferencing with Selection, and Selection uses the URI as  
>> one of its input parameters. So the implementation should first  
>> look at Reference/Selection/@type, and then, based on the type,  
>> process the URI. If the type is "dbrow", then it won't even look at  
>> the URI. If the type is binary/fromURI and there is a ByteRange,  
>> then it should do the dereferencing efficiently using some kind of  
>> sparse read mechanism.
>>
>> This is why I prefer to keep the text about the URI after the  
>> discussion of the type attribute.
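>>
>> For concreteness, the byte-range case might look roughly like this  
>> (element and attribute names here are made up for illustration;  
>> the draft's actual names may differ):
>>
>>   <!-- the Reference URI points at the binary resource -->
>>   <dsig2:Selection Type=".../binary/fromURI">
>>     <dsig2:ByteRange Offset="0" Length="4096"/>
>>   </dsig2:Selection>
>>
>> whereas a "dbrow" Selection would carry its own parameters and  
>> would not use the URI at all.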
>>
>>
>> Complexity 2:
>> There is a difference between URI="" and URI='#xpointer(/)' with  
>> respect to how comment nodes are processed. This is another  
>> instance of how dereferencing is intertwined with canonicalization.
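>>
>> For example, in 1.x:
>>
>>   URI=""             -> the containing document, with comment nodes omitted
>>   URI="#xpointer(/)" -> the containing document, including comment nodes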
>>
>>
>>
>>
>>
>>> Section 4.4.3.1:
>>>
>>> Again, I'd defer discussion of the new Transform to its own  
>>> section and
>>> merely have text here constraining what MUST appear.
>>>
>>> I would move all the subtext about the new Transform down into a  
>>> Transform
>>> section, and just have this section continue as before documenting  
>>> the top
>>> level elements.
>>>
>>> Section 4.4.3.2:
>>>
>>> As has been discussed, I believe URI needs to be removed here and  
>>> left to
>>> the Reference level. I think it's less confusing that way, rather  
>>> than more.
>>>
>>> Is the enveloped flag needed? Is it possible to assume that if you  
>>> run
>>> across yourself in the tree while applying c14n that you have to  
>>> exclude
>>> that? That seems to be self-evident, right? Is there a way to  
>>> actually
>>> generate a valid signature that includes yourself?
>>>
>> I agree, there is no harm in assuming that we always want to  
>> exclude the signature itself.
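>>
>> I.e. for the classic enveloped layout, sketched roughly here with  
>> an illustrative document element:
>>
>>   <Document Id="doc">
>>     ...
>>     <ds:Signature>
>>       ... <Reference URI="#doc"> ... </Reference> ...
>>     </ds:Signature>
>>   </Document>
>>
>> the Signature element being processed would simply always be  
>> excluded from its own selection, without needing an explicit  
>> enveloped flag.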
>>> Section 4.4.3.4:
>>>
>>> We also noted last call that it's critical to have much rigor  
>>> about exactly
>>> what XPath subset is allowed here, or if it's not even true XPath  
>>> then
>>> defining something in its place (ideally something drastically  
>>> simpler that
>>> happens to be a subset syntactically, maybe).
>>>
>> I would definitely want it to be true XPath, because XPath is very  
>> widely used, and if we deviate from it, then implementors will have  
>> a hard time. The XPath spec has the grammar expressed in 39 rules.  
>> I am thinking that we can make another grammar with a subset of  
>> those rules.
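>>
>> Just to illustrate the kind of subset I have in mind (this is not a  
>> concrete proposal yet), it would keep simple forward/downward paths  
>> with attribute predicates, e.g.
>>
>>   /PurchaseOrder/Items/Item[@id='123']
>>   //Subject
>>
>> while dropping constructs that force the whole document to be  
>> buffered rather than streamed.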
>>
>>> Since the rules for include/exclude are different, the text needs  
>>> factoring
>>> along those lines.
>>>
>>> Section 4.4.3.5:
>>>
>>> Seems like this section should be turned into "Transform  
>>> Processing Model"
>>> and have a step by step explanation of how to do that, rather than  
>>> a focus
>>> on changes.
>>>
>>> -- Scott 
>>>
>>>
>>>
>>>
>> Pratik
>>

Received on Monday, 19 October 2009 19:47:10 UTC