RE: Canonicalization of <SignedInfo> for Reference Validation

At 22:24 7/6/2001, Dournaee, Blake wrote:
>Thank you very much for this information.

Note, it's a more verbose restatement (with examples) of what is already said 
in the spec:

http://www.w3.org/Signature/Drafts/xmldsig-core/Overview.html#sec-See

    Some applications might operate over the original or intermediary
    data but should be extremely careful about potential weaknesses
    introduced between the original and transformed data. This is a
    trust decision about the character and meaning of the transforms
    that an application needs to make with caution. Consider a
    canonicalization algorithm that normalizes character case (lower to
    upper) or character composition ('e and accent' to 'accented-e'). An
    adversary could introduce changes that are normalized and
    consequently inconsequential to signature validity but material to a
    DOM processor. For instance, by changing the case of a character one
    might influence the result of an XPath selection. A serious risk is
    introduced if that change is normalized for signature validation but
    the processor operates over the original data and returns a
    different result than intended.
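
To make that concrete, here's a rough sketch of my own (the case-folding 
"canonicalization" below is invented purely for illustration; no real c14n 
algorithm does this): two documents that digest identically after 
normalization, yet look different to a processor selecting over the 
original data.

    # Hypothetical canonicalization that folds case (illustrative only).
    import hashlib
    import xml.etree.ElementTree as ET

    def toy_c14n(xml_text):
        # Pretend canonicalization: upper-case the whole document.
        return xml_text.upper().encode('utf-8')

    original = '<order><amount>10</amount></order>'
    tampered = '<order><AMOUNT>10</AMOUNT></order>'

    # The digests agree after the case-folding "c14n" ...
    print(hashlib.sha1(toy_c14n(original)).hexdigest() ==
          hashlib.sha1(toy_c14n(tampered)).hexdigest())        # True

    # ... but a processor selecting over the original data differs.
    print(ET.fromstring(original).find('amount') is not None)  # True
    print(ET.fromstring(tampered).find('amount') is not None)  # False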

>As an aside, there is one other thing that puzzled me regarding XML dsig. In
>one of the examples that you used below, you mentioned Base64 as an encoding
>transform.

That's a mistake. &dsig;#base64 is a decode transform; there's no URI yet 
for an encode algorithm, as there isn't much use for such a thing besides 
examples! <smile/> If there were a base64-encode URI, it would be different 
from the one I gave.
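
For what it's worth, here's a small sketch of my own (the helper function 
is illustrative, not any particular toolkit's API) of the decode direction 
in use: a verifier applies the base64 decode transform (the defined URI, 
http://www.w3.org/2000/09/xmldsig#base64) to the referenced content before 
digesting it.

    # Applying the base64 *decode* transform before digesting.
    # apply_transforms() is illustrative, not a real toolkit's API.
    import base64, hashlib

    BASE64_DECODE = 'http://www.w3.org/2000/09/xmldsig#base64'

    def apply_transforms(data, transform_uris):
        for uri in transform_uris:
            if uri == BASE64_DECODE:
                data = base64.b64decode(data)  # decode, never encode
            else:
                raise ValueError('unsupported transform: ' + uri)
        return data

    referenced = base64.b64encode(b'some signed payload')
    digest = hashlib.sha1(
        apply_transforms(referenced, [BASE64_DECODE])).digest()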


--
Joseph Reagle Jr.                 http://www.w3.org/People/Reagle/
W3C Policy Analyst                mailto:reagle@w3.org
IETF/W3C XML-Signature Co-Chair   http://www.w3.org/Signature
W3C XML Encryption Chair          http://www.w3.org/Encryption/2001/
