RE: Canonicalization of <SignedInfo> for Reference Validation

Blake,

I would tend to define a well-defined transformation as a 'stateless'
transformation: that is, one whose output is a function of the given
input and nothing else (nothing beyond the function definition itself).
Its output should therefore be stable over time, since there are no
inputs beyond the immediate control of the transformation, such as
memory someone else may change (e.g. the result of dereferencing a
URL).
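
A minimal sketch of that distinction, in Python (the function names
are mine, purely illustrative):

    import urllib.request

    def stateless_transform(data: bytes) -> bytes:
        """A pure function of its input: collapse runs of whitespace.
        The same input yields the same output at any time."""
        return b" ".join(data.split())

    def stateful_transform(data: bytes) -> bytes:
        """Not a pure function of its input: the input merely names a
        URL, and the bytes returned live in memory someone else may
        change. Re-running this later can give a different result for
        the same input."""
        with urllib.request.urlopen(data.decode()) as response:
            return response.read()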

Ilan Zohar
Hewlett Packard

-----Original Message-----
From: Dournaee, Blake [mailto:bdournaee@rsasecurity.com]
Sent: Thursday, July 05, 2001 8:00 PM
To: 'John Boyer'
Cc: w3c-ietf-xmldsig@w3.org; 'Joseph M. Reagle Jr.'
Subject: RE: Canonicalization of <SignedInfo> for Reference Validation


John,

Thank you for your further clarification.

Am I correct in inferring (from your example below) that well-defined
transformation algorithms don't violate the "what you see is what you
sign" rule simply because their behavior is open and known?

How do we draw the "line in the sand" for transformation algorithms
that preserve the wysiwys (what you see is what you sign) property?
For example, it is easy to argue for pervasive transformation
algorithms like Base64 encoding or ZIP compression, but what about
lesser-known algorithms? What if someone decides to compress something
with ARC? What about a proprietary, closed-source compression or
encoding scheme?
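
For illustration, a small Python sketch of why a publicly specified,
invertible encoding like Base64 is easy to accept (the document bytes
are made up):

    import base64

    original = b"<doc>Pay Alice $100</doc>"

    # Base64 is publicly specified and exactly invertible: any
    # verifier can recover the signed bytes from the encoded form.
    encoded = base64.b64encode(original)
    assert base64.b64decode(encoded) == original

    # A closed, proprietary scheme offers no such guarantee: without
    # a public specification a verifier cannot independently invert
    # the transform to see what was actually signed.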

It's clear to me that an XSLT transform that deletes all element
content in an XML document would violate wysiwys, but is there really
a good way to measure this property?
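
For example, here is a minimal sketch (Python with lxml, purely
illustrative) of such a content-deleting transform:

    from lxml import etree

    # A stylesheet that copies elements and attributes but never
    # selects text nodes, so all element content is silently dropped.
    stylesheet = etree.XML(b"""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="@*|*">
        <xsl:copy><xsl:apply-templates select="@*|*"/></xsl:copy>
      </xsl:template>
    </xsl:stylesheet>""")

    doc = etree.XML(b"<contract><amount>100</amount></contract>")
    stripped = etree.XSLT(stylesheet)(doc)
    print(etree.tostring(stripped))  # b'<contract><amount/></contract>'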

In a strict sense, Base64 encoding seems to violate wysiwys: for
example, if someone in a courtroom asks to see the document that was
signed and a Base64-encoded blob is shown to the signer, it will be
difficult for that person to say, "Yes, this is the document that I
signed," because in reality they saw something completely different
when the signature operation happened.

I'm guessing that all of this was hashed out some time ago and I probably
came a little late for this discussion. Any thoughts or pointers to previous
discussion on this would be most helpful.

Kind Regards,

Blake Dournaee
Toolkit Applications Engineer
RSA Security
 
"The only thing I know is that I know nothing" - Socrates
 
 


-----Original Message-----
From: John Boyer [mailto:JBoyer@PureEdge.com]
Sent: Thursday, July 05, 2001 2:31 PM
To: Dournaee, Blake; Joseph M. Reagle Jr.
Cc: w3c-ietf-xmldsig@w3.org
Subject: RE: Canonicalization of <SignedInfo> for Reference Validation




Hi Blake,

<blake>
On a related note, it seems like encoding transforms (such as Base64 or
compression or anything that significantly permutes the input data)
would
violate the "See what you sign" rule.
</blake>

<john>
Transforms such as base64, compression, etc. do not violate "what you
'see' is what you sign", provided that the transformation algorithms
follow well-documented public standards.

For example, when you sign the binary of a jpeg image, are you signing
what you see? No, you're signing highly compressed (possibly lossy)
data that corresponds to a bitmap image via a well-known public
algorithm, so we accept it. By comparison, signing a base-64 encoding
of a jpeg is peanuts.

Cheers,
John Boyer
Senior Product Architect, Software Development
Internet Commerce System (ICS) Team
PureEdge Solutions Inc.
Trusted Digital Relationships
v: 250-708-8047  f: 250-708-8010
1-888-517-2675   http://www.PureEdge.com
</john>




-----Original Message-----
From: Joseph M. Reagle Jr. [mailto:reagle@w3.org]
Sent: Thursday, July 05, 2001 11:36 AM
To: Dournaee, Blake
Cc: Dsig (E-mail)
Subject: Re: Canonicalization of <SignedInfo> for Reference Validation


At 14:10 7/5/2001, Dournaee, Blake wrote:
>I've been thinking about Section 3.2.1: Reference Validation and am not
>quite convinced that there is a real security reason for canonicalizing
><SignedInfo> for Reference Validation.

Hi Blake,

You're right, for Canonical XML there isn't much of a reason. *But*
since other canonicalizations can be used, in order to satisfy the
"see what you sign" maxim (and its sister maxims) you should reference
validate (see) what was signed (the canonical form). An area where
this might be important is where a canonicalization algorithm rewrote
URIs. Even something as innocuous as absolutizing relative URIs (which
was a point of debate with respect to namespaces) could change what it
is you're signing.

Canonical XML doesn't make any such changes, and one could optimize
accordingly, but since the specification is generally written from an
algorithm-independent point of view, it includes that
processing/warning.
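
A small illustration (Python with lxml; a sketch of the maxim, not of
the spec's processing rules) of why one digests the canonical form
rather than the raw bytes:

    import hashlib
    from lxml import etree

    # Two serializations of the same element differ as raw bytes
    # (attribute order, here), so a digest over raw bytes would not
    # match across them...
    a = b'<SignedInfo xmlns="urn:example" Id="s1"></SignedInfo>'
    b = b'<SignedInfo Id="s1" xmlns="urn:example"></SignedInfo>'
    assert a != b

    def canonical_digest(xml_bytes: bytes) -> str:
        # ...but Canonical XML fixes attribute order, quoting, etc.,
        # so signer and verifier digest the same canonical form.
        canonical = etree.tostring(etree.XML(xml_bytes), method="c14n")
        return hashlib.sha1(canonical).hexdigest()

    assert canonical_digest(a) == canonical_digest(b)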


--
Joseph Reagle Jr.                 http://www.w3.org/People/Reagle/
W3C Policy Analyst                mailto:reagle@w3.org
IETF/W3C XML-Signature Co-Chair   http://www.w3.org/Signature
W3C XML Encryption Chair          http://www.w3.org/Encryption/2001/
