
Re: XML decryption transform number 13

From: Joseph Reagle <reagle@w3.org>
Date: Mon, 3 Jun 2002 13:37:46 -0400
To: merlin <merlin@baltimore.ie>, "Takeshi Imamura" <IMAMU@jp.ibm.com>
Cc: xml-encryption@w3.org
Message-Id: <20020603173746.07DED1517@policy.w3.org>

On Friday 31 May 2002 08:23 pm, merlin wrote:
> I do. I botched the Type handling. Here, I use Type
> in a useful manner.

Merlin, thank you for continuing to crank out your iterations <smile/>: 
substantive proposals are a great way to make progress!

> . Define two decrypt transforms:
>     &decrypt;Binary
>     &decrypt;XML

I like this idea, and I think it recalls Takeshi's question as to why I 
liked to distinguish between &content; and &element;. I figured it might 
come in handy somewhere. However, aside from the limitations, the 
processing is the same. So by creating a &decrypt;XML transform 
identifier we can process arbitrary XML "Types" but still identify that 
we want them decrypted as we specified.
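To make the distinction concrete, a signature reference using the XML-mode transform might look roughly like this. This is a sketch only: the &decrypt; entity notation follows the proposal above (its expansion was not yet settled at this point), and the ds: structure is assumed from XML-DSig.

```xml
<ds:Reference URI="">
  <ds:Transforms>
    <!-- "&decrypt;XML" stands for the proposed XML-mode decryption
         transform identifier; &decrypt;Binary would name the
         binary-mode variant -->
    <ds:Transform Algorithm="&decrypt;XML">
      <!-- any exception list (EncryptedData to leave alone)
           would go here -->
    </ds:Transform>
  </ds:Transforms>
  <!-- DigestMethod/DigestValue as usual -->
</ds:Reference>
```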

> . The &decrypt;XML transform operates so:
>   o For each unexceptional EncryptedData in the node set:
>     * If its Type is &xenc;(Content|Element) then decrypt
>       it, wrap it in the parsing context of the EncryptedData
>       element, parse this and trim it. (Aside: I think that
>       this definition of the processing of these Types
>       should be in the xmlenc spec. We could then allow
>       decrypt-and-replace mode to operate uniformly on any
>       Type whose "processing" result is a node set. I am not
>       asking for wrap/parse/trim to be their specification,
>       just that the "result" of processing these types is
>       defined to be a node set, a non-normative implementation
>       of which is wrap/parse/trim.)

This sounds like a good idea.
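The wrap/parse/trim processing described above can be sketched roughly as follows. The function name, and the idea of passing the parsing context as a prefix-to-URI map, are my own assumptions for illustration; a real implementation would collect the in-scope namespaces from the ancestors of the EncryptedData node and operate on the signature's node set.

```python
import xml.etree.ElementTree as ET

def wrap_parse_trim(decrypted_octets, ns_context):
    """Wrap decrypted octets in the parsing context of the EncryptedData
    element, parse the result, and trim the wrapper, yielding the
    replacement node set (element or element content).

    ns_context: prefix -> namespace URI mapping in scope at the
    EncryptedData element (an assumption of this sketch).
    """
    # Build a dummy wrapper element that re-declares the in-scope
    # namespaces so prefixes in the decrypted fragment resolve correctly.
    decls = " ".join(
        ('xmlns="%s"' % uri) if not prefix else ('xmlns:%s="%s"' % (prefix, uri))
        for prefix, uri in ns_context.items()
    )
    wrapped = "<dummy %s>%s</dummy>" % (decls, decrypted_octets.decode("utf-8"))
    # Parse, then trim: the replacement node set is the wrapper's
    # content, not the wrapper itself.
    wrapper = ET.fromstring(wrapped)
    return list(wrapper)  # child elements (text nodes omitted here)
```

Note that ElementTree resolves prefixes into Clark notation rather than preserving declarations the way a c14n implementation must; the sketch only shows the wrap/parse/trim shape.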

>     * Save these node sets; they will be the replacements for
>       the EncryptedData elements; they may be element or
>       element content.
>   o Canonicalize the input node set but, in place of every
>     unexceptional EncryptedData, canonicalize the replacement
>     node set. Note that the result may not be in canonical
>     form.

By this I presume you mean that you agree with my earlier (off-list) 
comment that a c14nized document with parts of itself replaced by other 
c14nized fragments might not, taken all together, be in c14n form? (The 
boundaries between the original document and the "holes" where content 
is replaced with c14nized fragments might exhibit some non-c14n variants 
when looked at as a whole.)
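One concrete (assumed) example of that boundary effect: under inclusive Canonical XML, a namespace declaration is not re-rendered on a descendant when an ancestor in the output already renders it identically, but a fragment canonicalized standalone must carry its own declaration.

```xml
<!-- Surrounding document, canonicalized; HOLE marks where the
     EncryptedData stood -->
<doc xmlns:a="urn:a">HOLE</doc>

<!-- Replacement node set, canonicalized standalone -->
<a:x xmlns:a="urn:a"></a:x>

<!-- Spliced result: the xmlns:a on a:x is now superfluous, so the
     whole is not what a single c14n pass would produce -->
<doc xmlns:a="urn:a"><a:x xmlns:a="urn:a"></a:x></doc>
```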

> . Every EncryptedData in XML mode must, after decryption
>   and processing in accordance with the Type attribute, result
>   in a node set. This is simply a requirement that we place on
>   apps that encrypt data for subjection to &decrypt;XML. Use
>   another transform if you don't like this.

Why must it be a node set (and not octets)?

> . A smart implementation will realize that, if this processing
>   is followed by a canonicalization transform (e.g., if this
>   is the last transform and the next step is digest) and it
>   can formulate its replacement node-set canonicalization to
>   be *identical* to what canonicalization of that node set
>   in-place would be, then it can omit the redundant parse
>   and c14n steps.

Yes (see comments above), though to know how hard this would be would 
take some toying about with it.

> . In terms of performance, we parse the content of each
>   EncryptedData once, and then do a c14n/parse step on the
>   whole node set, so this will be reasonably efficient;
>   particularly if the previous point is followed.

Efficiency is always a good thing!

> Super encryption:

This is still a contentious issue, and I want to isolate most of the 
proposed changes above from this bit. I think most of your suggestions 
above *can* be employed regardless of whether we do automatic 
super-decryption (recursing on EncryptedData that appear after a 
decryption).

> . If an application is super-encrypting data that are
>   subject to a decryption transform, then it is responsible
>   for using this type so that the decryption transform
>   can operate, and it must understand that superencrypted
>   EncryptedData cannot make same-document references
>   during their processing.
> . There is one issue that may or may not need addressing:
>   o Superencrypting excepted EncryptedData: Just don't
>     use the SuperEncrypted type.
>   o Superencrypting both excepted and non-excepted
>     EncryptedData: Either just don't do it or identify them
>     through a new EncryptionProperty. Is that going too
>     far? It does exercise our EncryptionProperty framework.
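If the EncryptionProperty route were taken, it might look something like the following. The property name and its namespace are purely hypothetical, invented here for illustration of the framework being exercised.

```xml
<xenc:EncryptedData Type="...SuperEncrypted...">
  <!-- CipherData etc. -->
  <xenc:EncryptionProperties>
    <xenc:EncryptionProperty>
      <!-- hypothetical marker: the superencrypted content contains
           excepted EncryptedData elements -->
      <dt:ContainsExcepted xmlns:dt="urn:example:decrypt-transform"/>
    </xenc:EncryptionProperty>
  </xenc:EncryptionProperties>
</xenc:EncryptedData>
```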

It sounds as if you and Takeshi might have a straightforward disagreement 
about requirements. (Correct my summary if I'm wrong).

Takeshi likes auto-super-decryption and isn't too worried about the problem 
of XPointer evaluation and references outside of an EncryptedData still 
working. (And he suggests that "#xpointer(id('ID'))" *would* still work; 
is this true...?)

Merlin is concerned about XPointer evaluation and references outside of an 
EncryptedData and doesn't find the auto-super-decryption all that 
compelling. (I wonder about this myself. If I have an application where I 
know I'm encrypting elements X and Y, is super-encryption all that likely?)

So it would seem we have a tension between these two requirements. If 
there is no other way to resolve it, then at least by making this a 
specific parameter/transform the application can make its own choice...
Received on Monday, 3 June 2002 13:38:39 UTC
