Re: XML decryption transform number 13

r/IMAMU@jp.ibm.com/2002.06.03/01:34:07
>>  o Return the concatenation of the plaintexts.
>
>I feel that concatenating the plaintexts is weird.  What kind of scenario
>are you supposing?

Any scenario in which multiple encrypted binary documents are
covered by a signature; e.g., multiple classfiles encrypted in
an XML document.

Or, more pragmatically, to do otherwise is to needlessly
restrict the transform. If applications don't want this feature
then they can just cover a single binary EncryptedData at
a time. We lose nothing by supporting this.
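
For illustration, here's a rough, non-normative sketch of that
concatenation behaviour in Python; decrypt_octets is a
hypothetical stand-in for XML Encryption decryption of a single
EncryptedData:

  def decrypt_binary(encrypted_data_nodes, decrypt_octets):
      """Return the concatenation of the plaintext octet streams."""
      plaintext = b""
      for node in encrypted_data_nodes:       # document order
          plaintext += decrypt_octets(node)   # octets of one EncryptedData
      return plaintext

  # E.g. two encrypted classfiles carried in one signed document:
  #   octets = decrypt_binary([classfile_ed_1, classfile_ed_2], my_decryptor)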

>> The &decrypt;XML transform operates so:
>>
>>  o Decrypt every unexceptional EncryptedData in the node set
>>    and process it in accordance with the specification of the
>>    Type attribute, or fail if that is unknown. For example,
>>    Type &gzip-xml; will be gunzipped; type &python-xml-pickle;
>>    will be executed in python; type &xenc;(Content|Element);
>>    will be untouched. Wrap the resulting octet stream in
>>    a parsing context of the EncryptedData element (i.e.,
>>    ...<dummy...), parse this, trim it and save the result.
>>    These will be the node sets that should replace the
>>    EncryptedData elements; they may be element or element
>>    content.
>>
>>  o Canonicalize the input node set but, in place of every
>>    unexceptional EncryptedData, canonicalize the replacement
>>    node set. Note that the result may not be in canonical
>>    form.
>>
>>  o Parse the resulting octet stream.
>
>This looks good, but I think that it is a little redundant.  The
>wrapping/parsing/trimming is required only for the plaintext resulting from
>an EncryptedData element node which is the first node in the node-set, and
>hence it could be omitted for the other plaintexts.

I'm not sure I understand. It could be unnecessary if we
serialized the input node set, with the EncryptedData elements
replaced by their plaintexts, and then wrapped/parsed/trimmed
this (the one-phase processing I was describing earlier).
However, in the case of the algorithm I described (#13) this
won't work; canonicalizing the input node set doesn't
necessarily provide the context needed to parse the XML
fragments (which is a separate issue that I have with the
XML spec).
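
To make the wrap/parse/trim step concrete, here's a rough,
non-normative sketch (assuming the plaintext is UTF-8 XML, as
for type Element or Content, and that in_scope_ns carries the
namespace declarations in scope at the EncryptedData element):

  from xml.dom import minidom

  def wrap_parse_trim(plaintext_octets, in_scope_ns):
      """Parse decrypted octets in the parsing context of the EncryptedData."""
      decls = " ".join(
          'xmlns%s="%s"' % ((":" + p) if p else "", uri)
          for p, uri in in_scope_ns.items())
      wrapped = (("<dummy %s>" % decls).encode("utf-8")
                 + plaintext_octets + b"</dummy>")
      doc = minidom.parseString(wrapped)
      # Trim: the replacement nodes are the children of the dummy wrapper.
      return list(doc.documentElement.childNodes)

  # E.g. a fragment using a prefix declared on an ancestor of the
  # EncryptedData; without that context the parse would fail:
  #   nodes = wrap_parse_trim(b"<a:Foo>bar</a:Foo>", {"a": "urn:example"})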

Regardless, I revamped my understanding of Type processing
and came to the conclusion that wrap/parse/trim should not
be a part of this spec. In the revised spec (version 2), the
output of decrypting an EncryptedData, in accordance with the
definition of its Type, should be a node set, which we then
canonicalize in place of the EncryptedData.

We could define the &decrypt;XML transform as accepting either
a node set or an octet stream as the result of Type processing.
A node set would be canonicalized; an octet stream would be
inserted directly.
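
As a rough, non-normative sketch of that variant (type_process
and c14n are hypothetical stand-ins for Type-based processing
and a canonicalizer):

  def emit_replacement(plaintext_octets, encrypted_data,
                       type_process, c14n):
      """Octets to emit in place of one unexceptional EncryptedData."""
      result = type_process(encrypted_data, plaintext_octets)
      if isinstance(result, bytes):
          return result        # octet stream: inserted directly
      return c14n(result)      # node set: canonicalized in place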

However, I think this makes Type-based processing somewhat
redundant, and it would require us to impose on the XML
Encryption spec that encrypted XML MUST be parseable in the
context of its canonicalized parents; that is, WITHOUT access
to general entities. I think the latter would be a good thing
(placing that requirement on type Element or Content). Still,
I don't like accepting an octet stream; we have typing, and
I think that we should just require node sets that we can use
directly. It seems aesthetically pleasing to me: the
&decrypt;XML transform requires that EncryptedData produce XML
(node sets).

merlin

Received on Monday, 3 June 2002 20:08:40 UTC