
ACTION-51: Provide proposal on list regarding transform primitives (draft)

From: Konrad Lanz <Konrad.Lanz@iaik.tugraz.at>
Date: Wed, 01 Oct 2008 02:47:54 +0200
Message-ID: <48E2C8BA.9030706@iaik.tugraz.at>
To: public-xmlsec@w3.org
Dear all,

This is following up on the discussions minuted in
http://lists.w3.org/Archives/Public/public-xmlsec/2008Sep/0004.html and
on the last two calls.

My recollection was:
The discussion first surrounded streaming XPath implementations and
transformations, robustness of signatures, and interoperability of
custom transforms. (Please let me know if I missed anything.)

Let me also rephrase in other words what I said in
http://www.w3.org/2008/09/09-xmlsec-minutes :

> One can either view the dereferenced data object or the digest input
> as the secured data (i.e. the input to the chain of transforms, or
> its output).
> 
> "What you see is what you sign" in the current specs, however, favors
> the digest input, as it is as close as possible to the cryptographic
> hashing operation.
> 
> There are essentially two (maybe three) general viewpoints from which
> to look at the chain of transforms:

> 1.) It remodels the building of a viewable document from arbitrary
> XML (e.g. via XSLT); the secured/signed document is the transient
> image at the end of the chain of transforms. Ideally one would hash
> the actual screen content, but one settles for the digest input.

> The expectation of C14N under 1.) is that if different processors
> mangle little things like line endings and namespaces, C14N is
> supposed to repair this.

> 2.) The chain of transforms is purely an enabler for robustness, and
> here the data of interest is the actual input rather than the
> viewable output. Two typical cases are white-space normalization and,
> as a sub-case, subdocument selection. Idempotency of the transforms
> would allow this case to treat the input to the chain of transforms
> as equal to its output: one could perform the chain over and over
> again, and after the first pass it would not change any more. This
> assumes that the input document may be changed on signing according
> to the operations performed by the chain of transforms.

> The expectation of C14N under 2.) is in fact that the document at the
> beginning of the chain of transforms is already in logical,
> normalized identity with the chain's output.

> 3.) The hybrid case of 1.) and 2.) ... but let's skip 3.) for now ...

> <fjh> question: does this depend on the extent of what
> canonicalization actually does, e.g. does simplification also enable
> a hash-only approach?
> <klanz2> pratik: tracing back nodes to the input

For transformation primitives, let me focus on 2.) ...

A distinction between selections (which output NodeSetData) and "real"
transforms (which return a new document) is a necessity, especially for
2.) [1]. And then there are also normalizations, which return a new
document that is, however, still logically equivalent to the input.

Let us for now distinguish normalizations from "real" transforms by the
fact that the latter are not idempotent.
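As a sketch of my own (not part of any spec), the idempotency property
that separates normalizations from "real" transforms can be checked
mechanically: a transform T is idempotent iff T(T(x)) = T(x). The
white-space collapse below is a hypothetical stand-in primitive:

```python
import re

def collapse_whitespace(text: str) -> str:
    """A hypothetical normalization: replace each run of successive
    whitespace characters by a single space."""
    return re.sub(r"\s+", " ", text)

def is_idempotent(transform, sample: str) -> bool:
    """Check T(T(x)) == T(x) on one sample input."""
    once = transform(sample)
    return transform(once) == once

# A normalization passes the check; a "real" transform, such as
# wrapping the content in a new element, does not.
assert is_idempotent(collapse_whitespace, "a  b\t\tc\n\nd")
assert not is_idempotent(lambda s: "<wrap>" + s + "</wrap>", "a")
```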

Similarly, there is also a need to clearly communicate that selections
always depend on the underlying document, which is (at least logically)
augmented by namespace declarations distributed across the document, to
reflect the XPath data model.

And one has to appreciate that even if a namespace node is not in the
output NodeSetData of some selection "transform", it can still be
navigated to using the XPath axes.
(Actually this is true for every node of the dereferenced or parsed
document - the latter in case a previous transform returned
OctetStreamData.)

Given all these complexities of the current data model, and looking at
XML digital signatures from a user perspective, I think there is a need
(or REQUIREMENT) for a set of simple and interoperable transforms (not
selections).

Such transforms should preferably be idempotent and mutually
independent, so that they can be executed in parallel or in any order.

For now I would like to call them transformation primitives. They
should be specified in plain English text, and their implementation
should not depend on a specific data or processing model such as SAX,
StAX, XPath or DOM. They would be required to be followed only by
another transformation primitive (no more selections) and finally by
minimal canonicalization.

Transformation primitives could alternatively (and preferably) form a
substantial part of some C14N Vnext. Yet they would still be valuable
as an agreed interoperable set of ds:Transforms for legacy XMLDSig
(Second Edition).

Some quickly drafted examples of such transformation primitives:

  a) some sort of minimal canonicalization as defined in RFC 4051 [2] ...

  b) normalization of namespace prefixes, implying that they are of no
    significance to the data object and that no QNames occur in the
    content referred to by this ds:Reference.
    Hence this ds:Transform specifies a numbering scheme for namespace
    prefixes with some NCName [3] parameter P, so that prefixes are
    replaced by concat(P,'1') ... concat(P,'n') and so on.

    The transformation MUST by default raise an error on discovery of
    the string concat(P,x,':') in attribute values or outside tags,
    where x is a decimal integer ....

  c) normalization of multiple successive white-space characters,
    implying that they are of no significance to the referenced data
    object and are replaced by a single white-space character, where ...
    1) appreciating the usually line-based processing of text formats,
    line breaks are added/normalized after each start and end tag, or
    2) alternatively, the indentation algorithm XYZ is applied
    ... multiple/contradictory uses of c) are prohibited ...

  d) sorting of attributes

  e) a non-XPath-Filter-based version of the enveloped-signature
    transform

  f) and the like ....
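To make d) concrete, here is a minimal sketch of my own (using Python's
stdlib ElementTree, not a proposed normative algorithm): every element
is rewritten so its attributes appear in lexicographic order.
Serialization details (quoting, empty-element form) are simplifications.

```python
import xml.etree.ElementTree as ET

def sort_attributes(xml_text: str) -> str:
    """Sketch of primitive d): put every element's attributes in
    lexicographic order. Applying it twice changes nothing, so it is
    idempotent in the sense discussed above."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        ordered = sorted(el.attrib.items())
        el.attrib.clear()
        el.attrib.update(ordered)
    return ET.tostring(root, encoding="unicode")

doc = '<r b="2" a="1"><c z="9" y="8"/></r>'
once = sort_attributes(doc)
assert once == sort_attributes(once)  # idempotent
```

(Note that ElementTree preserves attribute insertion order on
serialization in Python 3.8+, which is what makes the rewrite stick.)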

So if one applied those to the referenced document itself (by actually
changing it), then the processing of the transforms - apart from some
serialization - could be omitted on verification, as those transforms
are idempotent.

In case verification fails, the transformations may be executed once,
assuming an allowed change has happened. If verification fails again,
it fails overall.

We should hence further revise the paradigm of not changing the
dereferenced/parsed document.

This seems to be easy for documents external to the signature, as they
will be dereferenced - so to speak - by value (a copy is
obtained/downloaded), whereas same-document references tend to be
processed by "reference", especially as embedding an enveloped
signature can only be performed in place.


I hope this makes sense,
best regards

Konrad


[1] ... previously mentioned in discussions in Boston
http://www.w3.org/2007/05/02-xmlsec-minutes.html#action05
and by Pratik


[2] http://tools.ietf.org/html/rfc4051#section-2.4
    http://tools.ietf.org/html/rfc3075#section-6.5.1

[3] http://www.w3.org/TR/REC-xml-names/#NT-Prefix

-- 
Konrad Lanz, IAIK/SIC - Graz University of Technology
Inffeldgasse 16a, 8010 Graz, Austria
Tel: +43 316 873 5547
Fax: +43 316 873 5520
https://www.iaik.tugraz.at/aboutus/people/lanz
http://jce.iaik.tugraz.at

Certificate chain (including the EuroPKI root certificate):
https://europki.iaik.at/ca/europki-at/cert_download.htm


Received on Wednesday, 1 October 2008 00:48:39 GMT
