
Re: Explanation of section 4.5 of transform note

From: Pratik Datta <pratik.datta@oracle.com>
Date: Fri, 20 Mar 2009 12:25:05 -0700
Message-ID: <49C3ED91.2080300@oracle.com>
To: Scott Cantor <cantor.2@osu.edu>
CC: "'XMLSec WG Public List'" <public-xmlsec@w3.org>
Responding to Scott's comments:

In the new model, the implementation pluggability will also have to be 
rethought. The current model is based on a "transform engine": anybody 
can plug in a transform, and the engine just loops over whatever 
transforms are defined.
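A minimal sketch of this kind of generic engine (the `Transform` and `TransformEngine` names are hypothetical, not taken from any real implementation):

```java
import java.util.List;

// Hypothetical sketch of the current model: the engine is a generic loop
// over opaque plug-ins and "does not understand what it is doing".
interface Transform {
    byte[] apply(byte[] input);
}

class TransformEngine {
    static byte[] run(byte[] input, List<Transform> transforms) {
        byte[] data = input;
        for (Transform t : transforms) {
            data = t.apply(data); // each transform is just a piece of code to invoke
        }
        return data;
    }
}
```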

The problem with this is that the engine does not understand what it is 
doing; it just does it. A transform is just an opaque piece of code that 
the engine invokes.
The new model is declarative - this also means that the signature syntax 
itself doesn't say the exact steps to be followed. E.g. if the signature 
has an envelopedSignature="true" attribute and a 
wsse:replaceSTRWithST="true" attribute, the engine has to know which one 
to do first - removing the signature or the STR replacement. Actually in 
this particular case the results are the same whichever way you do it, 
but the point is that the engine has to know all possible combinations 
that can be thrown at it, be able to process them correctly, and 
disallow meaningless combinations. So from the implementation point of 
view, a new transform does not plug in to an existing engine; rather, 
each spec that adds a transform has to build its own engine (which could 
share code with the base engine, through utility functions or class 
derivation). I.e. a WS-Security implementation would be required to 
implement a new transform engine, but for that it can use the help of an 
underlying XML Security transform engine.
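As a toy illustration (all names are hypothetical, and the string replaces are mere stand-ins for the real XML processing): a declarative engine hard-codes the ordering of each recognized attribute combination itself, instead of reading an ordered list of steps from the signature:

```java
// Hypothetical sketch: the declarative engine decides the processing order
// for each recognized combination of attributes, rather than following an
// explicit transform list in the signature syntax.
class DeclarativeEngine {
    static String removeSignature(String doc) { return doc.replace("<Signature/>", ""); }
    static String replaceStrWithSt(String doc) { return doc.replace("<STR/>", "<ST/>"); }

    static String process(String doc, boolean envelopedSignature, boolean replaceSTRWithST) {
        // the order is fixed by the engine, not by the signature; in this
        // particular case the result happens to be the same either way
        String result = doc;
        if (replaceSTRWithST) result = replaceStrWithSt(result);
        if (envelopedSignature) result = removeSignature(result);
        return result;
    }
}
```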


Although this might appear to make the implementation a lot more 
complex, that is not really the case, because the current transform 
model is also not very clean for plugging in transforms.

E.g. attachment transforms require special handling. Transforms usually 
take a byte[] or a Set<Node>, but these ones are different because they 
require the MIME headers too. There is also an underlying assumption 
that these transforms always come first in the chain. In my 
implementation I had to put in special logic for the URI Resolver to 
pass an AttachmentPart object to the transform engine, so that the 
Attachment Transform could extract the MimeHeaders out of the 
AttachmentPart and do its special processing.
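A sketch of that plumbing (the `Attachment` class here is a hypothetical stand-in for SAAJ's AttachmentPart, and the "canonical form" is purely illustrative): the resolver hands the transform a rich object so it can read the MIME headers, which a plain byte[] interface cannot carry.

```java
import java.util.Map;

// Hypothetical sketch: the URI resolver returns a rich object (a stand-in
// for AttachmentPart) rather than a bare byte[], so the attachment
// transform can get at the MIME headers as well as the content.
class Attachment {
    final Map<String, String> mimeHeaders;
    final byte[] content;
    Attachment(Map<String, String> mimeHeaders, byte[] content) {
        this.mimeHeaders = mimeHeaders;
        this.content = content;
    }
}

class AttachmentTransform {
    // assumed to run first in the chain; needs the headers, not just bytes
    static byte[] transform(Attachment a) {
        String type = a.mimeHeaders.getOrDefault("Content-Type", "application/octet-stream");
        // toy "canonical form": a normalized header line prepended to the content
        byte[] prefix = ("Content-Type:" + type + "\r\n\r\n").getBytes();
        byte[] out = new byte[prefix.length + a.content.length];
        System.arraycopy(prefix, 0, out, 0, prefix.length);
        System.arraycopy(a.content, 0, out, prefix.length, a.content.length);
        return out;
    }
}
```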

Other transforms have special handling too - the STR-Transform and 
Decrypt Transform need to share code with the Canonicalization Transform, 
and this kind of code sharing does not happen through a plugin mechanism. 
The XPath Filter 2 transform requires access to the whole document, not 
just to the nodeset that is passed in, because the XPath expressions are 
evaluated against the complete set of nodes in the tree, not against the 
subset.
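The XPath Filter 2 point can be shown with the standard JAXP XPath API: even when the transform is handed only a node from a subset, the expression must be evaluated against the complete tree, reached here via getOwnerDocument() (the class and method names are illustrative, not from any actual XML Security codebase):

```java
import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Illustrative sketch: the XPath expression is evaluated over the whole
// document tree, not over the node-set the transform happened to receive.
class XPathFilterSketch {
    static NodeList evaluateOverWholeDocument(Node nodeFromSubset, String expr) throws Exception {
        Document whole = nodeFromSubset.getOwnerDocument(); // need the full tree
        return (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(expr, whole, XPathConstants.NODESET);
    }

    // small demo helper: parse a document from a string
    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes()));
    }
}
```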

So the new transform model is not really preventing code sharing; it is 
just not following the plugin model.


Response to Frederick

The Attachment Transforms are one of the reasons that I moved the URI 
attribute from the Reference element into the Selection element in the 
proposal. There was an underlying assumption that a URI will resolve to 
a nodeset or an octet stream, but that is too simplistic; in reality the 
selection is interlinked with the URI resolver. So basically we do not 
have a

  URI Resolving + Selection + Canonicalization

pipeline; instead the resolving becomes part of the Selection step, and 
we do not limit what the result of the URI resolver can be - it can be a 
simple octet stream, an octet stream + MIME headers, a directory with 
files, or even a database record with field values. All we are saying is 
that the output of the selection is of type xml 
(type="http://www.w3.org/2008/xmlsec/experimental#xml"), text, or binary.
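A sketch of what that typed Selection output might look like (only the #xml type URI comes from the proposal text; the class, the TEXT/BINARY names, and the toy detection logic are all illustrative assumptions):

```java
// Hypothetical sketch of the typed Selection output: the resolver may hand
// back anything, but the selection's output is one of three declared types.
class Selection {
    enum OutputType { XML, TEXT, BINARY }

    // the only type URI quoted in the proposal text
    static final String XML_TYPE_URI = "http://www.w3.org/2008/xmlsec/experimental#xml";

    static final class Result {
        final OutputType type;
        final byte[] content;
        Result(OutputType type, byte[] content) { this.type = type; this.content = content; }
    }

    // resolving is folded into selection: whatever the resolver produced
    // (octets, octets + MIME headers, a directory, a DB record), the result
    // is reported as xml, text, or binary
    static Result select(byte[] resolved) {
        boolean looksLikeXml = resolved.length > 0 && resolved[0] == '<'; // toy heuristic
        return new Result(looksLikeXml ? OutputType.XML : OutputType.BINARY, resolved);
    }
}
```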


Scott Cantor wrote:
> Pratik Datta wrote on 2009-03-19:
>> Trying to clarify section 4.5 of the transform note
>> http://www.w3.org/2008/xmlsec/Drafts/transform-
>> note/Overview.html#extensibility
> Thanks, that helps, particularly since you still have this notion of
> Transforms in the document, and I wasn't clear on how that related to the
> two example WS Transforms you discussed.
>> We are proposing a new transform model, which is a radical departure from
>> the current model, and it doesn't have the current concept of a
> "transform".
>> (With the exception of XSLT and decrypt transform, that we reluctantly
> added
>> back).
> I think this has to be determined, and if we're keeping the idea of
> Transforms at all, that argues to me more for adapting your proposal to fit
> into the *existing* syntax/model. In other words, either we're eliminating
> generic transforms or we aren't. I'm of the opinion we should.
>> With the current transform model, people are free to define new
> transforms,
>> and they have. In this section I have taken two such transforms from WS-
>> Security spec and attempted to map them to the new transform model.  This
> is
>> just an exercise to validate the new model; the WS-* specs are frozen and
>> not expected to change.
> I think that's open to question. If WS-Security doesn't ever plan to support
> XML Signature 2.0, its value to me at least goes somewhere close to zero.
> I realize it's not our task to even debate that, but I would phrase this
> more as "any subsequent change to the WS-* specs is ignored for the
> purposes of this discussion".
>> To map the STR-Transform to the new model, we need to split it up - part
> of
>> it will go into the <Selection> element, and part into the
>> <Canonicalization> element. The Selection part can be represented by a new
>> attribute (assuming we go with attribute extensibility)
>> replaceSTwithSTR="true/false". The canonicalization part is standard.
> For clarity, I think your proposal should tighten up the XML and make it
> clear that with this approach, that's not replaceSTwithSTR, but
> wss:replaceSTwithSTR (or whatever). It's an extension attribute in somebody
> else's namespace and code would have to be added to an existing
> implementation of your proposal to handle it. It wouldn't be baked in.
>> This splitting up the STR-Transform gives a big benefit. One of the goals
> of
>> the new transform model is to accurately determine what is signed, and the
>> current STR-Transform does not let you do that easily because it combines
>> replacement and canonicalization into one step, so it is very hard for
>> an application to stop the STR-Transform in the middle and get the value
> of
>> the replaced tokens. But with the new model, an application can just
> execute
>> the <Selection> step and get the value of the replaced tokens, and check
> the
>> policy to determine if the tokens that were supposed to be signed were
>> really signed.
> If the model is to add content to the Canonicalization or Selection
> constructs you're defining to signal extensions, how would somebody plug in
> their extension handling code to an implementation? Note that these
> extensions could impact essentially any phase of the two processes. It's not
> a pipeline, like the existing spec's Transforms model is, where you can
> clearly plugin as needed.
> Let's say you provided an implementation of your Selection construct, minus
> the STR transform extension attribute (as you probably would do). How could
> I plug into your code and somehow see that extension attribute and then
> impact your processing to do the STR replacement?
> Perhaps it would have to explicitly become event-driven/streamed, and I'd
> have to handle all the events and somehow inject pre/post processing on
> your work?
>> If this explanation makes it clearer, I can update the note with this
>> content.
> I think it's helpful, yes.
> -- Scott
Received on Friday, 20 March 2009 19:25:53 UTC
