Transform Note Design Decisions

I've reviewed the transform note with a view toward identifying the  
design decisions in it.  Here's what I came up with:

- Processing as XML or as binary is made explicit (via the type  
parameter); it seems we can't switch back and forth between the two  
during processing.  There's special-case handling for extracting  
base64-encoded material from XML.
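
The base64 extraction point is where the processing model crosses from  
XML to binary. A minimal sketch of that crossing, using the Python  
standard library (the element name "Payload" and the document shape are  
made up for illustration, not taken from the note):

```python
import base64
import xml.etree.ElementTree as ET

# Hypothetical document: a base64-encoded payload carried inside XML.
doc = ET.fromstring("<Envelope><Payload>aGVsbG8sIHdvcmxk</Payload></Envelope>")

payload = doc.find("Payload")
# The one explicit switch from XML processing to binary processing:
binary = base64.b64decode(payload.text)  # b'hello, world'
```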

- There's a fixed sequence: selection, possibly an external XSLT  
transform (the idea being to reference a well-known transform, with no  
inline transforms), and then canonicalization.
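
The fixed sequence could be sketched roughly as follows; the function  
names, signatures, and the idea of passing the steps in as callables  
are assumptions for illustration, not anything the note specifies:

```python
# Hypothetical sketch of the fixed transform sequence: selection first,
# then an optional well-known (never inline) transform, then
# canonicalization, always in that order.
def apply_transforms(document, select, canonicalize, well_known_xslt=None):
    nodes = select(document)             # 1. selection
    if well_known_xslt is not None:      # 2. optional, referenced transform
        nodes = well_known_xslt(nodes)
    return canonicalize(nodes)           # 3. canonicalization, always last

# Trivial stand-in steps, just to show the fixed ordering:
result = apply_transforms("doc",
                          select=lambda d: d + "|selected",
                          canonicalize=lambda n: n + "|c14n")
```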

- The selection is a further simplified version of the XPath Filter  
2.0 transform, plus (optionally) the existing decryption and enveloped  
signature transforms.

- Canonicalization is restricted to the pre-hashing use case, which  
relaxes some constraints; however, we don't have a good design draft  
of what that simpler canonicalization could look like.
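
For comparison, full Canonical XML followed by hashing is available in  
the Python standard library; a simpler pre-hash canonicalization of the  
kind the note asks for would slot in where ET.canonicalize is called  
below (the input document is made up):

```python
import hashlib
import xml.etree.ElementTree as ET

# Full C14N via the standard library: attributes are sorted, empty-element
# tags are expanded, and the output is a stable byte sequence fit for hashing.
xml_in = '<a z="2" a="1"><b/></a>'
c14n = ET.canonicalize(xml_in)
digest = hashlib.sha256(c14n.encode("utf-8")).hexdigest()
```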

I was trying to frame my thinking about this as "how do these design  
decisions affect the existing model?"; the point where I have the most  
trouble right now is the data model that we expect to work on top of.  
Is all of this implementable on top of an event stream?  Do we still  
need to handle a node-set, but with the knowledge that that node-set  
is structurally simpler?
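
One way to picture the event-stream option: matching elements are  
emitted as parse events stream past, without ever materializing a full  
node-set. A sketch using the standard library's iterparse (the tag name  
"item" and the document are invented for illustration; whether the real  
selection language fits this model is exactly the open question):

```python
import io
import xml.etree.ElementTree as ET

# Selection over an event stream: serialize each matching element as its
# "end" event arrives, then discard it, so no full tree is ever retained.
def select_streaming(source, tag):
    for _event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == tag:
            yield ET.tostring(elem, encoding="unicode")
        elem.clear()  # drop processed content to keep memory bounded

stream = io.BytesIO(b"<r><item>1</item><x/><item>2</item></r>")
selected = list(select_streaming(stream, "item"))
```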

Help, please!

Thanks,
--
Thomas Roessler, W3C  <tlr@w3.org>

Received on Wednesday, 25 March 2009 11:55:45 UTC