Straight-through or Other Processing?

As I see it, we have three kinds of processing models we can consider:

1. Straight-through: a pipeline is a sequence of steps where each step's
    output is chained to the next.

2. A dependency-based model where steps have input dependencies.  A
    target is chosen and the sequence of steps is determined by chasing
    these inputs down.

3. A parallel model where any step can ask for an additional input and
    that causes another pipeline "chain" to execute--possibly in
    parallel.
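
For concreteness, a straight-through pipeline in the style of (1)
might be written something like this (a sketch only, with made-up
step names and namespace; not smallx syntax or a proposal):

   <p:pipeline xmlns:p="http://example.com/ns/pipeline">
      <!-- the pipeline's input document feeds the first step -->
      <p:step name="validate"/>
      <!-- each step reads the previous step's output -->
      <p:step name="transform" href="style.xsl"/>
      <p:step name="serialize"/>
   </p:pipeline>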

It is my belief that (1) is the simplest core and the minimum bar for
our first specification (and hence a requirement).

(2) has been tried and found to be a hard way to think about this... but
it works for people who like ant, make, etc.

(3) is a natural extension to (1).

In terms of implementations, smallx uses (1) and, I believe, sxpipe
does as well.

You can get some bits of (3) by allowing dynamic binding of parameters
to input chunks.

In smallx, I allow a [p:]let step that binds parameters to infoset
items.  This lets you "save" bits for later or get additional
documents into the pipeline.

For example, I use this to save simple attribute values as parameters:

   <p:parameter name="msgid" select="/mg:message/@ref"/>

But I also use it to save whole chunks:

   <p:parameter name="matrix" select="/c:url-post/m:gap-matrix/m:matrix"/>
   <p:parameter name="weight" select="/c:url-post/m:gap-matrix/m:weight"/>

where the 'matrix' and 'weight' elements are saved as their own
document elements bound to their respective parameters.
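
Later steps can then refer to the saved documents by name; something
like this (hypothetical syntax; the 'align' step and p:with-param are
made up for illustration) shows the idea:

   <!-- a later step consumes the saved documents via the parameters -->
   <p:step name="align">
      <p:with-param name="matrix" select="$matrix"/>
      <p:with-param name="weight" select="$weight"/>
   </p:step>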

In the end, if you want an external document, you can retrieve
it first and then bind it to a parameter:

   <p:url-action method="GET" href=""/>
   <p:parameter name="stuff" select="/stuff"/>
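
With a concrete URL filled in (a made-up one, for illustration), the
pattern reads:

   <!-- hypothetical URL -->
   <p:url-action method="GET" href="http://example.com/stuff.xml"/>
   <p:parameter name="stuff" select="/stuff"/>

and from then on, any later step can refer to $stuff just like a
parameter bound from the pipeline's own input.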

--Alex Milowski
