Use Cases

1. A Web Service for US Tide Information

   http://www.jeteye.com/jetpak/13402417,,,1135264420,,,,view.html

   Here I use an XML pipeline to implement a REST-style web service
   where I pull tide information from the NOAA, manipulate the web page
   that contains the data, and then return an XML representation.

   This is a combination of:

       - URL interaction to retrieve and parse a web page.
       - Use of the TagSoup parser to deal with not-so-well-formed XHTML.
       - Translation of this into a particular XML representation with XSLT.
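
   Outside of smallx, that combination can be sketched with TagSoup plus
   the standard JAXP transformation API.  The NOAA URL and the stylesheet
   name below are placeholders, not the actual service details:

       import javax.xml.transform.Transformer;
       import javax.xml.transform.TransformerFactory;
       import javax.xml.transform.sax.SAXSource;
       import javax.xml.transform.stream.StreamResult;
       import javax.xml.transform.stream.StreamSource;
       import org.xml.sax.InputSource;
       import org.xml.sax.XMLReader;

       public class TideScrape {
           public static void main(String[] args) throws Exception {
               // TagSoup turns the not-so-well-formed NOAA page into
               // well-formed SAX events.
               XMLReader tagsoup = new org.ccil.cowan.tagsoup.Parser();
               SAXSource page = new SAXSource(tagsoup,
                   new InputSource("http://tidesandcurrents.noaa.gov/tides.html"));

               // An XSLT step extracts the tide data and produces the
               // XML representation the service returns.
               Transformer toTideXml = TransformerFactory.newInstance()
                   .newTransformer(new StreamSource("tides-to-xml.xsl"));
               toTideXml.transform(page, new StreamResult(System.out));
           }
       }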

   There is a front-end application, itself a pipeline, that interfaces
   with this web service and provides a web front end for mobile phones.

   Here are the direct links:

      http://www.smallx.com/tideinfo/
      http://www.smallx.com/tideinfo-service/

2. Parsing/Unparsing embedded XHTML.

   At both Jeteye [1] and in the samples for the smallx project, I need to
   be able to parse and escape subtrees of a document in order to interact
   with RSS feeds.  This allows a processor to deal with elements like the
   description element in RSS.
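
   As a minimal sketch of both directions, again with TagSoup and the JAXP
   identity transform (the method names and serialization details here are
   my own, not smallx's): the markup carried as text inside a description
   element can be "unparsed" into a real subtree, and a subtree can be
   serialized back into the text a feed writer would escape into the
   element.

       import java.io.StringReader;
       import java.io.StringWriter;
       import javax.xml.transform.OutputKeys;
       import javax.xml.transform.Transformer;
       import javax.xml.transform.TransformerFactory;
       import javax.xml.transform.dom.DOMResult;
       import javax.xml.transform.dom.DOMSource;
       import javax.xml.transform.sax.SAXSource;
       import javax.xml.transform.stream.StreamResult;
       import org.w3c.dom.Node;
       import org.xml.sax.InputSource;
       import org.xml.sax.XMLReader;

       public class EmbeddedXhtml {
           // "Unparse" the markup carried as the text content of an RSS
           // description element into a real subtree.
           static Node parseDescription(String descriptionText) throws Exception {
               XMLReader tagsoup = new org.ccil.cowan.tagsoup.Parser();
               DOMResult dom = new DOMResult();
               TransformerFactory.newInstance().newTransformer().transform(
                   new SAXSource(tagsoup,
                       new InputSource(new StringReader(descriptionText))), dom);
               return dom.getNode();
           }

           // Serialize a subtree back into markup text; a feed writer would
           // then escape this string inside the description element.
           static String serializeSubtree(Node subtree) throws Exception {
               Transformer identity = TransformerFactory.newInstance().newTransformer();
               identity.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
               StringWriter out = new StringWriter();
               identity.transform(new DOMSource(subtree), new StreamResult(out));
               return out.toString();
           }
       }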

3. A replacement for Cocoon

   This was part of the Sylvia project at the CDE
   (http://groups.sims.berkeley.edu/sylvia/).

   The basic use cases are:

    1. Chaining a sequence of XSLT transforms (sketched after this list).
    2. Dynamic application of XSLT.  Here the important bit was that there
       is a pipeline style in smallx that lets you compose a vocabulary
       element that specifies the XSLT to run on a subtree:

        <apply-xslt src='myxslt.xsl'>
             <doc>...</doc>
        </apply-xslt>

         The result of the transform replaces the 'apply-xslt' element.

    This is an important use case for web applications built on pipelines,
    as they often need to be dynamic in their choice of XSLT and other
    resources.
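
     Purely as an illustration of that replacement semantics (the real
     smallx element is handled inside the pipeline and is presumably
     namespace-qualified; the unqualified element name, attribute handling,
     and single-result assumption below are simplifications of my own), a
     standalone DOM-based version might look like:

         import javax.xml.transform.Transformer;
         import javax.xml.transform.TransformerFactory;
         import javax.xml.transform.dom.DOMResult;
         import javax.xml.transform.dom.DOMSource;
         import javax.xml.transform.stream.StreamSource;
         import org.w3c.dom.Document;
         import org.w3c.dom.Element;
         import org.w3c.dom.Node;
         import org.w3c.dom.NodeList;

         public class DynamicApplyXslt {
             // Replace each apply-xslt element with the result of running
             // the stylesheet named in its src attribute over its content.
             static void expand(Document doc) throws Exception {
                 TransformerFactory tf = TransformerFactory.newInstance();
                 NodeList live = doc.getElementsByTagName("apply-xslt");
                 Element[] steps = new Element[live.getLength()];
                 for (int i = 0; i < steps.length; i++)
                     steps[i] = (Element) live.item(i);   // snapshot before mutating

                 for (Element step : steps) {
                     Transformer xslt =
                         tf.newTransformer(new StreamSource(step.getAttribute("src")));
                     DOMResult result = new DOMResult();
                     // Run the stylesheet over the wrapped subtree (assumes a
                     // single wrapped element and a single result element).
                     xslt.transform(new DOMSource(firstChildElement(step)), result);
                     Node replacement = doc.importNode(
                         ((Document) result.getNode()).getDocumentElement(), true);
                     step.getParentNode().replaceChild(replacement, step);
                 }
             }

             static Element firstChildElement(Element e) {
                 for (Node n = e.getFirstChild(); n != null; n = n.getNextSibling())
                     if (n instanceof Element) return (Element) n;
                 return null;
             }
         }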
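
     For the simpler chaining case (item 1 above), the equivalent in plain
     JAXP is a chain of TransformerHandlers, each feeding the next; the
     stylesheet and input names are placeholders:

         import javax.xml.parsers.SAXParserFactory;
         import javax.xml.transform.TransformerFactory;
         import javax.xml.transform.sax.SAXResult;
         import javax.xml.transform.sax.SAXTransformerFactory;
         import javax.xml.transform.sax.TransformerHandler;
         import javax.xml.transform.stream.StreamResult;
         import javax.xml.transform.stream.StreamSource;
         import org.xml.sax.InputSource;
         import org.xml.sax.XMLReader;

         public class XsltChain {
             public static void main(String[] args) throws Exception {
                 SAXTransformerFactory stf =
                     (SAXTransformerFactory) TransformerFactory.newInstance();

                 // Output of the first stylesheet streams into the second.
                 TransformerHandler step1 =
                     stf.newTransformerHandler(new StreamSource("first.xsl"));
                 TransformerHandler step2 =
                     stf.newTransformerHandler(new StreamSource("second.xsl"));
                 step1.setResult(new SAXResult(step2));
                 step2.setResult(new StreamResult(System.out));

                 SAXParserFactory spf = SAXParserFactory.newInstance();
                 spf.setNamespaceAware(true);
                 XMLReader reader = spf.newSAXParser().getXMLReader();
                 reader.setContentHandler(step1);
                 reader.parse(new InputSource("input.xml"));
             }
         }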

4. A "fat pipe" implementation for mathematical computation


   As part of Monos/Xeerkat/Mathgrid.org, I use XML pipelines to implement
   mathematical computations.  Custom pipeline steps that implement certain
   operations are provided by the Monos software.  These range from simple
   operations like 'transpose a matrix' to more complex algorithms like
   computing a Gröbner basis.  The point here is that I can mix these steps
   with regular pipeline steps like XSLT or XQuery.

   The point is that many computational operations are data-manipulation
   tasks, and many tools have insufficient ability to do these operations.
   This is where XML pipelines excel: they let users mix data-manipulation
   tasks with computation tasks.

   I have certain steps that are simple element filters.  For example, the
   Gröbner basis pipeline step takes in an XML 'matrix' element and outputs
   an XML 'matrix' element.
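
   As a concrete (if simplified) picture of such an element filter, here is
   a standalone sketch of the 'transpose a matrix' kind of step.  The
   row/cell markup is an assumption made for the example; the actual Monos
   vocabulary may differ:

      import org.w3c.dom.Document;
      import org.w3c.dom.Element;
      import org.w3c.dom.NodeList;

      public class TransposeStep {
          // Assumed markup: <matrix><row><c>1</c><c>2</c></row>...</matrix>
          // Matrix element in, matrix element out: the shape shared by the
          // simple element-filter steps described above.
          static Element transpose(Element matrix) {
              Document doc = matrix.getOwnerDocument();
              NodeList rows = matrix.getElementsByTagName("row");
              int nrows = rows.getLength();
              int ncols = ((Element) rows.item(0)).getElementsByTagName("c").getLength();

              Element out = doc.createElement("matrix");
              for (int j = 0; j < ncols; j++) {
                  Element newRow = doc.createElement("row");
                  for (int i = 0; i < nrows; i++) {
                      Element cell = (Element) ((Element) rows.item(i))
                              .getElementsByTagName("c").item(j);
                      Element newCell = doc.createElement("c");
                      newCell.setTextContent(cell.getTextContent());
                      newRow.appendChild(newCell);
                  }
                  out.appendChild(newRow);
              }
              return out;
          }
      }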

   See this document for more information:

      http://www.milowski.com/math/papers/2005-iamc/IAMC-2005-milowski.xhtml

--Alex Milowski
