Re: shaping up XProc unit test thoughts

On 6/12/07, Jeni Tennison <jeni@jenitennison.com> wrote:
> James Fuller wrote:
> > shaping up some ideas on unit testing with xproc;
> >
> > I would place under a different optional library...have scratched out
> > the following as a starting point for discussion;
> >
> > <t:test-suite>
> >
> > <t:test name="check-ex-pipeline" msg="checking pipeline output">
> > <p:input port="test-source">
> >  <p:pipe step="xform" port="result"/>
> > </p:input>
> > <p:input port="expected">
> >  <p:document href="test.xml"/>
> > </p:input>
> >
> > <t:assert msg="checking result has title element">
> > </t:assert>
> >
> > <t:assert msg="checking result has body element">
> > </t:assert>
> >
> > <t:assert msg="checking result has meta tags">
> > </t:assert>
> >
> > </t:test>
> >
> > </t:test-suite>
>
> Sorry, I'm missing some context. Is the idea that this is an XML
> document that you would programmatically convert into a pipeline and
> then run? If you have separate <t:assert> steps, what's the role of the
> 'expected' input?

just playing at the moment...

I would like to have multiple asserts in a single test, the test
being analogous to a step.

I conflated everything (as usual).

When testing for equivalence between the output of a pipeline (an
input on t:test), it needs to be compared to an expected XML
document. For other assertions, like XPath tests or simple true/false
checks, we won't need the expected input. We could define an option
for that, but perhaps it is easier to just have a value attribute on
the assert.
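
To make that concrete, a sketch (the type and test attributes are
pure invention at this point): an xml-equals assert compares the
test-source input against the expected port, while an xpath assert
carries its own expression and needs no expected input:

  <t:test name="check-ex-pipeline" msg="checking pipeline output">
    <p:input port="test-source">
      <p:pipe step="xform" port="result"/>
    </p:input>
    <p:input port="expected">
      <p:document href="test.xml"/>
    </p:input>

    <!-- compares test-source against the expected port -->
    <t:assert type="xml-equals" msg="result matches expected document"/>

    <!-- evaluates an XPath against test-source only -->
    <t:assert type="xpath-exists" test="/html/head/title"
              msg="checking result has title element"/>
  </t:test>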


> > now a few questions and thoughts;
> >
> > *  I guess a  t:test type step could be considered very similar to
> > p:viewport
> >
> > * I would like to append output from multiple tests to a single
> > p:output, unsure of how this is achievable with current p:output
> > definition and allowed sequence
>
> You generate a *sequence* of documents on a single output and then wrap
> them into a single document in a separate step, with <p:wrap-sequence>
> for example.

Yes, though I would weakly argue that this is awkward; the
alternatives, e.g. adding an attribute to p:output, would be only
slightly less verbose.

As an aside, this type of 'append' scenario will be common in
'long-running' pipelines, which I am thinking a bit more about.
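
For the record, the wrapping Jeni describes would look roughly like
this (t:report is an invented wrapper name): multiple pipe bindings
form a sequence of per-test result documents on the source port, and
p:wrap-sequence rolls them up into a single document:

  <p:wrap-sequence wrapper="t:report">
    <p:input port="source">
      <p:pipe step="test1" port="result"/>
      <p:pipe step="test2" port="result"/>
    </p:input>
  </p:wrap-sequence>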

> > * I have left the t:assert elements empty for now, but one can imagine
> > assertions testing for true, false, xml-equals, xml-not-equals,
> > xpath-exists, xpath-not-exists, etc....not quite sure if it's a
> > parameter or an option ...probably being silly here
>
> If you (as the pipeline author) know the names you want to use then
> they're options. If you don't (and the user of the step gets to choose
> the names) then they're parameters.

Thx for the clarification, though it feels more appropriate to have a
test value attribute to match against, as this value is neither a
parameter nor an option in a semantic sense.
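
Something like this, say, with the expected value carried directly on
the assert (all attribute names invented for illustration):

  <t:assert type="xpath-equals" test="count(//meta)" value="3"
            msg="checking result has meta tags"/>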

> > * I see such unit tests as valuable part of documentation of code, so
> > I would advocate for them living inside p:pipeline
>
> I think that would work, if you add the test namespace to the list of
> ignored namespaces (otherwise they'd be interpreted as steps).

OK, that makes sense.
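
So a pipeline with embedded tests might look roughly like this,
assuming the t: namespace is declared as ignored by whatever
mechanism the spec settles on (the layout here is a guess, not spec
syntax):

  <p:pipeline name="main">
    <!-- skipped on a normal run because t: is an ignored namespace -->
    <t:test name="check-ex-pipeline" msg="checking pipeline output">
      <!-- inputs and asserts as sketched above -->
    </t:test>

    <p:xslt name="xform">
      <!-- the real pipeline steps -->
    </p:xslt>
  </p:pipeline>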

> > * should such tests be applied to steps, compound steps, pipelines and
> > subpipelines or should I make some differences now in the t:test
> > element
>
> I'd invoke tests on entire pipelines. You need ways to specify all the
> inputs, options and parameters and to test all the outputs.

Makes sense as well.

> > * test failure, what does it mean ...I know that when a test fails
> > this means it is indicated in the output
>
> Don't forget methods of testing whether the pipeline throws an error,
> and whether the error is the expected one.

Yes, these are just different-flavoured asserts. Thinking from a TDD
point of view: do we want the test run to continue or to stop when it
hits the first failed test? I lean towards running all tests and
producing a full report at the end (as I do with Perl).
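
An error-flavoured assert could then be just another type on the same
element, e.g. (the type and code attributes, and the error code
itself, are shown purely for illustration):

  <t:test name="check-bad-input" msg="pipeline rejects malformed input">
    <p:input port="test-source">
      <p:document href="bad.xml"/>
    </p:input>

    <!-- passes only if running the pipeline raises the named error;
         the runner records a failure and moves on to the next test -->
    <t:assert type="throws" code="err:XD0001"
              msg="expected a dynamic error on malformed input"/>
  </t:test>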

> > * must think a bit more about issues with context and inherited
> > environment with testing
>
> This is why testing at the pipeline level is good: there's very little
> context that you can't pass explicitly into the pipeline.

Yes, good point.

> > * an implementation detail, switch to turn on or off understanding t:
> > namespace elements
>
> If they're ignored in the pipeline then that's turned them 'off' as far
> as running the pipeline normally goes. Otherwise, you'll extract them to
> run them. So I don't see a need for turning understanding on or off, but
> might have misinterpreted the method you're envisaging using to run the
> tests.

Not sure yet. I assume the XProc spec is leaving the idea of
importing pipeline libraries from the command line as an
implementation detail, which is how I see these elements being
enabled. As for the test runner, this would mean the pipeline runs as
normal, with the tests taking the final result output as input.

thx for the comments,

Jim Fuller
