- From: Wendell Piez <wapiez@wendellpiez.com>
- Date: Sat, 28 Feb 2026 09:56:52 -0500
- To: public-ixml@w3.org
- Message-ID: <CAAO_-xzfE1ddR=rEUTQeOiM2SkbEqvDowZp+305GKKUmHh_oVQ@mail.gmail.com>
Hello, I haven't actually done this in XProc, but I've done something pretty similar. One of my test pipelines calls iXML over a sequence of inputs. Half of these are expected to work, the other half to throw errors. The pipeline dispatches them to p:invisible-xml wrapped in p:try, and does the right thing with the results, whether outputs or errors. The pipeline output registers whether the good inputs worked and the bad inputs produced errors; discrepancies are reported.

This is nice to have, not only to sketch the boundaries of bad-vs-good in the syntax, but also because it is easy to throw in a different toolchain (iXML processor, XSLTs) and see that the results are still good. So I get regression testing around my defined inputs, and the 'source of truth' is where I want it.

Since a goal for me is an unambiguous grammar, avoiding the problem of what to do with ambiguities is something I don't have to think about. But if ambiguity were something I'd be willing to accept - and especially if coding for others to use (including future-self others) - I'd also build a test suite of instances that demonstrated the 'intent' apart from the tools.
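For readers who haven't tried this, the dispatch I'm describing can be sketched in XProc 3.0 along roughly these lines. This is a minimal sketch, not my actual pipeline: the grammar filename and the <parse-failed/> marker document are illustrative assumptions, and a real version would compare the outcomes against the expected pass/fail list.

```xml
<!-- Sketch only: each input document is parsed against an iXML grammar;
     a failing parse is caught and turned into a marker document rather
     than aborting the whole run. Assumed names: grammar.ixml, parse-failed. -->
<p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="3.0">
  <p:input port="source" sequence="true" content-types="text"/>
  <p:output port="result" sequence="true"/>

  <p:for-each>
    <p:try>
      <!-- Attempt the invisible-XML parse of the current input -->
      <p:invisible-xml>
        <p:with-input port="grammar" href="grammar.ixml"/>
      </p:invisible-xml>
      <p:catch>
        <!-- On error, emit a recognizable marker so a later step can
             check that the 'bad' inputs failed as expected -->
        <p:identity>
          <p:with-input><parse-failed/></p:with-input>
        </p:identity>
      </p:catch>
    </p:try>
  </p:for-each>
</p:declare-step>
```

A downstream step (or a small XSLT) can then pair each result with its expectation and report any discrepancies.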
Received on Sunday, 1 March 2026 14:54:09 UTC