Re: [ANN] XSDBench XML Schema Benchmark 1.0.0 released

Hi Michael,

Michael Kay <mike@saxonica.com> writes:

> For example, many schema validators are likely to have an elapsed time for
> validation of something like (aX + c) where X is the document size. If
> you're only measuring one 12K instance, then dividing the processing time by
> X doesn't give any useful measure of throughput because you don't know what
> "c" is.

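That's a fair point: with timings for two different instance sizes one can
separate the per-kilobyte cost 'a' from the per-document overhead 'c',
whereas a single timing divided by X silently folds 'c' into the throughput
figure. A toy illustration in C++ (all sizes and timings below are made up):

#include <iostream>

// Toy two-point fit: if validation time is roughly t = a*X + c, two
// measurements at different sizes X give both constants. The numbers
// here are invented purely for illustration.
int main ()
{
  double x1 = 12.0,   t1 = 5.0;   // 12K instance took 5 ms
  double x2 = 1200.0, t2 = 105.0; // 1.2M instance took 105 ms

  double a = (t2 - t1) / (x2 - x1); // per-kilobyte cost (ms/K)
  double c = t1 - a * x1;           // fixed per-document overhead (ms)

  std::cout << "a = " << a << " ms/K, c = " << c << " ms" << std::endl;
}
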
Well, we tried to minimize 'c' by caching the schema and reusing the parser,
but I agree that there could still be some buffer allocations, etc., with
every new document.
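
For a Xerces-C++-based validator the caching amounts to roughly the
following (a sketch only: the file names are placeholders and the actual
XSDBench drivers may differ in detail):

#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/XMLUni.hpp>
#include <xercesc/sax2/SAX2XMLReader.hpp>
#include <xercesc/sax2/XMLReaderFactory.hpp>
#include <xercesc/validators/common/Grammar.hpp>

using namespace xercesc;

int main ()
{
  XMLPlatformUtils::Initialize ();
  {
    SAX2XMLReader* parser (XMLReaderFactory::createXMLReader ());

    parser->setFeature (XMLUni::fgSAX2CoreValidation, true);
    parser->setFeature (XMLUni::fgXercesSchema, true);
    parser->setFeature (XMLUni::fgXercesSchemaFullChecking, true);

    // Compile the schema once and keep it in the parser's grammar pool.
    parser->setFeature (XMLUni::fgXercesCacheGrammarFromParse, true);
    parser->loadGrammar ("test.xsd", Grammar::SchemaGrammarType, true);

    // Reuse the cached grammar and the same parser instance for every
    // document so that schema compilation does not count towards 'c'.
    parser->setFeature (XMLUni::fgXercesUseCachedGrammarInParse, true);

    for (unsigned long i (0); i != 1000; ++i)
      parser->parse ("test.xml");

    delete parser;
  }
  XMLPlatformUtils::Terminate ();
}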

The problem with large documents is that it becomes hard to cache them in
memory. One way to overcome this would be to fake a very large document by
replaying the same fragment over and over again. This way we can simulate an
arbitrarily large document, though it would be the same data repeated
throughout. Do you see any unfairness in this?
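
For example, the replay could be done along these lines (a sketch with
made-up element names and counts; in the benchmark the output would go to a
pipe or an in-memory stream rather than a file, so the whole document never
has to be held at once):

#include <cstddef>
#include <fstream>
#include <ostream>
#include <string>

// Hypothetical generator: write a fixed header, 'count' copies of one
// record fragment, and a footer to fake an arbitrarily large instance.
void
write_synthetic_instance (std::ostream& os,
                          const std::string& fragment,
                          std::size_t count)
{
  os << "<?xml version=\"1.0\"?>\n"
     << "<records xmlns=\"http://www.example.com/records\">\n";

  for (std::size_t i (0); i != count; ++i)
    os << fragment; // same fragment replayed over and over

  os << "</records>\n";
}

int
main ()
{
  const std::string fragment (
    "  <record><id>1</id><name>sample</name></record>\n");

  std::ofstream out ("large-instance.xml", std::ios_base::binary);
  write_synthetic_instance (out, fragment, 1000000); // ~50MB instance
}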


> (Though I can't complain, because this one test did find a bug in my product
> that none of the 3000 test cases in the W3C test suite had turned up!)

And did you get any performance numbers? Also, would you like to submit
a test case for your product to XSDBench?


Thanks for the feedback.

-boris

--
Boris Kolpackov
Code Synthesis Tools CC
http://www.codesynthesis.com
tel: +27 76 1672134
fax: +27 21 5526869
