Re: SV: performance testing of schemas

Bryan Rasmussen writes:

> I'm going to ask if we can run a project to provide the 
> benchmarking, especially as it will be important to our future 
> projects and so forth.

I'm not quite sure who "we" is, but I would never discourage anyone from 
doing and publishing careful benchmarks.  As it turns out, for several 
years I've been the technical lead on a team building high-performance 
validators.  Indeed, we hope to publish some of our work in detail later 
this year.  In my experience, careful benchmarking is difficult to do 
well, and benchmarking that's not done carefully is likely to yield 
misleading results.  Among the factors I would strongly urge you to 
consider if you do serious benchmarking are:

* The role of the API.  Even SAX can be very slow relative to the 
capabilities of the best high-performance parsers.  Eric Perkins from 
our team gave a talk on this at XML 2005, including quantitative 
measurements of API overhead.  (The sketch after this list uses a no-op 
SAX handler for exactly this reason: it lets you see parse-plus-dispatch 
cost with no application work mixed in.)

* Programming languages and runtimes, e.g. Java vs. C

* Compiler and JIT switches.  We've seen swings of perhaps 50% in C 
parsers from switching compilers and optimization levels.  *These 
variations are not consistent across parsers.*  A parser that is a large 
body of code may benefit disproportionately from a system that does good 
global optimization or inlining.  If you're testing a particular 
feature, its performance may be dominated by the ability of your 
compiler or JIT to optimize or inline code into the inner loop for that 
feature.

* Processor and machine architecture.  Results tend to be moderately 
consistent across architectures, but not always.  In one case we even 
saw major variations resulting from the different cache architectures 
of two ThinkPads (yes, we checked with hardware-level tracing).

* As usual, you need to take some care in the measurement itself: 
validate multiple instances, average times over many runs, and make 
sure the intervals you're measuring are the ones you intend (the 
sketch after this list shows the basic shape of such a harness).
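
To make the API and measurement points concrete, here's a minimal 
harness sketch in Java using the JAXP SAX API.  It parses with a no-op 
handler (so the application does no work, and what you see is parsing 
plus event delivery), warms up first so JIT compilation and class 
loading don't pollute the timings, and averages over many runs.  The 
iteration counts are placeholders, and a serious harness would also 
report variance and pin down exactly what the measured interval 
includes.

    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.helpers.DefaultHandler;

    public class SaxTiming {
        public static void main(String[] args) throws Exception {
            File doc = new File(args[0]);    // instance document to parse
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setNamespaceAware(true);
            SAXParser parser = factory.newSAXParser();
            DefaultHandler noOp = new DefaultHandler(); // discards all events

            // Warm-up runs: let the JIT compile the hot paths before timing.
            for (int i = 0; i < 50; i++) parser.parse(doc, noOp);

            int runs = 100;
            long start = System.nanoTime();
            for (int i = 0; i < runs; i++) parser.parse(doc, noOp);
            long elapsed = System.nanoTime() - start;
            System.out.printf("mean parse time: %.3f ms%n",
                              elapsed / 1e6 / runs);
        }
    }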

Of course, if you have a particular build of a particular parser in a 
particular environment you can moderately easily measure variations as a 
function of schema features used.  I wouldn't assume that the results tell 
you very much about those same features in a different parser, or 
especially with a different API, or even necessarily the same parser 
compiled or run differently.  It's a big leap from doing such tests to 
concluding that "substitution groups are {fast/slow}".
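
If you do go that narrower route, something like the following sketch 
is roughly what I'd have in mind (again Java, this time using the 
javax.xml.validation API; the schema and instance file names are made 
up).  It times validation of one instance against two schemas that 
differ only in the feature under test, holding the parser, API, and 
environment fixed.  The caveats above about warm-up, averaging, and 
variance still apply.

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class FeatureTiming {
        // Hypothetical files: two schemas differing only in the feature
        // under test (say, substitution groups vs. explicit choices).
        static final String[] SCHEMAS =
            { "with-subst-groups.xsd", "without-subst-groups.xsd" };
        static final String INSTANCE = "instance.xml";

        public static void main(String[] args) throws Exception {
            SchemaFactory sf = SchemaFactory.newInstance(
                XMLConstants.W3C_XML_SCHEMA_NS_URI);
            for (String schemaFile : SCHEMAS) {
                Schema schema = sf.newSchema(new File(schemaFile));
                Validator v = schema.newValidator();

                // Warm up, then time repeated validations of one instance.
                for (int i = 0; i < 50; i++)
                    v.validate(new StreamSource(new File(INSTANCE)));
                int runs = 100;
                long start = System.nanoTime();
                for (int i = 0; i < runs; i++)
                    v.validate(new StreamSource(new File(INSTANCE)));
                long elapsed = System.nanoTime() - start;
                System.out.printf("%s: %.3f ms per validation%n",
                                  schemaFile, elapsed / 1e6 / runs);
            }
        }
    }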

Noah

--------------------------------------
Noah Mendelsohn 
IBM Corporation
One Rogers Street
Cambridge, MA 02142
1-617-693-4036
--------------------------------------

Bryan Rasmussen <brs@itst.dk>
12/09/05 05:07 AM
 
        To:     "'xmlschema-dev@w3.org'" <xmlschema-dev@w3.org>
        cc:     "'ht@inf.ed.ac.uk'" <ht@inf.ed.ac.uk>, 
noah_mendelsohn@us.ibm.com, "'Michael Kay'" <mike@saxonica.com>
        Subject:        SV: performance testing of schemas



I think we can see there is a need for this kind of benchmarking
information, and perhaps correlation of benchmarking results with test
suite conformance. I'm going to ask if we can run a project to provide
the benchmarking, especially as it will be important to our future
projects and so forth. Hopefully this will be okayed and we can get this
data out there.

Cheers,
Bryan Rasmussen
