
Big SCXML Interpreter Benchmark

From: Stefan Radomski <radomski@tk.tu-darmstadt.de>
Date: Wed, 5 Jul 2017 12:38:02 +0000
To: "www-voice@w3.org (www-voice@w3.org)" <www-voice@w3.org>
Message-ID: <1D35DF04-F500-4EC3-A49F-50056E24A37B@tk.informatik.tu-darmstadt.de>

Hey there,

I did an evaluation of the performance of six different SCXML implementations, namely LXSC, Qt SCXML, SCION, scxmlcc, Commons SCXML and uSCXML [1]. I display those results on the front page of our interpreter [1] and would like feedback from the other authors as to whether they feel that I misrepresent their implementation's performance.

With regard to methodology, I create SCXML documents of increasing complexity [2] with a NULL datamodel and trigger an infinite sequence of microsteps via spontaneous transitions. These repeatedly cause a state called 'mark' to be entered; I measure the entries per second for 25s (minus the time required for setup) and average the results [3].
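
To give an idea of the general pattern, a hand-written sketch might look as follows (this is not one of the generated documents from [2], and the state id 'step' is made up here; only 'mark' is the state whose entries get counted):

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" datamodel="null">
    <!-- eventless transitions keep the interpreter in a perpetual microstep loop -->
    <state id="mark">
      <transition target="step"/>
    </state>
    <state id="step">
      <transition target="mark"/>
    </state>
  </scxml>

The harness then merely counts how often 'mark' is entered and reports entries per second.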

It is most definitely debatable whether the selection of benchmarks (currently LCCA-heavy and transition preemption) is representative of any real-world workload, but I still feel that they give a good idea. If you have any other suggestions for (platform-agnostic) benchmarks or want me to run your interpreter implementation differently from [3], please drop me a note.
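
For readers who have not come across the term: transition preemption occurs when two enabled transitions would exit overlapping sets of states, so only one of them may be taken. Again a hand-written sketch rather than one of the actual benchmark documents:

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" datamodel="null">
    <parallel id="regions">
      <state id="a">
        <!-- selected first; its exit set includes 'regions' -->
        <transition target="mark"/>
      </state>
      <state id="b">
        <!-- conflicts with the transition in 'a' and is preempted -->
        <transition target="mark"/>
      </state>
    </parallel>
    <state id="mark">
      <transition target="regions"/>
    </state>
  </scxml>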

@David: I could not get JSSCXML to run outside of a browser environment. If you could document how to set it up, e.g. within Node.js, I will include it.

Regards
Stefan

[1] https://github.com/tklab-tud/uscxml#benchmarks
[2] https://github.com/tklab-tud/uscxml/tree/master/test/benchmarks
[3] https://github.com/tklab-tud/uscxml/tree/master/contrib/benchmarks