
Re: Berlin SPARQL Benchmark update (experiment)

From: Patrick van Kleef <pkleef@openlinksw.com>
Date: Thu, 10 Dec 2009 12:42:09 +0100
Message-ID: <4B20DE91.8090001@openlinksw.com>
To: Andreas Schultz <a.schultz@fu-berlin.de>
CC: public-lod@w3.org
Hi Andreas,

> we released a new Benchmark experiment with the BSBM recently.
> 
> Results can be found under
> 
> http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/results/V5/
> 
> This time we present a pure triple store benchmark. Instead of the smaller
> datasets of the previous experiment we generated a 200 million triples
> dataset. So there are only 100M and 200M triple datasets this time.
> 
> The main reasons for the experiment were, on the one hand, a bug fix in ARQ
> that led to a significant speed-up in Jena TDB's query performance, and on
> the other hand, the first-time testing of BigOWLIM.
> To make these results easier to compare, we also added the fastest store
> from our previous experiment to the candidates, which was Virtuoso
> Open-Source.

First of all, I would have liked to be advised of this new run before the 
results were published, especially since you put my name at the bottom of 
this document.

That would have given me the opportunity to suggest that you use the later 
VOS 5.0.12 release (dated 28-10-2009) instead of the VOS 5.0.11 release 
(dated 23-04-2009) you compared against, or even our new VOS 6.0.0 release 
(dated 16-10-2009). Like the other projects that participated in the last 
full benchmark, we have not exactly sat still over the last half year in 
terms of both bugfixes and enhancements/optimizations.

It would also have given me a chance to point out that the settings and 
methods you currently use for testing Virtuoso may not be appropriate for 
loading the larger 100M and 200M triple datasets you are now using in your 
benchmark. I will find some time to redo the benchmark here and send the 
results to you for comparison.
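For reference, Virtuoso's default virtuoso.ini settings are sized for small databases, and loading datasets in the 100M-200M triple range generally calls for raising the buffer parameters. The values below are only an illustrative sketch (assuming a machine with roughly 8 GB of RAM), not the settings used in either benchmark run:

```ini
; virtuoso.ini -- illustrative memory settings for a ~200M triple load
; (example values for a machine with ~8 GB RAM; tune to your hardware)
[Parameters]
NumberOfBuffers    = 680000    ; 8 KB database buffers, ~5.4 GB of page cache
MaxDirtyBuffers    = 500000    ; flush threshold, roughly 3/4 of the buffer pool
```

With the defaults left in place, most of a large load ends up going through disk rather than the buffer pool, which can make a store look far slower than the engine actually is.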


Secondly, I was under the impression that only published open-source 
projects would be considered in this benchmark; however, according to the 
web page for BigOWLIM <http://www.ontotext.com/owlim/index.html>:

   "BigOWLIM is available under an RDBMS-like commercial licence on a
    per-server-CPU basis; it is neither free nor open-source"

Does this mean that for the next Berlin Benchmark release you are 
considering other commercial contributions as well?



Respectfully,


Patrick van Kleef
---
Maintainer Virtuoso Open Source Edition
OpenLink Software
Received on Thursday, 10 December 2009 14:37:43 GMT
