Re: Berlin SPARQL Benchmark update (experiment)

Hi Patrick,

I understand your point.

Actually, the only reason I used the 5.0.11 version is that it was the
most recent release at the time we tested Virtuoso. Since the benchmark
was mainly about the other two stores, with Virtuoso included only for
comparison, it seemed OK to keep the results even though a newer version
had come out.

But fair is fair: if Virtuoso can do much better than that, we will find
out in a new test and update the results accordingly.
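
For reference, rerunning it mostly amounts to pointing the BSBM test
driver at Virtuoso's SPARQL endpoint again. A minimal sketch, assuming
Virtuoso's default port 8890; the -runs/-w options (measured query mix
runs and warm-up runs) may differ between BSBM versions:

    # Run the BSBM query mix against a local Virtuoso SPARQL endpoint.
    # -runs: number of measured query mix runs; -w: warm-up runs.
    java -cp "bin:lib/*" benchmark.testdriver.TestDriver \
        -runs 500 -w 50 \
        http://localhost:8890/sparql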

About the commercial vs. Open Source question: there isn't really an
exclusion list of stores we won't test, although we prefer Open Source
projects. The only real exceptions are proprietary stores, because nobody
really benefits from that knowledge.

In the case of BigOWLIM, it sounds like they give it away for free if you
do research with it: "To purchase a licence or obtain a free copy for
research or evaluation, please, contact OWLIM-info-at-ontotext.com."

I guess that's somewhere in-between. ;)

Cheers
Andreas

> Hi Andreas,
>
>> We recently released a new benchmark experiment with the BSBM.
>>
>> The results can be found at
>>
>> http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/results/V5/
>>
>> This time we present a pure triple store benchmark. Instead of the
>> smaller datasets of the previous experiment, we generated a 200 million
>> triples dataset. So there are only 100M and 200M triple datasets this
>> time.
>>
>> The main reasons for the experiment were, on the one hand, a bug fix
>> in ARQ that led to a significant speed-up of Jena TDB's query
>> performance, and on the other hand, that BigOWLIM was tested for the
>> first time. To make these results easier to compare, we also added the
>> fastest store of our previous experiment, Virtuoso Open-Source, to the
>> candidates.
>
> First of all, I would have liked to be advised of this new run before
> the results were published, especially since you put my name at the
> bottom of this document.
>
> That would have given me the opportunity to suggest that you use the
> later VOS 5.0.12 release (dated 28-10-2009) instead of the VOS 5.0.11
> release (dated 23-04-2009) you compared against, or even our new VOS
> 6.0.0 release (dated 16-10-2009). Like the other projects that took part
> in the last full benchmark, we have not exactly sat still for the last
> half year, in terms of both bug fixes and enhancements/optimizations.
>
> It would also have given me a chance to inform you that the settings
> and methods you currently use for testing Virtuoso may not be
> appropriate for loading the larger 100M and 200M datasets you are now
> using for your benchmark. I will find some time to redo the benchmark
> here and send the results to you for comparison.
>
>
> Secondly, I was under the impression that only published open source
> projects would be considered in this benchmark. However, according to
> the web page for BigOWLIM <http://www.ontotext.com/owlim/index.html>:
>
>    "BigOWLIM is available under an RDBMS-like commercial licence on a
>     per-server-CPU basis; it is neither free nor open-source"
>
> Does this mean that for the next Berlin Benchmark release you are
> considering other commercial contributions as well?
>
>
>
> Respectfully,
>
>
> Patrick van Kleef
> ---
> Maintainer Virtuoso Open Source Edition
> OpenLink Software
>
