Re: Tools for 20 million triples?

Charles McCathieNevile wrote:

> on another list someone asked what tools would be good for handling
> an OWL ontology of about 25,000 terms, with around 20 million
> triples. There were a handful of ideas about how to build
> specialised SQL systems or similar, but Danny Ayers pointed out
> that there are systems capable of handling RDF and a lot of triples
> (which by lucky chance happens to be a way of storing OWL).
> 
> So I wondered if anyone on this list had experience of tools
> working with this size dataset. (I will read Dave Beckett's report
> done for SWAD-Europe on the topic, but I suspect that there is
> already new information available, and would like to be up to
> date.)

It depends on your hardware, of course. Given a reasonably fast server,
Sesame in combination with a MySQL back-end (or even an in-memory store,
given enough RAM) should be able to handle this without too much of a
problem.

To be honest with you though, the largest set I've personally worked
with in Sesame was about 5 million triples, and that could be slow at
times (though that may have been because I was running it on my
notebook).
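For concreteness, here is a minimal sketch of loading an RDF file into a
Sesame-style repository. It uses the RDF4J API (the modern successor of
Sesame) rather than the 2004-era Sesame 1.x API; the class names
(SailRepository, MemoryStore), the file name "ontology.owl", and the use
of an in-memory store are illustrative assumptions, not details from the
mail above.

```java
import java.io.File;

import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sail.SailRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.sail.memory.MemoryStore;

public class LoadTriples {
    public static void main(String[] args) throws Exception {
        // In-memory store, as suggested above for machines with enough
        // RAM; for 20 million triples a persistent (e.g. database-backed)
        // store would normally be configured here instead.
        Repository repo = new SailRepository(new MemoryStore());
        try (RepositoryConnection conn = repo.getConnection()) {
            // Parse and load an RDF/XML file (OWL ontologies are RDF,
            // which is why a triple store can hold them directly).
            conn.add(new File("ontology.owl"), null, RDFFormat.RDFXML);
            System.out.println("Triples loaded: " + conn.size());
        } finally {
            repo.shutDown();
        }
    }
}
```

Swapping MemoryStore for a persistent SAIL is the only change needed to
move from an in-memory setup to a disk- or database-backed one; the
loading code stays the same.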

Jeen
-- 
Jeen Broekstra          Aduna BV
Knowledge Engineer      Julianaplein 14b, 3817 CS Amersfoort
http://aduna.biz        The Netherlands
tel. +31(0)33 46599877  fax. +31(0)33 46599877

Received on Thursday, 25 March 2004 07:45:53 UTC