Re: Algorithm evaluation on the complete LOD cloud?

Dear Paul, and others,

On Fri, Apr 24, 2015 at 4:39 PM, Paul Houle <ontology2@gmail.com> wrote:

> Also I find the "no special hardware requirements" thing to be strange,
> probably because it ought to be defined in terms of "I have a machine
> with these specific specifications". For instance, if you had a machine
> with 32GB of RAM (which is pretty affordable if you don't pay OEM prices)
> you could load a billion triples into a triple store. If your machine is
> a hand-me-down laptop with just 4GB of RAM, you are in a very different
> situation.

The method that Laurens and I are using combines LOD Laundromat
<http://lodlaundromat.org/> (ISWC paper
<http://link.springer.com/chapter/10.1007/978-3-319-11964-9_14>), created by
us, with Linked Data Fragments <http://linkeddatafragments.org/> (ISWC paper
<http://www.researchgate.net/profile/Ruben_Verborgh/publication/264274086_Web-Scale_Querying_through_Linked_Data_Fragments/links/53f498b10cf2fceacc6e918d.pdf>),
created by Ruben Verborgh's research group. The two approaches were
integrated earlier this year (ESWC paper
<http://ruben.verborgh.org/publications/rietveld_eswc_2015/>), resulting in
a large-scale platform for LOD processing whose memory consumption stays
effectively constant as data volume grows. The parameters that do grow with
scale are ultimately bounded by disk space, which is relatively cheap and
easy to add.
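
For concreteness, below is a minimal Python sketch of how a client could
page through triples via the Triple Pattern Fragments interface that Linked
Data Fragments defines. The fragment URL is a placeholder and the
per-dataset endpoint pattern is an assumption on my part; the point it
illustrates is that each request returns one small page of matching
triples, so client-side memory use stays bounded no matter how large the
dataset is.

import requests

# Placeholder fragment URL; the per-dataset endpoint pattern is an
# assumption, not a documented API.
FRAGMENT = "http://ldf.lodlaundromat.org/<dataset-hash>"

def fetch_fragment(subject=None, predicate=None, obj=None, page=1):
    # Build the triple-pattern query; unset positions act as wildcards.
    params = {"page": page}
    if subject is not None:
        params["subject"] = subject
    if predicate is not None:
        params["predicate"] = predicate
    if obj is not None:
        params["object"] = obj
    # Each response is one small page of matching triples plus
    # hypermedia controls, so client memory stays bounded.
    resp = requests.get(FRAGMENT, params=params,
                        headers={"Accept": "text/turtle"})
    resp.raise_for_status()
    return resp.text

# Example: first page of all rdf:type triples.
print(fetch_fragment(
    predicate="http://www.w3.org/1999/02/22-rdf-syntax-ns#type"))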

The LOD Laundromat indeed does not currently serve "the complete LOD
Cloud", but a subset consisting of 37 billion triples. Given its
architectural underpinnings, however, there seems to be no inherent reason
why it could not serve the complete LOD Cloud.
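
To give an impression of why memory consumption stays flat on the
consuming side as well, here is a hedged sketch of streaming one cleaned
data document; the download URL pattern and document hash are assumptions,
not a documented API. Triples are processed one line at a time from a
gzipped N-Triples/N-Quads stream, so disk and network are the only
resources that grow with dataset size.

import gzip
import requests

# Placeholder document hash; the download URL pattern is an assumption.
URL = "http://download.lodlaundromat.org/<document-hash>"

def stream_triples(url):
    # Stream the gzipped dump and decompress on the fly; at no point
    # does more than one line of the document live in memory.
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        with gzip.open(resp.raw, mode="rt", encoding="utf-8") as lines:
            for line in lines:
                yield line.rstrip("\n")

# Peek at the first ten statements without downloading the whole file.
for i, triple in enumerate(stream_triples(URL)):
    if i >= 10:
        break
    print(triple)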

---
Best regards,
Wouter Beek

E-mail: w.g.j.beek@vu.nl
WWW: www.wouterbeek.com
Tel.: 0647674624

Received on Tuesday, 28 April 2015 01:40:38 UTC