- From: Henry Story <henry.story@gmail.com>
- Date: Fri, 2 Jul 2010 17:20:24 +0200
- To: Paul Houle <ontology2@gmail.com>
- Cc: Linked Data community <public-lod@w3.org>
On 2 Jul 2010, at 17:07, Paul Houle wrote:

> Now, if hardware cost was no object, I suppose I could keep triples in a
> huge distributed main-memory database. Right now, I can't afford that.
> (If I get richer and if hardware gets cheaper, I'll probably want to
> handle more data, putting me back where I started...)

Paul, I always wonder about the odd similarity between this argument and the argument people used to make that Java is slow. It is now of course proven that in many apps Java is faster than C, because it can do just-in-time compilation: i.e. the compiler can look at how the code is USED to make optimisations that a static compiler just cannot.

So similarly with RDF stores: is it not feasible that one may come up with just-in-time storage mechanisms, where the triple store could start analysing how the data is used in order to optimise the layout of the data on disk? Perhaps it could end up being a lot more efficient than what a human DB engineer could do in that case.

Henry
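To make the analogy concrete, here is a minimal, purely hypothetical sketch of the "just-in-time storage" idea: a toy store that answers predicate queries by full scan until the observed workload shows a predicate is hot, and only then pays the cost of building an index for it. The class name, the `HOT_THRESHOLD` cut-off, and the predicate-only indexing are all illustrative assumptions, not features of any real triple store.

```python
from collections import defaultdict

class AdaptiveTripleStore:
    """Toy illustration of usage-driven ("JIT") storage optimisation."""

    HOT_THRESHOLD = 3  # assumed cut-off; a real store would tune this adaptively

    def __init__(self):
        self.triples = []                  # (subject, predicate, object)
        self.query_counts = defaultdict(int)
        self.pred_index = {}               # predicate -> list of matching triples

    def add(self, s, p, o):
        self.triples.append((s, p, o))
        if p in self.pred_index:
            self.pred_index[p].append((s, p, o))  # keep hot index up to date

    def by_predicate(self, p):
        self.query_counts[p] += 1
        if p in self.pred_index:
            return self.pred_index[p]      # fast path: index already built
        if self.query_counts[p] >= self.HOT_THRESHOLD:
            # "JIT" step: the observed workload justifies building an index
            self.pred_index[p] = [t for t in self.triples if t[1] == p]
            return self.pred_index[p]
        return [t for t in self.triples if t[1] == p]  # cold path: full scan
```

A real just-in-time store would of course reorganise physical layout on disk rather than build in-memory lists, but the control loop is the same: observe how the data is used, then restructure accordingly.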
Received on Friday, 2 July 2010 15:20:56 UTC