Re: [hcls] Updated wiki page for HCLS Knowledge Base

Hi Egon,

> Linked Data focuses on crawling the web. At least, that's the
> impression I have... yet, a single store to query is indeed much more
> convenient... it's sort of contradicting:

I don't find that contradictory. Having URIs that resolve to something 
useful is practical. Having a SPARQL endpoint that I can quickly visit and 
query is practical. Having both opens up more possibilities than having 
either alone, and is even more practical. Insisting that we get rid of all 
centralized repositories to demonstrate to everyone how web-centric we are, 
however, is not rational. To use an analogy: we learned to ride our bikes; 
there is no need to constantly show off that we can even do it without 
hands.

> A single federated query is not what
> I expect to be the final solution; instead, I expect an iterative
> process, where possible steps may be federated, but iterative
> nevertheless...
>
> Having one single SPARQL end point indicates the crawling is done,
> where we have only just started linking things together...

Having a single SPARQL endpoint means that we have a selected set of data 
available through a simple query mechanism, nothing more, nothing less. 
Depending on the setup, this single SPARQL endpoint can also be 
incrementally updated from the source datasets (e.g., I remember that 
Openlink is working on a 'live' synchronisation between DBpedia and 
Wikipedia). For some use cases this is enough to do the job; for others it 
is not.
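For what it's worth, the two models can even be mixed: the (currently 
draft) SPARQL federation extension lets a query against one endpoint 
delegate a sub-pattern to a remote endpoint via the SERVICE keyword. A 
sketch — the DBpedia endpoint URL is real, but the triple patterns are 
purely illustrative:

```sparql
# Illustrative sketch only: match labels against the local store,
# then delegate one pattern to the public DBpedia endpoint.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?drug ?label
WHERE {
  ?drug rdfs:label ?label .                        # evaluated locally
  SERVICE <http://dbpedia.org/sparql> {
    ?drug a <http://dbpedia.org/ontology/Drug> .   # evaluated remotely
  }
}
LIMIT 10
```

So "one endpoint to query" and "data distributed across the web" are not 
necessarily exclusive; the endpoint you query can itself do part of the 
crawling at query time.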

>  that said,
> I also don't think the final SPARQL end point should be remote at all,

So where should the final SPARQL endpoint be located? On a server inside 
the intranet of each organization? On the client side? How should it be 
filled? By crawling linked data resources? Please specify.

Cheers,
Matthias 

Received on Wednesday, 14 October 2009 09:30:52 UTC