- From: Kingsley Idehen <kidehen@openlinksw.com>
- Date: Wed, 19 Mar 2014 12:13:02 -0400
- To: public-lod@w3.org
- Message-ID: <5329C20E.9050102@openlinksw.com>
On 3/19/14 11:14 AM, Ruben Verborgh wrote:
> Hi Luca,
>
>>> Just finished reading the paper. Really great stuff. The idea of
>>> splitting a single resourceful SPARQL request into multiple
>>> fine-grained requests was always attractive to me and I've also
>>> thought of something similar (http://lmatteis.github.io/restpark/). I
>>> applaud you and your team for coming up with a formal client algorithm
>>> as well as a server implementation for offering this functionality.
>
> Thanks, and some of the ideas seem similar to Restpark,
> especially the URI template and the examples,
> which come down to concrete applications of our algorithm.
>
> The main difference in the design process, I think,
> is that I started looking at it from the client perspective:
> what affordances does a client need to solve queries?
> Incorporating the total triple count and links to other fragments
> is very important to allow the right client-side decisions.
>
>>> I wonder, given a Linked Data set, if you could easily and generically
>>> wrap it with a basic LDF interface. Sort of how Pubby wraps SPARQL to
>>> expose Linked Data, maybe there can be a wrapper for a Linked Data
>>> site to expose basic LDF.
>
> In addition to Pieter's answer, which correctly points at the implementation
> of third-party tools, I would also point to our open-source basic LDF server:
> https://github.com/LinkedDataFragments/Server#supported-data-sources
>
> As you can see, it supports many back-ends out of the box.
> This includes SPARQL endpoints, but also other triple sources
> such as Turtle files, and even other basic LDF sources.
>
> An interesting back-end is HDT (http://www.rdfhdt.org/),
> which we use to offer http://data.linkeddatafragments.org/dbpedia.
>
>>> * I build a nice site and my data is available as HTML, marked up with RDFa
>>> * Database is not a triple store, just regular MySQL
>
> So the easiest way to publish this as fragments could be to extend Datasource.
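[The datasource idea above can be sketched in a few lines. This is a self-contained toy, not the Server's actual Datasource API: the sample triples and the function names `queryPattern`, `totalCount` are illustrative assumptions. The point it shows is the basic LDF contract -- a triple-pattern selector returns matching triples *and* the total match count, the metadata clients need for planning.]

```javascript
// Toy in-memory "datasource" for a basic LDF server (illustrative only).
// Given a triple pattern (null = wildcard), return matching triples plus
// the total count -- the data + metadata a fragment must expose.
// The triples below are invented sample data, not from this thread.
var triples = [
  { s: 'http://example.org/alice', p: 'http://xmlns.com/foaf/0.1/knows', o: 'http://example.org/bob' },
  { s: 'http://example.org/alice', p: 'http://xmlns.com/foaf/0.1/name',  o: '"Alice"' },
  { s: 'http://example.org/bob',   p: 'http://xmlns.com/foaf/0.1/name',  o: '"Bob"' }
];

// Select triples matching the pattern; offset/limit produce paged fragments.
function queryPattern(pattern, offset, limit) {
  var matches = triples.filter(function (t) {
    return (!pattern.s || t.s === pattern.s) &&
           (!pattern.p || t.p === pattern.p) &&
           (!pattern.o || t.o === pattern.o);
  });
  return {
    totalCount: matches.length,                    // metadata for client-side planning
    triples: matches.slice(offset, offset + limit) // one page of data
  };
}

var fragment = queryPattern({ s: 'http://example.org/alice', p: null, o: null }, 0, 100);
console.log(fragment.totalCount);  // 2
```

[A real back-end would replace the in-memory array with, e.g., a SQL query against the MySQL schema Luca mentions, keeping the same pattern-in, triples-plus-count-out shape.]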
> For an example on how to do this, see
> https://github.com/LinkedDataFragments/Server/blob/master/lib/LevelGraphDatasource.js.
>
>>> Where I see LDF being a *huge* deal is that I could use something to
>>> wrap my RDFa pages and expose a basic LDF server, without having to
>>> change any of my technology stack for my app. This could potentially
>>> allow thousands of RDFa providers to expose querying functionality
>>> with minimum effort.
>
> Indeed, and that's exactly what we aim for with basic LDFs:
> low-cost, queryable publishing of Linked Data.
> Let us know how it works out, or if we can help!
>
> Best,
>
> Ruben

Ruben,

How about making an RDF document that describes LDF? Producing such a document would make its value proposition clearer.

This approach is also a nice case of Linked Data dog-fooding, e.g., the basis for the most basic LDF utility example using an RDF document as the data source :-)

SQUIN does have such a document, for instance [1].

[1] http://squin.sourceforge.net/data.ttl -- raw data in a Turtle document
[2] http://linkeddata.uriburner.com/describe/?url=http%3A%2F%2Fsquin.org%2F%23squinSystem&graph=http%3A%2F%2Fsquin.sourceforge.net%2Fdata.ttl -- viewing via URIBurner

--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
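[The client-side use of fragment metadata that Ruben describes -- total triple counts guiding "the right client-side decisions" -- can be sketched as a selectivity heuristic: fetch each triple pattern's count, then start evaluating the query from the most selective pattern. The patterns and counts below are made-up stand-ins, not real fragment data.]

```javascript
// Sketch of the client-side heuristic: when solving a basic graph
// pattern, use each fragment's total triple count to decide which
// triple pattern to evaluate first. Counts here are invented examples.
var patterns = [
  { pattern: '?person foaf:knows ?friend', totalCount: 150000 },
  { pattern: '?person foaf:name "Alice"',  totalCount: 3 },
  { pattern: '?friend foaf:name ?name',    totalCount: 90000 }
];

// Order patterns by ascending count, so the most selective one seeds
// the incremental evaluation of the whole query.
function planOrder(fragments) {
  return fragments.slice().sort(function (a, b) {
    return a.totalCount - b.totalCount;
  }).map(function (f) { return f.pattern; });
}

console.log(planOrder(patterns)[0]);  // ?person foaf:name "Alice"
```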
Attachments
- application/pkcs7-signature attachment: S/MIME Cryptographic Signature
Received on Wednesday, 19 March 2014 16:13:25 UTC