Re: Linked Data API

Nathan wrote:
> Dave Reynolds wrote:
>> On 25/02/2010 18:11, Nathan wrote:
>>> Leigh Dodds wrote:
>>>> Hi all,
>>>>
>>>> Yesterday, at the 2nd Linked Data London Meetup, Dave Reynolds, Jeni
>>>> Tennison and myself ran a workshop introducing some work we've been
>>>> doing around a "Linked Data API".
>>>>
>>>> The API is intended to be a middleware layer that can be deployed
>>>> in front of a SPARQL endpoint, providing the ability to create a
>>>> RESTful data access layer for accessing the RDF data contained in the
>>>> triple store. The middleware is configurable, and is intended to
>>>> support a range of different access patterns and output formats. "Out
>>>> of the box" the system provides delivery of the standard range of RDF
>>>> serialisations, as well as simple JSON and XML serialisations for
>>>> descriptions of lists of resources. The API essentially maps
>>>> parameterised URLs to underlying SPARQL queries, mediating the content
>>>> negotiation of the results into a suitable format for the client.
>>>>
>>>> The current draft specification is at:
>>>>
>>>> http://purl.org/linked-data/api/spec
>>> If I may make a suggestion: I'd like you to consider including the
>>> formed SPARQL query with the response, so that developers can get used
>>> to the language and see how similar it is to existing SQL, etc.
>> Absolutely. The notion (and current implementation) is that the returned
>> results give a reference to a metadata resource which in turn includes
>> the SPARQL query and the endpoint configuration. Will check that this is
>> clear in the current draft of the spec write-up.
>>
>>> For all that this middleware is needed in the interim and provides
>>> access to the masses, surely an extra chance to introduce developers
>>> to linked data / RDF / SPARQL is a good thing?
>> Exactly. The API helps developers get started, but we are trying to keep
>> the essence of the RDF model intact so that they can move on to SPARQL
>> and the full stack as they get comfortable with it.
> 
> thinking out loud here: I wonder what would happen if you created a REST
> API like you have that redirects to the SPARQL endpoint with the query,
> which would then return SPARQL+JSON / SPARQL+RDF? Then libraries like
> ARC, rdflib, Jena etc. could be used by the developers; essentially just
> a little introductory protocol, offloading all the hard work onto these
> fantastic libraries.

Indeed, the API implementations redirect to the SPARQL endpoints, and 
our own Java implementation does, of course, build on Jena.
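The redirect Nathan describes could be sketched along the following lines. This is only an illustration: the endpoint URL, the resource URI, and the choice of a DESCRIBE query are all assumptions, not what any particular implementation does.

```python
from urllib.parse import urlencode

# Hypothetical endpoint URL -- a real deployment would configure its own.
ENDPOINT = "http://example.org/sparql"

def redirect_url(resource_uri, fmt="application/sparql-results+json"):
    """Build the URL a RESTful front-end might redirect to, embedding the
    generated SPARQL query in the conventional 'query' parameter."""
    query = "DESCRIBE <%s>" % resource_uri
    return "%s?%s" % (ENDPOINT, urlencode({"query": query, "format": fmt}))

print(redirect_url("http://example.org/id/school/42"))
```

The front-end's only job in this picture is translating a friendly URL into the query string; the endpoint and the client-side RDF library do everything else.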

However, it is possible for a consumer of the published data to work 
without any of those toolkits. They can get started with simple web 
GETs, using easy-to-understand parameters, and can consume the data with 
standard JSON and XML tools. As they get comfortable with the data, the 
generic data model and the specific vocabularies involved in the 
sources, migrating to the full power of SPARQL and the RDF APIs should 
be easier. The API exposes you to the structure of the data and makes it 
easy to play around with, giving you a motivation to get to grips with 
the more powerful tools.
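For instance, a consumer with nothing but a JSON library could work with a response along these lines. The exact simple-JSON shape is defined by the API configuration, so treat this structure (and the "_about" key) as an assumption for illustration:

```python
import json

# Illustrative response body for a one-item result list; the real shape
# depends on how the deployment configures its simple JSON serialisation.
body = """
{
  "result": {
    "items": [
      {"_about": "http://example.org/id/school/42",
       "label": "Example Primary School",
       "type": "School"}
    ]
  }
}
"""

data = json.loads(body)
for item in data["result"]["items"]:
    # Plain key/value access: no RDF toolkit, no query language required.
    print(item["label"], "->", item["_about"])
```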

> Which in turn would also be a further
> introduction to Linked Data. Furthermore, SPARQL+JSON is really easy to
> decode and use.

Sure, if you already understand the modelling behind it. However, to 
someone who doesn't (and isn't yet motivated to do so) it can appear a 
little arcane.
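For contrast, here is a minimal document in the standard SPARQL query results JSON format. The head/vars/results/bindings indirection, and the per-cell {"type", "value"} wrapping, are exactly the sort of thing a newcomer has to absorb before the data becomes usable:

```python
import json

# A minimal SPARQL query results JSON document (standard W3C format).
sparql_json = """
{
  "head": {"vars": ["s", "label"]},
  "results": {
    "bindings": [
      {"s": {"type": "uri", "value": "http://example.org/id/school/42"},
       "label": {"type": "literal", "value": "Example Primary School"}}
    ]
  }
}
"""

doc = json.loads(sparql_json)
# Unwrap each binding's {"type": ..., "value": ...} cells into flat rows.
rows = [{var: b[var]["value"] for var in doc["head"]["vars"] if var in b}
        for b in doc["results"]["bindings"]]
print(rows)
```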

We were also very keen to ensure that the essence of RDF is preserved 
through the API, in particular its resource/thing-centric and schemaless 
nature. A danger of saying "just use SPARQL+JSON" is not just the 
learning curve but the rigidity of it. SPARQL DESCRIBE and CBD (Concise 
Bounded Descriptions) are wonderful things, and the API makes it easy to 
get descriptions and discover what is possible when your data linking 
isn't limited by a rigid schema.
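As an illustration of that schemaless, resource-centric view: a description is simply whatever properties the data happens to hold about a resource, with no fixed set of columns. The triples and property names below are made up for the sketch; in practice they would come from a DESCRIBE query against the endpoint.

```python
# Toy triples about one resource; URIs and properties are invented.
triples = [
    ("http://example.org/id/school/42", "rdfs:label", "Example Primary School"),
    ("http://example.org/id/school/42", "ex:phase", "primary"),
    ("http://example.org/id/school/42", "ex:phase", "infant"),
]

def describe(subject, triples):
    """Aggregate all triples about one subject into a property -> values
    map. No schema: whichever properties the data holds simply appear,
    and multi-valued properties accumulate naturally."""
    desc = {}
    for s, p, o in triples:
        if s == subject:
            desc.setdefault(p, []).append(o)
    return desc

print(describe("http://example.org/id/school/42", triples))
```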

Dave

Received on Thursday, 25 February 2010 21:39:30 UTC