
Re: APIdeas

From: Keith Alexander <k.j.w.alexander@gmail.com>
Date: Wed, 20 Feb 2008 22:16:39 -0000
To: "Jack Rusher" <jack@rusher.com>, public-rdf-ruby@w3.org
Message-ID: <op.t6uf51zyzdej1c@polar-bear>

On Wed, 20 Feb 2008 15:15:04 -0000, Jack Rusher <jack@rusher.com> wrote:

>    Things have been disappointingly quiet since this went out:
>
> On 10 Feb, 2008, at 23:36, cdr wrote:
>> 1: abstract-concept orientation
>> 2: RDF schema / class-orientation
>> 3: one class to rule them all (jQuery style)
>> 4. RDF _is_ the language
>
> ... do we lack preferences here?

I'd guess there is room for a variety of approaches. What I'd like  
personally (for what it's worth) is simply a decent native Ruby RDF/XML  
parser that passes all the tests and exposes the parsed triples as a hash.

At this point, it might be worth my pointing out that in PHP, the RDF APIs  
of ARC and Drupal, as well as the PHP code we write at Talis (much of  
which is, or will be, open-sourced), are converging on exposing data in the  
structure defined at http://n2.talis.com/wiki/RDF_JSON_Specification
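To make that concrete, here is a sketch of how that structure maps onto a native Ruby hash — nesting follows the spec linked above (subject => predicate => array of value objects), but the example triple itself is invented:

```ruby
# One triple in the RDF/JSON shape: subject URI => predicate URI => array of
# object hashes, each with "value" and "type" ("uri", "literal", or "bnode"),
# plus optional "lang" or "datatype" keys.
graph = {
  "http://example.org/about" => {
    "http://purl.org/dc/elements/1.1/title" => [
      { "value" => "Anna's Homepage", "type" => "literal", "lang" => "en" }
    ]
  }
}

# Reading a value back out is plain hash access:
title = graph["http://example.org/about"] \
             ["http://purl.org/dc/elements/1.1/title"].first["value"]
```

Note the objects sit in an array because a subject can have several values for the same predicate.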

I know you weren't that keen on the proposal cdr ;) But I see a lot of  
value in converging on a common structure for RDF across libraries and  
across scripting languages. For one thing, it's easier for developers to  
get to grips with the structure if they've come across it before in other  
libraries - and they will already be familiar with the various patterns of  
iteration and conditionals to get at the data they need.
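Those iteration patterns really are just nested hash walks. A sketch, assuming the RDF/JSON-style hash shape (the `each_triple` helper name is my own invention):

```ruby
# Walk every triple in an RDF/JSON-style hash, yielding subject, predicate,
# and the object hash for each one.
def each_triple(graph)
  graph.each do |subject, predicates|
    predicates.each do |predicate, objects|
      objects.each { |object| yield subject, predicate, object }
    end
  end
end

graph = {
  "http://example.org/a" => {
    "http://example.org/p" => [
      { "value" => "one", "type" => "literal" },
      { "value" => "http://example.org/b", "type" => "uri" }
    ]
  }
}

# Count the triples; this graph holds two objects for one predicate.
count = 0
each_triple(graph) { |s, p, o| count += 1 }
# count is now 2
```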

Also, it makes it easier to plug different components together if they  
share (and expose) the same internal data structures. If component1 has an  
arbitrarily different internal structure from component2, then you need to  
serialise the data going out of component1 and parse it again going into  
component2, whereas if they use the same structure, you can simply pass  
the data from one to the other. Less code, more performance :)

>    Another question: are we, in the first pass, more concerned with  
> making it easier to do network queries against remote SPARQL endpoints  
> or providing infrastructure for local RDF stores?  Most of my current  
> use cases are of the latter variety, but I see much opportunity for  
> mash-up goodness using the former.

I'm interested in retrieving data over the web. Not just SPARQL, but  
linked data etc. I quite like the approach taken by trio in Python, where  
you give it a URI, and it will do whatever content negotiation, GRDDLing,  
etc. is necessary to get you back triples (http://inamidst.com/sw/trio/),  
but I'd also like something that deals with HTTP transparently, so you can  
get at the response code and headers and things. I quite like the idea of  
being able to pass callbacks that would trigger on certain response codes ...
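That callback idea might look something like this in Ruby - purely a sketch, and the method names (`dispatch_response`, `fetch_with_callbacks`) are invented for illustration:

```ruby
require "net/http"
require "uri"

# Pick and run the callback registered for a given HTTP status code,
# falling back to a :default handler if one is supplied.
def dispatch_response(code, callbacks)
  handler = callbacks[code] || callbacks[:default]
  handler ? handler.call(code) : nil
end

# Fetch a URI with the standard library, then hand the status code to
# whichever callback matches. The raw response is still returned, so
# headers and body stay accessible.
def fetch_with_callbacks(uri, callbacks = {})
  response = Net::HTTP.get_response(URI.parse(uri))
  dispatch_response(response.code.to_i, callbacks)
  response
end

# Usage sketch (not run here, since it hits the network):
# fetch_with_callbacks("http://example.org/data",
#   303      => ->(code) { puts "redirected - follow Location header" },
#   404      => ->(code) { warn "not found" },
#   :default => ->(code) { puts "got #{code}" })
```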

As for styles of RDF model API, I do like the jQuery style myself, if  
anyone wants to write one like that :)
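For anyone tempted: a jQuery-style API over the hash structure above might chain filters like this. All names here (`GraphQuery`, `subject`, `predicate`, `values`) are invented; it's just a sketch of the chaining style, not a proposal:

```ruby
# A minimal chainable query over an RDF/JSON-style hash: each filter
# narrows an internal triple list and returns self, jQuery-fashion.
class GraphQuery
  def initialize(graph)
    @triples = graph.flat_map do |s, predicates|
      predicates.flat_map { |p, objects| objects.map { |o| [s, p, o] } }
    end
  end

  def subject(uri)
    @triples = @triples.select { |s, _, _| s == uri }
    self # returning self is what makes the calls chainable
  end

  def predicate(uri)
    @triples = @triples.select { |_, p, _| p == uri }
    self
  end

  def values
    @triples.map { |_, _, o| o["value"] }
  end
end

graph = {
  "http://example.org/a" => {
    "http://example.org/name" => [{ "value" => "Anna", "type" => "literal" }],
    "http://example.org/age"  => [{ "value" => "30",   "type" => "literal" }]
  }
}

names = GraphQuery.new(graph)
                  .subject("http://example.org/a")
                  .predicate("http://example.org/name")
                  .values
# names is ["Anna"]
```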


Keith Alexander
Received on Wednesday, 20 February 2008 22:17:32 UTC
