Re: Object/triple mapping as one building block for software development

Hi Olivier,

Up to now, the SPARQL endpoint needs to be instantiated/configured 
explicitly. However, integrating, e.g., the Semantic Web Client 
Library [1] should be possible, as well as making use of VoID descriptions.

Regarding inheritance from RDFResource: this is not mandatory for OTMj 
to work. Especially if you are after rendering RDF from OO data (work in 
progress), annotating your existing classes and registering them in one 
line of code should be enough, without any inheritance requirements. I 
just got used to inheriting interfaces from RDFResource so I can access 
rdfs:Resource-related properties (label, comment, seeAlso, ...) in 
sub-interfaces.
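To make the two options concrete, here is a minimal sketch of what annotation-based mapping without inheritance could look like. The annotation names (@RdfClass, @RdfProperty) and the reflection-based lookup are illustrative assumptions, not OTMj's actual API:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical mapping annotations standing in for a framework's metadata.
@Retention(RetentionPolicy.RUNTIME)
@interface RdfClass { String value(); }

@Retention(RetentionPolicy.RUNTIME)
@interface RdfProperty { String value(); }

// A plain domain interface: no RDFResource super-interface required.
@RdfClass("http://xmlns.com/foaf/0.1/Person")
interface Person {
    @RdfProperty("http://xmlns.com/foaf/0.1/name")
    String getName();
}

class RegistrationSketch {
    public static void main(String[] args) {
        // The "one line of code" registration step would hand the annotated
        // interface to the mapper, e.g. Mapper.register(Person.class).
        // Here we just read the mapping metadata back via reflection.
        RdfClass meta = Person.class.getAnnotation(RdfClass.class);
        System.out.println(meta.value());
    }
}
```

With this style the domain interface stays free of framework super-types; the trade-off is that rdfs:Resource properties (label, comment, seeAlso) are no longer inherited and would have to be declared or mixed in where needed.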

I think whether or not to use inheritance from RDFResource is up to the 
specific use case (e.g., consuming vs. publishing linked data), but it 
would be nice to find out whether one solution is preferable to the other.



Olivier Rossel wrote:
> How do you specify which SPARQL endpoint should be queried to retrieve
> (for example) a set of Persons?
> In my opinion, inheriting from RDFResource is bad design.
> In Elmo, for example, you register the mapped interfaces via a simple
> statement at the beginning of your program,
> so your interfaces are not polluted by that specific superinterface.
> Could it be done in OTM-J ?
> On Wed, May 6, 2009 at 8:11 PM, Matthias Quasthoff
> <> wrote:
>> Hi all,
>> in order to simplify integration of linked data in our software projects, we
>> translated standard object-relational mapping patterns to object/triple
>> mapping. The key idea is to help software developers by letting them do
>> *everything* by means of their programming language. That is, developers
>> shouldn't even have to know that there is something like nodes, triples or
>> HTTP (for simple applications, at least).
>> Our first version of a Java implementation can be found here:
>> So far, exposing linked data as Java objects and querying SPARQL endpoints
>> works. SPARQL queries are not specified as strings but can be created using
>> the OO data model (i.e. field names instead of RDF property names, Java
>> locales instead of string language tags etc.).
>> We also hope to be able to integrate different types of policies (trust,
>> privacy, and data licenses), as well as some kind of provenance information.
>> Such a framework automating triple handling could easily relieve developers
>> of recurring implementation tasks regarding any kind of policy. We think
>> that in most cases, a standard solution will be appropriate and the
>> respective standards will be more appealing to developers if they can find a
>> simple way to use them.
>> I know that there is Sommer and RDFReactor and more, and I hope we'll be
>> able to achieve some kind of compatibility between our approaches. But I
>> truly think that handling linked data requires something like design
>> patterns describing when and where to handle which kind of data, policies, etc. I hope
>> we can join some kind of discussion in this direction, and that somebody
>> will find our work useful for their project.
>> Best regards from Berlin,
>> Matthias
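The query construction described in the announcement above, with Java field names standing in for RDF property URIs and Java locales standing in for language tags, could look roughly like this. The sparqlFor helper and the FOAF namespace mapping are illustrative assumptions, not OTMj's actual API:

```java
import java.util.Locale;

// Sketch of deriving a SPARQL query from OO-level terms: a field name is
// mapped to an RDF property URI, and a java.util.Locale to a language tag.
class QuerySketch {
    static String sparqlFor(String fieldName, Locale locale) {
        // A real mapper would look the property URI up from annotations;
        // here we assume a fixed FOAF namespace for illustration.
        String property = "http://xmlns.com/foaf/0.1/" + fieldName;
        String lang = locale.getLanguage(); // e.g. Locale.GERMAN -> "de"
        return "SELECT ?s WHERE { ?s <" + property + "> ?o . "
             + "FILTER(lang(?o) = \"" + lang + "\") }";
    }

    public static void main(String[] args) {
        System.out.println(sparqlFor("name", Locale.GERMAN));
    }
}
```

The point of the indirection is that developers only ever touch Java names and types; the triple- and HTTP-level details stay inside the mapping layer.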

Received on Thursday, 7 May 2009 09:42:46 UTC