Re: Failed to port datastore to RDF, will go Mongo

Hi all, 

First off, thanks a lot for the many comments and advice this thread has received.

On Nov 24, 2010, at 11:44 PM, Toby Inkster wrote:

> On Wed, 24 Nov 2010 18:12:50 +0000
> Ben O'Steen <bosteen@gmail.com> wrote:
> 
>> That's not the point that is being made. A competent developer, using
>> all the available links and documentation, spending days researching
>> and learning and trying to implement, is unable to make an app using a
>> triplestore that is on a par with one they can create very quickly
>> using a relational database.
> 
> Or, to put a different slant on it: a competent developer who has spent
> years using SQL databases day-to-day finds it easier to use SQL and the
> relational data model than a different data model and different query
> language that he's spent a few days trying out.

That is probably a fair description of my position. (Although I'm now using a database I've only known for 4 months and that is definitely not relational.) I want to finish this particular project by the end of next week, so I decided to default back to technologies that are more familiar to me and where it seemed easier to select the required components. Have a running prototype now :-)  

Anyway, I'd like to raise some additional points for the future: 

1. I'd like to get a better picture of who is currently developing end-user open government data applications based on linked data. Given that there is a massive push towards releasing OGD as LD, I'd be eager to find out who is consuming it, and in what kinds of (user-facing) contexts, especially regarding government transparency. More precisely: is RDF used primarily as an interchange format, or are there many people actively running sites on it? 

2. (Trying to figure out the intended process:) Several people have suggested that I should iteratively develop a mapping of the data to RDF, starting with an entirely independent ontology and then incrementally adopting other vocabularies. While this seems fine in theory, I'm curious how it works in practice: wouldn't I a) break things for anyone consuming my data via REST and dumps, and b) have to refactor my loaders, search indexing, templates, ... essentially every part of the system that touches the data, for each change? Or would you recommend duplication? (See the sketch below for the kind of mapping I have in mind.)
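
To make that a bit more concrete, here is roughly the kind of mapping layer I imagine. This is a minimal Python/rdflib sketch, not code from my project; the namespace, class and field names are invented for illustration:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Hypothetical home-grown vocabulary used in the first iteration.
EX = Namespace("http://example.org/vocab#")

def map_entry(record):
    """Translate one internal record (a dict) into RDF triples."""
    g = Graph()
    s = URIRef("http://example.org/entry/%s" % record["id"])
    g.add((s, RDF.type, EX.Entry))
    # Iteration 1: property from the independent ontology.
    g.add((s, EX.title, Literal(record["title"])))
    # Iteration 2: the same field re-mapped to a shared vocabulary
    # (Dublin Core). The swap only touches this module, but it still
    # changes the triples that dumps and REST clients see.
    g.add((s, DCTERMS.title, Literal(record["title"])))
    return g

if __name__ == "__main__":
    print(map_entry({"id": "42", "title": "Example entry"}).serialize(format="turtle"))

Even if the mapping is isolated in one module like this, every vocabulary change still alters the output that downstream consumers depend on, which is exactly the part I'm unsure about.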

Thanks a lot, 

 Friedrich 
