- From: Dan Brickley <danbri@danbri.org>
- Date: Mon, 14 Dec 2009 11:17:37 +0100
- To: Jeni Tennison <jeni@jenitennison.com>
- Cc: Richard Light <richard@light.demon.co.uk>, Linked Data community <public-lod@w3.org>, John Sheridan <John.Sheridan@nationalarchives.gsi.gov.uk>
On Mon, Dec 14, 2009 at 10:37 AM, Jeni Tennison <jeni@jenitennison.com> wrote:

> Richard,
>
> My opinion, based on the reactions that I've seen from enthusiastic,
> hard-working developers who just want to get things done, is that we (the
> data.gov.uk project in particular, linked data in general) are not providing
> them what they need.
>
> We can sit around and wait for other people to provide the simple,
> light-weight interfaces that those developers demand, or we can do it
> ourselves. I can predict with near certainty that if we do not do it
> ourselves, these developers will not use the linked data that we produce:
> they will download the original source data which is also being made
> available to them, and use that.
>
> We, here, on this list, understand the potential power of using linked data.
> The developers who want to use the data don't. (And the publishers producing
> the data don't.) We simply can't say "but they can just build tools", "they
> can just use SPARQL". They are not going to build bridges to us. We have to
> build bridges to them.
>
> My opinion.

Opinion, sure. But absolutely correct, too!

(Excuse me if a small rant is triggered by all this...)

Why, twelve years, two months and twelve days after
http://www.w3.org/TR/WD-rdf-syntax-971002/ was first published, do we not
have well-packaged, maintained and fully compliant RDF parsers available in
every major programming language? And that is just the smallest critical
piece of software needed to do anything useful.

Short answer: because people from these mailing lists didn't sit down and do
the work. We waited for someone else to do it. Some of us did bits of it,
but ... taken as a whole, there are still plenty of basic pieces unfinished,
in various languages.
Millions upon millions of euros and dollars have been spent on Semantic this
and Semantic that, and now Linked this and Linked that; countless
conferences, workshops and seminars, PDFs, PPTs and so on; but still such
basic software components haven't been finished, polished, tested and
distributed.

I'm not speaking ill of anyone in particular here. Countless folk have
worked hard and tirelessly to advance the state of the art and get tools
matured and deployed. But there is plenty more to do.

I do fear that the structure of both academic and research (eg. EU) funding
doesn't favour the kind of work and workplan we need. In the SWAD-Europe EU
project we were very unusual in having explicit funding and plans that
allowed, for example, Dave Beckett to work not only on the RDF Core
standards but on their open-source implementation in C; or Jan Grant and
Dave to work on the RDF Test Cases; or Alistair Miles to take SKOS from a
rough idea to something that's shaking up the whole library world. I wish
that kind of funding were easy to come by, but it's not.

A lot of the work we need to get done around here to speed up progress is
pretty boring stuff. It's not cutting-edge research, nor the core of a
world-changing startup, nor a good topic for a PhD. With every passing year
the RDF tools do get a bit better, but the old code also rots a bit, and
new things come along that need supporting (GRDDL, RDFa etc.).

What can be done in the SemWeb and Linked Data scene so that improving our
core tooling becomes a bigger part of people's real day jobs? Are the
resources already out there but poorly coordinated? Would some lightweight
collective project management help? Are there things (eg. finalising a Ruby
parser toolkit) that are weekend-sized jobs, or month-sized jobs? Do they
look more like MSc student summer projects or EU STREP / IP projects in
scale? Could we do more by simply transliterating code between languages? ie.
if something exists in Python it can be converted to Ruby, or vice-versa?
Are funded grants available (eg. JISC in the UK?) that would help polish,
package, test and integrate basic entry-level RDF / linked data software
tools?

Back on the original thread: so far I am talking here only about core RDF
tools, eg. having a basic RDF-to-triples facility reliably available in
some language of choice. As Jeni emphasises, there are lots of other pieces
of bridging technology needed (eg. into modern JSON idioms). But if we are
hoping to convert folk to pure generic RDF tools, we'd better make sure
they're in good shape. Some are, some aren't, and that lumpy experience can
easily turn people away...

cheers,

Dan
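[Editorial aside: to make the "basic RDF-to-triples facility" Dan mentions concrete, here is a toy sketch in Python — illustrative only. It parses only the simplest N-Triples lines (IRI subject/predicate, IRI or plain-literal object); production parsers such as Raptor or rdflib additionally handle blank nodes, datatyped and language-tagged literals, escape sequences and proper error recovery. All names here are invented for the example.]

```python
import re

# Toy N-Triples parser: the smallest possible "RDF to triples" facility.
# Covers only <iri> <iri> <iri-or-"literal"> .  lines, to show how small
# the core idea is compared to a full, conformant parser.
TRIPLE = re.compile(
    r'^<([^>]*)>\s+'            # subject IRI
    r'<([^>]*)>\s+'             # predicate IRI
    r'(?:<([^>]*)>|"([^"]*)")'  # object: IRI or plain literal
    r'\s*\.\s*$'
)

def parse_ntriples(text):
    """Yield (subject, predicate, object) tuples from N-Triples lines."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        m = TRIPLE.match(line)
        if m is None:
            raise ValueError("unparsable line: %r" % line)
        s, p, o_iri, o_lit = m.groups()
        yield (s, p, o_iri if o_iri is not None else o_lit)

doc = '<http://example.org/a> <http://xmlns.com/foaf/0.1/name> "Alice" .'
print(list(parse_ntriples(doc)))
# -> [('http://example.org/a', 'http://xmlns.com/foaf/0.1/name', 'Alice')]
```

Even this trivial case takes care to distinguish IRI objects from literal objects — a hint at why finishing, testing and packaging a real parser is the unglamorous, necessary work the thread is about.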
Received on Monday, 14 December 2009 10:18:18 UTC