
Re: how to consume linked data

From: John Graybeal <graybeal@marinemetadata.org>
Date: Sun, 27 Sep 2009 11:58:59 -0700
Cc: Semantic Web <semantic-web@w3.org>, public-lod@w3.org
Message-Id: <F107303C-4811-4D9B-A15B-AFD39195A576@marinemetadata.org>
To: Olaf Hartig <hartig@informatik.hu-berlin.de>

I find this answer valuable, but unsatisfying.  To me this is the  
fundamental weak spot in the whole chain of semantic web/linked data.

I do appreciate the tremendous flexibility, generality, simplicity,
novelty, and cool factor in the semantic web/linked data frameworks.
But once you have done everything you can with that, effective
interoperability still requires that people doing similar things
(i.e., making similar resources) build and label them in known,
compatible ways.

I think it is entirely analogous to "folksonomy searching" (e.g.,
Google searches of free text, more or less) vs. "controlled
vocabulary searching" (e.g., using metadata standards with controlled
vocabularies).  At scale, the former will stay in the lead and be
increasingly powerful; but the latter will always be necessary for
more deterministic, consistent, and targeted results.  Well, at least
until computers are Really, Really smart.

John

On Sep 26, 2009, at 3:08 AM, Olaf Hartig wrote:

> Hey Danny,
>
> On Friday 25 September 2009 22:51:37 Danny Ayers wrote:
>> 2009/9/25 Juan Sequeda <juanfederico@gmail.com>:
>>> Linked Data is out there. Now it's time to develop smart  
>>> (personalized)
>>> software agents to consume the data and give it back to humans.
>>
>> I don't disagree, but I do think the necessary agents aren't smart,
>> just stupid bots (aka Web services a la Fielding).
>
> These "stupid bots" are able to discover and make use of data from a  
> wide
> variety of sources on the Web. I'm still convinced this allows  
> applications of
> an interesting novelty. And let's not forget, these applications  
> enable users
> to retain full control over the authoritative source of data  
> provided by them.
> This is a big step.
>
> It is more a question of why so few of these applications have
> emerged yet. I agree with Kjetil here. We are missing tools that
> bring developers (who don't know all the technical details) on
> board. One possible approach to this is:
>
>>> try also using SQUIN (www.squin.org)
>>
>> Thanks, not seen before.
>
> ... which is a query service (currently still in pre-alpha) that is
> based on the functionality of the SemWeb Client Lib. An application
> simply sends a SPARQL query. This query is executed over the Web of
> Linked Data using the link traversal query execution approach as
> implemented in the SemWeb Client Lib. The result is returned to the
> app, which may visualize or process it. Hence, the app developer
> does not need to bother with traversing RDF links, RDF/XML vs.
> RDFa, etc.
>
> Another important issue in consuming LD is the filtering of data, as
> you mention in your original question. Indeed, we need approaches
> for filtering automatically during the discovery of data.
> Unfortunately, for many filter criteria (e.g. reliability,
> timeliness, trustworthiness) we do not even know very well how we
> might filter automatically, given we have the data.
>
> Greetings,
> Olaf
>
>
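[For readers unfamiliar with the pattern Olaf describes, here is a
minimal sketch of the client side. The endpoint URL and the example
query are hypothetical; it assumes the service accepts standard
SPARQL-protocol HTTP GET requests and can return results in the
SPARQL JSON results format.]

```python
# Sketch of an application querying a SQUIN-style SPARQL service.
# Assumptions: the endpoint URL is hypothetical, and the service
# accepts SPARQL-protocol GET requests with JSON results.
import json
import urllib.parse
import urllib.request

def build_query_url(endpoint, sparql_query):
    """Encode a SPARQL query as a SPARQL-protocol GET request URL."""
    params = urllib.parse.urlencode({"query": sparql_query})
    return f"{endpoint}?{params}"

# A hypothetical query; the link traversal executed behind the
# endpoint is what lets it touch multiple Linked Data sources.
QUERY = """
SELECT ?name WHERE {
  ?person <http://xmlns.com/foaf/0.1/knows> ?friend .
  ?friend <http://xmlns.com/foaf/0.1/name> ?name .
}
"""

def run_query(endpoint, sparql_query):
    """Send the query and parse the SPARQL JSON results format."""
    url = build_query_url(endpoint, sparql_query)
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)
    # Each binding maps variable names to {"type": ..., "value": ...}.
    return [b["name"]["value"] for b in results["results"]["bindings"]]
```

[The app never touches RDF links, content negotiation, or RDF/XML
vs. RDFa itself; it only builds the request and reads the result
bindings, which is the division of labor Olaf describes.]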


---------------
John Graybeal
Marine Metadata Interoperability Project: http://marinemetadata.org
graybeal@marinemetadata.org
Received on Sunday, 27 September 2009 19:00:08 UTC
