- From: Kingsley Idehen <kidehen@openlinksw.com>
- Date: Thu, 24 Sep 2009 08:02:24 -0400
- To: Danny Ayers <danny.ayers@gmail.com>
- CC: Semantic Web <semantic-web@w3.org>, public-lod@w3.org
Danny Ayers wrote:
> The human reading online texts has a fair idea of what is and what
> isn't relevant, but how does this work for the Web of data? Should we
> have tools to just suck in any nearby triples, drop them into a model,
> assume that there's enough space for the irrelevant stuff, filter
> later?
>
> How do we do (in software) things like directed search without the human agent?
>
> I'm sure we can get to the point of - analogy - looking stuff up in
> Wikipedia & picking relevant links, but we don't seem to have the user
> stories for the bits linked data enables. Or am I just
> imagination-challenged?
>
> Cheers,
> Danny.

I think users have to discover, comprehend, and then exploit (consume or extend the reference chain). This is the vital sequence.

FWIW, here is how I tell the story to general observers:

Today, you put a resource URL in your browser and get either of the following:

- Rendered Page
- Markup behind the Page

Linked Data simply adds the ability to see a resource description (metadata). The description honors the Web's core architecture by providing links for each component of the description. That's it. All the other smart stuff simply happens behind the scenes and shows up in the resource description.

--

Regards,

Kingsley Idehen
Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
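P.S. In code terms, "seeing the resource description" amounts to dereferencing the same URL a browser renders, but asking for RDF instead of HTML. Below is a minimal sketch under illustrative assumptions: the DBpedia URI and the text/turtle media type are only examples, and it presumes a server that performs content negotiation; any Linked Data endpoint that does so behaves the same way.

    import urllib.request

    def describe(resource_uri):
        # Ask the very same URI a browser would render as a page for a
        # machine-readable description instead, via the Accept header.
        req = urllib.request.Request(resource_uri,
                                     headers={"Accept": "text/turtle"})
        # Many Linked Data servers answer with a 303 redirect to the
        # description document; urllib follows the redirect automatically.
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")

    if __name__ == "__main__":
        # Each triple in the returned description links onward to further
        # URIs, which can be dereferenced the same way -- the reference
        # chain that users discover, comprehend, and then exploit.
        print(describe("http://dbpedia.org/resource/Linked_data"))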
Received on Thursday, 24 September 2009 12:03:02 UTC