how to consume linked data

A human reading texts online has a fair idea of what is and isn't
relevant, but how does this work for the Web of data? Should we have
tools that just suck in any nearby triples, drop them into a model,
assume there's enough room for the irrelevant stuff, and filter later?
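
Roughly what I mean, as a quick Python/rdflib sketch - the starting
URI and the predicates kept at the end are only placeholders:

import rdflib

# Naive approach: dereference a URI, pull everything nearby into one
# model, and worry about relevance afterwards.
g = rdflib.Graph()
g.parse("http://dbpedia.org/resource/Berlin")  # placeholder starting point

# "Filter later": keep only triples whose predicate we decide we care about.
KEEP = {rdflib.RDFS.label,
        rdflib.URIRef("http://dbpedia.org/ontology/country")}
relevant = [(s, p, o) for s, p, o in g if p in KEEP]
print(len(g), "triples loaded;", len(relevant), "look relevant")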

How do we do things like directed search in software, without a human agent in the loop?
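
Maybe something like a scored follow-your-nose crawl? Another rough
rdflib sketch; the predicate scores, the seed and the depth limit are
all made up here:

import rdflib
from collections import deque

# Only follow links along predicates we score as relevant (invented scores).
SCORES = {
    rdflib.URIRef("http://dbpedia.org/ontology/birthPlace"): 1.0,
    rdflib.URIRef("http://www.w3.org/2000/01/rdf-schema#seeAlso"): 0.6,
    rdflib.URIRef("http://www.w3.org/2002/07/owl#sameAs"): 0.3,
}

def directed_crawl(seed, max_depth=2, threshold=0.5):
    model = rdflib.Graph()
    seen = {seed}
    queue = deque([(rdflib.URIRef(seed), 0)])
    while queue:
        uri, depth = queue.popleft()
        try:
            model.parse(str(uri))       # dereference, merge into the model
        except Exception:
            continue                    # dead or non-RDF link: skip it
        if depth >= max_depth:
            continue
        # follow only outgoing links whose predicate scores above threshold
        for _, p, o in model.triples((uri, None, None)):
            if isinstance(o, rdflib.URIRef) and SCORES.get(p, 0.0) >= threshold:
                if str(o) not in seen:
                    seen.add(str(o))
                    queue.append((o, depth + 1))
    return model

model = directed_crawl("http://dbpedia.org/resource/Tim_Berners-Lee")
print(len(model), "triples gathered")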

I'm sure we can get to the point of (by analogy) looking stuff up in
Wikipedia and picking the relevant links, but we don't seem to have the
user stories for the bits that linked data enables. Or am I just
imagination-challenged?
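
For instance, the machine-facing version of that Wikipedia analogy
might look something like this - DBpedia standing in for Wikipedia,
and the "pick the relevant links" part reduced to a made-up heuristic,
just counting which predicates carry the most outgoing links:

from collections import Counter
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    SELECT ?p ?o WHERE {
      <http://dbpedia.org/resource/Tim_Berners-Lee> ?p ?o .
      FILTER(isIRI(?o))
    } LIMIT 200
""")
rows = sparql.query().convert()["results"]["bindings"]

# crude "relevance": which predicates carry the most outgoing links?
by_predicate = Counter(r["p"]["value"] for r in rows)
for pred, count in by_predicate.most_common(5):
    print(count, pred)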

Cheers,
Danny.

-- 
http://danny.ayers.name
