Re: connections

On 18/04/2010 08:09, "Danny Ayers" <danny.ayers@gmail.com> wrote:

> Hugh, I don't disagree with what you are saying, but I would like to
> point out that whether things are fit for purpose depends on the
> purpose. There is no way the web will ever be 100% reliable; the
> tools we use to interact with it have to take that into account.
Always nice to have a discussion exploring things we agree on.

I realise that my use of 'reliable' may have misled: I was not talking
about network reliability, but about link and coreference reliability.
To put it another way: can I move through a few of the circles, following
the arrows, and end up finding something that I expect and that is useful?
For example, long ago, the first thing I tried was to get photos of people
by going from DBpedia to flickr wrappr - I just couldn't do it, because
although the photo was usually of someone with the same name, it wasn't the
right person.
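
(To make concrete the kind of hop I have in mind, here is a minimal sketch
in Python against the public DBpedia SPARQL endpoint. The endpoint URL, the
requests library and the choice of owl:sameAs are my assumptions, not a
recipe anyone has endorsed.)

    # Sketch: ask DBpedia which resources a person is declared owl:sameAs
    # with, then dereference each one. Whether what comes back actually
    # describes the same person is exactly the coreference question.
    import requests

    ENDPOINT = "http://dbpedia.org/sparql"  # assumed public endpoint
    QUERY = """
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?same WHERE {
      <http://dbpedia.org/resource/Tim_Berners-Lee> owl:sameAs ?same .
    }
    """

    resp = requests.get(
        ENDPOINT,
        params={"query": QUERY, "format": "application/sparql-results+json"},
        timeout=30,
    )
    for binding in resp.json()["results"]["bindings"]:
        uri = binding["same"]["value"]
        # Follow the link and see what comes back.
        linked = requests.get(uri, headers={"Accept": "application/rdf+xml"},
                              timeout=30)
        print(uri, linked.status_code, linked.headers.get("Content-Type"))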

But I do think we strongly agree that the challenge is to consume the
data - I just think that when that happens more extensively, people will
discover that the data is worse than they think.
And yes, fit for purpose can only be judged in use for that purpose.

Best
Hugh
> 
> 
> On 18 April 2010 01:14, Hugh Glaser <hg@ecs.soton.ac.uk> wrote:
>> Hi,
>> 
>> Sorry, you cannot disprove a hypothesis by stating (or even proving) another
>> one.
>> Yes, I know the consumption of Linked Data systems is not great, and that is
>> at least a problem.
>> And I realise that the topic is consumption, which is great, and the most
>> important challenge at the moment.
>> 
>> But this statement of faith that the data is there, good, and fit for purpose
>> (I am an engineer) needs to be backed up with some hard evidence.
>> Until it is being used, you actually can't tell.
>> So yes, we need tools to consume, and that will disprove (hopefully) the idea
>> that the data is not fit for purpose.
>> (Danny says in the next post "we have the raw data I'm sure" - is he right?
>> Does anyone actually know?)
>> 
>> However, I have to say that my experience of our systems, which consume a
>> lot of Linked Data from the unbounded Web of Data, suggests that a lot of
>> it is not fit for purpose; for example, try following links across the LOD
>> cloud and see how far you get in a reliable fashion.
>> 
>> Best
>> Hugh
>> 
>> On 17/04/2010 18:46, "adasal" <adam.saltiel@gmail.com> wrote:
>> 
>>> Hugh,
>>> One hypothesis is that the data is not good.
>>> The other, the one under discussion, is that there is not sufficient
>>> familiarity with the means by which the data can be consumed -
>>> 'sufficient familiarity' being both vertical and horizontal.
>>> Mixed in is the idea that the right means may not yet exist, on more or
>>> less two levels: the underlying engines, which I think most agree are
>>> sufficiently there, and the tools on top, which I think most agree are
>>> not - though I don't think anyone would underestimate the difficulty
>>> associated with tooling.
>>> One thing about tools is that they funnel in from broad usage to a
>>> specific purpose, so much depends on what one is trying to do.
>>> 
>>> But I placed my reply after Kingsley's as he references one such
>>> application.
>>> 
>>> On 17 April 2010 18:36, Kingsley Idehen <kidehen@openlinksw.com> wrote:
>>>> Danny Ayers wrote:
>>>>> On 16 April 2010 19:29, greg masley <roxymuzick@yahoo.com> wrote:
>>>>> 
>>>>>> What I want to know is: does anybody have a method yet to successfully
>>>>>> extract data from Wikipedia using DBpedia? If so, please email the
>>>>>> procedure to greg@masleyassociates.com
>>>>>> 
>>>>> 
>>>>> That is an easy one - the URIs are parallel, so you can get the pointer
>>>>> from DBpedia and get into Wikipedia. Then you do your stuff.
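
(To illustrate the URI parallel Danny describes - a sketch only; the string
handling below is my own assumption rather than any documented API.)

    # The local name of a DBpedia resource URI is the Wikipedia article
    # title, so the two can be mapped back and forth by simple string work.
    def dbpedia_to_wikipedia(dbpedia_uri: str) -> str:
        title = dbpedia_uri.rsplit("/", 1)[-1]  # e.g. "Semantic_Web"
        return "http://en.wikipedia.org/wiki/" + title

    def wikipedia_to_dbpedia(wikipedia_url: str) -> str:
        title = wikipedia_url.rsplit("/", 1)[-1]
        return "http://dbpedia.org/resource/" + title

    print(dbpedia_to_wikipedia("http://dbpedia.org/resource/Semantic_Web"))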
>>>>> 
>>>>> I'll let Kingsley explain.
>>>>> 
>>>>> 
>>>> Greg,
>>>> 
>>>> Please add some clarity to your quest.
>>>> 
>>>> The DBpedia project comprises:
>>>> 
>>>> 1. Extractors for converting Wikipedia content into structured data
>>>> represented in a variety of RDF-based data representation formats
>>>> 2. A live instance with the extracts from #1 loaded into a DBMS that
>>>> exposes a SPARQL endpoint (which lets you query over the wire using the
>>>> SPARQL query language).
>>>> 
>>>> There is a little more, but I need additional clarification from you.
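
(As a concrete illustration of point #2 above - a sketch only, in Python;
the endpoint URL, the requests library and the dbo:abstract property are my
assumptions, not anything Kingsley has specified.)

    # Pull the English abstract of one Wikipedia topic over the wire from
    # the public DBpedia SPARQL endpoint, asking for JSON results.
    import requests

    query = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      <http://dbpedia.org/resource/Semantic_Web> dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
    """
    resp = requests.get(
        "http://dbpedia.org/sparql",
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["abstract"]["value"][:200])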
>>>> 
>> 
> 
> 

Received on Sunday, 18 April 2010 07:27:53 UTC