Re: connections

Two seconds after hitting post I wish to amend that: at the protocol
level the web is already close to 100% reliable, in that things like
404s and 500s tell you exactly what went wrong - whether the
information itself is reliable is another matter.
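
A minimal sketch of the distinction, in Python (the URI is just an
example, and the whole thing is an assumption about how a consumer
might behave, not a prescription):

    import urllib.request
    import urllib.error

    def fetch_rdf(uri):
        # Ask for RDF; a Linked Data server should content-negotiate.
        req = urllib.request.Request(
            uri, headers={"Accept": "application/rdf+xml"})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                # Transport succeeded; whether the bytes are *good*
                # data is a separate question entirely.
                return resp.read()
        except urllib.error.HTTPError as e:
            # 404, 500 and friends: the web reliably tells us it failed.
            print("%s -> HTTP %d" % (uri, e.code))
        except urllib.error.URLError as e:
            print("%s -> network error: %s" % (uri, e.reason))
        return None

    fetch_rdf("http://dbpedia.org/resource/Berlin")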

On 18 April 2010 09:09, Danny Ayers <danny.ayers@gmail.com> wrote:
> Hugh, I don't disagree with what you are saying, but would like to
> point out that whether things are fit for purpose depends on the
> purpose. There is no way the web will ever be 100% reliable; the
> tools we use to interact with it have to take that into account.
>
>
> On 18 April 2010 01:14, Hugh Glaser <hg@ecs.soton.ac.uk> wrote:
>> Hi,
>>
>> Sorry, you cannot disprove a hypothesis by stating (or even proving) another one.
>> Yes, I know the consumption of Linked Data systems is not great, and that is at least a problem.
>> And I realise that the topic is consumption, which is great, and the most important challenge at the moment.
>>
>> But this statement of faith that the data is there, good, and fit for purpose (I am an engineer) needs to be backed up with some hard evidence.
>> Until it is being used, you actually can’t tell.
>> So yes, we need tools to consume it, and that will (hopefully) disprove the idea that the data is not fit for purpose.
>> (Danny says in the next post “we have the raw data I'm sure” - is he right? Does anyone actually know?)
>>
>> However, I have to say that my experience of our systems, which consume a lot of Linked Data from the unbounded Web of Data, suggests that a lot of it is not fit for purpose; for example, try following links across the LOD cloud and see how far you get in a reliable fashion.
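>>
>> (A crude sketch of the kind of check I mean, in Python with rdflib;
>> the starting URI, the hop count and the reliance on owl:sameAs links
>> are arbitrary assumptions, not a description of our actual systems:)
>>
>>     from rdflib import Graph, URIRef
>>     from rdflib.namespace import OWL
>>
>>     def follow(uri, hops=2):
>>         # Dereference a URI, then try to dereference everything it
>>         # claims owl:sameAs, recording which hops actually resolve.
>>         ok, failed = [], []
>>         g = Graph()
>>         try:
>>             g.parse(uri)  # content-negotiates for RDF
>>             ok.append(uri)
>>         except Exception as e:  # 404s, bad RDF, timeouts all land here
>>             failed.append((uri, str(e)))
>>             return ok, failed
>>         if hops > 1:
>>             for _, _, target in g.triples((URIRef(uri), OWL.sameAs, None)):
>>                 o, f = follow(str(target), hops - 1)
>>                 ok += o
>>                 failed += f
>>         return ok, failed
>>
>>     ok, failed = follow("http://dbpedia.org/resource/Berlin")
>>     print("resolved: %d, broken: %d" % (len(ok), len(failed)))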
>>
>> Best
>> Hugh
>>
>> On 17/04/2010 18:46, "adasal" <adam.saltiel@gmail.com> wrote:
>>
>>> Hugh,
>>> One hypothesis is that the data is not good.
>>> The other, being discussed, is that there is not sufficient familiarity
>>> with the means by which it can be consumed ('sufficient familiarity'
>>> being both vertical and horizontal).
>>> Mixed in is the idea that the right means may not yet exist, on more or
>>> less two levels: the underlying engines, which I think most agree are
>>> sufficiently there, and the tools on top, which I think most agree are
>>> not. That said, I don't think anyone would underestimate the difficulty
>>> associated with tooling.
>>> One thing about tools is that they funnel in from broad usage to
>>> specific purpose, so much depends on what one is trying to do.
>>>
>>> But I placed my reply after Kingsley's as he references one such application.
>>>
>>> On 17 April 2010 18:36, Kingsley Idehen <kidehen@openlinksw.com> wrote:
>>>> Danny Ayers wrote:
>>>>> On 16 April 2010 19:29, greg masley <roxymuzick@yahoo.com> wrote:
>>>>>
>>>>>> What I want to know is does anybody have a method yet to successfully
>>>>>> extract data from Wikipedia using dbpedia? If so please email the procedure
>>>>>> to greg@masleyassociates.com
>>>>>>
>>>>>
>>>>> That is an easy one: the URIs are parallel, so you can take the
>>>>> pointer from DBpedia and get to the Wikipedia page (and back). Then
>>>>> you do your stuff.
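>>>>>
>>>>> Roughly, as a sketch (the mapping below is just the well-known
>>>>> naming convention between the two sites; the example title is
>>>>> arbitrary):
>>>>>
>>>>>     # DBpedia resource URIs mirror English Wikipedia page titles.
>>>>>     def dbpedia_to_wikipedia(dbpedia_uri):
>>>>>         title = dbpedia_uri.rsplit("/", 1)[1]
>>>>>         return "http://en.wikipedia.org/wiki/" + title
>>>>>
>>>>>     print(dbpedia_to_wikipedia("http://dbpedia.org/resource/Berlin"))
>>>>>     # -> http://en.wikipedia.org/wiki/Berlin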
>>>>>
>>>>> I'll let Kingsley explain.
>>>>>
>>>>>
>>>> Greg,
>>>>
>>>> Please add some clarity to your quest.
>>>>
>>>> The DBpedia project comprises:
>>>>
>>>> 1. Extractors for converting Wikipedia content into structured data
>>>> represented in a variety of RDF-based data representation formats
>>>> 2. A live instance with the extracts from #1 loaded into a DBMS that
>>>> exposes a SPARQL endpoint, which lets you query over the wire using
>>>> the SPARQL query language (a minimal example follows).
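>>>>
>>>> For instance, a minimal sketch (the endpoint is the public one at
>>>> http://dbpedia.org/sparql; the query itself is an arbitrary example):
>>>>
>>>>     import urllib.parse
>>>>     import urllib.request
>>>>
>>>>     query = """
>>>>     SELECT ?abstract WHERE {
>>>>       <http://dbpedia.org/resource/Berlin>
>>>>           <http://dbpedia.org/ontology/abstract> ?abstract .
>>>>       FILTER (lang(?abstract) = "en")
>>>>     }
>>>>     """
>>>>     url = "http://dbpedia.org/sparql?" + urllib.parse.urlencode(
>>>>         {"query": query, "format": "application/sparql-results+json"})
>>>>     print(urllib.request.urlopen(url).read().decode("utf-8"))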
>>>>
>>>> There is a little more, but I need additional clarification from you.
>>>>
>>
>
>
>
> --
> http://danny.ayers.name
>



-- 
http://danny.ayers.name

Received on Sunday, 18 April 2010 07:12:35 UTC