Re: SemWeb Non-Starter -- Distributed URI Discovery

On Apr 3, 2005, at 07:20, ext Charles McCathieNevile wrote:

>
> On Sat, 02 Apr 2005 13:52:11 +1000, Josh Sled <jsled@asynchronous.org> 
> wrote:
>
>
>> URIs identify resources; the Accept header should serve only to
>> negotiate the format of that resource, not to branch between different
>> resources... you may want the HTML meta-data about the RDF data,
>> someday. :)
>>
>> Why not have a URI for the resource, and a URI for the meta-data?
>>
>> GET /foo
>> <foo>
>>   <link rel="meta" href="/foo/meta" />
>> </foo>
>
> Because often the data you want about some resource isn't written by 
> the person who happens to control what is served at that URI.
>
> As a trivial example, W3C controls what is served at the URI 
> associated with the RDF namespace. They don't happen to provide any 
> RDF about human-friendly labels for the things defined there except in 
> English.
>
> As someone working primarily in Spanish, I want to have Spanish names 
> for the various RDF Classes and Properties. There is no reason I 
> cannot publish, somewhere on the Sidar site, these labels. (They're 
> easy to produce...). But W3C doesn't necessarily know that I have done 
> so. If I were the Mongolian Library, they are almost certain not to 
> know that I have done so.
>
> So querying W3C's server is of limited use.
>
> The question then becomes, as Alistair noted, "so how do we find this 
> stuff". I suspect the answer is the same as the answer to the 
> equivalent question for the real web - we make use of search engines 
> that go crawling around and providing a way of finding things we are 
> looking for based on a large store of meta-information.

Yes. That will be one (of many) useful approaches to discovering
knowledge. But I don't think it should be considered the primary,
fundamental method of discovery for authoritative knowledge about
a known resource (i.e. for which one has the URI).

True, your non-authoritative amendments to authoritative knowledge
owned/published by the W3C will likely be of interest, and useful
(even essential), to various applications; but the overhead associated
with third-party knowledge bases populated by harvesting/scraping
will likely grow combinatorially with each additional third-party
source accessed.

>
> In the RDF case, I think the key information is about what stores can 
> answer a given set of queries - I see the future search engines for 
> the semantic web being based on query brokers that know where to get 
> answers to a particular query, and how to distribute the query and 
> consolidate the results. This relies on things like a query language 
> (ideally a standardised one such as SPARQL, rather than two dozen 
> different ones...), HTTP GET, and RDF.

Agreed.
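
As a rough sketch of the kind of interaction described above -- a
single SPARQL query sent to one store via HTTP GET, whose results a
broker could then consolidate with others -- something like the
following (the endpoint URL is purely hypothetical, and the query is
only an illustration):

  import urllib.parse
  import urllib.request

  # Hypothetical endpoint of one store reachable by a query broker.
  ENDPOINT = "http://example.org/sparql"

  # Ask this one store for the labels it holds for rdf:Property.
  QUERY = """
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
  SELECT ?label
  WHERE { <http://www.w3.org/1999/02/22-rdf-syntax-ns#Property> rdfs:label ?label }
  """

  # SPARQL over HTTP GET: the query travels as a URL parameter.
  url = ENDPOINT + "?" + urllib.parse.urlencode({"query": QUERY})
  request = urllib.request.Request(
      url, headers={"Accept": "application/sparql-results+xml"})
  with urllib.request.urlopen(request) as response:
      print(response.read().decode("utf-8"))

A broker would presumably run the same query against several such
stores and merge the result sets before handing them back.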

Though I still see an essential need to be able to access
authoritative knowledge given no more initial knowledge than a
single web-resolvable URI -- as a foundational bootstrapping
function, one which can then lead to the use of more comprehensive
knowledge stores and query brokers.
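
A minimal sketch of that bootstrapping step, assuming only that the
server owning the URI will honour an Accept header asking for RDF
(the URI below is just the RDF namespace, used as an example):

  import urllib.request

  # The URI of the resource we know about -- here, the RDF namespace.
  uri = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

  # Dereference the URI itself, asking for an RDF representation;
  # whatever the owner serves back is the authoritative description.
  request = urllib.request.Request(
      uri, headers={"Accept": "application/rdf+xml"})
  with urllib.request.urlopen(request) as response:
      print(response.headers.get("Content-Type"))
      print(response.read().decode("utf-8", errors="replace"))

Whatever pointers that authoritative description contains (to richer
stores, query brokers, alternate representations, etc.) can then be
followed from there.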

Regards,

Patrick


> Cheers
>
> Chaals
>
> -- 
> Charles McCathieNevile                      Fundacion Sidar
> charles@sidar.org   +61 409 134 136    http://www.sidar.org
>

Received on Monday, 4 April 2005 05:48:07 UTC