
Re: Location vs. names

From: Sampo Syreeni <decoy@iki.fi>
Date: Tue, 12 Jun 2001 23:45:20 +0300 (EET DST)
To: Seth Russell <seth@robustai.net>
cc: <www-rdf-interest@w3.org>
Message-ID: <Pine.SOL.4.30.0106122329380.21368-100000@myntti.helsinki.fi>
On Tue, 12 Jun 2001, Seth Russell wrote:

>> Sometimes, yes. Sometimes, no. For instance, how do you propose getting
>> from a resource name to its attached metadata if the pointer itself
>> cannot be dereferenced?
>Easy: each time a local application encounters data about something named it
>stores that data in its memory associated with that name.  The name can
>always be dereferenced to the data encountered in the local application
>storage.  With such a schema there can never be any confusion between some
>~global resource~ and the data the local application has associated with
>that name.
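The tactic quoted above is essentially a local cache keyed by name. A minimal sketch (not from the original post; the `urn:example:*` names and the `LocalStore` class are illustrative) of storing every statement encountered about a name and "dereferencing" the name to whatever has been accumulated locally:

```python
from collections import defaultdict

class LocalStore:
    def __init__(self):
        # name (URI/URN) -> set of (predicate, object) pairs seen so far
        self._facts = defaultdict(set)

    def encounter(self, name, predicate, obj):
        """Record a statement about `name` as it is encountered."""
        self._facts[name].add((predicate, obj))

    def dereference(self, name):
        """Return everything this application has associated with `name`.

        Never fails: an unknown name simply dereferences to no data,
        which is exactly the partial-knowledge problem raised below.
        """
        return set(self._facts.get(name, ()))

store = LocalStore()
store.encounter("urn:example:doc1", "dc:creator", "Seth Russell")
store.encounter("urn:example:doc1", "dc:date", "2001-06-12")

print(store.dereference("urn:example:doc1"))   # two locally seen facts
print(store.dereference("urn:example:other"))  # set() — nothing seen, no global lookup possible
```

Note that under this scheme there is no distinction between "I have seen no data about this name" and "this name does not exist", which is where the objections below come in.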

While it is true that quite a number of applications can tolerate this
approach, there are some that cannot. First, there is considerable overhead:
you would have to store just about all the RDF you will ever see. Second,
there is the question of pointer validity: how do you implement a query that
tells you whether a given URN is valid? Third, there are applications that
simply cannot tolerate partial knowledge. While these represent considerable
extra challenges, which the current RDF work conveniently dismisses, they are
there nevertheless, waiting to be solved. Distributed financial transactions
are a good example, as is more ambitious proof generation.

Even if you argue for a closed world in these cases (as you probably have
to, given the current state of the technology), a working URN resolution
infrastructure would be a significant step towards making such applications
scalable and distributable.
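To make the validity question concrete, here is a hedged sketch of the query a local-only cache cannot answer. The `UrnResolver` registry is purely hypothetical (no such service is named in this post): the point is that a resolution infrastructure can answer "does this name exist?" rather than merely "have I happened to see data about it?":

```python
class UrnResolver:
    """Stand-in for a global URN resolution infrastructure.

    Hypothetical: this in-memory registry only illustrates the kind of
    query such an infrastructure would support.
    """
    def __init__(self, registered):
        self._registered = set(registered)

    def is_valid(self, urn):
        # A local cache can only answer "have I seen it?"; a resolver can
        # answer "is it registered at all?" -- a strictly stronger question.
        return urn in self._registered

resolver = UrnResolver({"urn:example:doc1"})
print(resolver.is_valid("urn:example:doc1"))   # True: the name resolves
print(resolver.is_valid("urn:example:ghost"))  # False: a dangling pointer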

>This tactic is used by human individuals.  Me thinks we need a URI scheme so
>that agents on the Internet can reliably use the same tactic with no chance
>of confusion.

Actually, people are getting really close to the scalability limit of this
particular approach, as demonstrated e.g. by the fact that it is really
difficult to find anything on the Web. This holds even though search engines
are commonplace and the technology is relatively mature, which should enable
a degree of global knowledge of what is going on online.

I would also hate to see any fundamental RDF infrastructure predicated on
the ready availability of extensive, asynchronously spidered collections of
metadata. Such collections are notoriously error-prone (e.g. I use UTF-8 in
XHTML and it gets slaughtered by almost every search engine; besides, the
six-week delay from one spider round to the next is really irritating), and
employing them effectively often requires a bunch of strong heuristics
applied by a talented individual. That sort of thing is hardly what one
would call 'machine processable'.

To draw a parallel, how do you think the Web would look without DNS? It's
doable, but so fragile as to be meaningless as a template for the Semantic
Web.

Sampo Syreeni, aka decoy, mailto:decoy@iki.fi, gsm: +358-50-5756111
student/math+cs/helsinki university, http://www.iki.fi/~decoy/front
Received on Tuesday, 12 June 2001 16:45:35 UTC
