
RE: InverseFunctional properties are the new URI?

From: Charles McCathieNevile <charles@w3.org>
Date: Mon, 2 Aug 2004 22:49:10 -0400 (EDT)
To: John Black <JohnBlack@deltek.com>
Cc: Graham Klyne <GK@ninebynine.org>, Damian Steer <damian.steer@hp.com>, www-rdf-interest@w3.org
Message-ID: <Pine.LNX.4.55.0408022236130.1773@homer.w3.org>

On Mon, 2 Aug 2004, John Black wrote:

>> From: Graham Klyne
>> At 22:26 29/07/04 +0100, Damian Steer wrote:
>> >"You can always solve a problem by introducing another layer
>> >of indirection."
>> >
>> >So true :-)
>> I remember Guha saying something similar when presenting the
>> Reference-by-Description ideas as used in TAP, and then adding that in
>> the case of TAP this reduced the number of URIs that must be globally
>> agreed (for effective exchange of information in an open-ended community
>> of interested parties) by some orders of magnitude, thus could be
>> regarded as a valuable deployment of that old panacea.
>Reducing by orders of magnitude the number of URIs needed to identify
>objects is a good thing. But it intensifies the problem of establishing
>global agreement on the references of the URIs of classes and properties.
>How is this done? or can it be done at all? Is it any easier to come to a
>global agreement on the extension of a class than on the identity of
>an object?

Essentially this is the same problem as agreeing on what any new term means -
what is "identity", or "meaning", or for that matter "the semantic web"? For
the last few thousand years people have muddled towards solutions: they get
rough agreement, and discussion goes along until it breaks down over terms;
then they look at what broke and try to produce a new indirection that solves
the problem for a bit longer.

The same thing happens with URIs - they can mean whatever you want, but it is
better if you manage to build systems that don't entail contradictions or
assert things that you know are not true. So we try to figure out what other
people mean by them.

Our software can tell us when it has derived a contradiction, and we can read
what it produces and compare it to things we know.
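Since the subject line asks whether inverse-functional properties can stand in for URIs, here is a minimal sketch of both halves of the idea: merging ("smushing") nodes that share a value for an inverse-functional property such as foaf:mbox, and flagging the kind of contradiction the software can report when two nodes asserted distinct get merged. The blank node names and mailbox values are invented for illustration; only foaf:mbox and owl:differentFrom are real vocabulary.

```python
# Sketch: smush subjects that share an inverse-functional property value,
# then report owl:differentFrom assertions that the smushing violates.
# Node names and mailboxes are hypothetical; foaf:mbox is declared
# owl:InverseFunctionalProperty in the FOAF vocabulary.

IFP = "foaf:mbox"

triples = [
    ("_:a", "foaf:mbox", "mailto:jb@example.org"),
    ("_:b", "foaf:mbox", "mailto:jb@example.org"),
    ("_:c", "foaf:mbox", "mailto:gk@example.org"),
    ("_:a", "owl:differentFrom", "_:b"),  # asserted distinct -> contradiction
]

def smush(triples, ifp):
    """Group subjects by the value they give for the IFP."""
    groups = {}
    for s, p, o in triples:
        if p == ifp:
            groups.setdefault(o, set()).add(s)
    return groups

def contradictions(triples, groups):
    """Pairs asserted owl:differentFrom that the IFP forces together."""
    bad = []
    for s, p, o in triples:
        if p == "owl:differentFrom":
            for members in groups.values():
                if s in members and o in members:
                    bad.append((s, o))
    return bad

groups = smush(triples, IFP)
print(sorted(groups["mailto:jb@example.org"]))  # ['_:a', '_:b']
print(contradictions(triples, groups))          # [('_:a', '_:b')]
```

The point of the sketch is that the shared mailbox does the identifying work a shared URI would otherwise have to do - at the cost of having to agree globally on what foaf:mbox means.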

Up to a point, the more we document vocabularies in real human-readable ways
(meaningful comments, not just a URI fragment that might look a bit like a
word), the easier it is for third parties to understand how something is meant
to be used. But as Jose Ramon Aguéra points out, you need to actually check
how the thing is being used, and be prepared to adapt in an evolutionary
feedback cycle - replace things that don't work and try to explain how to
migrate data to the replacement... (foaf:lastname)
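The migration step mentioned above can be sketched very simply: rewrite triples that use a retired predicate to the documented replacement. The old-to-new mapping below is a hypothetical example in the spirit of the foaf:lastname history, not an official FOAF recommendation.

```python
# Sketch: migrate data from a deprecated property to its replacement.
# The mapping is hypothetical, illustrating the kind of "explain how to
# migrate" guidance a vocabulary maintainer might publish.

REPLACEMENTS = {
    "foaf:lastname": "foaf:family_name",  # hypothetical old -> new mapping
}

def migrate(triples, replacements):
    """Return triples with deprecated predicates swapped for replacements."""
    return [(s, replacements.get(p, p), o) for s, p, o in triples]

old = [
    ("_:a", "foaf:lastname", "Black"),
    ("_:a", "foaf:mbox", "mailto:jb@example.org"),
]
print(migrate(old, REPLACEMENTS))
# [('_:a', 'foaf:family_name', 'Black'), ('_:a', 'foaf:mbox', 'mailto:jb@example.org')]
```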

There is a criticism of the Semantic Web that this process can never be
finished. This is true, but irrelevant. The criticism itself, and pretty
much all human communication, needs to be revised from time to time if it is
going to remain valid and comprehensible information.

There are some things in OWL that help automate this process - ways of
versioning things that can be handled semi-automagically. Understanding the
provenance of a property definition and being able to deal with it in a
trust-aware web will help more. As people use URIs more in a single, global
framework we are getting to understand the consequences, and modifying our
usage. But it's a slow process to get consensus around the world - even among
the fraction of the world who use the Web to produce semantically enriched
information in RDF. And it isn't an exact science. Still, it only has to be
better than what we have now to be better...
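To make the versioning point concrete: a consumer can check OWL's versioning annotations before relying on a term. The little graph below is invented for illustration, but owl:priorVersion, owl:DeprecatedProperty, and rdfs:comment are real vocabulary a publisher could use this way.

```python
# Sketch: a consumer checking OWL versioning annotations before using a
# term. The ontology data is hypothetical; the owl:/rdfs: terms are real.

ontology = [
    ("ex:vocab/2.0", "owl:priorVersion", "ex:vocab/1.0"),
    ("ex:lastname", "rdf:type", "owl:DeprecatedProperty"),
    ("ex:lastname", "rdfs:comment", "Use ex:familyName instead."),
]

def is_deprecated(term, graph):
    """True if the graph types the term as owl:DeprecatedProperty."""
    return (term, "rdf:type", "owl:DeprecatedProperty") in graph

def migration_hints(term, graph):
    """Human-readable comments attached to the term, if any."""
    return [o for s, p, o in graph if s == term and p == "rdfs:comment"]

print(is_deprecated("ex:lastname", ontology))   # True
print(migration_hints("ex:lastname", ontology)) # ['Use ex:familyName instead.']
```

This is the "semi-automagic" part: the machine can spot the deprecation, but a human still has to read the comment and decide how to adapt.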


Received on Monday, 2 August 2004 22:49:30 UTC
