
Re: Summary of the QName to URI Mapping Problem

From: Drew McDermott <drew.mcdermott@yale.edu>
Date: Tue, 21 Aug 2001 11:12:45 -0400 (EDT)
Message-Id: <200108211512.f7LFCjB28387@pantheon-po01.its.yale.edu>
To: www-rdf-logic@w3.org

   [<Patrick.Stickler@nokia.com>]

   >
   > The core mechanisms of RDF *must* preserve the integrity of all data.
   >

   [Tom Passin]
   Aha!  I'm going to strongly disagree with you here.  One of the features of
   the current web is that it is not self-consistent, ....

   ...
   There is no way the SW is going to be any more self-consistent or stable
   than what we have today.  Now, what's the difference between inconsistent or
   changing data, and mechanisms that don't "preserve the integrity of all
   data"?  Nothing, really; it's just a matter of at what point inconsistencies
   creep in.  In either case, our systems are going to have to deal with it.

At the risk of starting (or restarting) a discussion on an issue
too fuzzy ever to be resolved ---

I don't like the tactic of refuting every argument by saying, "Oh,
well, the SW is going to be inconsistent anyway."  I have several
reasons:

1) Some of the same people who say this also say the SW will provide
   formal proofs of statements such as "You owe me 3000 Deutschmarks."
   I don't see how this is possible in an inconsistent system.
   (Rather, I don't see why anyone should pay any heed to formal
   proofs in an inconsistent system.)
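
   The worry in (1) is the classical principle of explosion: from a
   contradictory premise set, *every* sentence is formally derivable, so
   a "proof" carries no information.  A minimal propositional sketch
   (the names `entails` and `neg` are illustrative, not from any RDF
   toolkit):

      ```python
      def neg(lit: str) -> str:
          """Negate a propositional literal written as a plain string."""
          return lit[1:] if lit.startswith("~") else "~" + lit

      def entails(kb: set[str], goal: str) -> bool:
          """Naive entailment check over literals: a goal follows if it is
          in the KB outright, or if the KB contains some literal together
          with its negation -- in which case everything follows."""
          inconsistent = any(neg(lit) in kb for lit in kb)
          return goal in kb or inconsistent

      kb = {"owesMoney", "~owesMoney"}        # contradictory assertions
      print(entails(kb, "owes3000DM"))        # True: anything is "provable"
      print(entails(kb, "~owes3000DM"))       # True as well -- worthless
      ```

   Both the debt claim and its negation come out "proved," which is
   exactly why no one should pay heed to such a proof.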

2) When inconsistencies arise (as I agree they will), it will usually
   be because two or more datasets from different sources are
   combined.  The combining agent gets a little too trusting, and then
   realizes it has an inconsistency on its hands.  Exactly how it
   deals with it is a difficult issue, but it always has the fallback
   position of backing out of its tentative commitment to this
   collection of datasets.
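
   The fallback position in (2) can be sketched concretely: combine
   tentatively, check for a contradiction, and retract the commitment if
   one appears.  This is only an illustration of the strategy, with
   invented names (`merge_datasets`, literals as plain strings), not a
   proposal for RDF machinery:

      ```python
      def neg(lit: str) -> str:
          """Negate a literal written as a plain string."""
          return lit[1:] if lit.startswith("~") else "~" + lit

      def merge_datasets(accepted: set[str], new: set[str]) -> set[str]:
          """Tentatively combine two fact sets; back out of the
          combination entirely if the union is inconsistent."""
          combined = accepted | new
          if any(neg(lit) in combined for lit in combined):
              return accepted      # retract the tentative commitment
          return combined

      trusted = {"price(book,10)"}
      suspect = {"~price(book,10)", "inStock(book)"}
      print(merge_datasets(trusted, suspect))   # {'price(book,10)'}
      ```

   Backing out wholesale is crude -- a real agent might retract only the
   offending statements -- but it shows the combining agent always has a
   consistent state to retreat to.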

3) Note that, as Patrick implied, this sort of inconsistency is very
   different from what happens when I take a *single* dataset and
   misconstrue it.  That ought to be avoidable --- a program's reach
   should not exceed its grasp, else what's metadata for? 

4) The whole mindset of those who play the inconsistency card is too
   anthropomorphic for me.  I believe they are thinking along the
   following lines: People can cope with inconsistency, so our
   programs should be able to as well, not like those "brittle" formal
   systems.  To quote from Tom's posting (italics mine):

      The web is extensible without central repositories or contracts
      in large part because it isn't required to be self-consistent.
      But *we* learn to deal with it anyway.
   
      After all, no two *people* have exactly the same definitions of
      or connotations for any word, yet somehow *we* communicate and
      get things done.  It will have to be like that with the SW, I
      imagine.

   This style of reasoning is a perennial blind alley in AI,
   especially, but not only, among novices.  The problem with it is
   that you don't get anywhere by observing what people do; you get
   somewhere by proposing an algorithm for a task.  I don't see many
   inconsistent-SW fans proposing any algorithms.

                                             -- Drew McDermott