
Re: Valid representations, canonical representations, and what the SW needs from the Web...

From: Sandro Hawke <sandro@w3.org>
Date: Fri, 31 Jan 2003 08:21:12 -0500
Message-Id: <200301311321.h0VDLC230856@wadimousa.hawke.org>
To: Patrick.Stickler@nokia.com
cc: www-tag@w3.org


Thanks for the clear and detailed reply.  

> > Each URI string can be used to point to several different things. 
> 
> If you mean indirectly, fine, but not directly. I am very much
> opposed to the view that a URI can contextually denote different
> resources.

If we could achieve consensus here, I'd be with you on this.  A
coherent and practical description of that direct mapping, which
everyone can use and understand, would be very nice.  But as the
debate on httpRange-14 has shown (along with the issues you yourself
raise here), we're a long way from that.  

Since it's critical to me that the semantic web work, I nearly
despaired at the firefight over httpRange-14.  But then I saw that the
whole fight was based on the unfounded assumption that there was
exactly one standard direct mapping.

If we back off that assumption and allow there to be multiple ways for
a URI to point to things, without unduly blessing one over all others,
we're okay.  And this mess becomes an RDF issue, not a web architecture
issue.  RDF works fine if it just becomes explicit about which way or
ways each URI is being used.  All we need from webarch is to have them
not pester us too much about using URIs to point to different things
in different ways.

> There's a good bit of nudge-nudge-wink-wink going on here. The W3C
> should play by its own rules and promote exemplary solutions
> reflecting sound use of the Web architecture. Not hacks that further
> confuse the foundational concepts and principles of the Web and SW.

Hanlon's razor: never ascribe to malice that which can be explained by
stupidity.  Which is not to say that I've met anyone at the W3C even
remotely stupid, but no one actually has the brainpower to know
exactly, "correctly", how to do all this stuff (make the web perfectly
usable, accessible, semantic, etc).  In Richard Gabriel's terminology,
the web follows the New Jersey approach [1], for better or worse.

So the specifications are not nearly-perfect jewels.  Sometimes
there's a tension between following a spec and getting the results you
need.  If you try really really hard you can write valid and
accessible HTML that also looks good in most browsers, but you can
look better in more browsers if you don't care about validity.  It's a
sad fact of life right now, as far as I can tell.  I happen to think
validity+accessibility and looking bad in some browsers is the better
path most of the time, but the tension is still there.  The specific
points of conflict hopefully inform the next version of the
specification.  [ Maybe the HTML analogy isn't quite right.  Perhaps a
better one is DAML's "collection" parsetype, which was used in a
non-standard dialect of RDF because it was so helpful.  Now it's in
the RDF draft specifications. ]

In developing specifications, there's a similar tension between what
seems to work well enough right now and what will work perfectly for
the rest of time.  As far as I can tell this is why working groups
have deadlines: deadlines are the antidote to perfection paralysis;
they force you to accept something that you may know is "broken".
Sometimes that's only palatable because you know there will be another
version someday.


> The groundwork of REST and the pairing of the concepts of resource
> and representation are great, and serve the needs of the Web, ...

There are some things about REST that do not make sense to me.  I've
raised the questions on this list, but if there were any responses I
missed them.  (No blame here; the list has been crazy and no one has
the job of explaining REST to me.   But since you seem to be
volunteering, I'll go ahead and repeat my questions.)

From [2]: How does REST handle the case of there being
two different web pages about the same thing?  

    Perhaps one [web page] is more trusted than another, more timely,
    more complete, or throws in fewer pop-up ads.  The user experience
    is different on the two sites, yet as far as anyone can tell, the
    sites are about exactly the same thing. Let's imagine the thing is
    the Sun, and the locations are both mine. I declare
    http://www.hawke.org/sun-a and http://www.hawke.org/sun-b to both
    identify the Sun, and my server gives nice data at both addresses.
    But on sun-b, sometimes I give the wrong data, because of a bug in
    my software.  People learn this, and learn to stick with sun-a
    instead. 

From [3]: 

    It's hard to accept the idea that there is one thing identified by
    each URI when no one can tell me anything about that thing, except in
    trivial, made-up cases (like DanC's "this is a car" page).  What thing
    is, as far as you can tell, identified by
       http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
       http://www.uroulette.com/      [ used to pick the others ]
       http://www.avianavenue.com/
       http://ont.net/karate          [ 404, but don't let that stop you ]
       http://www.interactivemarketer.com/
    ...? 

Thanks.

> Patrick

    -- sandro

[1] http://www.jwz.org/doc/worse-is-better.html
[2] http://lists.w3.org/Archives/Public/www-tag/2003Jan/0337.html
[3] http://lists.w3.org/Archives/Public/www-tag/2003Jan/0252.html
Received on Friday, 31 January 2003 08:21:42 GMT
