
Re: referendum on httpRange-14 (was RE: "information resource")

From: Tim Berners-Lee <timbl@w3.org>
Date: Tue, 19 Oct 2004 21:18:51 -0400
Message-Id: <005D5ECB-2236-11D9-8088-000A9580D8C0@w3.org>
Cc: <www-tag@w3.org>, <sandro@w3.org>, <Norman.Walsh@Sun.COM>
To: <Patrick.Stickler@nokia.com>


On Oct 19, 2004, at 4:09, <Patrick.Stickler@nokia.com> wrote:

>
>
>> -----Original Message-----
>> From: www-tag-request@w3.org
>> [mailto:www-tag-request@w3.org]On Behalf Of
>> ext Tim Berners-Lee
>> Sent: 18 October, 2004 22:03
>> To: Sandro Hawke
>> Cc: www-tag@w3.org; Norman Walsh
>> Subject: Re: referendum on httpRange-14 (was RE: "information
>> resource")
>>
>>
>>
>> The range of HTTP is not a question of belief; it is a question of
>> design.
>> The Web was designed such that the Universal Document Identifiers
>> identified documents.
>> This was refined to generalize the word "Document" to the
>> unfortunately
>> rather information-free "Resource".
>> The design is still the same.
>> The web works when person (a) publishes a picture of a dog, person (b)
>> bookmarks it and mails the URI to person (c), assuming that they will
>> see more or less the same picture, not the weight of the dog.
>>
>> That is why, while the dog is closely related to the picture,
>> it is not
>> what is identified, in the web architecture, by the URI.
>>
>> There is a reason.
>>
>> Tim
>
> Fine. And if the URI used to publish the *picture* of the dog
> identifies the *picture* of the dog, then one would presume to
> GET a representation of the *picture* of the dog. No argument
> there, obviously.
>
> Getting the weight of the dog via a URI identifying a picture of
> the dog would be unexpected (arguably incorrect) behavior per
> *either* view of this debate. So your example does not argue for
> or against either view.
>
> Also, using a particular URI to identify the *picture* of a dog
> does *not* preclude someone using some *other* URI to identify the
> *actual* dog and to publish various representations of that dog via
> the URI of the actual dog itself; and someone bookmarking the
> URI of the *actual* dog should derive just as much benefit
> as someone bookmarking the URI of the *picture* of the dog,
> even if the representations published via either URI differ
> (as one would expect, since they identify different things).

No, they would *not* gain as much benefit.
Under this different design, they would have no expectation of
the same information being conveyed to (b) as was conveyed to (a).
What would happen when (b) dereferences the bookmark? Who knows
what he will get?  Something which is *about* the dog. It could be
anything.  The web doesn't work that way.
The current web relies on people getting the same information from
reuse of the same URI.
The system relies on the URI being associated with information of
consistent content, not of consistent subject.
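A minimal sketch of that contract in Python (the URI and content here are hypothetical, purely for illustration): the web is, in effect, a mapping from URI to representation, so everyone who reuses a URI gets the same content back.

```python
# Toy model of the web: a mapping from URI to representation.
# The URI and bytes below are hypothetical, for illustration only.
WEB = {
    "http://example.org/dog-picture.jpg": b"...jpeg bytes of the picture...",
}

def get(uri):
    """Dereference a URI; every reuse of the same URI yields the same content."""
    return WEB[uri]

# (a) publishes the picture; (b) bookmarks the URI and mails it to (c).
# Both (b) and (c) get the same representation back:
assert get("http://example.org/dog-picture.jpg") == \
       get("http://example.org/dog-picture.jpg")
```

Under the alternative design, the same URI could return "something about the dog" -- different content each time -- and the bookmark-and-share pattern breaks.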

You can make new URI schemes for arbitrary objects, but a very
convenient method is to use identifiers with a hash ("#").


> I think it is a major, significant, and beneficial breakthrough
> in the evolution of the web that the architecture *was* generalized
> to the more general class of resources -- so that users can
> name, talk about, and provide access to representations of, any
> thing whatsoever.

1. URI space in general was never constrained to documents -- only HTTP URIs.
2. A great way is to write RDF files so you refer to a concept as 
described in a document, a la foo#bar
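One consequence of the foo#bar design, sketched in Python with a hypothetical URI: the fragment never reaches the server. The client strips "#bar" off before making the HTTP request, fetches the document, and interprets the fragment within what comes back.

```python
from urllib.parse import urldefrag

# Hypothetical URI: a document describing a concept named by the fragment.
uri = "http://example.org/foo#bar"

# The client splits off the fragment before making the HTTP request:
doc_uri, fragment = urldefrag(uri)

assert doc_uri == "http://example.org/foo"  # what GET actually fetches
assert fragment == "bar"                    # interpreted client-side
```

So the HTTP URI still identifies a document, while the full hash URI can name the concept described inside it.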

> To ask a pointed question, Tim, do you believe that the web cannot
> evolve beneficially in a direction beyond your original design?

Of course I don't believe that. The web is a seething mass of 
flexibility points,
designed to allow large chunks to be replaced.

However, to extend it is one thing; to "evolve" it in a way which
destroys the basic assumptions of the current web may make nice working
prototypes, but it is really destructive.

Here we are trying to get the semantic web, which really cares about the
difference between a dog and a picture of a dog, to operate over
and also to model the HTTP web, which doesn't care about
dogs at all.   The http://.../foo#bar design uses the same flexibility 
point
as the hypertext design uses: to take a language, and convert local 
identifiers
in documents in that language into global identifiers using the 
document URI and "#".
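That conversion can be sketched with standard URI resolution in Python (the document URI here is hypothetical): a local identifier "bar" inside a document becomes global by resolving the fragment reference "#bar" against the document's URI.

```python
from urllib.parse import urljoin

# Hypothetical document URI; "bar" is a local identifier inside it.
doc_uri = "http://example.org/doc"
local_id = "bar"

# Resolving a fragment-only reference against the document URI
# yields a global identifier for the local concept:
global_id = urljoin(doc_uri, "#" + local_id)
assert global_id == "http://example.org/doc#bar"
```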

One can certainly design different protocols, in which the URIs 
(without hashes)
denote arbitrary objects, and one fetches some sort of information 
about them.
I know you have been designing such systems -- you described them in
the RDF face-to-face meeting in Boston.  These are a different system:
similar to HTTP, but you added more methods, and you don't have URIs for
the documents.  It is a different design from the current web. You claim
utility for it.  Maybe it would be useful.  But please don't call it HTTP.

But I claim great benefit in designing the semantic web cleanly on top 
of the HTTP web so that the facilities of each support each other and 
become one large consistent system.
You ask what utility there is in this rule.

There is great utility in the fact that any person, on seeing a web 
page,
can use the URI instead of the content as a shorthand for that content.
This is so simple that people often haven't thought about it.
(And thinking about it leads to the aspects of version, language, and 
content type.)
This is done in all the hypertext links and bookmarks and billions of
places where the web is used.   Your proposed "evolution" would break 
that.
I hope that this is now clear.

> The core of your argument seems to be "Because the web was not
> originally designed to do that, it cannot and should not do that".

No, it is that what you propose is inconsistent with the way the web 
works now.

> Yet actual practice and deployed solutions demonstrate that there
> is clear benefit to the more generalized model; and there does
> not appear to be any substantial evidence that applying that
> more generalized model is harmful or problematic to the actual
> real-world functioning of the web, or that the narrower, more
> restricted (original) model is clearly better.

That is because you have not really looked at the implications of what
you are saying -- you are assuming, I suspect, that web users will go on
using URIs as they do, and your software will use them differently,
and that the two won't bother each other.  But I am aiming higher --
for one consistent design across WWW and SW.

> If you, or anyone, feels that there *is* evidence either showing
> how the more generalized view is harmful, or how the narrower
> (original) view is better, then I would love to see it.

Maybe that explanation will help; maybe it won't.

Best Wishes,

Tim BL

> Regards,
>
> Patrick
Received on Wednesday, 20 October 2004 01:19:00 UTC
