
Re: The range of the HTTP dereference function

From: Roy T. Fielding <fielding@apache.org>
Date: Tue, 19 Mar 2002 16:56:58 -0800
To: Tim Berners-Lee <timbl@w3.org>
Cc: danc@w3.org, "'www-tag'" <www-tag@w3.org>
Message-ID: <20020319165658.C1419@waka.wakasoft.com>
> Here is my argument that HTTP URIs (without "#") should be understood as
> referring to documents, not cars.

I am more curious about how this artificial "without #" distinction came
about.  I think it was a mistake, one of many embodied in RDF that make
RDF incapable of reasoning about the Web.

> URIs can identify anything.
> 
> However, different schemes have different properties.
> HTTP is a protocol which provides, for the client, a mapping
> (the http URI dereference function)
> from URI starting with "http:" and not containing a "#" to
> a representation of a document.  The document is the
> abstract thing and the representation is bits.
> 
> You say that what I call document could be widened to include
> cars.

No, I don't have to.  You are making an implementation decision about what
is behind the interface, namely that it consists of a document.  That is
wrong because it violates the principle of separation of concerns and
introduces unnecessary coupling into a system that does not need it.
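The separation of concerns being argued for here can be sketched in code (all class and method names below are illustrative assumptions, not anything defined in this thread or in HTTP itself):

```python
# Sketch: clients see only a uniform interface that yields representations
# (bits); whether the thing behind the interface "is" a document or a car
# is an implementation detail the interface deliberately hides.

class Resource:
    """Uniform interface: a resource maps a request to a representation."""
    def get_representation(self) -> bytes:
        raise NotImplementedError

class DocumentResource(Resource):
    """One possible implementation: a document behind the interface."""
    def __init__(self, text: str):
        self._text = text
    def get_representation(self) -> bytes:
        return self._text.encode("utf-8")

class CarResource(Resource):
    """Another implementation: a car monitored over the same interface.
    GET yields a representation of its state, never the car itself."""
    def __init__(self, speed_kmh: float):
        self.speed_kmh = speed_kmh
    def get_representation(self) -> bytes:
        return f"<car><speed>{self.speed_kmh}</speed></car>".encode("utf-8")

def dereference(resource: Resource) -> bytes:
    # The client only ever obtains bits; nothing in the interface lets it
    # conclude what kind of thing stands behind the resource.
    return resource.get_representation()
```

Software written against `dereference` alone cannot tell the two implementations apart, which is the point: assuming "document" couples clients to one implementation choice.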

If, on the other hand, you define a language for describing implementations
and, using that language, declare that what is behind the interface for that
resource is in fact a document, then you have defined something which is in
addition to the semantics of the Web interface.  It may have value for
someone to know that, but it doesn't come without cost.  If you then build
software that depends on the resource description being consistent with the
resource implementation, but fail to constrain that relationship to be true,
then you have introduced a potential fault in the system.  It may be useful
to build such a system, but it is not useful for the Web itself.

And, as I said, there are many robots on the Web that can be remotely
monitored and controlled via HTTP.  They are not documents.
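A minimal sketch of such a non-document resource, with the robot, its state fields, and the handler name all invented for illustration:

```python
# Sketch: a robot monitored and controlled through HTTP-style verbs.
# GET returns a representation of current state; PUT changes the state.
# Neither operation makes the resource a "document".

robot_state = {"arm_angle": 90, "gripper": "open"}

def handle(method, body=None):
    if method == "GET":
        return dict(robot_state)   # a representation, i.e. a snapshot in bits
    if method == "PUT" and body is not None:
        robot_state.update(body)   # the resource's state changes over time
        return dict(robot_state)
    raise ValueError("unsupported method")

handle("PUT", {"gripper": "closed"})
```

After the PUT, a later GET representation reflects the new state, even though the resource identified by the URI is the same robot throughout.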

There is no spoon.  It is a fundamental rule of interface design.

> Of course, you can always take a system and remove a domain
> or range restriction in theory.  But if inference systems use it
> and you take it away, they break.

Inference systems that are based on false axioms are broken by design.
I cannot "take away" something that has been a feature of the Web
architecture since 1993.

> - This is not what people do at the moment.
> 
> - The properties of HTTP are useful to know, and to be able
>   to infer things from.  For example, if I ask
>   { <telnet://telnet.example.org> log:contents ?x } -> { ?x a :Interesting }.
> then software would be allowed to infer, from the fact that a telnet URI
> is involved, that there will be no defined contents.

I don't understand what you mean.  RDF does not improve understanding here.
If I were to log a telnet session with Melvyl (UC's library catalog), then
it would have quite meaningful contents.  I just won't be able to understand
them without knowing the proprietary command/response syntax.  It would also
be meaningful to log a conversation with the Web interface to Melvyl under
<http://www.cdlib.org/collections/>, which performs the same function on
the same database behind the scenes, but with slightly more understandable
client-server interactions (CGI form fields).

If it were true that a URI scheme defines the nature of a resource, then
it would be impossible to create a resource that is available through
more than one URI scheme.  We know that to be false.
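That observation can be made concrete with a sketch (the telnet host name and the catalogue object here are illustrative assumptions; only the cdlib URL appears in the text above):

```python
# Sketch: one resource reachable through more than one URI scheme.
# The scheme names the access mechanism used to reach the resource,
# not the nature of the thing reached.

catalogue = {"name": "Melvyl", "kind": "library catalogue"}  # one resource

uri_map = {
    "telnet://melvyl.example.edu": catalogue,            # hypothetical telnet URI
    "http://www.cdlib.org/collections/": catalogue,      # Web interface
}

# Both URIs dereference to the very same resource object.
assert uri_map["telnet://melvyl.example.edu"] is uri_map["http://www.cdlib.org/collections/"]
```

If the scheme determined the resource's nature, the two entries above could not denote the same thing; since they can, the premise fails.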

> Similarly, if tn:logOfPort related a session log to the port of the
> server for that session, { ?x tn:logOfPort <http://www.w3.org/foo> }
> will be known not to match, without retrieving <http://www.w3.org/foo>,
> because it knows that logOfPort takes as object something which is in
> a class disjoint with the range of http.

That makes absolutely no sense to me.

> These are useful rules.  They connect with common sense understandings
> and also, by being architectural invariants, they provide stable bases
> for building more efficient systems.

Those are not architectural invariants.  They are incomplete statements
in RDF that have no meaning to the Web architecture.

> Why do you want to extend the range of http URI dereference to cars?
> 
> plate://us/ma/123yui  could still be defined to identify cars - I don't
> object to other URI schemes identifying cars.  uuid schemes can,
> as far as I know, now.

You want to create a URI scheme that is specific to the implementation of
the type of resources to which it points?  That goes against everything you
have said to me in the past about URI.

> http2://www.w3.org/foo could be defined to have return codes
> "Here is the contents of x which is a document" and "Here is some
> information about x" so that as a superset of HTTP it could provide
> a space in which abstract objects existed.

Why should I create two separate namespaces just because there is a
desire to identify two separate resources?  The only thing needed to
relate resource X to some other resource "stuff about X" is an external
link.  Metadata.
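The external-link idea can be sketched as follows (the URIs and the relation name are illustrative assumptions, not anything HTTP defines):

```python
# Sketch: metadata as an ordinary typed link between two resources,
# rather than a second URI scheme or namespace.

resource = "http://example.org/car/123"              # resource X
metadata = "http://example.org/car/123/description"  # "stuff about X"

# One triple relates the two; both live in the same http namespace.
links = [(resource, "describedBy", metadata)]

for subject, relation, obj in links:
    print(f"{subject} --{relation}--> {obj}")
```

Nothing about the relationship requires the two resources to be reached through different schemes; the link itself carries the "aboutness".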

> But http1.1 does not have that, and that fact is a useful one to
> record, I think.

Metadata is defined by the relationships between resources, not by an
attribute of the access mechanism.  XML doesn't become any more or less
powerful when it is delivered via HTTP versus e-mail.
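The HTTP-versus-e-mail point is easy to demonstrate: parsing depends only on the bytes of the representation, not on the access mechanism that delivered them. A minimal sketch with an invented payload:

```python
# Sketch: the same XML is equally meaningful however it is delivered.
import xml.etree.ElementTree as ET

payload = b"<note><to>timbl</to></note>"

via_http = ET.fromstring(payload)  # imagine these bytes arrived over HTTP
via_mail = ET.fromstring(payload)  # ...or as an e-mail attachment

# Identical bytes, identical parse result, regardless of transport.
assert via_http.find("to").text == via_mail.find("to").text
```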

> In this way, Resource in URI and Resource in RDF can be similarly anything,
> but we have an important concept of a <part of the Web information space>
> <document?> or whatever.

Which is false.  RDF is broken if it cannot describe the Web in its entirety.
The whole point in changing the model of the Web from a collection of
Universal Document Identifiers that can be used to retrieve documents to
one where Uniform Resource Identifiers are used to identify resources that
can be accessed in the form of a representation through a uniform interface
was so that we could accurately model how people have been actively using
the Web since computational hypertext was introduced in 1993.

....Roy
Received on Tuesday, 19 March 2002 19:59:53 GMT
