Re: New question: distinguished status of http:?

IMHO

Abstract: Having one protocol widely deployed at one time is not a bug.

----- Original Message -----
From: "Graham Klyne" <GK@NineByNine.org>
To: <www-tag@w3.org>
Sent: Tuesday, February 19, 2002 6:17 AM
Subject: New question: distinguished status of http:?


> In reading TAG-team responses on namespace and content-type issues, I note
> that a preference for HTTP: scheme URIs is expressed.  The rationale is
> clear enough -- that the URI should be dereferencable -- but it does raise
> a question in my mind.
>
> Does TAG consider that the HTTP: scheme has a distinguished status among
> URI schemes?  For example, a dereferencable URI might be FTP: or LDAP: or a
> scheme indicating one of a number of other deployed protocols for
> retrieving information.

Let me answer personally.   It is very important for web architecture that
we should *not* be limited to one protocol.  That said, the cost of bringing
in a new URI scheme is one of the greatest possible costs in the whole
design.  URIs are something every agent is expected to be able to
understand; when one doesn't, the web fragments into inaccessible areas.
New MIME types, new namespaces, and new extension headers can be slipped in
at much lower cost.
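
(To make that cost concrete, here is a rough sketch in Python -- the
handler table and its stubs are hypothetical, purely for illustration.
Every client carries something like this dispatch in some form, and a
scheme missing from the table is not gracefully degraded: it is simply
unreachable.)

    import urllib.parse

    # Hypothetical handler stubs, standing in for real protocol code.
    HANDLERS = {
        "http": lambda uri: "GET over HTTP: " + uri,
        "ftp":  lambda uri: "RETR over FTP: " + uri,
    }

    def dereference(uri):
        scheme = urllib.parse.urlsplit(uri).scheme
        handler = HANDLERS.get(scheme)
        if handler is None:
            # The web fragments here: no handler means no access at all.
            raise ValueError("no handler for scheme: " + scheme)
        return handler(uri)

    print(dereference("http://example.org/"))  # handled
    # dereference("tv:whatever") would raise -- inaccessible territory

Every new scheme adds a row to that table in every agent on the planet;
a new MIME type or namespace only adds data.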

So reinventing HTTP would be a bad idea.  Making a new protocol which is in
some way a whole lot better than HTTP is conceivable; I would expect such a
protocol to be more general, not more specific.  Inventing more specific
protocols for given applications, when what is really happening is that
data is being read from, put to, or posted to things, is clearly a bad
idea.   Looking at your examples,

- FTP was a legacy protocol. Its introduction into the URI scheme was
essential to bootstrap the web with all the existing FTP information.
However, in most areas HTTP has outgrown it, so using FTP for new projects
would mean losing things like caching, which you would have gotten with
HTTP.

- LDAP  I haven't studied in detail, but I imagine that the same
  functionality could have been designed around HTTP.  If a directory
  system were to be designed from scratch, it could be put up in a moment
  as a web of RDF/XML documents.  For example, one could create a new DNS
  easily, piggybacking on HTTP's expiration date handling and so on (see
  the sketch after this list).
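
Here is a rough sketch of that LDAP point (the host, path, and naming
convention are invented for the example): a directory lookup becomes a
plain HTTP GET of an RDF/XML document, and expiry-based caching comes
along free with the protocol.

    import urllib.request

    def lookup(name):
        # Hypothetical record URL, for illustration only.
        url = "http://directory.example.org/people/" + name + ".rdf"
        with urllib.request.urlopen(url) as resp:
            expires = resp.headers.get("Expires")  # standard HTTP cache hint
            return resp.read(), expires            # RDF/XML record + expiry

Any ordinary HTTP cache between client and server may hold the record
until it expires, which is just the behaviour a new DNS would need.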

Another example is WebDAV.  Dan Connolly pointed out recently (@@where?)
that, alas, WebDAV servers which serve directory information using PROPFIND
are actually a loss to the web, in that information which could have had a
URL does not have one.  This is a problem of new methods in HTTP, but you
get the same syndrome with new protocols.
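
The contrast is easy to see on the wire.  A rough sketch, with a
hypothetical server and paths:

    import http.client

    conn = http.client.HTTPConnection("dav.example.org")  # hypothetical

    # WebDAV style: the members come back inside a 207 Multi-Status body;
    # the listing as such never gets a URL of its own.
    conn.request("PROPFIND", "/files/", headers={"Depth": "1"})
    propfind_xml = conn.getresponse().read()

    # Plain-web style: the listing is itself a resource with a URL, so it
    # can be linked to, bookmarked, and cached like any other document.
    conn.request("GET", "/files/")
    listing = conn.getresponse().read()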

If folks go around making new protocols for each new application, then
either the web will fragment, or we will have huge client bloat as each
device (cellphone, wristwatch, camera, etc) has to have the latest
set of client protocol modules.  Clients need to get smaller, not larger,
at this time.


> I submit that one architectural benefit of the IETF DDDS work [1] is that
> it allows separation of naming authority concerns from URI dereferencing
> concerns,

You can't separate those concerns.   When you reinvent a system of
delegation of ownership of URI space, you have to make a lookup system,
because you will want to be able to look up definitive information about
the names.  The lookup system will have to mirror the delegation system in
some way, because it has to be definitive -- it has to be controlled at
each point by the relevant authority.  So you end up reinventing DNS.  In
many cases, the underlying problem is a social one you wanted to fix
anyway, such as the lack of persistence of domain names or servers' reuse
of URIs.
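
A toy sketch of why (the delegation table is invented for the example):
any definitive lookup ends up walking the same tree along which authority
over the names was delegated, and that tree is exactly the shape of DNS.

    # Hypothetical delegation records: who owns each region of name space.
    DELEGATION = {
        (): "root-authority",
        ("org",): "org-authority",
        ("example", "org"): "example-authority",
    }

    def resolve(name):
        labels = name.split(".")
        # Walk from the root downwards; each step must be answered by the
        # authority to which that region of the name space was delegated.
        for i in range(len(labels), -1, -1):
            suffix = tuple(labels[i:])
            authority = DELEGATION.get(suffix)
            if authority is not None:
                print("ask", authority, "about", ".".join(suffix) or "(root)")

    resolve("www.example.org")
    # ask root-authority about (root)
    # ask org-authority about org
    # ask example-authority about example.org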


(None of the references below gave me anything other than 404, alas, and
the IETF announcement of that work also has dead links.   The lack of
attention to URI persistence in the IETF is largely a spin-off of the
notion that a bigger, better naming scheme is on the way and UR"L"s are
just locators and so don't matter.    But in fact this is just another
"just add another level of indirection" syndrome (Henry Thompson:
"anything you can do, I can do meta!").)

See also http://www.w3.org/DesignIssues/NameMyth.html

> thus avoiding the creation of a special /primus inter pares/
> status for the HTTP: URI scheme simply because the HTTP protocol happens to
> be one of the most widely deployed at this time.


Having one protocol widely deployed at one time is not a bug.  It is a huge
benefit.  It is a standard.  We are unbelievably lucky that in all the
maelstrom of development we did in fact end up with one widely-deployed
protocol.  Otherwise the web would be quite impractical.   I only wish that
we had managed to do the same thing for appliance power plugs.


> #g
> --
>
> [1] http://search.ietf.org/internet-drafts/draft-ietf-urn-ddds-toc-01.txt
>      http://search.ietf.org/internet-drafts/draft-ietf-urn-ddds-05.txt
>      http://search.ietf.org/internet-drafts/draft-ietf-urn-dns-ddds-database-07.txt
>      http://search.ietf.org/internet-drafts/draft-ietf-urn-uri-res-ddds-05.txt
>
>
>
>
> ------------
> Graham Klyne
> GK@NineByNine.org
>

Received on Wednesday, 27 February 2002 11:05:56 UTC