
Re: New question: distinguished status of http:?

From: Tim Berners-Lee <timbl@w3.org>
Date: Mon, 4 Mar 2002 15:03:29 -0500
Message-ID: <019e01c1c3b7$a79ab510$84001d12@w3.org>
To: "Patrick Stickler" <patrick.stickler@nokia.com>, "WWW TAG" <www-tag@w3.org>, "Elliotte Rusty Harold" <elharo@metalab.unc.edu>

----- Original Message -----
From: "Elliotte Rusty Harold" <elharo@metalab.unc.edu>
To: "Patrick Stickler" <patrick.stickler@nokia.com>; "WWW TAG"
Sent: Friday, March 01, 2002 10:15 AM
Subject: Re: New question: distinguished status of http:?

> At 7:03 PM +0100 2/28/02, Patrick Stickler wrote:
> >Hmmmm... this seems to suggest to me that there would be utility in a
> >standardized means by which applications could obtain knowledge about
> >what URI schemes mean, in some standardized manner and according to
> >some standardized ontologies, to determine what is expected of them.
> >
> This reminds me of Java protocol handlers. Of course, protocol handlers:
> 1. Were specific to Java
> 2. Never really worked in the first place
> >Would not the Semantic Web offer a means of extensibility for
> >URI scheme semantics so that agents need not know, as part of their
> >static design, about all possible URI schemes?
> >
> >We provide auxiliary, supporting knowledge for XML instances that
> >say how to validate them, display them, transform them, etc. so that
> >applications need not understand natively what the significance
> >of particular markup vocabularies are. Why then would it be
> >unreasonable to provide auxiliary knowledge about URI schemes so that
> >applications could be similarly informed about what a given URI means,
> >even if it has never seen one of that scheme before?

We do this sort of thing at other levels, of course.  But
can we do it at the URI level?  The problem is that new
URI schemes, when they are not just a reinvention
(rather as HTTP was a sort of reinvention of FTP),
introduce something conceptually new.

Imagine you had a general API for URIs.  It would
have some methods supported by http: and ftp:
subclasses.  But mailto: is quite different.
http: and ftp: spaces contain documents which can be rendered,
or provided to the user.  The mailto: space contains abstract
mailboxes - you can send to them, list messages you know came
from or went to them, and list other information you have
gleaned about them, but you can't display them as such.

The md5: space contains everything but can never be
dereferenced.  So it wouldn't support the same
interface as HTTP, although it would have something in common.
The telnet: space is full of ports for possible interactive
conversations - but like mailto:, you can't display it as such.
So in general, new URI schemes are really only
interesting if they have quite new and different properties,
and in that case the operating system will probably not be able
to use the new functionality without a serious change.
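The mismatch can be made concrete with a small sketch (Python here purely
for illustration - every class and method name below is invented, not any
real API): a "general API" for URIs ends up able to rely only on what all
schemes share, with rendering, sending and so on pushed out into
capabilities that some subclasses simply lack.

```python
from abc import ABC

class URIScheme(ABC):
    """Base class: every scheme can at least name things."""
    def __init__(self, uri: str):
        self.uri = uri

class Dereferenceable:
    """Mixin for schemes whose space contains renderable documents."""
    def fetch(self) -> bytes:
        raise NotImplementedError

class HTTPScheme(URIScheme, Dereferenceable):
    def fetch(self) -> bytes:
        return b"<html>...</html>"  # placeholder document

class MailtoScheme(URIScheme):
    """Abstract mailboxes: you can send to them, not render them."""
    def send(self, message: str) -> None:
        print(f"queued message to {self.uri}")

class MD5Scheme(URIScheme):
    """Names everything, dereferences nothing."""

# The only thing a generic client can do is ask about capabilities:
def can_render(scheme: URIScheme) -> bool:
    return isinstance(scheme, Dereferenceable)
```

The point of the sketch is that the interesting behaviour lives in the
capabilities a scheme does or does not have, not in a common superclass.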


> This is a very interesting idea. If this could be defined through an
> XML description of the protocol, as opposed to compiled code, then it
> might succeed where protocol handlers failed. As a proof of concept I
> wonder if it's possible to design an XML format sufficiently general
> that it could describe all the information that a single client would
> need to implement the following widely used but diverse protocols:
> 1. http  (TCP, request-response, single socket)
> 2. ftp  (TCP, multiple sockets, bidirectional, client and server)
> 3. telnet (TCP, interactive, user input required after connection)
> 4. mailto  (TCP, interactive, user input required before connection)
> 5. rtspu (UDP)
> 6. file  (no sockets at all)
> That's a pretty diverse batch. If you can cover those six, I'd be
> willing to bet you could cover most other URLs and probably URIs too.
> However, remember that for this to work the client would have to have
> no preexisting knowledge of any of the protocols.
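As a purely hypothetical sketch of the sort of declarative description in
question (no such format exists; the field names below are invented, and
Python data is used in place of an XML syntax just to keep it concrete), a
client with no built-in knowledge of any protocol could still answer
coarse questions from the description alone:

```python
# Invented, illustrative descriptions of each protocol's transport shape.
PROTOCOLS = {
    "http":   {"transport": "tcp", "style": "request-response",  "sockets": 1},
    "ftp":    {"transport": "tcp", "style": "bidirectional",     "sockets": 2},
    "telnet": {"transport": "tcp", "style": "interactive",       "sockets": 1},
    "mailto": {"transport": "tcp", "style": "store-and-forward", "sockets": 1},
    "rtspu":  {"transport": "udp", "style": "streaming",         "sockets": 1},
    "file":   {"transport": None,  "style": "local",             "sockets": 0},
}

def needs_network(scheme: str) -> bool:
    """Answerable from the declarative data alone, with no
    compiled-in knowledge of the individual protocols."""
    return PROTOCOLS[scheme]["transport"] is not None

def is_interactive(scheme: str) -> bool:
    return PROTOCOLS[scheme]["style"] == "interactive"
```

Whether such a description could be made rich enough to actually drive a
wire-level implementation - rather than just classify protocols - is
exactly the open question.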

(Check out Dan Connolly's work with Larch to define HTTP
in as declarative a form as possible.)

> +-----------------------+------------------------+-------------------+
> | Elliotte Rusty Harold | elharo@metalab.unc.edu | Writer/Programmer |
> +-----------------------+------------------------+-------------------+
> |          The XML Bible, 2nd Edition (Hungry Minds, 2001)           |
> |             http://www.cafeconleche.org/books/bible2/              |
> |   http://www.amazon.com/exec/obidos/ISBN=0764547607/cafeaulaitA/   |
> +----------------------------------+---------------------------------+
> |  Read Cafe au Lait for Java News:  http://www.cafeaulait.org/      |
> |  Read Cafe con Leche for XML News: http://www.cafeconleche.org/    |
> +----------------------------------+---------------------------------+
Received on Monday, 4 March 2002 15:04:40 UTC
