Re: UR* schemes [was: 3rd IETF/W3C coordination call...]

-----Original Message-----
From: Larry Masinter <lmm@acm.org>
To: Tim Berners-Lee <timbl@w3.org>
Date: Thursday, November 18, 1999 11:21 AM
Subject: FW: UR* schemes [was: 3rd IETF/W3C coordination call...]


>I think this is a great message, and might be suitable
>as a counter to the architectural note on the W3C sites.
>
>-----Original Message-----
>From: uri-request@w3.org [mailto:uri-request@w3.org] On Behalf Of Keith
>Moore
>Sent: Wednesday, November 17, 1999 2:34 PM
>To: Daniel LaLiberte
>Cc: Larry Masinter; Keith Moore; Dan Connolly; Patrik Fältström; Philipp
>Hoschka; w3t-arch@w3.org; t-and-s@w3.org; w3c-policy@apps.ietf.org
>Subject: Re: UR* schemes [was: 3rd IETF/W3C coordination call...]
>
>
>> I believe Tim's argument against more central registries is that it
>> would be even better to have no central registries.  We have one central
>> registry, DNS, and that should be enough.  We have problems enough with
>> that one registry.
>
>No, I don't think so.
>
>The reason DNS has been a problem is arguably that there is too much
>control in one place - so one particular enterprise (NSI) has been able
>to unfairly restrict access to the registry and to demand tribute from
>those using it.  ICANN might help that situation (only time will tell) -
>but it's been incredibly difficult to get it going, largely due to
>opposition from NSI and NSI-funded loonies - partially because NSI
>could use the tribute money to build its war chest.

You tell an awful story of a public trust being given to a commercial
entity and commercially exploited.  But the fact that we (the
community) have managed this badly is no reason to double the problem.
Let us do our utmost not to wring our hands over NSI's control, but to
ensure, as speedily and effectively as possible, that the job is done
well under ICANN.

> Imagine the situation if NSI had control
>over all of the ICANN registries because they were all under the DNS root.


Now this perverts the argument.  NSI may have "control" over the IETF
in that it can charge the IETF $100 for the use of the name
"ietf.org".  NSI could in principle (though at the risk of a bloody
revolution) try deleting ietf.org if the IETF didn't pay, or simply
out of malice or incompetence; but these are unlikely scenarios.
Otherwise, NSI has no control over what the IETF does with that space.
The IETF defines what document is identified by
http://www.ietf.org/rfc/rfc1234.txt

The IETF can, without any recourse to NSI or ICANN, set up (say) a
MIME type registry by defining

http://www.w3.org/mime-types/application/xml

to be a document defining the MIME type application/xml.

This uses the DNS root in a very minor way, giving all control of
content to the IETF.  It has the advantage of using a space which is
everywhere dereferenceable using a well-known protocol whose evolution
is under IETF control.
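
As a rough sketch of how little machinery this needs (in Python, and
purely illustrative - nothing is actually served at this registry URL):

    # Dereference a hypothetical MIME-type registry entry over HTTP.
    # The registry base URL is illustrative only.
    import urllib.request

    REGISTRY_BASE = "http://www.w3.org/mime-types/"  # hypothetical

    def fetch_type_definition(media_type):
        """Fetch the document defining a type, e.g. 'application/xml'."""
        with urllib.request.urlopen(REGISTRY_BASE + media_type) as resp:
            return resp.read()

    # doc = fetch_type_definition("application/xml")  # if it existed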

>Furthermore, it's a feature of many registries that they use compact
>encodings (e.g. to minimize payload impact) and/or human-meaningful
>strings.

You can of course define Content-type: to take as a parameter a URL
relative to http://www.w3.org/mime-types/, which leaves it as a
first-class object (anyone can play with one) with a short form for
those types which have passed muster at the IETF.
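
A sketch of the resolution rule this implies (again in Python; the
registry base is hypothetical, and no such Content-type parameter is
standardized):

    # Resolve a media type name to the URI of its defining document.
    # A short registered form resolves against the hypothetical
    # registry base; a full URI (anyone's experiment) passes through.
    from urllib.parse import urljoin

    REGISTRY_BASE = "http://www.w3.org/mime-types/"  # hypothetical

    def type_definition_uri(type_name):
        return urljoin(REGISTRY_BASE, type_name)

    assert (type_definition_uri("application/xml")
            == "http://www.w3.org/mime-types/application/xml")
    assert (type_definition_uri("http://example.com/ns/my-type")
            == "http://example.com/ns/my-type")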

> I see absolutely no value and much harm in trying to use
>DNS names for port numbers, autonomous system numbers, DNS query
>types, etc.


I wouldn't suggest DNS names by themselves, but http space is much
more appropriate, especially when populated with XML, which allows
much more flexible control.

>And to encode header field names, URI prefixes, etc. as DNS names strikes
>me as a very dubious compromise - it's essentially saying that it's
>more important to keep the protocol registries absolutely open (or more
>accurately, to put all of the control in one place)

Here you confuse the control NSI has in registering the "ietf.org"
name with the control the IETF has over its contents.  This is a bit
like saying that the state of Massachusetts controls my car.  I used
their space to register it, but I control its contents and direction!

> and to encourage
>vendor-specific protocol extensions, than it is to make the prefixes
>terse, meaningful to humans, or transcribable, or to try to encourage
>review or standardization of extensions.


The balance between experimentation and standardization is a crucial
part of our world.  The methods we have used to strike a compromise
until now have involved "x-" prefixes and either "ignore what you
don't understand" or "panic at what you don't understand".  Neither of
these is strong enough for the future.  We must have private
experiments (or local use) and standards, and have both well defined.
For this, the languages must be first-class objects, so we have free
expression coupled with an explicit system of review and endorsement.

>IETF working groups have a lot of latitude when choosing extension
>mechanisms for new protocols - see RFC 2434.   Different groups
>choose different mechanisms for different protocols after weighing
>the advantages and disadvantages of each.  This seems entirely appropriate.


Each group then sets up a standards process for its area of interest.
One should recognise, however, that the developers of a spec which
references a list of specs should not assume that their own process
will be the only one which will apply to those references.  W3C's XML
references a set of namespaces.  There is no assumption that all
namespaces must pass through the W3C to achieve standards status.  To
make such an assumption would put a bottleneck on development.  So a
namespace is identified by a URI.  This doesn't stop us setting up,
later, a "better namespaces bureau" which endorses namespaces which
have gone through some general standards track.
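
Concretely, an XML namespace is nothing more than a URI attached to
names - no registry is consulted when a document is processed.  A
small Python illustration (the namespace URI here is made up):

    # A namespace is just a URI used as part of a name; no lookup or
    # registry is involved.  The namespace URI below is made up.
    import xml.etree.ElementTree as ET

    doc = """<doc xmlns:m="http://example.com/ns/mine">
               <m:note>hello</m:note>
             </doc>"""
    root = ET.fromstring(doc)
    # ElementTree expands the prefix to {uri}localname:
    assert root[0].tag == "{http://example.com/ns/mine}note"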

>IIRC we discussed this topic in a previous teleconference, so I don't
>think we should spend time on it again.

Pointer?

>> Sure, but do URNs really offer anything new, or is it only an illusion
>> of something new?
>
>This has been discussed many times.   URNs are a notational convention.
>They were deliberately made to look different than other kinds of URIs
>in the hope that users would associate somewhat different semantics
>with URNs than with other URIs.

There is an imputation of "semantics" on other URIs which is a
political statement, not a technical one - a derogation of other URIs
to leave a niche for URNs.  In so doing, the set of identifiers is
split into two, and hence the web is split into two.

> This is purely a human-interface issue.

Well, no: when you make a syntax deliberately different just to make a
point, you also make a technically messy system.

>We agree that it is possible to build protocol engines that allow
>URLs to be stable over a long time,

It is not an "engine" so much as an institutional commitment by the
party owning the URI - and that holds for any URI, http: or some "URN"
scheme.

>but the point of URNs is to make
>that stability (or at least, the intent of stability) a visible part
>of the identifier.

You cannot make stability out of syntax.

>This could have been done in many ways, but after
>much deliberation the URN working group chose a particular syntax
>for URNs and particular rules for assigning them to ensure (if those
>rules are followed) uniqueness and stability.

The rules can be applied to URIs just as well.  You cannot enforce
uniqueness with syntax either.

> The work is done,
>the decisions have been made, it's now up for users and implementors
>to decide whether they are sufficiently valuable to use.

>
>I will however note that in relation to the first topic, the URN group
>worked very hard to avoid having a central point of control over URN
>space - especially over URN resolution - while still preventing
>conflicts in URN assignment, realizing that such control would be
>an impediment both to deployment of URNs and to their long-term stability.
>You might or might not believe that they were successful - but this
>was one of the tough problems that they tried to tackle.


Let those who would tackle these problems do so for their own web
sites, establishing a persistence policy and contractual commitments
which allow URIs to be trusted as persistent.

Trying to sum up my concerns about the whole URN development:

1. It is based on society's current failure to support the facilities
   for persistence (including soft redirection) in HTTP.
2. The lack of a URN dereferencing protocol may lead to a reinvention
   of HTTP, with consequent unnecessary and serious complication of
   the web.
3. The suggestion that URNs will solve the persistence problem is
   causing serious archive maintainers to ignore their duty of
   persistence while "waiting for URNs", and thus making the problem
   much worse.
4. The actual registry of URN names is in fact more or less at the
   "/etc/hosts" stage, and has yet to reinvent the hierarchical
   structure of DNS and the social structure of ICANN (see
   http://www.ietf.org/rfc/rfc2141.txt).
5. In this "waiting", web sites fail to learn to use HTTP redirects
   properly (e.g. http://www.isi.edu/div7/iana/) and browsers fail to
   distinguish a "found" lookup from a "moved" redirection, making the
   community's understanding of HTTP semantics worse (see the sketch
   below).
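
A rough sketch of the distinction in point 5 (in Python; the code and
messages are illustrative, not a prescription for browser behaviour):

    # A 301 ("moved permanently") tells the client the resource has a
    # new URI and stored links should be updated; a 302 ("found") says
    # the original URI remains correct and should be kept.
    import urllib.request

    class RedirectClassifier(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            if code == 301:
                print("moved: update stored links to", newurl)
            elif code == 302:
                print("found at", newurl, "- keep the original URI")
            return super().redirect_request(req, fp, code, msg,
                                            headers, newurl)

    opener = urllib.request.build_opener(RedirectClassifier())
    # opener.open("http://www.isi.edu/div7/iana/")  # illustrative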

Tim

>Keith

Received on Wednesday, 1 December 1999 17:06:43 UTC