
Re: URIs don't force behavior [was: Why are relative NS identifiers used?]

From: Al Gilman <asgilman@iamdigex.net>
Date: Sat, 20 May 2000 12:58:12 -0500
Message-Id: <Version.32.20000520114008.03fecf00@pop.iamdigex.net>
To: <xml-uri@w3.org>
At 03:45 AM 2000-05-20 -0400, Tim Berners-Lee wrote:
>
>To give something a URI is not to force anything.  It is the minimum
>requirement for that thing to be part of the web.  Much else follows.
>

Ok, let me turn to bite the other hand.

Giving something a URI is not the minimum requirement for that thing to be
part of the web.  It is not a requirement, so it can't be the minimum
requirement.

You can make a type name part of the web by declaring it in the internal
subset of the DTD for a document, and then mounting that document so that it
is accessible via one or more services, each of which recognizes a different
URI as identifying a resource; a request for that resource is answered by an
entity body containing said document.

There is no URI that identifies the type name, however.  But the type is
part of the Web.  Not all names used in Web discourse are URIs.
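The scenario just described can be sketched concretely.  A minimal illustration in Python (using only the standard library; the document and the type name "note" are invented for the example): the document itself could be served under any number of URIs, yet the declared type name carries no URI of its own.

```python
from xml.dom import minidom

# A document whose internal DTD subset declares the element type "note".
# The document could be mounted behind any number of URIs, but the type
# name "note" is identified only by its declaration in this DTD.
doc = """<?xml version="1.0"?>
<!DOCTYPE note [
  <!ELEMENT note (#PCDATA)>
]>
<note>A name on the Web without a URI of its own.</note>"""

dom = minidom.parseString(doc)
root = dom.documentElement
print(root.tagName)       # the type name: "note"
print(root.namespaceURI)  # None -- no URI identifies this name
```

The parser is perfectly happy, and the name "note" is usable in Web discourse, without any URI ever identifying it.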

There would be a URI-based way to import that type into another document if
the Namespaces in XML Recommendation had said the obvious thing that
"should recovery of the namespace URI yield [an entity body contributing]
knowledge about the use of this name, then there is implied consent for
processors to process the name as if that knowledge was germane, i.e.
applied, to document components marked using this name."  But that smelled
of AI and they didn't go there.  They tried to make it clear they didn't
want to be obligated to go there [to the URI in the ns-attr].  In the
process the interaction of namespaces and schemas never got shaken down
before the namespaces document became a Rec., and it's all over but the
shouting.  Of course the shouting shows no signs of subsiding.

And not all references on the Web are properly as broad in their terms of
reference as "an arbitrary URI-reference."

There is an architectural theme or precedent to make off-page references
from web resources by means of URIs, and not to restrict what the entity
referred to can be at this point.  This is clearly not appropriate for
references to senior documents as regards the use of a namespace.  The
senior document must share token-building foundations with XML if it is to
tell you things about items which appear in XML markup as tokens, and more
specifically the tokens playing the roles of element type names and
attribute names.  There is an open and extensible class of resources which
meet this requirement, but it is not the base class of "just any resource
for which there is a URI."

The creators of processors for XML markup have a valid need for guarantees
as to what classes of surprises are _not_ to be found on recovering a
resource cited as backup in a namespace declaration.  We cannot force
recursive interpretation on the Web as the only logically conforming
implementation plan.

IMHO this is an entirely valid demand and should be met.  In attempting to
meet this demand (IMHO) the drafters put together some language that either
actually interferes with the use of namespaces to import knowledge into
documents, or appears to interfere.  The issue is the implementation of
partial understanding as it pertains to namespace importing.  There need
to be some reliable propositions about the imported names that can be
trusted without requiring more knowledge of the namespace than its ns-attr.
Then the door can be left open to arbitrary enrichment with more knowledge,
so long as the stable first-level [syntactic] propositions are not
disturbed in the process of elaborating the additional knowledge.
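One such reliable, purely syntactic proposition already exists in practice: a namespace-aware parser maps a prefixed name to a (namespace URI, local name) pair using nothing but the ns-attr, and never dereferences the URI to do so.  A minimal sketch in Python (the namespace URI is an invented example):

```python
import xml.etree.ElementTree as ET

# The parser pairs "ex:item" with its declared namespace URI purely
# lexically; nothing is fetched from http://example.org/ns, and nothing
# needs to be.  This pairing is the stable first-level proposition.
doc = '<root xmlns:ex="http://example.org/ns"><ex:item/></root>'
root = ET.fromstring(doc)
child = root[0]
print(child.tag)  # {http://example.org/ns}item
```

That expanded name is trustworthy with no knowledge of the namespace beyond the ns-attr; any further processing can enrich it, but should not disturb it.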


The simple proposition that "of course an XML [markup-names] namespace is
identified by a URI, just like everything else on the web" is at least as
unworkable as what we have now.

What we really need is a processing model that implements partial
understanding: a model of the process of refining the document model
through the incremental application of more and more knowledge resources.
There have to be stable views or invariants along these trajectories.
What matters is the connection between the syntactic processing of
namespaced names and their optional further processing, together with
which bets are _not_ off if you go ahead and process further.  "All bets
are off" is not an acceptable answer.

Lacking a well-wrought theory of how a) what you learn locally (from the
document where the namespace declaration occurs) and b) what you learn
remotely (i.e. from dereferencing documents governing the namespace)
interact, it is not surprising that people feel one is left with two
choices:  1) there is nothing to learn remotely, or 2) if you endeavor to
apply what you learn remotely, all bets are off.

It is indeed too bad that we had not explored the semantic side of the
linkage more when the syntax went to Rec.

On the other hand, there has been no evidence offered that the present
syntax can't be used to fill all requirements.  We may or may not need to
relax some stated or assumed semantic restrictions.  But we definitely need
to build (and sell to stakeholders such as those found on xml-dev) more
semantic infrastructure; so that XML will not be up against a glass ceiling
limiting the intelligence of its applications.

Al

PS:  One of the things that we are never going to overcome, it would seem,
is that the abstract concept of a URI is an _arcanum_, something that real
people generally scratch their heads about and say "Oh, yes... now what was
that?" no matter how often it comes up.

One of the things that I thought people had learned from the OO software
revolution was that people really do classify things in terms of what they
can do with them.  So a name by way of locating a GET opportunity is not
perceived as being the same sort of thing as a uuid: uniquifying mark.  To
the man in the street, and this includes silicon alley, they are apples and
oranges.  Yes, architecturally it is critically important that the strings
used for either follow the common rules in the URI syntax RFC to stay out
of one another's hair.  But that does not mean that our consumers view them
as the same sort of thing, or that they should.  One has only an
identity-confirmation method.  The other has an "elaborate, tell me more"
method.  That is a significant class distinction.

Namespaces are an "abstract type" of beast, in the ISO EXPRESS sense that
you will never find an instance that doesn't conform to some subtype with
more proper knowledge attached.  But so are URIs.  Every URI instance
belongs to some scheme, and you know more about it from its scheme
membership than you know just from its being a URI.
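The scheme-membership point can be seen with any URI parser; a small illustration in Python (the two URIs are arbitrary examples standing in for the "GET opportunity" and "uniquifying mark" classes above):

```python
from urllib.parse import urlparse

# Every URI belongs to a scheme, and the scheme tells you what you can
# do with it: "http" offers a GET ("elaborate, tell me more") method,
# while a "urn" is only a uniquifying mark with an identity test.
for uri in ("http://www.w3.org/",
            "urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"):
    print(urlparse(uri).scheme)  # http, then urn
```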

I took the "one class" approach in my "Just Call me URL" speech, which can
be found at

http://lists.w3.org/Archives/Public/uri/1997Oct/0006.html

But we really do need to be dealing at most times with more restrictive
modes of reference and classes of referents.  Abstraction requires an "is
this trip necessary" test, too.
Received on Saturday, 20 May 2000 12:47:21 UTC
