I don't think that your analogy is quite right. The problem is not that two
different URIs address the same resource. The problem is that third parties
are encouraged to make general-purpose software that pulls apart *any* URI
and infers something about it on the basis of whether it matches that
pattern or not. That software will make the wrong inference if it encounters
a legacy URI that just happens to match the pattern.
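To make that failure mode concrete, here is a minimal sketch (in Python,
with an invented heuristic and an invented example URI) of the kind of
inference such general-purpose software might draw:

    from urllib.parse import urlparse

    def looks_like_extended_name(uri):
        # Hypothetical heuristic: treat any URI whose first path segment
        # contains a colon (http://host/something:/...) as carrying the
        # special extension semantics.
        first_segment = urlparse(uri).path.lstrip("/").split("/", 1)[0]
        return ":" in first_segment

    # A legacy URI that merely happens to contain a colon is misread:
    print(looks_like_extended_name("http://example.org/tag:2004/report"))  # True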
That said:
* URIs of the form http://.../something:/... are quite rare. I don't
remember having seen one in the wild anywhere.
* If there is demand out there for an in-URI trigger that a URI is "not
just a simple HTTP URI" then I don't personally see that it would be bad or
wrong for the powers that be to standardize a general purpose
naming-extension mechanism.
* People who pretend that there is such a mechanism when there is not ARE
in danger of doing real harm, but the devil is in the details. For example,
if the magic-triggering string were a UUID, then the chances of an accidental
match would be infinitesimal (see the first sketch after this list).
* Although a general-purpose naming-extension mechanism might be in some
sense backwards incompatible (adding meaning to existing URIs), it seems to
me that there are companies out there with databases of URIs that could tell
us the likely scale of the breakage, e.g. one in a million indexed URIs? A
rough survey like the second sketch after this list would give a first
estimate.
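On the UUID point, a sketch of what a UUID-based trigger might look like.
The marker string and helper name here are purely invented for illustration:

    # Invented illustration: a marker minted once (e.g. with uuid4) and then
    # fixed forever by whoever standardizes the extension mechanism.
    EXTENSION_MARKER = "ext-9f1c2b4e-7d3a-4c55-9a2f-0b6e8d1c3f7a"

    def is_extended(uri):
        # The odds that a pre-existing URI contains this 36-character
        # random string by coincidence are negligible.
        return EXTENSION_MARKER in uri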
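And on the last point, a rough sketch of the kind of survey such a database
would allow, assuming a plain text file of URIs, one per line (the file name
and the trigger pattern are assumptions, not anything standardized):

    from urllib.parse import urlparse

    def matches_pattern(uri):
        # Same hypothetical trigger as in the first sketch: a colon inside
        # the first path segment, as in http://host/something:/...
        first_segment = urlparse(uri).path.lstrip("/").split("/", 1)[0]
        return ":" in first_segment

    total = hits = 0
    with open("indexed_uris.txt") as f:  # invented corpus file
        for line in f:
            total += 1
            if matches_pattern(line.strip()):
                hits += 1

    print("would-break rate: %d of %d (%.6f%%)"
          % (hits, total, 100.0 * hits / max(total, 1)))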
The situation is not much different from the risk of clashing attributes in
XML which gave rise to XML namespaces.
Paul Prescod