Re: [rdfmsQnameUriMapping-6] Algorithm for creating a URI from a QName in RDF Model?

On 2002-05-30 12:07, "ext Graham Klyne" <GK@NineByNine.org> wrote:

> At 09:15 AM 5/30/02 +0300, Patrick Stickler wrote:
>> On 2002-05-29 15:05, "ext Graham Klyne" <GK@ninebynine.org> wrote:
>> 
>>> .. (e.g. I have a
>>> convention in my web space that http://id.ninebynine.org/ is used for such
>>> abstract identifiers.  I think it helps to clarify the intent, but it
>>> doesn't make all the problems go away, such as my second question above.)
>> 
>> Tut, tut, Graham ;-)
>> 
>> How is this any different from voc://ninebynine.org/... except that the
>> convention is not standardized and the semantics that the URI denotes an
>> abstract resource is specific/proprietary to your own practices?
> 
> Exactly that!
> 
> It's not standard, and it's something that I as owner of the domain space
> choose to do.
> 
> It does nothing to change the universal elements of interpretation of a URI.

But a human (or software agent) would have to understand the prose
provided on your site to know that a 404 response was not actually
a true error, but that the resource is simply not web-accessible.

I.e. a client that receives an http: URI expects it to resolve to
something. It has a traditional interpretation as denoting a web
accessible resource. If the resource is e.g. the abstract concept
of "love", then anything that an HTTP server might return with a 2xx
response is misleading to the client, since the resource itself
could not be returned, only knowledge about that resource.

If having 'id.' as part of one of your URLs helps you, fine, but I
don't intend to try to understand all the internals of site-specific
URLs. I will only concern myself with (a) the semantics defined
for the specific URI scheme, or (b) knowledge defined about the
specific resource based on its otherwise opaque URI. (Note that
URI class taxonomies are an entirely separate issue.)

Thus, if we are to capture in the URI itself whether a given resource
is or is not web-accessible, it must be done with the URI scheme.
That's it. What is within the scope of the host domain is not
interesting or useful with regard to global architecture (and
specifically should not be).

>> This seems to conflict with your earlier expressed opinion that the URI
>> should not reflect itself whether the resource is or is not "on the web"
> 
> Er, no:  what I said was:
> [[
> (By which, I mean that I don't accept them as universal proposals:  I have
> no argument with their use as a convenient mechanism by you or any other
> developers. ...
> ]]
> 
> I might have added "domain owners".

Fair enough. Though what if you had a means to expose a portion
of the semantics of a URI scheme, one which would apply to all
instances of that scheme, and do so globally, and in RDF, such
that any application could obtain that specification and use it
to interpret any instance of that URI scheme?

Then, it would not be proprietary, but open and ultimately extensible.

I.e., just as folks publish DTDs, XML Schemas, RDF Schemas, etc.
to expose the structure and semantics of content models, so
one could publish the semantics of a URI scheme, allowing
all applications to interpret URIs of that scheme consistently
without having internal, native knowledge of the scheme itself.

So I can, e.g., define in RDF that URIs of the scheme voc: denote
non-web-accessible resources, and any application that is presented
with such a URI then knows that it is meaningless to try to dereference
that URI. All that is required is a standardized ontology for
expressing a basic level of semantics about URI schemes, sufficient
for most web agents' needs when interpreting URIs.
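
A minimal sketch of what I have in mind, in N3, using a purely
hypothetical 'uri scheme' vocabulary (the namespace, the terms, and
the urn:x-scheme: name for the scheme are all invented here for
illustration, not proposals):

   @prefix us: <http://example.org/2002/uri-scheme#> .

   # "every URI of this scheme denotes a non-web-accessible resource"
   <urn:x-scheme:voc>
       a                  us:URIScheme ;
       us:schemeName      "voc" ;
       us:instancesDenote us:NonWebAccessibleResource .

Any agent that understands that one small ontology then knows, for
every voc: URI it will ever encounter, that attempting to dereference
it is pointless.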

On the other hand, in the case of http: URIs which denote resources
that are not web-accessible, such knowledge would need to be defined
for each and every instance, which is a huge maintenance burden. Some
may prefer or require that level of resolution, but I think most
folks will prefer to enjoy the economy of URI scheme-wide semantics.
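
Compare the scheme-wide statement above with having to assert, per
resource and using the same hypothetical vocabulary (Graham's
id.ninebynine.org space and the "love" example are borrowed purely
for illustration), something like:

   <http://id.ninebynine.org/love>
       a us:NonWebAccessibleResource .

and then repeating that for every such resource on every such site.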

Still, having metadata-specific response codes and methods would
work in either case. How the server determines the accessibility
of a resource remains open. It could be based on the URI scheme or
on per-resource knowledge.
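
For example, a purely hypothetical exchange (the method name "MGET",
the status line, and the media type are all invented here for
illustration; in practice a dedicated metadata-specific response
code might well be preferable to a plain 200):

   MGET /love HTTP/1.1
   Host: id.ninebynine.org

   HTTP/1.1 200 OK
   Content-Type: application/rdf+xml

   ... an RDF description *about* the resource, not the resource ...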

Which method one chooses depends on the nature of the resource
and the abilities/needs of the creator.

Rather than hundreds or thousands of sites all employing their own
proprietary URI tricks to reflect whether the denoted resource
is or is not web-accessible, wouldn't it be better to (a) use
a smaller set of standardized URI schemes to reflect such
distinctions and (b) express such semantics for those schemes
in RDF, so that applications need not maintain such knowledge natively?

After all, your approach suggests we could forgo schemes such
as mailto: or ftp: in favor of site-specific syntactic conventions,
such as http://mailto.nokia.com/patrick_stickler, etc., because
the site owner has said somewhere what the nature of such
resources is.

There is a clear tradition of distinguishing the accessibility
characteristics of URIs based on the URI scheme, so why not also
the non-accessibility characteristics? 'http:' means resolvable
via HTTP. 'voc:' means never resolvable or accessible. Simple,
clear, concise, consistent, and (ideally) standardized.

Eh?

Regards,

Patrick

--
               
Patrick Stickler              Phone: +358 50 483 9453
Senior Research Scientist     Fax:   +358 7180 35409
Nokia Research Center         Email: patrick.stickler@nokia.com
