W3C home > Mailing lists > Public > www-rdf-interest@w3.org > March 2004


From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Wed, 10 Mar 2004 13:11:29 +0200
Message-Id: <ADFE2356-7283-11D8-964D-000A95EAFCEA@nokia.com>
Cc: www-rdf-interest@w3.org
To: "ext Phil Dawes" <pdawes@users.sourceforge.net>

On Mar 10, 2004, at 12:47, ext Phil Dawes wrote:

> Hi Patrick,
> (this is basically a re-wording of the previous mail to fit in with
> your responses)
> Patrick Stickler writes:
>> There are several arguments against that approach:
>> (1) it violates the rights of web authorities to control their own URI
>> space
> I'm not sure what you mean here. AFAICS Web authorities are still free
> to do what they like with their web spaces. The agent won't get any
> guarantees that the RULE will work, just as it doesn't if the server
> chooses to implement MGET to mean e.g. 'multiple-get'.

It has to do with standards mandating how web authorities must use
their URIs, presuming that every web authority whose URIs match the
pattern is using them to denote resource descriptions.

The RULE approach is like if the HTTP spec mandated that all resources
which resolve to HTML representations must be denoted by URIs ending
in '.html'.

Even with conneg, web authorities are free to do as they like, and
e.g. associate '.html' with image/jpeg files if they are so inclined.
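To make the point concrete, here is a hypothetical Apache sketch
(the directives themselves are real; the pairing is deliberately
perverse, and of course not recommended practice):

```apache
# Serve every resource whose URI ends in '.html' as a JPEG image.
# Nothing in web architecture forbids this; the suffix is opaque.
<FilesMatch "\.html$">
    ForceType image/jpeg
</FilesMatch>
```

The server is entirely within its rights to do this, which is
precisely why no standard can presume what a URI suffix means.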

Common practices and best practices are good, as they keep management
costs down, but good architectures remain agnostic about such details.

>> (2) it violates the principle of URI opacity
> Is this a real-world problem? robots.txt violates the principle of
> URI opacity, but still adds lots of value to the web.

And robots.txt is frequently faulted for exactly that, with
alternatives actively discussed.

In fact, now that you mention it, I see URIQA as an ideal replacement
for robots.txt, in that one can request a description of the web
authority's root URI, e.g. 'http://example.com', and receive a
description of that site, which can define crawler policies in
terms of RDF in a much more effective manner.
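To sketch what such an exchange might look like (MGET is the URIQA
method; the crawler-policy vocabulary under 'ex:' is hypothetical,
invented here purely for illustration):

```
MGET / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: application/rdf+xml

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.com/crawl-policy#">
  <!-- Description of the site itself, not of any one page -->
  <rdf:Description rdf:about="http://example.com">
    <ex:disallowPrefix>/private/</ex:disallowPrefix>
    <ex:crawlDelaySeconds>10</ex:crawlDelaySeconds>
  </rdf:Description>
</rdf:RDF>
```

Unlike robots.txt, such a description could be extended with arbitrary
vocabulary, and is obtained without presuming anything about the
authority's URI space beyond the root URI itself.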

>> (3) it violates the freedom of URI schemes to define their own syntax
> How - can't we just restrict this to the HTTP scheme?

Too restrictive and short-sighted, IMO. And in any case, one has
to consider the sum weight of the arguments, not just each
individually.

>> (4) it may not be possible to define any rule that will work for
>> arbitrary URIs
> So just do it for HTTP URIs.

Same comment as above.

>> (5) it is less convenient for implementors who don't want to posit
>> explicit, static descriptions
> I suspect it's easier than deploying a whole new MGET infrastructure
> with proxies, caches and servers.
> Most webservers can direct requests to implementations based on
> e.g. suffixes. Apache can do it based on regex matches over the URI.
>> (I could go on, but I think the above is sufficient ;-)
> Please do - I haven't seen the killer reason yet.

Then perhaps you do not value generic, modular, scalable architectural
design as much as I do.

For me, #1 and #2 alone are sufficient to reject this approach, though
the rest are non-trivial concerns.




Patrick Stickler
Nokia, Finland
Received on Wednesday, 10 March 2004 06:11:50 UTC
