
Re: RULE vs MGET

From: Phil Dawes <pdawes@users.sf.net>
Date: Wed, 10 Mar 2004 10:47:49 +0000
Message-ID: <16462.62037.307244.901695@gargle.gargle.HOWL>
To: Patrick Stickler <patrick.stickler@nokia.com>
Cc: www-rdf-interest@w3.org

Hi Patrick,

(this is basically a re-wording of the previous mail to fit in with
your responses)

Patrick Stickler writes:
 > 
 > There are several arguments against that approach:
 > 
 > (1) it violates the rights of web authorities to control their own URI 
 > space

I'm not sure what you mean here. AFAICS Web authorities are still free
to do what they like with their web spaces. The agent won't get any
guarantees that the RULE will work, just as it doesn't if the server
chooses to implement MGET to mean e.g. 'multiple-get'.

 > (2) it violates the principle of URI opacity

Is this a real-world problem? robots.txt violates the principle of
URI opacity, but still adds lots of value to the web.

 > (3) it violates the freedom of URI schemes to define their own syntax

How? Can't we just restrict this rule to the HTTP scheme?

 > (4) it may not be possible to define any rule that will work for 
 > arbitrary URIs

So just do it for HTTP URIs.
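To make concrete the sort of HTTP-only rule I have in mind, here is a
minimal sketch. The ",about" suffix and the function name are purely
illustrative assumptions, not a deployed convention:

```python
# Hypothetical RULE approach for HTTP URIs: derive the URI of a
# resource's RDF description by a fixed rewrite rule defined only
# for the http: scheme, then fetch it with an ordinary GET.
# The ",about" suffix is an assumption for illustration.

def description_uri(resource_uri: str) -> str:
    """Map an http: URI to the URI of its description (sketch)."""
    if not resource_uri.startswith("http://"):
        raise ValueError("rule defined only for http: URIs")
    return resource_uri + ",about"

# A client would then simply issue:
#   GET http://example.org/thing,about
# through existing proxies and caches - no new HTTP method needed.
```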

 > (5) it is less convenient for implementors who don't want to posit 
 > explicit, static descriptions
 > 

I suspect it's easier than deploying a whole new MGET infrastructure
of proxies, caches and servers.
Most web servers can direct requests to implementations based on
e.g. suffixes. Apache can do it based on regex matches over the URI.
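For instance, a hypothetical Apache mod_rewrite rule (the ",about"
suffix and the handler path are illustrative assumptions) could route
description requests to a metadata service:

```apache
# Illustrative only: pass URIs ending in ",about" to a
# hypothetical metadata handler, keeping the original URI
# as a query parameter.
RewriteEngine On
RewriteRule ^(.*),about$ /cgi-bin/describe?uri=$1 [PT]
```

The point is that this reuses machinery site admins already run, rather
than requiring every server, proxy and cache to learn a new HTTP method.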

 > (I could go on, but I think the above is sufficient ;-)
 > 

Please do - I haven't seen the killer reason yet.

Many thanks,

Phil
Received on Wednesday, 10 March 2004 05:49:04 UTC
