
Re: RULE vs MGET

From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Wed, 10 Mar 2004 14:11:07 +0200
Message-Id: <02F0C0DE-728C-11D8-964D-000A95EAFCEA@nokia.com>
Cc: www-rdf-interest@w3.org
To: "ext Phil Dawes" <pdawes@users.sourceforge.net>


On Mar 10, 2004, at 13:53, ext Phil Dawes wrote:

>
>
> Hi Patrick,
>
> Patrick Stickler writes:
>>
>>>>
>>>> (1) it violates the rights of web authorities to control their own 
>>>> URI
>>>> space
>>>
>>> I'm not sure what you mean here. AFAICS Web authorities are still 
>>> free
>>> to do what they like with their web spaces. The agent won't get any
>>> guarantees that the RULE will work, just as it doesn't if the server
>>> chooses to implement MGET to mean e.g. 'multiple-get'.
>>
>> It has to do with standards mandating what URIs web authorities
>> must use, not with whether every web authority that uses URIs matching
>> the pattern is using them to denote resource descriptions.
>>
>> The RULE approach is like if the HTTP spec mandated that all resources
>> which resolve to HTML representations must be denoted by URIs ending
>> in '.html'.
>>
>
> Actually that's not a good analogy, since we're not suggesting that
> *all* metadata to do with 'http://example.com/foo' must go in
> http://example.com/foo.meta (or whatever).
>
> Just that if there exists a http://example.com/foo, *and* there exists
> a http://example.com/foo.meta, the .meta URI should resolve to
> metadata description of http://example.com/foo.

It's the '.meta' suffix that is the problem.
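(For illustration, the RULE convention as Phil describes it could be sketched as follows. The helper name and the example.com URIs are hypothetical, and a real agent would also have to verify that both resources actually exist before trusting the result:)

```python
# Hypothetical sketch of the RULE convention discussed above:
# metadata for <uri> is looked for at <uri> + ".meta". The helper
# name and behaviour are illustrative, not part of any specification.

def meta_uri(uri: str) -> str:
    """Return the conventional metadata URI for a resource URI."""
    return uri + ".meta"

# An agent would then attempt an ordinary GET on the result,
# e.g. urllib.request.urlopen(meta_uri(uri)), accepting that
# nothing guarantees the response is actually a metadata description.
print(meta_uri("http://example.com/foo"))
# http://example.com/foo.meta
```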

> A closer analogy would
> be if the HTTP spec mandated that URIs ending in .html should resolve
> to representations containing html.

Er... didn't I just say that?  ;-)

>
> I suppose in theory the webspace provider is still free to use
> http://example.com/bah.meta to be something else entirely, since if
> there doesn't exist a 'http://example.com/bah', then an agent won't
> attempt to resolve 'http://example.com/bah.meta' anyway. (although
> they may attempt to resolve http://example.com/bah.meta.meta ;-)

But what if there are both?

>
>
> Actually, my only real concern with this MGET stuff is that if it does
> become the standard way for an agent to retrieve descriptive metadata,
> the likelihood of me personally being able to participate in the
> semantic web in the near future is vastly reduced. I just can't
> imagine web hosting providers providing URIQA enabled servers cheaply
> in the near future. The main benefit of the RULE approach for me is
> that I can participate today with my existing web account. - I suspect
> this also translates to a much faster uptake globally.

I appreciate your position. Adoption of URIQA is similar to adoption
of WebDAV. It requires the involvement of the web authority to a
greater or lesser degree.

Similar challenges exist for those who wish to define personal,
site-specific policies, yet have no facility such as robots.txt
for doing so.
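(For comparison, a URIQA-style request uses a new HTTP method rather than a URI convention. A minimal sketch of the request line an MGET would use; the host and path are hypothetical, and actually issuing the request requires a URIQA-enabled server:)

```python
# Sketch of a URIQA-style MGET request line. MGET is a non-standard
# HTTP method defined by URIQA; Python's http.client would let a
# client name an arbitrary method, e.g.
#   http.client.HTTPConnection("example.com").request("MGET", "/foo")
# The URI below is a hypothetical example.
from urllib.parse import urlparse

def mget_request_line(uri: str) -> str:
    """Build the HTTP request line an MGET of `uri` would use."""
    parts = urlparse(uri)
    path = parts.path or "/"
    return f"MGET {path} HTTP/1.1"

print(mget_request_line("http://example.com/foo"))
# MGET /foo HTTP/1.1
```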

Cheers,

Patrick


>
> Thanks again,
>
> Phil
>
>
>

--

Patrick Stickler
Nokia, Finland
patrick.stickler@nokia.com
Received on Wednesday, 10 March 2004 07:11:23 UTC
