
Re: RDF query and Rules - my two cents

From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Thu, 20 Nov 2003 12:28:39 +0200
Cc: www-rdf-interest@w3.org
To: "ext Bill de hÓra" <dehora@eircom.net>
Message-Id: <4E4D210F-1B44-11D8-8364-000A95EAFCEA@nokia.com>

On Thursday, Nov 20, 2003, at 11:49 Europe/Helsinki, ext Bill de hÓra wrote:

> Patrick Stickler (NMP-MSW/Tampere) wrote:
>> Because with GET, you have to know the specific name of a specific
>> service interface on a particular server (whether the web authority
>> server or other).
>> With MGET, all you need is the URI. Nothing more. No registries. No
>> trying to figure out how the server is organized. No concerns about
>> which service, and which protocol of that service, etc. etc.
> I don't understand the distinction above between the name of a
> service interface and a URI. What's the difference?

I.e. the difference between having to know only http://example.com/blargh
(about which you want a description) and being able to simply ask

MGET http://example.com/blargh HTTP/1.1

versus having to know the URI of a particular web service, e.g.

http://example.com/describe

and the name of a particular parameter, e.g. "theURI=" to then ask

GET /describe?theURI=http://example.com/blargh HTTP/1.1
Standardizing the name of the particular service is IMO not
acceptable, because that encroaches on the rights of a web server
owner to control his/her own URI space grounded in that web
authority.
At best, one could work out a way, e.g. via OPTIONS or HEAD, to
obtain the service URI and, presuming it supports a standardized
protocol, work out how to submit one's request that way. But
that involves several server calls for each request, or forces the
agent to maintain records of each server, etc., which I consider
to be an unreasonable burden on the clients/agents, since
implementation of a specialized method such as MGET is so
simple.
And with the deployment of MGET support, we then have a standardized
SW-based solution for bootstrapping more involved interchanges between
web services, by being able to submit MGET requests for web servers
and, from their descriptions, discover the services they offer and
then request descriptions of each service, etc.
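Issuing an MGET is mechanically as simple as claimed; here is a minimal
sketch using Python's stdlib http.client, which places no restriction on the
method token. The host and path below are placeholders, and the 501 check is
the unambiguous "server is not SW-enabled" signal discussed above.

```python
import http.client

def get_description(host, path):
    """MGET sketch: ask a server for a description of a resource.

    A server that does not implement MGET answers 501 Not Implemented
    for the unknown method, so there is no risk of mistaking an
    ordinary representation for a description.
    """
    conn = http.client.HTTPConnection(host)
    conn.request("MGET", path)  # http.client passes the verb through as-is
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    if resp.status == 501:
        return None  # server does not implement MGET; not SW-enabled
    return body

# e.g. get_description("example.com", "/blargh")
```

Note that all the client needs is the resource URI itself: no registry,
no discovery step, no per-server state.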

>> It makes the SW as simple as the web. On the web, if you have a URI
>> and want a representation, you just GET it.
>> On the SW, if you have a URI and want a description of the denoted
>> resource, you just MGET it.
> I think I would prefer a name other than MGET (maybe DESCRIBE)

Call it BLARGH for all I care, as long as the semantics are clear.

> and
> less talk about web and semantic web.

But the SW *is* distinct from the Web, even if their architectures
overlap.

> Presumably if the method is actually useful, it's useful on both webs.

I don't see that as a requirement. These methods are specifically
to address *SW* functionality, so I don't see how they have to have
any relevance whatsoever to the Web.

> But still, I don't see the need for separate webs to begin with...

But they *are* separate in very key ways, even if they share a lot of
common infrastructure.

The Web is, in terms of GET, a network of representations, linked by
reference to resources which they represent. One traverses the Web by
moving from representation to representation to representation.

The SW is, in terms of MGET, a network of resource descriptions, linked
by direct reference to related resources. One traverses the SW by
moving from description to description to description. This is, of
course, only one view of the SW. It's also valid, IMO, to view the SW
as that virtual, global, dynamic body of knowledge which is made
accessible via standardized protocols, of which a given agent will
typically have explicit possession of, and utilize, a small portion at
any given time. The means by which an agent increases its view of that
knowledge base should be by standardized protocols, either resource
centric, such as URIQA, or query centric, such as via a generalized RDF
query service.
>> *And* there is no ambiguity between the semantics of GET or MGET,
>> and if the server is not SW enabled, you get back a status code
>> indicating the MGET method is not understood/implemented rather
>> than possibly some other representation that is not the description,
> I'd be concerned about this as a point of architecture. Web resources
> are ideally uniform.

This comment makes me wonder if we are using the term 'resource'
in the same way.

Can you elaborate?

>> yet might even be RDF/XML! This is why extra headers in the request
>> don't work well, because it ends up being a fragile hack that
>> often works, but when it fails, you can't always be sure that it
>> did.
> So you say, but how is partitioning web resources into SW and W any
> less of a hack?

Er, because the latter provides explicit, well-defined semantics
and consistent behavior, which are far more crucial to a SW agent
than to a Web agent (which is, more often than not, simply a human).

>> GET {URI} HTTP/1.1
>> URI-Resolution-Mode: description
>> and the server had no idea what the header URI-Resolution-Mode:
>> means (or the header gets lost in transit due to some misbehaving
>> proxy, etc.) then you'd likely get back RDF/XML yet have no clear
>> way to know if it was the description in RDF/XML or a representation
>> in RDF/XML.
> If you're an agent capable of asking the question, why can't you
> look in the RDF/XML to find out the answer? I thought this stuff was
> descriptive.

Because SW agents are stupid. And because there's simply no need
to be so sloppy and careless about SW server behavior.

> Or why not a header? Below, I understand you're asking agents to do
> that for M*GET resolution, but here objecting to using a header to
> begin with as a fragile hack.

Headers are fragile because (a) they can get lost in transit and
(b) if they are not understood by a server, they can be ignored.
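Point (b) is easy to reproduce with Python's stdlib http.server: a handler
that predates the (hypothetical) URI-Resolution-Mode header simply never
looks at it, so requests with and without the header draw byte-for-byte
identical responses, and the client has no way to tell the header was
ignored.

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

# A server that knows nothing of the hypothetical URI-Resolution-Mode
# header: it serves the same RDF/XML representation regardless.
class OldServer(BaseHTTPRequestHandler):
    def do_GET(self):
        body = (b'<rdf:RDF xmlns:rdf='
                b'"http://www.w3.org/1999/02/22-rdf-syntax-ns#"/>')
        self.send_response(200)
        self.send_header("Content-Type", "application/rdf+xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch(host, path, headers):
    conn = http.client.HTTPConnection(host)
    conn.request("GET", path, headers=headers)
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    return resp.status, resp.getheader("Content-Type"), body

server = HTTPServer(("127.0.0.1", 0), OldServer)
threading.Thread(target=server.serve_forever, daemon=True).start()
host = "127.0.0.1:%d" % server.server_port

plain = fetch(host, "/blargh", {})
marked = fetch(host, "/blargh", {"URI-Resolution-Mode": "description"})
server.shutdown()

# Both responses are 200 with identical RDF/XML bodies: nothing tells
# the client whether it got a description or a representation.
assert plain == marked
```

Contrast this with an unknown method, where the same stock server would
answer 501 and the failure would be explicit.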

>> If someone thinks they have a better solution, please speak up.
>> But using the existing verbs with just adding headers does *not*
>> work (and even more serious issues than above arise with PUT and
>> DELETE, but I won't go into that here).
> You've claimed this before, but you haven't really demonstrated it
> doesn't work.

I have. Though you may have missed it. The first incarnation of
the Nokia Semantic Web Server took the header approach, and it
resulted in precisely the kinds of destructive, unintended behavior
I've documented.

>>> Agreed.
>>> The biggest problem with MGET is:
>>>     What do you do when you want metadata about metadata?
>> This is not a problem. This has been covered countless times in
>> detail on this and other lists. Google for MGET and read up.
>> [...]
>> But not only is this not a problem, it's a pretty narrow corner use
>> case IMO. And, it's not a problem.
> Well, some might argue that the SW is a pretty narrow use case for
> creating a new verb on the web. WebDAV added new verbs; it didn't
> work out so well in retrospect.

Really? I use numerous WebDAV-enabled servers daily. If you have any
pointers to known problems, I'd be happy to follow them.

And if someone can demonstrate a solution that does not involve new
verbs, but also provides the same degree of robustness and semantic
precision, I'd be very happy to see it. At the moment, though, having
tried that route and failed, I'm skeptical that it is doable (and
apologies for the arrogance of that statement).

Received on Thursday, 20 November 2003 05:30:57 UTC
