
Re: RDF query and Rules - my two cents

From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Fri, 21 Nov 2003 17:58:47 +0200
Cc: <www-rdf-interest-request@w3.org>, "Graham Klyne" <GK@ninebynine.org>, "Jim Hendler" <hendler@cs.umd.edu>, "Dan Brickley" <danbri@w3.org>, <www-rdf-rules@w3.org>, <www-rdf-interest@w3.org>
To: "ext Danny Ayers" <danny666@virgilio.it>
Message-Id: <970F7995-1C3B-11D8-9C8D-000A95EAFCEA@nokia.com>


On Friday, Nov 21, 2003, at 12:15 Europe/Helsinki, ext Danny Ayers wrote:

>
>
>>> It would only be needed for the description in RDF, so that only
>>> means around 3 extra mime types. This has to be weighed against
>>> reconfiguring the web.
>>>
>>
>> I think this view is a bit too constrictive, and (forgive me)
>> short-sighted. We don't know what other encodings we may wish to be
>> able to deploy 5 or 10 years from now.
>
> The same applies for MGET etc.

Sorry, I don't follow you. MGET places no restrictions on any other
layer or component of web server behavior.

It's about as modular and future-proof as an extension can get.

>
>> I am opposed to the purposeful introduction of ambiguity, blurring
>> the distinction between encoding and semantics.
>>
>> Why not suffix dates to MIME types to indicate we only want
>> representations modified after the specified date? How about
>> language, etc. etc.
>
> Because there isn't any pressing need for those. But to turn the
> argument back, this would call for DGET and LGET methods...

True, but such methods would not be necessary, as they would not
conflict with the semantics of the presently defined web architecture.

I'm not proposing MGET just because I prefer that way of doing things.
I'm proposing (and using) it because that's the only way I've found
to get the job done in a reliable, robust, and efficient manner.

URIQA is not the result of arm-chair engineering.

>
>> And even if one is able to figure out how to reliably parameterize
>> GET requests, PUT and DELETE bring far greater problems into the
>> mix.
>
> I'm sure they do. But I believe you've done the bulk of this work
> already in URIQA, in particular the parameterised queries for GET etc.
> you have (shadowing MGET etc.).

Yes. But parameterized requests based on the existing methods, with
semantics analogous to the M*-based requests, require explicit knowledge
of service interfaces specific to each individual server, which imposes
substantial overhead on both processing and maintenance.

I certainly expect there to be a lot of traffic using such
service-specific interfaces (the Nokia SW Server uses them heavily), but
for bootstrapping the SW, and for fundamental, atomic knowledge
discovery based on URIs alone, such an approach is insufficient.

>
>> If we can work all this stuff out without new methods, so that things
>> work with single system requests, I'm all for that -- but in my
>> experience, it's one rat's nest after another...
>>
>>>> It's really just a variant of the URI-suffix approach. E.g. append
>>>> _META or such to the end of any URI to get the URI that denotes its
>>>> metadata description.
>>>
>>> Yes, as is MGET, except there the switch is shunted even further
>>> back into the machinery.
>>
>> But that's where it belongs!
>
> That's a debatable point. The Concise Bounded Resource Description of
> a resource could be seen as simply another representation of that
> resource.
>

Yes. And I point that out in the URIQA spec.

But the semantics of interacting with descriptions is specialized
and different in significant ways from the semantics of interacting
with arbitrary representations.

When communicating with a server:

1. One must be able to indicate to the server that the request concerns
    a description and not a(nother form of) representation.

2. One must be able to ensure that if the server has no clue what a
    description is, it won't do something to a(nother form of)
    representation.

Furthermore,

3. A description is an abstraction for which one should be able to use
    the full richness of web functionality, e.g. content negotiation,
    in conjunction with whatever SW extensions are deployed.

4. The distinction to be made in #1 above should not depend on
    any part of the URI itself (i.e. no special suffixes, etc.)

Thus, #2 above rules out headers with PUT/DELETE, and #3 rules out
using MIME types or similar hacks.
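As a rough sketch of how points 1, 2 and 4 might play out on the server
side (this is an illustration, not the URIQA spec; the class name,
bodies and behavior are my own invention):

```python
import http.server

class UriqaHandler(http.server.BaseHTTPRequestHandler):
    """Hypothetical URIQA-aware handler (a sketch, not the URIQA spec).

    The method name alone tells the server whether a description or a
    representation is wanted (point 1), independently of the URI
    (point 4). A server lacking do_MGET answers 501 Not Implemented
    automatically, so an MGET can never be misapplied to a
    representation (point 2)."""

    REPRESENTATION = b"<html><body>a representation</body></html>"
    DESCRIPTION = (b'<rdf:RDF xmlns:rdf='
                   b'"http://www.w3.org/1999/02/22-rdf-syntax-ns#"/>')

    def _send(self, body, ctype):
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):   # ordinary web request: a representation
        self._send(self.REPRESENTATION, "text/html")

    def do_MGET(self):  # URIQA-style request: the resource description
        # Content negotiation (point 3) would inspect
        # self.headers["Accept"] here; this sketch always returns
        # RDF/XML.
        self._send(self.DESCRIPTION, "application/rdf+xml")
```

Because BaseHTTPRequestHandler dispatches on the method name (do_GET,
do_MGET, ...), the description/representation distinction never touches
the URI, and a server that has never heard of MGET fails loudly rather
than answering with the wrong thing.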


>> URIs are the domain of the application designer, not the architecture
>> and architectural machinery should not mandate how URIs should be
>> constructed.
>>
>>> I agree this looks on the surface a more elegant approach, but if
>>> the current web can be used without breaking anything, then I think
>>> that should be the preferred approach.
>>
>>
>> I agree. I agree. I agree. But over a *year* of work has proven to
>> me (at least) that this cannot be done -- either at all, or at best
>> not easily, without introducing many real or potential ambiguities
>> or reinterpretations of the present web architecture.
>>
>> So if you want to see the SW implemented with minimal impact on the
>> existing web, you should be happy to see new methods such as MGET,
>> MPUT and MDELETE which keep segregated the implementation, deployment,
>> interpretation, and behavior of web versus SW applications while
>> *still* allowing both web and SW applications to share as maximal
>> an intersection of infrastructure as possible.
>>
>> MGET, MPUT, and MDELETE have a very specialized, focused role relating
>> to the *bootstrapping* needs of the SW, yet all other SW services can
>> (and should) be deployed using the existing web methods, GET, PUT,
>> etc.
>
> I'd be interested to hear how you'd characterise those bootstrapping
> needs.

In a nutshell: you have a URI. You want to know what it means. You use
protocol X to find out, without having to know or discover *any*
additional information other than the URI and the generic,
implementation-agnostic features of protocol X.

This approach can be used to obtain knowledge about a given server. E.g.

MGET http://sw.nokia.com HTTP/1.1

tells you about that server and the services/portals it provides. With
the URI of each service, you can use MGET again to get the descriptions
of those services and the parameters they support; and with the URI of
each parameter, you can use MGET to get the description of that
parameter, etc., etc., ad nauseam.

*One* single method does it all (presuming, of course, that the web
authority of that URI chooses to publish a description of the denoted
resource ;-)
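On the client side no special machinery is needed either: most HTTP
client libraries pass arbitrary method names through unchanged. A
minimal sketch (the function name and defaults are mine, not part of
URIQA):

```python
import http.client

def mget(host, path="/", accept="application/rdf+xml"):
    """Issue a URIQA-style MGET and return (status, content type, body).

    A sketch of the bootstrapping property described above: the only
    inputs are the URI's authority and path -- no service-specific
    interface knowledge is required."""
    conn = http.client.HTTPConnection(host)
    # http.client passes arbitrary method names through unchanged,
    # so MGET needs no client-side extension at all.
    conn.request("MGET", path, headers={"Accept": accept})
    resp = conn.getresponse()
    body = resp.read()
    ctype = resp.getheader("Content-Type")
    conn.close()
    return resp.status, ctype, body

# e.g. mget("sw.nokia.com") -- one request, the URI alone suffices
```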


>
>>> (I don't think anything gets broken with the mimetype approach, the
>>> description can be considered just another representation of the
>>> identified resource).
>>>
>>>>> I'd be grateful for an example of how this is different with
>>>>> MGET, it sounds like there's something I'm not grokking here.
>>>>>
>>>>
>>>> See above. I.e. a description can have multiple representations...
>>>
>>> Ok, so perhaps the mimetype approach isn't the best, but I still
>>> hold out hope for an approach that doesn't need lowish-level
>>> rewiring.
>>>
>>
>> Great. Tell us how it's done. I've invested more than enough time
>> towards such an approach and don't intend to spin my wheels
>> indefinitely over it.
>>
>> I've got real systems and solutions to build and deploy.
>>
>> If someone else is able to figure it out, great, more power to them,
>> but I'm getting pretty tired of folks saying "I don't like that" or
>> "it would be better another way" and not giving folks "in the
>> trenches" any benefit of the doubt when it comes to challenging the
>> religious mantras of REST without providing the code to back it up.
>> Even those who challenge the status quo might be strong advocates of
>> minimal and cautious change (I being one of them).
>
> Ok, I have been questioning the need for the new verbs, but I can't
> do the issue justice - I'd suggest this issue gets passed to a WG,
> with more hours to focus on it.

I agree. Which is why we are becoming actively involved in the
formation of the proposed RDF Query WG.

>
>> Not to take it out on you, personally, but the bottom line is, code
>> talks (even pseudocode ;-)
>>
>> *Show* me a solution that doesn't introduce new verbs, that
>>
>> 1. works for GET, PUT, and DELETE operations
>> 2. does not require any additional knowledge other than the URI alone
>> 3. doesn't fall into the various semantic rat's nests that lurk about
>>
>> and I'll be impressed, and will (probably) willingly and happily
>> support it.
>
> For MGET:
> --------------------
> If a client wants the description, it includes in the header:
>
> Accept: application/rdf+xml-description
>
> and does a HEAD, and if it sees
>
> Content-Type: application/rdf+xml-description
>
> it can carry on and GET the description, anything else is a failure.
>

Try proposing that the Web architecture be modified so that *every*
time a web client wants to submit a request to a server, it first has
to do a HEAD to see how to do that. See what kind of reaction you
get ;-)

I *refuse* to allow SW agents to become second-class citizens, having
to do extra work to accomplish the same level of functionality as other
web agents -- *especially* since it is possible to deploy simple
extensions which allow both web and SW agents to work equally
efficiently.

I consider it to be an absolute requirement that, for any URI, a
SW agent must be able to obtain a description in *one* system request,
just as a web agent can obtain a representation in one system request.
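For concreteness, here is roughly what the quoted HEAD-probe scheme
costs on the client side (a sketch of Danny's proposal; the
application/rdf+xml-description MIME type is his suggestion, not a
registered type, and the function name is mine): two round trips per
URI, where MGET needs one.

```python
import http.client

# Danny's proposed (unregistered) MIME type for descriptions:
DESC_TYPE = "application/rdf+xml-description"

def get_description_via_probe(host, path):
    """Fetch a description via the quoted HEAD-then-GET scheme (sketch).

    Round trip 1: HEAD, just to learn whether this particular server
    understands the special description MIME type at all."""
    probe = http.client.HTTPConnection(host)
    probe.request("HEAD", path, headers={"Accept": DESC_TYPE})
    head = probe.getresponse()
    head.read()
    understood = head.getheader("Content-Type") == DESC_TYPE
    probe.close()
    if not understood:
        return None  # "anything else is a failure"

    # Round trip 2: only now can the description itself be fetched.
    conn = http.client.HTTPConnection(host)
    conn.request("GET", path, headers={"Accept": DESC_TYPE})
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    return body
```

Every description fetch pays for the probe: exactly the extra work per
request that the paragraph above objects to.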

> If the client wants the full representation, it includes in the header:
>
> Accept: application/rdf+xml
> ---------------------
>
> The description is the Concise Bounded Resource Description as
> described in URIQA; in fact everything else follows URIQA, only with
> the alternate mime type used along with existing verbs.
>
>> In the meantime, I've got work to do...
>
> I'm sorry if you got the impression I was trying to devalue the work
> you've done, quite the opposite, I think it's very worthwhile. I'm
> still not sure it's what's needed to bootstrap the Semantic Web, but
> I'm pretty sure it's all that's needed to bootstrap a WG on a remote
> API for RDF.
>

In short: URIQA and the RDF Net API are two different kinds of
protocols. Both are needed.

Regards,

Patrick


> I'd better get some work done too...
>
> Cheers,
> Danny.
>
>
Received on Friday, 21 November 2003 11:02:06 GMT
