
Re: RDF query and Rules - my two cents

From: NMP-MSW/Tampere <patrick.stickler@nokia.com>
Date: Wed, 19 Nov 2003 10:19:46 +0200
Cc: "Graham Klyne" <GK@ninebynine.org>, "Jim Hendler" <hendler@cs.umd.edu>, "Dan Brickley" <danbri@w3.org>, <www-rdf-rules@w3.org>, <www-rdf-interest@w3.org>
To: "ext Danny Ayers" <danny666@virgilio.it>
Message-Id: <22D70378-1A69-11D8-B3FE-000A95EAFCEA@nokia.com>


On Tuesday, Nov 18, 2003, at 20:13 Europe/Helsinki, ext Danny Ayers wrote:

>
>>> I think a suitable approach would be to build on the existing RDF
>>> remote access API - that of RDF/XML+HTTP. An HTTP GET will retrieve
>>> a model over the network based on a supplied URI. The RESTful
>>> continuation would start with a PUT to place it on the network, and
>>> a DELETE to remove it.
>>
>> I consider this far too coarse grained to be efficient and generally
>> useful (note the important qualification 'generally').
>
> Indeed, which is why I said: "more complex interactions such as
> partial/filtered/query GETs and POSTs do need working out". However,
> I'm not sure efficiency should be an initial consideration - as long
> as queries can be expressed simply and without ambiguity, efficiency
> shouldn't be an issue (or at worst careful implementation will be
> needed).
>

Well, having a deployment platform that is sensitive to efficiency
issues (i.e. mobile phones), I'd just as soon not leave such issues
for "later work".

A good model/protocol will facilitate efficient deployment, and
efficient deployment can be seen as a useful litmus test for a good
model/protocol.

Being satisfied that it provides "acceptable" performance on my 2.8GHz
desktop server with 4GB of RAM is not an approach I'd like to see taken
here.

>> A given model might be *huge*. GETting and PUTting entire models seems
>> to me to be a corner case, and not what most folks really need/want to
>> do.
>
> A given model might be huge, it might be tiny, but either way most
> people are already doing GETs when they deal with RDF on the web. What
> I'm trying to suggest is that the existing REST verbs be built on,
> rather than using protocols orthogonal to existing web architecture.
>

Use the existing web architecture, definitely. Trying to extend/bend
the semantics of the existing verbs, no.

If you're not familiar with URIQA, have a look at 
http://sw.nokia.com/uriqa/URIQA.html
to see how I've been approaching this problem.
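To make the contrast concrete: URIQA addresses its requests directly to the URI denoting the resource, using distinct methods (MGET, MPUT, MDELETE) rather than overloading GET. A minimal sketch of what the raw bytes of such an MGET request might look like (host and path are hypothetical):

```python
def build_mget_request(host: str, path: str) -> bytes:
    """Build the raw bytes of a URIQA-style MGET request.

    MGET retrieves an RDF description of the resource the URI denotes,
    leaving the semantics of plain GET untouched.
    """
    lines = [
        f"MGET {path} HTTP/1.1",        # new method, same request-line shape
        f"Host: {host}",
        "Accept: application/rdf+xml",
        "",                              # blank line ends the header section
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

# Example request aimed at a (hypothetical) URIQA-aware server:
req = build_mget_request("example.org", "/ontology/Person")
```

Note that no second URI appears anywhere: the request line names the resource itself, and the method carries the "describe this" semantics.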

>> Let's please separate issues relating to knowledge management from
>> those relating to knowledge discovery. What the SW needs acutely, IMO,
>> is a lightweight, efficient, intuitive and easy to implement solution
>> for knowledge discovery.
>
> A fair call, though I'm not certain the difference between KBs and
> resources is as clear-cut as you suggest - a KB could be a single
> (reified) statement. Whether a KB appears on the web through a single
> queryable location or its interface is distributed over many URIs
> (/URIrefs) is surely an implementation issue.
>

I'm not entirely sure what point you're trying to make here. Yes, it's
true that a KB could be a single triple, or could be virtual --
corresponding to a distributed query space across multiple physical
databases. But I don't see how that has to be relevant to the query
language or protocol. So I agree, it's an implementation issue. I.e.
*which* KB or set of KBs (however implemented) a given query service
employs in order to respond to queries should not be relevant to the
core standard. Clients should not *have* to know which KB the service
should use. I.e. the KB (virtual or physical) is exposed as a service.
And one service may very well use *other* query services as components.

Each service then is a portal to a particular body of knowledge, and
whether other portals to subsets of that knowledge are provided by
other services is irrelevant to clients using *that* service.

Explicit specification of KB/model/database/etc. should only be via the
URI denoting the query service with which the client interacts. That
allows for maximal opacity regarding implementation and minimal impact
to clients when things change.


>> C.f.
>> http://lists.w3.org/Archives/Public/www-rdf-interest/2003Nov/0115.html
>
> I may be wrong, but I fear that addressing push and pull separately
> might lead in the direction of new, web-unfriendly tunnelling
> protocol(s).
>

Not so much separately, but sequentially. I.e., the WG would keep in
mind the push functionality during the first standardization round, to
ensure that both push and pull share an integrated conceptual core, but
a lot of the details can be deferred to a second round.

> But what I'm suggesting doesn't actually conflict with the
> requirements you propose; in fact, passing of RDF/XML (and perhaps
> other syntaxes) over HTTP is exactly what I had in mind, along with
> "Existing web standards should be employed as much as possible;". My
> main concern is that "overloading of the semantics of existing web
> protocols should be avoided" be misinterpreted as a need for
> alternatives to GET, POST etc, or perhaps worse still that everything
> be opaquely encoded into POSTs.
>

Well, as you'll see from URIQA, I believe that there *is* a need for
alternatives to GET, PUT and DELETE -- insofar as a bootstrapping SW
protocol is concerned, as there are special issues/problems in ensuring
correct SW behavior based solely on a single URI (rather than two URIs,
one denoting a target resource and another denoting a web service).

However, GET, PUT, and DELETE *are* used and should be used by SW
services (which is the case with URIQA) wherever possible, so I think
that for the most part, we are in agreement.
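The one-URI versus two-URI distinction above can be sketched as two request lines (the service path and target URI are hypothetical, chosen only for illustration):

```python
from urllib.parse import quote

# Hypothetical resource whose description a client wants.
target = "http://example.org/ontology/Person"

# One-URI style (URIQA-like): the request is addressed to the resource's
# own URI, using a distinct method so GET's semantics stay untouched.
one_uri_style = "MGET /ontology/Person HTTP/1.1"

# Two-URI style: a plain GET addressed to a *service* URI, with the
# target resource passed along as an escaped query parameter.
two_uri_style = f"GET /queryservice?uri={quote(target, safe='')} HTTP/1.1"
```

In the second style the client must know both the service URI and the target URI, and the target's identity is buried in an encoded parameter; in the first, the URI in the request line *is* the resource being described.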

Cheers,

Patrick