W3C home > Mailing lists > Public > www-rdf-interest@w3.org > November 2003

RE: RDF query and Rules - my two cents

From: Danny Ayers <danny666@virgilio.it>
Date: Wed, 19 Nov 2003 16:40:57 +0100
To: "Patrick Stickler (NMP-MSW/Tampere)" <patrick.stickler@nokia.com>
Cc: "Graham Klyne" <GK@ninebynine.org>, "Jim Hendler" <hendler@cs.umd.edu>, "Dan Brickley" <danbri@w3.org>, <www-rdf-rules@w3.org>, <www-rdf-interest@w3.org>
Message-ID: <BKELLDAGKABIOCHDFDBPOEGMEIAA.danny666@virgilio.it>

> Well, having a deployment platform that is sensitive to efficiency
> issues (i.e. mobile phones), I'd just as soon not leave such issues
> for "later work".

Fair enough, but wouldn't this route lead straight to a binary serialization
of RDF?

> Use the existing web architecture, definitely. Trying to extend/bend
> the semantics of the existing verbs, no.
> If you're not familiar with URIQA, have a look at
> http://sw.nokia.com/uriqa/URIQA.html
> to see how I've been approaching this problem.

If you can GET, why is there a need for MGET?

The introduction of new HTTP methods doesn't strike me as consistent with
extending the present web architecture; rather, it seems like creating a
new, albeit similar, architecture. Personally, I think it's important that
Semantic Web systems can be deployed without modifying servers - in fact,
that's probably critical for adoption.
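To make the server-side cost concrete, here's a sketch (Python standard library; the handler and payload strings are invented for illustration) of the asymmetry: a GET works against any HTTP server, while an MGET only works once the server has been explicitly taught the new verb - an unmodified server would answer 501 Not Implemented.

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

# Purely illustrative: payloads and handler are invented. GET is
# supported by every HTTP server, but MGET only works because we add
# a do_MGET handler - a stock server would reply 501 Not Implemented,
# which is the "retooling the web" concern.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def _reply(self, body):
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/rdf+xml")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self):        # supported by every HTTP server
        self._reply("representation of the resource")

    def do_MGET(self):       # only works on a modified server
        self._reply("RDF description of the resource")

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
results = {}
for method in ("GET", "MGET"):
    conn.request(method, "/")
    resp = conn.getresponse()
    results[method] = (resp.status, resp.read().decode())
    print(method, results[method])
conn.close()
server.shutdown()
```

Python's BaseHTTPRequestHandler dispatches any request method M to a do_M handler, which is what makes the MGET extension a one-method change here - but only because we control this server.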

> I'm not entirely sure what point you're trying to make here. Yes, it's
> true that a KB could be a single triple, or could be virtual --
> corresponding
> to a distributed query space across multiple physical databases. But I
> don't see how that has to be relevant to the query language or protocol.
> So I agree, it's an implementational issue. I.e. *which* KB or set of
> KBs (however implemented) that a given query service employs in order to
> respond to queries should not be relevant to the core standard. Clients
> should not *have* to know which KB the service should use. I.e. the KB
> (virtual or physical) is exposed as a service. And one service may very
> well use *other* query services as components.

The point I was trying to make was that if HTTP methods are used
intelligently, then access to a KB need not be that coarse-grained, even if
you limit yourself to HTTP GET.

For example, consider my blog as a KB. In that KB there is information about
recent posts, and information about me.

Ok, I understand the need for fine granularity, and the resource-centric
approach you suggest makes sense, so you might want to make queries like:

GET http://dannyayers.com/q?muri="http://dannyayers.com/"

for information about the posts and

GET http://dannyayers.com/q?muri="http://dannyayers.com/misc/foaf/foaf.rdf"

for stuff about me. As it is, the data is served using

GET http://dannyayers.com/index.rdf
GET http://dannyayers.com/misc/foaf/foaf.rdf

The statements aren't partitioned quite as cleanly as they could be, but
this could easily be rectified.

The use of the two URIs gives finer-grained access to my KB than just using
a single URI. The way in which this is implemented behind the scenes is
irrelevant, but conveniently a simple implementation is already supported by
the standard http server ;-)
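The only mechanical subtlety in a plain-GET query service like the hypothetical /q endpoint above is percent-encoding the target URI into the query string. A minimal sketch (the q endpoint and muri parameter are taken from the example above, not from any real service):

```python
from urllib.parse import urlencode, parse_qs, urlsplit

def query_url(service, target):
    """Build a GET-based metadata query: the target URI travels
    percent-encoded in the 'muri' parameter (parameter name taken
    from the example above, not a standard)."""
    return service + "?" + urlencode({"muri": target})

url = query_url("http://dannyayers.com/q",
                "http://dannyayers.com/misc/foaf/foaf.rdf")
print(url)

# The service side decodes the target back out of the query string:
target = parse_qs(urlsplit(url).query)["muri"][0]
print(target)
```

The encoding matters because the target is itself a URI: without it, the "://" and "/" characters of the inner URI would be ambiguous inside the outer one.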

There is of course a built-in limitation that the subject resources being
examined must be within this domain, but as far as I can see that
limitation applies equally to GET and MGET.

> Each service then is a portal to a particular body of knowledge, and
> whether other portals to subsets of that knowledge are provided by
> other services is irrelevant to clients using *that* service.
> Explicit specification of KB/model/database/etc. should only be via the
> URI denoting the query service to which the client interacts. That
> allows
> for maximal opacity regarding implementation and minimal impact to
> clients
> when things change.

Ok, but I would favour:




> Not so much separately, but sequentially. I.e., the WG would keep in
> mind
> the push functionality during the first standardization round, to ensure
> that both push and pull share an integrated conceptual core, but a lot of
> the details can be deferred to a second round.

Fair enough.

> > But what I'm suggesting doesn't actually conflict with the
> > requirements you
> > propose, in fact passing of RDF/XML (and perhaps other syntaxes) over
> > HTTP
> > is exactly what I had in mind, along with "Existing web standards
> > should be
> > employed as much as possible;". My main concern is that "overloading
> > of the
> > semantics of existing web protocols should be avoided" be
> > misinterpreted as
> > a need for alternatives to GET, POST etc, or perhaps worse still that
> > everything be opaquely encoded into POSTs.
> >
> Well, as you'll see from URIQA, I believe that there *is* a need for
> alternatives to GET, PUT and DELETE -- insofar as a bootstrapping SW
> protocol is concerned, as there are special issues/problems in ensuring
> correct SW behavior based solely and exclusively on a URI alone (rather
> than two URIs, one denoting a target resource and another denoting a
> web service).
> However, GET, PUT, and DELETE *are* used and should be used by SW
> services (which is the case with URIQA) wherever possible, so I think
> that for the most part, we are in agreement.

I think we are largely in agreement, although I'm not convinced of the need
for HTTP extensions. As with the rest of the web, "correct behaviour" cannot
be guaranteed, so I can't see this as justification for something that would
require retooling the web. Things could perhaps be kept relatively tidy by
taking advantage of the "application/rdf+xml" MIME type - in the bootstrap
period at least, this is comparatively virgin territory.
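One way to lean on that MIME type without any new verbs is ordinary content negotiation: the client asks for application/rdf+xml in the Accept header, and an unmodified server can hand back RDF when it has it. A sketch with Python's standard library (the request is built but not sent; the URL is just the FOAF one from earlier):

```python
import urllib.request

# Build (but do not send) a GET that negotiates for RDF/XML.
# No new HTTP verb is involved - just a standard request header.
req = urllib.request.Request(
    "http://dannyayers.com/misc/foaf/foaf.rdf",
    headers={"Accept": "application/rdf+xml"},
)

print(req.get_method())          # GET
print(req.get_header("Accept"))  # application/rdf+xml
# Actually sending it would be: urllib.request.urlopen(req)
```

A server that understands the Accept header can then choose the RDF representation; one that doesn't simply serves whatever it has, which degrades gracefully rather than failing with 501.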

Received on Wednesday, 19 November 2003 10:49:13 UTC
