
Re: Exchange of Named RDF Graphs

From: Danny Ayers <danny.ayers@gmail.com>
Date: Wed, 5 Jan 2005 19:14:32 +0100
Message-ID: <1f2ed5cd050105101463120d1@mail.gmail.com>
To: Giovanni Tummarello <giovanni@wup.it>
Cc: Eric Jain <Eric.Jain@isb-sib.ch>, www-rdf-interest@w3.org

On Wed, 05 Jan 2005 11:14:22 +0100, Giovanni Tummarello <giovanni@wup.it> wrote:
> [my2c]
> I cant get rid of the feeling that named graphs are just, a bad idea?

It might just be the menopause, but I got the opposite feeling. Looks
to me like named graphs map pretty cleanly onto the typical deployment
of RDF on the Web - a resource has a representation that is a
document, and that representation contains a bunch of statements. Using
HTTP on the URI of the resource gives access to the bunch of
statements. The bunch of statements might include an rdf:about=""
enabling it to talk about its container document. I can't think of any
harmful side effects of treating that document identifier as being the
same as the graph identifier, as the operational semantics of the
document handling (GETting the thing over HTTP) will generally be
orthogonal to the declarative semantics any graph-juggling system will
be using.

On the other hand reification-based handling of provenance does seem
mostly doable (Rich Boakes' material is good on this [1]), though I'd
forgotten about the Venus in the morning (thanks Reto). However I
don't think anyone would suggest that reification is an elegant
solution, all that fluff, Venus in Furs if you like.

In the Webbish view, the URI of an RDF document is already in effect
the name of the graph it contains. It seems pretty common to use quads
internally when dealing with triples from different sources, so
explicitly using the graph name (or its hash) as the fourth element of
each tuple seems reasonable.
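A minimal sketch of that quad idea, in plain Python with made-up URIs and data: each statement carries the URI of its source document as a fourth element, and plain triples can be recovered per graph.

```python
# Hypothetical quads: (subject, predicate, object, graph name).
# The graph name is just the URI of the source document.
quads = [
    ("http://example.org/doc", "http://purl.org/dc/elements/1.1/creator",
     "Danny", "http://example.org/doc"),
    ("http://example.org/other", "http://purl.org/dc/elements/1.1/title",
     "Other doc", "http://example.org/other"),
]

def triples_from(quads, graph_name):
    """Recover the plain RDF triples belonging to one named graph."""
    return [(s, p, o) for (s, p, o, g) in quads if g == graph_name]

print(triples_from(quads, "http://example.org/doc"))
```

The operational point is that dropping the fourth element gives you back ordinary triples, so a quad store degrades gracefully to the current RDF semantics.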

I could well be misinterpreting it, but the way Carroll et al [2] talk
about the graph being the extension of the named graph (in the sense
of the graph being a member of the set of things thus named) seems
to offer quite a bit of elbow room between systems working on the
current RDF semantics and RDF + NGs, but with compatibility where
there needs to be. Seems like the existence of named graphs on the SW
doesn't negate the existence of everything else: NG approaches should
be able to get along with non-NG approaches and vice versa.

I too would appreciate some more clarification from the
nomigraphophiles, but in the meantime, Giovanni, I'd like to ask how
specifically you'd represent the resource/document/graph side of
things using core RDF alone - you mentioned bags a few times, do you
have rdf:Bags in mind, or are you talking of reification a la Boakes?
If the latter, how do you stand Venus on her shell?

Going back to Morten's original suggestion of zipped multi-docs - I'm
more confident that this is a good idea (probably), and I'm not sure
it needs to bring in the notion of named graphs. Seems to me it's the
same as using seeAlsos on the Web, only the retrieval mechanism isn't
HTTP. In fact, didn't I see Java using the jar: scheme somewhere? The
problem of there being practical limits to how many files you can have
in a zip is just another practical problem. Plenty of those around.
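To make the multi-doc idea concrete, here's a rough sketch using Python's standard zipfile module - the file names and contents are purely hypothetical: several RDF documents go into one zip blob, and the consumer pulls them apart locally by entry name, much as jar: addressing would.

```python
import io
import zipfile

# Hypothetical bundle: a manifest plus two data documents.
docs = {
    "manifest.rdf": "<rdf:RDF>...top-level seeAlso list...</rdf:RDF>",
    "2004-12.rdf": "<rdf:RDF>...December items...</rdf:RDF>",
    "2005-01.rdf": "<rdf:RDF>...January items...</rdf:RDF>",
}

# Producer side: write all documents into a single zip blob.
blob = io.BytesIO()
with zipfile.ZipFile(blob, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, content in docs.items():
        zf.writestr(name, content)

# Consumer side: fetch the blob once, pull it apart locally.
with zipfile.ZipFile(io.BytesIO(blob.getvalue())) as zf:
    names = zf.namelist()
    manifest = zf.read("manifest.rdf").decode("utf-8")
```

Each entry name then plays the role a retrievable URI would play on the Web, just with zip-entry lookup instead of HTTP GET.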

I've got a practical use for Morten's technique nearby: I want to make
my blog archives accessible as RDF, not just the most recent 15 items,
as usually done with RSS. I should be able to rig up the blog
software (WordPress) to provide this without too much effort. The way
I was thinking of exposing this data was in monthly chunks, with a
top-level file containing a list of those chunks (addressed with
seeAlsos). Same structure as Morten's talking about. I should be able
to use gzip on the data files - they won't be huge, but as they'll
carry the full content they'll be significantly sized docs. For myself on this
dial-up connection, I think it would be convenient to get the whole
thing in a blob, and pull it apart locally. (It would also be nice to
do this per-category or whatever other facet, if the manifest
creation/zipping could be done as part of a pipeline). The data should
still be reasonably transparent, but maybe transporting the whole lot
in a blob might be handy as a low-budget form of (big atomic)
transaction too.
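The archive layout above could be sketched like this - base URI, month list, and chunk contents are all assumptions for illustration: a top-level manifest points at the monthly chunks via rdfs:seeAlso, and each chunk is gzipped for transfer.

```python
import gzip

# Hypothetical archive: three monthly chunks under an assumed base URI.
months = ["2004-11", "2004-12", "2005-01"]
base = "http://example.org/blog/archive/"

# Build the top-level manifest: one rdfs:seeAlso per monthly chunk.
see_also = "\n".join(
    '  <rdfs:seeAlso rdf:resource="%s%s.rdf.gz"/>' % (base, m)
    for m in months
)
manifest = (
    '<rdf:Description rdf:about="%sindex.rdf">\n%s\n</rdf:Description>'
    % (base, see_also)
)

# Gzip one month's worth of data for transfer; decompression round-trips.
chunk = "<rdf:RDF>...items for 2005-01...</rdf:RDF>".encode("utf-8")
compressed = gzip.compress(chunk)
```

A per-category facet would just be another list of seeAlsos in the manifest, generated by the same pipeline step that does the zipping.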


[1] http://semanticweb.deit.univpm.it/swap2004/cameraready/boakes.pdf
[2] http://www.hpl.hp.com/techreports/2004/HPL-2004-57

Received on Wednesday, 5 January 2005 18:14:35 UTC
