W3C home > Mailing lists > Public > www-rdf-interest@w3.org > December 2001

Re: RDF/XML Syntax Specification (Revised) W3C Working Draft published

From: Jeremy Carroll <jjc@hplb.hpl.hp.com>
Date: Fri, 21 Dec 2001 16:57:49 +0100
To: <www-rdf-interest@w3.org>
Message-ID: <MABBLGKMPIJFCKFGDBEPAEIKCAAA.jjc@hplb.hpl.hp.com>

> I either have to preprocess my own internal vocabulary into RDF, doing the
> expansion pre-RDF, or define some rules to post-process the RDF model to
> make the inferences I

My understanding of why we want to drop aboutEach is twofold:

1: aboutEach (with file scope) is a preprocessing instruction. Really, it
is more useful to have the triples explicit in the model, and to be able to
manipulate the generalization explicitly.

2: even as a preprocessing instruction it is problematic.

I agree with Brian's approach of defining an additional property
(moran:also) to link a resource with some shared values. Another useful
additional property might link a resource to default values, which apply
only if the resource has no specification of that value from anywhere else.
Peter is quite right to point out that RDF, per se, does not offer any such
properties. Properties like these belong in extensions to RDF. The default
one is quite problematic in terms of the mess it allows you to make.
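To make the intended semantics of the two properties concrete, here is a
sketch in plain Java, standing in for a real RDF API. The names are my own
assumptions for illustration: "moran:also" is Brian's shared-values link,
and "ex:default" is a hypothetical name for the default-values property
suggested above; neither is defined by RDF itself.

```java
import java.util.*;

/* Illustrative sketch only: triples as plain string records, not a real
 * RDF model.  "moran:also" and "ex:default" are assumed property names. */
public class SharedAndDefault {
    record Triple(String s, String p, String o) {}

    static Triple t(String s, String p, String o) { return new Triple(s, p, o); }

    /* moran:also: copy every property of the linked resource onto the
     * linking resource, making the shared triples explicit. */
    static Set<Triple> expandAlso(Set<Triple> model) {
        Set<Triple> out = new HashSet<>(model);
        for (Triple link : model) {
            if (!link.p().equals("moran:also")) continue;
            out.remove(link);
            for (Triple shared : model)
                if (shared.s().equals(link.o()) && !shared.p().equals("moran:also"))
                    out.add(t(link.s(), shared.p(), shared.o()));
        }
        return out;
    }

    /* ex:default: like moran:also, but a value is copied only when the
     * resource has no value of its own for that property -- this is the
     * part that can make a mess, since the result depends on what else
     * is already in the model. */
    static Set<Triple> expandDefault(Set<Triple> model) {
        Set<Triple> out = new HashSet<>(model);
        for (Triple link : model) {
            if (!link.p().equals("ex:default")) continue;
            out.remove(link);
            for (Triple def : model) {
                if (!def.s().equals(link.o()) || def.p().equals("ex:default")) continue;
                boolean hasOwn = model.stream().anyMatch(
                    x -> x.s().equals(link.s()) && x.p().equals(def.p()));
                if (!hasOwn) out.add(t(link.s(), def.p(), def.o()));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Set<Triple> m = new HashSet<>(List.of(
            t("ch1", "moran:also", "common"),
            t("common", "dc:publisher", "Acme"),
            t("ch1", "ex:default", "defaults"),
            t("defaults", "dc:language", "en"),
            t("ch1", "dc:language", "fr")));  // own value, so the default loses
        Set<Triple> expanded = expandDefault(expandAlso(m));
        System.out.println(expanded.contains(t("ch1", "dc:publisher", "Acme"))); // true
        System.out.println(expanded.contains(t("ch1", "dc:language", "en")));    // false
    }
}
```

Note that expandAlso is monotone in a way expandDefault is not: adding a
triple to the input can remove a triple from the output of the latter.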

In the Jena team we have been discussing new functionality in which
"inferencing" models are constructed from other models. When completed,
this should allow the definition of a new RDF model in terms of an old one
by specifying rules (in Java) to generate the new triples. At this stage it
feels as though the moran:also property would fit very naturally into that
design. You could read an RDF/XML file in as a standard model, and then use
that to construct an inferencing model with code that expanded moran:also.
This model could then be reserialized as a second (bloated) RDF/XML file.
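The shape of that design can be sketched as follows. This is not the
actual Jena API, just an assumed minimal form of the idea: a rule is a
Java function from the base triples to the triples it infers, and the
derived model is the base plus everything the rules produce.

```java
import java.util.*;
import java.util.function.Function;

/* Illustrative sketch of an "inferencing model" built from a base model
 * by Java rules.  All names (Rule, derive, moran:also) are assumptions
 * for this example, not the Jena design itself. */
public class InferredModel {
    record Triple(String s, String p, String o) {}

    /* A rule maps the base triples to the extra triples it generates. */
    interface Rule extends Function<Set<Triple>, Set<Triple>> {}

    /* One such rule: expand moran:also into explicit copies of the
     * shared values. */
    static final Rule EXPAND_ALSO = base -> {
        Set<Triple> extra = new HashSet<>();
        for (Triple link : base)
            if (link.p().equals("moran:also"))
                for (Triple shared : base)
                    if (shared.s().equals(link.o()))
                        extra.add(new Triple(link.s(), shared.p(), shared.o()));
        return extra;
    };

    /* The derived model: base triples plus everything the rules infer.
     * Reserializing this set would give the second, "bloated" file. */
    static Set<Triple> derive(Set<Triple> base, List<Rule> rules) {
        Set<Triple> out = new HashSet<>(base);
        for (Rule r : rules) out.addAll(r.apply(base));
        return out;
    }

    public static void main(String[] args) {
        Set<Triple> base = Set.of(
            new Triple("ch1", "moran:also", "common"),
            new Triple("common", "dc:publisher", "Acme"));
        Set<Triple> derived = derive(base, List.of(EXPAND_ALSO));
        System.out.println(derived.size()); // prints 3
    }
}
```

The point of keeping rules as plain functions over the base model is that
the derived model can always be regenerated, or serialized once and for all.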

With a correct design, the same RDF/XML input file could be used for both
the preprocessing and postprocessing approaches. For now you could use
XSLT, and later switch to an RDF transform layer (of some sort) once you
find one you feel comfortable with. The vital thing is to have the input
file be valid RDF/XML syntax, corresponding to triples that can be given an
appropriate semantics without too much difficulty.

Received on Friday, 21 December 2001 10:50:02 UTC
