Re: Semantic Web User Agent Conformance

On Nov 22, 2007 5:52 PM, Alan Ruttenberg wrote:

> 1) Is asking the number of triples an interesting question. Why?

Yes, because it entails asking "how many of the triples that I am
asserting can actually be consumed?". It's an interesting question
because people usually only publish data when they intend it to be
consumed.

As Keith Alexander put it earlier today, "my opinion, as a document
author, was that I wanted to be specific about what triples you were
expressing".

> 2) In the case of an OWL document, I would be interested in the
> entailments, not the number of base triples.

OWL entailment is well defined in the OWL specification suite.
Semantic Web User Agent conformance, on the other hand, is not defined
at all. You might have, say, a conforming GRDDL user agent, but as I
noted previously it's such an open conformance definition that I think
you could argue that all programs meet it. There's nothing that
defines how far one ought to go with supporting the overplus of RDF
formats that we have now.

That's what I'm concerned about--the level of understanding of common
practice, specifications evolving to make things difficult for writers
of Semantic Web UAs, and authors being able to express themselves
clearly.

> 3) Should this sort of activity not first be driven by defining some
> example set of activities for semantic web agents?

No, because most serialisations are equivalent. What we're talking
about here is something very low down on the Semantic Web layer cake
diagram:

http://www.w3.org/2007/03/layerCake

There aren't any activities you might want to do in a Semantic Web
application with an RDF/XML document that you wouldn't want to do with
a Turtle document: all that serialisations exist for is to encode RDF
graphs, and they're all pretty much the same once you've parsed them.
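To make that concrete, here's a toy illustration (my own sketch, not a
real RDF library): two N-Triples-style documents that differ only in
statement order and whitespace parse to exactly the same set of triples.

```python
# Toy sketch: two serialisations of the same graph parse to one set of
# triples. Simplified N-Triples-style parsing, hypothetical data only.

def parse_ntriples(text):
    """Parse lines of the form '<s> <p> <o> .' into a set of triples."""
    triples = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        parts = line.rstrip(' .').split()  # drop the trailing ' .'
        if len(parts) == 3:
            triples.add(tuple(parts))
    return triples

doc_a = """
<http://example.org/a> <http://example.org/knows> <http://example.org/b> .
<http://example.org/b> <http://example.org/knows> <http://example.org/c> .
"""

doc_b = """
# Same graph: different order, different spacing.
<http://example.org/b>   <http://example.org/knows>   <http://example.org/c> .
<http://example.org/a> <http://example.org/knows> <http://example.org/b> .
"""

assert parse_ntriples(doc_a) == parse_ntriples(doc_b)
```

Once you're down to the graph, the choice of wire format has vanished.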

The only exceptions to this are things like the SPARQL XML Results
format, which is highly specialised for a particular application.

The practical side of this is that I'm implementing an RDF API which
has a Graph class, and the simplest case of using it is that its
constructor takes a single argument, a URI, and parses it to form the
graph...

G = Graph('http://example.org/')

But it's too expensive, computationally and in network terms, to apply,
say, the whole GRDDL mechanism and RDFa, so what subset should I use? This can
get quite tricky. Should an RDF/XML document that's also a GRDDL
document merge its GRDDL results with itself, for example?

Of course there are options to restrict parsing to some level of
granularity, but you have to choose acceptable defaults, and that's
not an easy task at the moment.
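For what it's worth, the defaults problem can at least be made explicit
in the API. Here's a hypothetical sketch (the class, function, and flags
are mine, not a real library, though the media types are real): the
constructor dispatches on the retrieved Content-Type, and the expensive
mechanisms, GRDDL and RDFa, are opt-in so the cheap path stays cheap.

```python
# Hypothetical sketch of parser selection for an RDF Graph class.
# The media types are real; the names and flags are illustrative only.

def select_parsers(content_type, grddl=False, rdfa=False):
    """Decide which parse mechanisms to run for a retrieved document.

    Expensive mechanisms (GRDDL, RDFa) are opt-in, so the default
    remains a single cheap parse.
    """
    parsers = []
    if content_type == 'application/rdf+xml':
        parsers.append('rdfxml')
        if grddl:
            # An RDF/XML document may also be a GRDDL document; whether
            # to merge its GRDDL results with itself is exactly the
            # open question.
            parsers.append('grddl')
    elif content_type in ('text/turtle', 'application/x-turtle'):
        parsers.append('turtle')
    elif content_type in ('application/xhtml+xml', 'text/html'):
        if rdfa:
            parsers.append('rdfa')
        if grddl:
            parsers.append('grddl')
    return parsers

class Graph:
    def __init__(self, uri, grddl=False, rdfa=False):
        # A real implementation would dereference the URI here and
        # dispatch on the Content-Type of the response.
        self.uri = uri
        self.parsers = None  # filled in after retrieval

assert select_parsers('application/rdf+xml') == ['rdfxml']
assert select_parsers('application/rdf+xml', grddl=True) == ['rdfxml', 'grddl']
```

The design choice is just that the simplest call, Graph(uri), never
triggers a GRDDL round trip behind your back; but it pushes the "what
subset?" question onto every caller rather than answering it.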

-- 
Sean B. Palmer, http://inamidst.com/sbp/

Received on Thursday, 22 November 2007 19:27:11 UTC