Re: Semantic Web User Agent Conformance

On 22.11.2007 19:26:43, Sean B. Palmer wrote:
>The practical side of this is that I'm implementing an RDF API which
>has a Graph class, and the simplest case of using it is that its
>constructor takes a single argument, a URI, and parses it to form the
>graph...
>
>G = Graph('http://example.org/')

>But it's too computationally and network expensive to apply, say, all
>of the GRDDL mechanism and RDFa, so what subset should I use?

Yeah, I had a similar problem in ARC ("LOAD <http://example.org/>"),
and the answer is simply: the API developer can't decide that, it has
to be decided by the app developer. E.g. in the knowee project we are
mainly interested in social graph stuff, so we activate the
   "xfn => xfn/rdf" and
   "hcard => foaf/rdf" conversions.
In another app, I might use
   "erdf => rdf",
   "rdfa => rdf", and
   "hcard => vcard/rdf" (the 2001 one).
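To illustrate the "app developer decides" point, here's a minimal
Python sketch of a Graph class that only runs the conversions the app
explicitly asks for. All names here (Graph, register_converter, the
conversion labels) are illustrative, not ARC's or any real API:

```python
class Graph:
    # registry of named conversions, e.g. "hcard => foaf/rdf"
    converters = {}

    @classmethod
    def register_converter(cls, name, func):
        cls.converters[name] = func

    def __init__(self, uri, conversions=()):
        self.uri = uri
        self.triples = []
        content = self._fetch(uri)
        # only the conversions chosen by the app developer are applied
        for name in conversions:
            self.triples.extend(self.converters[name](uri, content))

    def _fetch(self, uri):
        # stub: a real SWUA would dereference the URI here
        return ""

# a social-graph app registers only the conversions it cares about
Graph.register_converter(
    "xfn => xfn/rdf",
    lambda uri, html: [(uri, "ex:knows", "ex:someone")])

g = Graph("http://example.org/", conversions=["xfn => xfn/rdf"])
```

The point is that the expensive decision (which extractors to run) is
configuration, not something baked into the API.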

I don't think there is a way to really decide which "default"
triples should be generated by a SWUA w/o a use case. 

I do agree that it would make sense to 
 1. collect and document possible mappings, and also
 2. potential extraction approaches (parsing, scraping, grddl, 30x
    following, ConNeg, etc.), and also
 3. approach-specific triggers to make it easier for SWUAs once
    they have decided which approaches to try.
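The "approach-specific triggers" idea could look something like the
sketch below: cheap checks an SWUA runs on a fetched page to decide
which extractors are worth trying at all. The heuristics here are my
own assumptions, not a specification:

```python
import re

def detect_approaches(html, content_type=""):
    """Return a list of extraction approaches hinted at by the page.

    Purely illustrative trigger heuristics, not a conformance spec.
    """
    approaches = []
    if "application/rdf+xml" in content_type:
        approaches.append("rdfxml")
    if "data-view" in html:            # GRDDL profile/transform hint
        approaches.append("grddl")
    if re.search(r'class=["\'][^"\']*vcard', html, re.I):
        approaches.append("hcard")
    if re.search(r'rel=["\'][^"\']*\bme\b', html, re.I):
        approaches.append("xfn")
    if re.search(r'\b(property|typeof|about)=', html):
        approaches.append("rdfa")
    return approaches
```

With such triggers documented per approach, an SWUA could skip the
expensive machinery for pages that show no sign of using it.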

So, these are actually *three* (connected, but different) tasks.
Once we have those documented (I still have a sweo action for
microformats2rdf mappings), we might be able to specify conformance 
levels and enable standardized calls such as

G = Graph('http://example.org/', 'rdfa hcard-vcard2001 openid-foaf hatom-rss')

which could then allow the generation of the same triples via  
different toolkits. I think #2 and #3 are probably best handled 
by SWD, #1 could perhaps be provided by SWIG and SWEO. But it's
quite some work, either way.
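For the standardized call above, the second argument would just be a
space-separated list of profile names that the toolkit maps to its
conversions. A trivial sketch (the profile names are from the example
call, the function name is made up):

```python
def parse_profiles(profile_string):
    """Split a space-separated profile string into individual names."""
    return profile_string.split()

profiles = parse_profiles('rdfa hcard-vcard2001 openid-foaf hatom-rss')
```

Two conforming toolkits given the same profile list would then be
expected to produce the same triples for the same URI.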


Benji

--
Benjamin Nowack
http://bnode.org/

>
>-- 
>Sean B. Palmer, http://inamidst.com/sbp/
>

Received on Thursday, 22 November 2007 21:26:55 UTC