W3C home > Mailing lists > Public > semantic-web@w3.org > November 2007

Re: Semantic Web User Agent Conformance

From: Chimezie Ogbuji <chimezie@gmail.com>
Date: Fri, 23 Nov 2007 17:20:51 -0500
Message-ID: <f6ec8dcb0711231420k506bf574u6dc076bc4c0df9c2@mail.gmail.com>
To: sean@miscoranda.com, semantic-web@w3.org

> One of the biggest Semantic Web questions people are asking right now is: when a Semantic Web User Agent gets a document, how many normative ways of getting triples from it are there?

Yes, a question I've been asking repeatedly over the past year or so,
with very little useful response and no indication of any coherent
consensus.  This does not bode well if we want the Semantic Web to
indeed be a framework for autonomous agents.  GRDDL, RDFa, etc. get us
much of the way there, but a more comprehensive "agent contract" is
needed.
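
To make the ambiguity concrete, here is a minimal sketch (all route
names and the table itself are my own illustrative assumptions, not any
spec's normative list) of why "how many ways of getting triples" has no
single answer: one media type can license several extraction routes at
once.

```python
# Hypothetical media-type dispatch table for a Semantic Web user agent.
# Each media type can map to *several* candidate triple-extraction
# routes, which is exactly the ambiguity under discussion.
ROUTES = {
    "application/rdf+xml": ["rdfxml"],
    "text/turtle": ["turtle"],
    "application/xhtml+xml": ["rdfa", "grddl"],  # both may apply to one document
    "text/html": ["rdfa", "grddl"],
}

def candidate_routes(media_type: str) -> list[str]:
    """Return the plausible triple-extraction routes for a media type."""
    return ROUTES.get(media_type, [])

print(candidate_routes("application/xhtml+xml"))  # ['rdfa', 'grddl']
```

An "agent contract" would, among other things, pin down which of these
routes a conforming agent must attempt and in what order.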

> Or, from the other direction: how many triples is the author asserting in some document?

This is a bit divergent, since it relies on many things, including
(but not limited to): a linkage contract (which RDF predicates did the
author use to indicate that the linked graphs are normatively
included?), an entailment regime (we still don't have a solid notion
for this) ... I'm sure I'm missing other factors ...
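
As a sketch of what I mean by a linkage contract (which predicates
count as normative is precisely the open question; treating owl:imports
as normative and rdfs:seeAlso as advisory is just one candidate answer,
assumed here for illustration):

```python
# Given a graph as a set of (subject, predicate, object) triples, decide
# which linked graphs the author normatively asserts as included.
OWL_IMPORTS = "http://www.w3.org/2002/07/owl#imports"
RDFS_SEEALSO = "http://www.w3.org/2000/01/rdf-schema#seeAlso"

# One candidate contract: owl:imports is normative, rdfs:seeAlso is not.
NORMATIVE_LINK_PREDICATES = {OWL_IMPORTS}

def normatively_included(triples):
    """URIs of graphs asserted as included under this (assumed) contract."""
    return {o for (s, p, o) in triples if p in NORMATIVE_LINK_PREDICATES}

graph = {
    ("http://example.org/doc", OWL_IMPORTS, "http://example.org/ontology"),
    ("http://example.org/doc", RDFS_SEEALSO, "http://example.org/more"),
}
print(normatively_included(graph))  # {'http://example.org/ontology'}
```

Two agents that disagree on NORMATIVE_LINK_PREDICATES will return
different triple counts for the same document, which is the crux of the
"how many triples is the author asserting" question.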

> I'm proposing some kind of work on conformance levels for Semantic Web User Agents, such that when someone says "how many triples are in $uri", we can answer confidently "a Class 2 Semantic Web User Agent will return 53 triples"; or perhaps not *that* abstract, but along those lines.

Given the scattered nature of this space, I think this would be very prudent.

> The aim is for document producers to know how many UAs out there support the format that they're using, and to give some kind of regularity to what is currently a bundle of ad hoc solutions.

Absolutely.

> As a bit of context, here are some people who've been thinking about this specifically from an engineering point of view:

By the way, I've since dusted off a set of diagrams I drew earlier
this year while trying to tackle this very problem.  I started off
writing a generic semantic web agent "module" that plugs into a
'vanilla' RDF processing / inference library, thinking it would be a
no-brainer, and quickly found out how significantly tangled the
landscape is in this regard.  I've put together a wiki with some of my
thoughts (which also draw on earlier [http://redfoot.net Redfoot]
ideas):

http://code.google.com/p/python-dlp/wiki/Agentem

> There's also been a lot of discussion about Xiaoshu Wang's paper on kinda the same issues, but I think that's a distraction so I won't bother to link to it.

I think those discussions (though related) are more about web
architecture "semantics" than about a standard framework for semantic
web agents.  I consider the two separate concerns, and conversation
suggesting that the overlap between them is significant is part of the
problem here.

[snip]

> What a huge tax on Semantic Web User Agents that also have to be conforming GRDDL agents and conforming RDF/XML agents and so on!

Agreed.  A discrete set of SWUA capabilities would go a long way
toward controlling that tax, which exists mostly because compliance
(for each framework) is defined in terms of an isolated,
all-or-nothing set of requirements.

> GRDDL in particular is a tricky case because the conformance is left open but it recommends that you implement *all* of the underlying processing mechanisms and then XSLT 1.0 as the main transformation language; but this particular conformance class doesn't have a name, it's just represented in the GRDDL Test Cases REC.

Well, let's start the conversation you are suggesting by giving this
class of GRDDL-aware agents a name :) - In the FSM diagrams on the
Agentem wiki above, I tried to indicate where such a set of
conformance classes might fit into the larger picture.
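
For reference, the *first* step such an agent performs is cheap: find
the transformation links a document declares.  A stdlib-only sketch
(the profile URI and rel="transformation" convention are from the GRDDL
REC; actually running the referenced XSLT 1.0 transform needs an
external processor, which is part of the tax being described):

```python
# Discover GRDDL transformation links declared in an XHTML head.
import xml.etree.ElementTree as ET

XHTML = "http://www.w3.org/1999/xhtml"
GRDDL_PROFILE = "http://www.w3.org/2003/g/data-view"

def grddl_transformations(xhtml_source: str) -> list[str]:
    """Return hrefs of GRDDL transformations declared in an XHTML head."""
    root = ET.fromstring(xhtml_source)
    head = root.find(f"{{{XHTML}}}head")
    if head is None or GRDDL_PROFILE not in head.get("profile", ""):
        return []  # no GRDDL profile declared; no transforms are licensed
    return [
        link.get("href")
        for link in head.findall(f"{{{XHTML}}}link")
        if "transformation" in link.get("rel", "").split()
    ]

doc = f"""<html xmlns="{XHTML}">
  <head profile="{GRDDL_PROFILE}">
    <title>t</title>
    <link rel="transformation" href="http://example.org/tx.xsl"/>
  </head>
  <body/>
</html>"""
print(grddl_transformations(doc))  # ['http://example.org/tx.xsl']
```

Everything *after* this step - fetching the transform, running XSLT
1.0, recursing on the result - is where the unnamed conformance class
gets expensive.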

[[[
If you really want to take this to its logical conclusion, it would be
nice to have a vocabulary for describing the capabilities of Semantic
Web user agents to consume various documents, a writeup of the
heuristics that they ought to use, and a kind of extra layer of
conformance levels for Semantic Web user agent authors to try to meet.
"Don't wanna support all of GRDDL? Here are a few common subsets that
are well deployed."

This should be based on some level of description, looking to see what
kinds of documents people are actually using, and prescription, what
kinds would be good to produce especially in future when things like
RDFa go to rec.
]]] - Best Practices Issue: RDF Format Discovery
http://lists.w3.org/Archives/Public/public-swd-wg/2007Nov/0056

Sounds like a (much needed) first step in the right direction.
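
To sketch what that vocabulary-plus-conformance-levels idea could look
like operationally (every class name and capability set below is
invented for illustration; defining the real ones is the proposed
work):

```python
# Hypothetical conformance classes as required capability sets, and a
# check for which classes a given user agent satisfies.
CONFORMANCE_CLASSES = {
    "class-1": {"rdfxml"},
    "class-2": {"rdfxml", "turtle", "rdfa"},
    "class-3": {"rdfxml", "turtle", "rdfa", "grddl-xslt1"},
}

def classes_met(ua_capabilities: set[str]) -> list[str]:
    """Names of conformance classes this agent satisfies."""
    return sorted(
        name for name, required in CONFORMANCE_CLASSES.items()
        if required <= ua_capabilities
    )

my_agent = {"rdfxml", "turtle", "rdfa"}
print(classes_met(my_agent))  # ['class-1', 'class-2']
```

With something like this, "a Class 2 SWUA will return 53 triples"
becomes a checkable claim rather than folklore.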

I think it would be a shame if none of what you suggest were
considered simply because it is out of scope for the forums you
pinged.  The vision of a web of data where agents can behave
efficiently on our behalf will not happen without a framework that
contributes some minimal amount of coherence and semi-determinism.

My $0.02

 -- Chimezie
Received on Friday, 23 November 2007 22:27:56 GMT
