
Reference online services?

From: Danny Ayers <danny.ayers@gmail.com>
Date: Sat, 20 Jan 2007 22:28:52 +0100
Message-ID: <1f2ed5cd0701201328v33aacd31h5a617b04fec955d7@mail.gmail.com>
To: "W3C SWEO IG" <public-sweo-ig@w3.org>

I forget if this has come up before, but it would be very useful to have a
set of online tools etc. for reference use in demos and tutorials, and for
developers to play with. Ideally they'd be hosted by the W3C, like the
validators and the XSLT service, but I've no idea of the
logistics/feasibility there.

Virtually as good would be a list of third-party tools checked and
(presumably informally) rubber-stamped as live references by a group such as
SWEO, hence reasonably reliable, spec-conformant and *easy to find*. This
could be considered a minor extension/focussing of the InfoGathering task
(running a few tests), and it could help the Web Developer Outreach task.

The data/tools/services might include:

* "Useful" static RDF data sets (i.e. RDF/XML & N3/Turtle files covering
various domains)
* SPARQL services
* Useful XSLT stylesheets (in particular for conversion from other formats
a la GRDDL, plus some designed for use on SPARQL XML results, i.e. for
conversion to other formats)
* Triplestore(s) available for use with the SPARQL services (maybe read-only
and/or allowing data to be POSTed, but with periodic wiping)
* Reasoners (especially one or two limited to RDFS subsumption & IFP
smushing, although OWL DL engines are readily available)
* Data viewers/browsers (e.g. reliable installs of Tabulator etc.)
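As a sketch of how a reference SPARQL service like the ones above would be
driven (the endpoint URL is a placeholder, not a real service), the SPARQL
Protocol just carries the query text as a single percent-encoded parameter:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; any SPARQL Protocol service accepts the same
# request shape, whether the parameter travels by GET or POSTed form data.
ENDPOINT = "http://example.org/sparql"

query = """PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10"""

# Build the request URL: the whole query becomes one encoded 'query' param.
request_url = ENDPOINT + "?" + urlencode({"query": query})
print(request_url)
```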

Background: what brought this to mind was a brief noodle tonight, prompted
by a great line from Lee, which turned into a blog post [1] approximating
half-baked tutorial material. Lee said "reading the RDF via a query
effectively allows the application to define its own API" [2] (where the
application is consuming data from another source). Potentially a very hot
idea for the Web 2.0/mashup crowd.
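To illustrate Lee's point with a couple of hypothetical queries (FOAF
vocabulary assumed, queries invented for the example): the consuming
application defines its own "API" simply by choosing the variables in its
SELECT clause, with no server-side work at all:

```python
# Two made-up "APIs" over the same remote data source. Adding the second
# one requires no change on the server: the application just writes a
# different SELECT clause and gets back a different result shape.
contacts_api = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?mbox
WHERE { ?person foaf:name ?name ; foaf:mbox ?mbox }
"""

homepages_api = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?homepage
WHERE { ?person foaf:name ?name ; foaf:homepage ?homepage }
"""
```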

Where my blog post fell down (aside from the Saturday evening prose ;-) was
in hooking together a demo, which should have been possible without writing
more than a few lines of code (code which would wind up in a long URI,
actually providing a real, useful service). The components needed were:

1. an online SPARQL engine (with an associated HTML form input)
2. an XSLT stylesheet for SPARQL results XML covering a fairly common usage
(named variables ?title, ?description etc. mapped to RSS 1.0)
3. an online XSLT processor
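A sketch of what component (2) would do, shown here in Python rather than
XSLT for brevity, against a hand-made fragment of SPARQL Query Results XML
(the sample data is invented):

```python
import xml.etree.ElementTree as ET

# Namespace of the SPARQL Query Results XML Format.
SR = "{http://www.w3.org/2005/sparql-results#}"

# A tiny hand-made result set, as a SELECT exposing ?title and
# ?description variables would return it.
results_xml = """<sparql xmlns="http://www.w3.org/2005/sparql-results#">
  <results>
    <result>
      <binding name="title"><literal>Hello</literal></binding>
      <binding name="description"><literal>First item</literal></binding>
    </result>
  </results>
</sparql>"""

def results_to_rss_items(xml_text):
    """Map each result row to an RSS 1.0 style (title, description) pair."""
    root = ET.fromstring(xml_text)
    items = []
    for result in root.iter(SR + "result"):
        # Each <binding> wraps one value node (<literal>, <uri>, ...).
        row = {b.get("name"): b[0].text for b in result.iter(SR + "binding")}
        items.append((row.get("title"), row.get("description")))
    return items

print(results_to_rss_items(results_xml))  # [('Hello', 'First item')]
```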

For (1), the first that came to mind was Dave Beckett's Rasqal demo, but
that choked on the URI I was feeding it (I'm guessing a bit of %-escaping
was missing somewhere). Although I think I've got a version of (2) somewhere
myself, I've no idea where - I bet that applies to a lot of people who have
played with SPARQL. (3) is available; bravo, W3C.
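For what it's worth, here's a sketch of the escaping discipline that
chaining these components into one long URI requires (all URLs are
placeholders): the SPARQL request URL becomes a *parameter value* of the
XSLT service URL, so it must be percent-encoded a second time - exactly the
step that's easy to miss:

```python
from urllib.parse import urlencode

# Step 1: the query is encoded once, into the SPARQL engine's request URL.
sparql_url = "http://example.org/sparql?" + urlencode(
    {"query": "SELECT ?title ?description WHERE { ?s ?p ?o } LIMIT 5"})

# Step 2: that whole URL is encoded *again* as the xmlfile parameter of a
# hypothetical online XSLT service, alongside the stylesheet's location.
xslt_url = "http://example.org/xslt?" + urlencode(
    {"xmlfile": sparql_url,
     "xslfile": "http://example.org/sparql2rss.xsl"})

print(xslt_url)
```

Note the `%25` sequences in the output: percent signs from the first round
of encoding are themselves escaped in the second round.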

Wearing the optimist's hat: going beyond a handful of services for
reference/tutorial purposes (effectively application components), it's only
a small step to a whole load of Semantic Web-integrated applications,
dynamically configured by hooking together existing web APIs. This thing
could easily be a lot more programmable... (see [3]).


[1] http://dannyayers.com/2007/01/20/qotd--make-your-own
[3] http://programmableweb.com/


Received on Saturday, 20 January 2007 21:28:58 UTC
