
Re: updated InfoGathering, proposing a portal as a solution - do you agree on a portal?

From: Danny Ayers <danny.ayers@gmail.com>
Date: Fri, 16 Feb 2007 20:42:49 +0100
Message-ID: <1f2ed5cd0702161142j769fa38epb900265033f20309@mail.gmail.com>
To: "Leo Sauermann" <leo.sauermann@dfki.de>
Cc: "Ivan Herman" <ivan@w3.org>, "W3C SWEO IG" <public-sweo-ig@w3.org>

A portal sounds like a reasonable way of presenting the material and,
more importantly, an achievable goal (assuming maintenance can somehow
be taken care of).

But I'm not sure I understand the requirements list on the Wiki - why
should it be MySQL/PHP? We only need one portal, no? Are we sure there
isn't an existing system that could do the job (or at least 90% of
it)? If there isn't an RDF-based system that fits the bill, then
surely there's something that can at least expose RDF (Drupal,
perhaps?). The primary objective is Information Gathering, not
software development, however appealing that may be for demo purposes.

(Whatever, there's always RAP).

more comments inline -

On 16/02/07, Leo Sauermann <leo.sauermann@dfki.de> wrote:

>  We will provide a portal integrating data and providing user interfaces to
> edit the most important information resources - so the pain to keep up to
> date should be forwarded to people like Dave Beckett, who keeps his list of
> Tools anyway (he just now either uses the portal to manage the list or
> publishes his data as RDF/XML)

I see no reason not to leave the pain of finding new stuff to people
like Dave, but I really don't think it's reasonable to expect them to
change their current practice (unless they really want to), or check
for stale items.

For Dave's list, a bit of XSLT & a little manual tweaking should be
enough to get it into RDF (I have a feeling I started on one sometime
last year - not sure how far I got). A periodic automated check for
404s & a human-reporting mechanism should be adequate for catching
dead sites.
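To illustrate, the periodic 404 check could be as small as the sketch
below (modern Python, purely illustrative - the choice of which status
codes count as "dead" is my assumption, not anything we've agreed):

```python
# Hypothetical sketch of a periodic dead-link check for a tools list.
# Which statuses count as "stale" is an assumption for illustration.
import urllib.request
import urllib.error


def check_url(url, timeout=10):
    """Return the HTTP status code for url, or None if unreachable."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # server answered, but with an error status
    except (urllib.error.URLError, OSError):
        return None  # DNS failure, refused connection, timeout, ...


def is_stale(status):
    """Flag gone (404/410) or unreachable entries for human review."""
    return status is None or status in (404, 410)
```

Anything `is_stale` flags would go to a human rather than being
dropped automatically - sites do come back.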

>  Before the architecture, I would define the user experience.
>  Features first, then architecture.

Software agents are users too! I would hope all the data will be
available to remote systems as RDF, and ideally via SPARQL too (plus
Atom/RSS for newsreaders). It might be worth investigating automated
3rd party addition of entries, along the lines of CodeZoo's
DOAP-over-Atom [1] and/or Pingthesemanticweb. Ok, a bit of development
may be needed...
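The DOAP angle assumes each tool entry can be serialised as a small
machine-readable project description. A rough sketch (modern Python;
the particular properties emitted are my choice for illustration, not
a proposal) of one entry as minimal DOAP in RDF/XML:

```python
# Illustrative only: serialise one tool entry as a minimal DOAP
# doap:Project in RDF/XML, so agents can consume the list as RDF.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DOAP = "http://usefulinc.com/ns/doap#"
ET.register_namespace("rdf", RDF)
ET.register_namespace("doap", DOAP)


def doap_entry(name, homepage):
    """Build an rdf:RDF document holding one doap:Project."""
    root = ET.Element("{%s}RDF" % RDF)
    proj = ET.SubElement(root, "{%s}Project" % DOAP)
    ET.SubElement(proj, "{%s}name" % DOAP).text = name
    home = ET.SubElement(proj, "{%s}homepage" % DOAP)
    home.set("{%s}resource" % RDF, homepage)
    return ET.tostring(root, encoding="unicode")


xml = doap_entry("Redland", "http://librdf.org/")
```

Entries like that could then be carried in an Atom feed per the
CodeZoo approach, or loaded into whatever store backs the portal.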

One point that should probably be considered early on is
licensing/copyright. We should be aiming for maximally open data here.
Any automated parts will likely need Creative Commons awareness;
otherwise, permission will need to be asked for...

Cheers,
Danny.

[1] http://www.codezoo.com/about/doap_over_atom.csp


-- 

http://dannyayers.com
Received on Friday, 16 February 2007 19:42:58 GMT