- From: Dan Connolly <connolly@w3.org>
- Date: Wed, 23 Aug 2006 09:29:32 -0500
- To: Fabien Gandon <Fabien.Gandon@sophia.inria.fr>
- Cc: public-grddl-wg@w3.org, Jean-Guilhem Rouel <jean-gui@w3.org>
Thanks for getting this started...

On Wed, 2006-08-23 at 15:40 +0200, Fabien Gandon wrote:
> [...] By crawling the published reports and applying this
> transformation to them, a complete and up-to-date RDF index is built
> from resources distributed over the organization.

That can be read to mean: every time one tech report is published, all
of them are scanned for metadata. That's not the way it works.

The way it works is: when a request to publish comes in, the document
is checked by machine; if the checks pass, the checking tools extract
some RDF from the document to be published. That RDF data is appended
to a log of recent publications; the log of recent publications is
mixed with the last full "checkpoint" (taken about once every six
months) to produce the current data about publications. The various
views are generated from the current data using XSLT.

Currently, the tech reports don't have GRDDL markup in them. But if
they did, the RDF data would travel with them when they're copied, for
example.

I need to think a bit about a story that goes:

  W3C has a digital library...
  ... then GRDDL comes along ...
  ... and life is better, TADA!

> The simple fact that the XHTML follows an official template allows a
> GRDDL stylesheet to be defined to extract corresponding RDF
> annotations that can then be used to build a portal or support a
> workflow (e.g. TR automation).

The workflow currently works without GRDDL, actually.

-- 
Dan Connolly, W3C http://www.w3.org/People/Connolly/
D3C2 887B 0F92 6005 C541 0875 0F91 96DE 6E52 C29E
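For readers unfamiliar with the TR automation flow Dan describes, here is a minimal sketch of that check/extract/append/merge cycle. All names (`TRDatabase`, `publish`, `take_checkpoint`) and the dict-based document model are hypothetical stand-ins, not the actual W3C tooling; the real system extracts RDF and renders views with XSLT, which this sketch omits.

```python
from dataclasses import dataclass, field

@dataclass
class TRDatabase:
    """Hypothetical model of the publication pipeline described above."""
    checkpoint: list = field(default_factory=list)  # full snapshot, taken ~every 6 months
    recent_log: list = field(default_factory=list)  # metadata appended since the checkpoint

    def publish(self, doc: dict) -> bool:
        # Machine checks on the incoming document; requiring a title and
        # a date here is an assumed stand-in for the real checks.
        if not doc.get("title") or not doc.get("date"):
            return False
        # The checking tools extract some RDF-like facts from the document
        # and append them to the log of recent publications.
        facts = [(doc["uri"], "dc:title", doc["title"]),
                 (doc["uri"], "dc:date", doc["date"])]
        self.recent_log.extend(facts)
        return True

    def current_data(self) -> list:
        # Current data = last full checkpoint mixed with the recent log;
        # the various views would be generated from this (via XSLT).
        return self.checkpoint + self.recent_log

    def take_checkpoint(self) -> None:
        # Fold the log into a new checkpoint and start a fresh log.
        self.checkpoint = self.current_data()
        self.recent_log = []
```

Note that in this model nothing ever re-scans previously published reports: each publication only appends its own facts, which is the point Dan is making against the "crawling" reading.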
Received on Wednesday, 23 August 2006 14:29:51 UTC