- From: Chris Bizer <chris@bizer.de>
- Date: Fri, 20 Nov 2009 19:07:39 +0100
- To: "'Georgi Kobilarov'" <georgi.kobilarov@gmx.de>, "'Michael Hausenblas'" <michael.hausenblas@deri.org>, "'Herbert Van de Sompel'" <hvdsomp@gmail.com>
- Cc: "'Linked Data community'" <public-lod@w3.org>
Hi Michael, Georgi and all,

just to complete the list of proposals, here is another one, from Herbert Van de Sompel of the Open Archives Initiative:

Memento: Time Travel for the Web
http://arxiv.org/abs/0911.1112

The idea of Memento is to use HTTP content negotiation in the datetime dimension. By using a newly introduced X-Accept-Datetime HTTP header, they add a temporal dimension to URIs. The result is a framework in which archived resources can seamlessly be reached via the URI of their original.

Sounds cool to me. Does anybody have an opinion on whether this violates general Web architecture somewhere? Is anybody aware of other proposals that work at the HTTP level?

Have a nice weekend,

Chris

> -----Original Message-----
> From: public-lod-request@w3.org [mailto:public-lod-request@w3.org] On Behalf
> Of Georgi Kobilarov
> Sent: Friday, 20 November 2009 18:48
> To: 'Michael Hausenblas'
> Cc: Linked Data community
> Subject: RE: RDF Update Feeds
>
> Hi Michael,
>
> nice write-up on the wiki! But I think the vocabulary you're proposing is
> too generally descriptive. Dataset publishers, once they offer update
> feeds, should not only state that/whether their datasets are "dynamic", but
> how dynamic they are.
>
> It could be very simple, by expressing: "Pull our update stream once per
> second/minute/hour in order to be *enough* up-to-date".
>
> Does that make sense?
>
> Cheers,
> Georgi
>
> --
> Georgi Kobilarov
> www.georgikobilarov.com
>
> > -----Original Message-----
> > From: Michael Hausenblas [mailto:michael.hausenblas@deri.org]
> > Sent: Friday, November 20, 2009 4:01 PM
> > To: Georgi Kobilarov
> > Cc: Linked Data community
> > Subject: Re: RDF Update Feeds
> >
> > Georgi, All,
> >
> > I like the discussion, and as it seems to be a recurrent pattern, as
> > pointed out by Yves (which might be a sign that we need to invest some
> > more time into it), I've tried to sum up a bit and started a straw-man
> > proposal for a more coarse-grained solution [1].
> >
> > Looking forward to hearing what you think ...
> >
> > Cheers,
> > Michael
> >
> > [1] http://esw.w3.org/topic/DatasetDynamics
> >
> > --
> > Dr. Michael Hausenblas
> > LiDRC - Linked Data Research Centre
> > DERI - Digital Enterprise Research Institute
> > NUIG - National University of Ireland, Galway
> > Ireland, Europe
> > Tel. +353 91 495730
> > http://linkeddata.deri.ie/
> > http://sw-app.org/about.html
> >
> > > From: Georgi Kobilarov <georgi.kobilarov@gmx.de>
> > > Date: Tue, 17 Nov 2009 16:45:46 +0100
> > > To: Linked Data community <public-lod@w3.org>
> > > Subject: RDF Update Feeds
> > > Resent-From: Linked Data community <public-lod@w3.org>
> > > Resent-Date: Tue, 17 Nov 2009 15:46:30 +0000
> > >
> > > Hi all,
> > >
> > > I'd like to start a discussion about a topic that I think is getting
> > > increasingly important: RDF update feeds.
> > >
> > > The Linked Data project is starting to move away from releases of
> > > large data dumps towards incremental updates. But how can services
> > > consuming RDF data from linked data sources get notified about
> > > changes? Is anyone aware of activities to standardize such RDF
> > > update feeds, or at least aware of projects already providing any
> > > kind of update feed at all? And related to that: how do we deal
> > > with RDF diffs?
> > >
> > > Cheers,
> > > Georgi
> > >
> > > --
> > > Georgi Kobilarov
> > > www.georgikobilarov.com
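The datetime negotiation Chris describes amounts to an ordinary HTTP request carrying the proposed header. A minimal sketch in Python, using the X-Accept-Datetime header name from the paper's draft (the resource URI is an illustrative assumption, not a real Memento endpoint):

```python
# Sketch of Memento-style datetime content negotiation: the client asks
# for the state of a resource at a past moment via an X-Accept-Datetime
# header, and the server is expected to lead it to the matching archived
# version. The URI below is a placeholder; no real endpoint is implied.
from email.utils import formatdate
import calendar
import datetime
import urllib.request

def datetime_request(uri, when):
    """Build a GET request whose X-Accept-Datetime header names the
    desired moment in RFC 1123 format, as HTTP date headers require."""
    stamp = calendar.timegm(when.utctimetuple())
    req = urllib.request.Request(uri)
    req.add_header("X-Accept-Datetime", formatdate(stamp, usegmt=True))
    return req

req = datetime_request("http://example.org/resource",
                       datetime.datetime(2009, 3, 15, 12, 0, 0))
print(req.get_header("X-accept-datetime"))
# Sun, 15 Mar 2009 12:00:00 GMT
```

In a full exchange the archive would then, as the paper describes, route the client from the original URI to the archived copy for that datetime, so old states stay reachable through the resource's own URI.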
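Georgi's "once per second/minute/hour" hint could be consumed mechanically by a feed client. A hypothetical sketch, assuming the interval keywords and the helper name (the thread fixes no actual vocabulary):

```python
# Hypothetical consumer of a publisher-declared update interval, per
# Georgi's "pull our update stream once per second/minute/hour". The
# keyword-to-seconds mapping and function name are assumptions for
# illustration only.
INTERVALS = {"second": 1, "minute": 60, "hour": 3600}

def poll_schedule(declared, start=0.0, polls=3):
    """Return the first few poll times (in seconds from `start`)
    implied by a declared interval keyword."""
    step = INTERVALS[declared]
    return [start + i * step for i in range(1, polls + 1)]

print(poll_schedule("minute"))
# [60.0, 120.0, 180.0]
```

A real client would fetch the update stream at each of these times (ideally with conditional GETs) and apply any diffs it finds; how those diffs are represented is exactly the open question Georgi raises.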
Received on Friday, 20 November 2009 18:06:29 UTC