Re: RDF Update Feeds + URI time travel on HTTP-level

On 20 Nov 2009, at 19:07, Chris Bizer wrote:
> just to complete the list of proposals, here is another one, from  
> Herbert Van de Sompel of the Open Archives Initiative.
>
> Memento: Time Travel for the Web
> http://arxiv.org/abs/0911.1112
>
> The idea of Memento is to use HTTP content negotiation in the datetime
> dimension. By using a newly introduced X-Accept-Datetime HTTP header  
> they
> add a temporal dimension to URIs. The result is a framework in which
> archived resources can seamlessly be reached via the URI of their  
> original.

Interesting! It seems to be most useful for “time travelling” on the  
web: it would allow me to browse the web as it was at some point in  
the past, similar to the Wayback Machine [1]. Unlike the Wayback  
Machine, it would work without a central archive, but only on those  
servers that implement the proposal, and only with a browser/client  
that supports the feature.
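
To make that concrete, here is a minimal sketch of what I imagine a  
request would look like, going by the paper (the X-Accept-Datetime  
header name is theirs; the host, path and redirect handling are my  
assumptions):

    import http.client

    # Hypothetical sketch: ask example.org how /page looked a year ago.
    # "X-Accept-Datetime" is the header proposed in the Memento paper;
    # the host, path and expected redirect are made up for illustration.
    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/page",
                 headers={"X-Accept-Datetime": "Thu, 20 Nov 2008 12:00:00 GMT"})
    resp = conn.getresponse()
    # The paper describes the server leading the client to the archived
    # copy closest to the requested datetime, e.g. via a redirect.
    print(resp.status, resp.getheader("Location"))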

I don't immediately see how this could be used to synchronize updates  
between datasets though. Being able to access past versions of URIs  
doesn't tell me what has changed throughout the site between then and  
today.
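
Even if I can fetch last year's snapshot and today's, computing the  
delta is still left to me. A minimal sketch with rdflib (the URI is  
made up, and I'm hand-waving how the older snapshot would actually be  
retrieved):

    from rdflib import Graph
    from rdflib.compare import graph_diff

    # Pretend these are two snapshots of the same dataset: one as it
    # was in the archive, one as it is today. (Both URIs are placeholders.)
    old = Graph().parse("http://example.org/data.rdf")
    new = Graph().parse("http://example.org/data.rdf")

    in_both, only_old, only_new = graph_diff(old, new)
    print(len(only_old), "triples removed,", len(only_new), "triples added")

So Memento would give me the snapshots, but not the change feed.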

> Sounds cool to me. Does anybody have an opinion on whether this  
> violates general Web architecture somewhere?

From a web architecture POV it seems pretty solid to me. Doing stuff  
via headers is considered bad if you could just as well do it via  
links and additional URIs, but you can argue that the time dimension  
is such a universal thing that a header-based solution is warranted.

The main drawback IMO is that existing clients, such as all web  
browsers, will be unable to access the archived versions, because they  
don't know about the header. If you are archiving web pages or RDF  
documents, then you could add links that lead clients to the archived  
versions, but that won't work for images, PDFs and so forth.
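
(By links I mean something along these lines, embedded in the current  
version of a page; the rel value and URI are made up for illustration:

    <link rel="archived-version"
          href="http://example.org/archive/2008-11-20/page">

An RDF document could carry an equivalent triple.)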

In summary, I think it's pretty cool. Anyone who has used Apple's Time  
Machine would probably get a kick out of the idea of doing the same on  
a web page, zooming into the past on a Wikipedia page or on GitHub  
or on a weather site. But if you're only interested in doing something  
for a single site, then an ad-hoc solution based on URIs for old  
versions is probably more practical.
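
By an ad-hoc solution I mean simply minting a URI per version and  
linking the versions together, e.g. (URIs made up):

    http://example.org/page               current version
    http://example.org/page/2008-11-20    version of 20 Nov 2008

That needs no new headers, and any existing client can follow it.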

Best,
Richard


[1] http://www.archive.org/web/web.php

> Is anybody aware of other proposals that work at the HTTP level?
>
> Have a nice weekend,
>
> Chris
>
>
>
>> -----Original Message-----
>> From: public-lod-request@w3.org [mailto:public-lod-request@w3.org]
>> On behalf of Georgi Kobilarov
>> Sent: Friday, 20 November 2009 18:48
>> To: 'Michael Hausenblas'
>> Cc: Linked Data community
>> Subject: RE: RDF Update Feeds
>>
>> Hi Michael,
>>
>> nice write-up on the wiki! But I think the vocabulary you're  
>> proposing is too generally descriptive. Dataset publishers, once  
>> they offer update feeds, should not only say whether their datasets  
>> are "dynamic", but how dynamic they are.
>>
>> It could be as simple as stating: "Pull our update stream once per  
>> second/minute/hour in order to be *sufficiently* up to date".
>>
>> Does that make sense?
>>
>> Cheers,
>> Georgi
>>
>> --
>> Georgi Kobilarov
>> www.georgikobilarov.com
>>
>>> -----Original Message-----
>>> From: Michael Hausenblas [mailto:michael.hausenblas@deri.org]
>>> Sent: Friday, November 20, 2009 4:01 PM
>>> To: Georgi Kobilarov
>>> Cc: Linked Data community
>>> Subject: Re: RDF Update Feeds
>>>
>>>
>>> Georgi, All,
>>>
>>> I like the discussion, and as it seems to be a recurrent pattern,  
>>> as pointed out by Yves (which might be a sign that we need to  
>>> invest some more time in it), I've tried to sum things up and  
>>> started a straw-man proposal for a more coarse-grained solution [1].
>>>
>>> Looking forward to hearing what you think ...
>>>
>>> Cheers,
>>>      Michael
>>>
>>> [1] http://esw.w3.org/topic/DatasetDynamics
>>>
>>> --
>>> Dr. Michael Hausenblas
>>> LiDRC - Linked Data Research Centre
>>> DERI - Digital Enterprise Research Institute
>>> NUIG - National University of Ireland, Galway
>>> Ireland, Europe
>>> Tel. +353 91 495730
>>> http://linkeddata.deri.ie/
>>> http://sw-app.org/about.html
>>>
>>>
>>>
>>>> From: Georgi Kobilarov <georgi.kobilarov@gmx.de>
>>>> Date: Tue, 17 Nov 2009 16:45:46 +0100
>>>> To: Linked Data community <public-lod@w3.org>
>>>> Subject: RDF Update Feeds
>>>> Resent-From: Linked Data community <public-lod@w3.org>
>>>> Resent-Date: Tue, 17 Nov 2009 15:46:30 +0000
>>>>
>>>> Hi all,
>>>>
>>>> I'd like to start a discussion about a topic that I think is  
>>>> becoming increasingly important: RDF update feeds.
>>>>
>>>> The linked data project is starting to move away from releases of  
>>>> large data dumps towards incremental updates. But how can services  
>>>> consuming RDF data from linked data sources get notified about  
>>>> changes? Is anyone aware of activities to standardize such RDF  
>>>> update feeds, or at least aware of projects already providing any  
>>>> kind of update feed at all? And related to that: how do we deal  
>>>> with RDF diffs?
>>>>
>>>> Cheers,
>>>> Georgi
>>>>
>>>> --
>>>> Georgi Kobilarov
>>>> www.georgikobilarov.com
>>>>
>>>>
>>>>
>
>
>

Received on Sunday, 22 November 2009 00:07:20 UTC