Re: Keeping crawlers up-to-date

From: Yves Raimond <yves.raimond@gmail.com>
Date: Tue, 28 Apr 2009 17:05:01 +0100
Message-ID: <82593ac00904280905k22153eadj13d84f81a6239d52@mail.gmail.com>
To: Peter Coetzee <peter@coetzee.org>
Cc: Kingsley Idehen <kidehen@openlinksw.com>, Melvin Carvalho <melvincarvalho@gmail.com>, Linking Open Data <public-lod@w3.org>, Nicholas J Humfrey <njh@aelius.com>, Patrick Sinclair <metade@gmail.com>

Hello!

>
> Alternatively, why not take an approach similar to the Wikipedia live feeds,
> and push them out on public chat channels; perhaps SPARQL/Update messages on
> a read-only Jabber/IRC etc stream? Interested parties are free to consume
> them, and use the queries to keep their local copy up-to-date with each set
> of changes. Possibly preferable to reinventing the wheel with some kind of
> stream server :)

We're actually using the Wikipedia IRC live feeds for BBC Music, and
that does work quite well :-) I think XMPP/IRC/MQ-based solutions are
really nice (but not really web-friendly, I guess?), as they help with
the load on our servers. We just post a new message whenever we update
something, and anybody can subscribe to the posted messages to keep
their aggregation in sync.
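
To make that a bit more concrete, here's a rough Python sketch of what
the subscriber side could look like (not our actual code -- the server,
channel name, nick and message format below are made up; it just assumes
each channel message is the URI of a resource that changed):

import socket
import urllib.request

SERVER = "irc.example.org"      # hypothetical server carrying the update feed
CHANNEL = "#dataset-updates"    # hypothetical channel name
NICK = "lod-sync-bot"           # hypothetical nick

def refetch(uri):
    # Re-crawl one resource so the local aggregation stays in sync.
    with urllib.request.urlopen(uri) as response:
        response.read()
        # ... update the local store with the fresh data here ...

sock = socket.create_connection((SERVER, 6667))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())

buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break                   # server closed the connection
    buffer += data
    while b"\r\n" in buffer:
        raw, buffer = buffer.split(b"\r\n", 1)
        line = raw.decode("utf-8", errors="replace")
        if line.startswith("PING"):
            # answer server keep-alives
            sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")
        elif " 001 " in line:
            # registration complete, join the feed channel
            sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
        elif "PRIVMSG" in line:
            # ":nick!user@host PRIVMSG #chan :payload" -- the payload is
            # assumed to be the URI of the resource that changed
            payload = line.split(" :", 1)[-1].strip()
            if payload.startswith("http"):
                refetch(payload)

The publisher side is equally small: a single PRIVMSG to the channel
whenever a resource changes, so crawlers only re-fetch what actually
changed instead of polling everything.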

Cheers,
y

>
> Peter
>>>> Did anyone try to tackle this problem already?
>>>>
>>>> Cheers!
>>>> y
>>>>
>>>> [1] http://n2.talis.com/wiki/Changeset
>>
>> --
>> Regards,
>>
>> Kingsley Idehen       Weblog: http://www.openlinksw.com/blog/~kidehen
>> President & CEO OpenLink Software     Web: http://www.openlinksw.com
Received on Tuesday, 28 April 2009 16:05:50 UTC

This archive was generated by hypermail 2.3.1 : Sunday, 31 March 2013 14:24:20 UTC