- From: Lars Heuer <heuer@semagia.com>
- Date: Thu, 25 Oct 2012 13:23:34 +0200
- To: Lars Marius Garshol <larsga@garshol.priv.no>
- Cc: "public-sdshare@w3.org" <public-sdshare@w3.org>
Hi Lars,

[...]

> I don't get it. If the problem is that the client must completely
> process feed X over again in the case of failure, how would this
> help? The client has still got to do X over again, right?

No, if the server uses archived feeds, the client could process the
fragments partially. And if the process is interrupted, it can restart
with the feed where the process broke off.

Thinking further about it, this implies an ordering of feeds, so my
suggestion may not solve the problem. I'll try to explain it anyway ;)

In [1] you wrote about the problem of processing 60,000 entries.
Pagination wouldn't help here since it does not guarantee stable,
reliable IRIs. If the server uses archived feeds with, for example,
1,000 entries per feed, the client could restart processing feed X
using its IRI instead of reading all entries again. If the server uses
1,000 entries/feed and the client detects an error while processing
entry 19,900, the client can go straight to the IRI of feed 19 and
resume without re-reading all successfully synchronized entries.

Anyway, as said, archived feeds may not solve the "sort or not to
sort" problem since they imply some kind of ordering/sorting.

[1] <http://lists.w3.org/Archives/Public/public-sdshare/2012Oct/0010.html>

Best regards,
Lars

--
Semagia <http://www.semagia.com/>
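[Editor's note: the resume computation described in the mail above can be sketched in a few lines. The feed-IRI pattern and helper name below are hypothetical illustrations, not part of SDShare or any archived-feed specification; the only fixed inputs are the 1,000 entries/feed page size and the failure at entry 19,900 from the mail.]

```python
# Sketch of the resume logic from the mail: with fixed-size archived
# feeds, the client can compute which archive contains the entry where
# processing failed and restart there, skipping already-synced feeds.

ENTRIES_PER_FEED = 1000  # page size chosen by the server (example value)

def resume_feed_iri(base_iri, failed_entry_index):
    """Return the IRI of the archived feed holding the failed entry.

    Assumes feeds are numbered from 0 and that archive IRIs follow a
    stable pattern like <base>/feed/<n> -- an illustrative assumption,
    since the whole point is that these IRIs must be stable and reliable.
    """
    feed_number = failed_entry_index // ENTRIES_PER_FEED
    return "%s/feed/%d" % (base_iri, feed_number)

# Error at entry 19,900 with 1,000 entries per feed -> archived feed 19:
print(resume_feed_iri("http://example.org/sdshare", 19900))
# -> http://example.org/sdshare/feed/19
```

This only works because each archived feed keeps a permanent IRI; with ordinary pagination the page boundaries (and hence the IRIs) can shift as new entries arrive, which is why the mail argues pagination does not help.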
Received on Thursday, 25 October 2012 11:24:02 UTC