- From: Martin J. Dürst <duerst@it.aoyama.ac.jp>
- Date: Wed, 30 Oct 2013 11:38:42 +0900
- To: Simon Sapin <simon.sapin@exyr.org>
- CC: spec-prod@w3.org
On 2013/10/30 0:39, Simon Sapin wrote:
> On 29/10/2013 15:22, Robin Berjon wrote:
>> (There are also resource issues to consider: a spider going through
>> all the history of a long and complex draft would likely use up
>> non-negligible resources.)
>
> I don’t think a spider is needed. It could be server-side software
> that serves files directly from the repository based on a commit hash
> in the URL, which AFAIK is not very resource-intensive.

I agree. I don't know whether "spider" refers to e.g. search engines or to something more W3C-internal. If it's search engines, I wouldn't be worried: they have a lot more to crawl than a few draft versions, and they seem to be doing just fine. If it's internal, the results can always be cached somehow, which is probably what Simon is referring to.

Regards, Martin.
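[Editor's note: a minimal sketch of the kind of server-side software Simon describes, serving a file straight out of a git repository at a given commit via `git show`. The URL scheme (/<commit-hash>/<path>), the repository location, and the handler name are all hypothetical illustrations, not anything actually deployed.]

```python
#!/usr/bin/env python3
"""Sketch: serve draft snapshots directly from a git repository,
keyed by a commit hash in the URL. Hypothetical, for illustration."""

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

REPO = "/var/www/drafts.git"  # hypothetical path to the spec's repo


class CommitSnapshotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect URLs like /abc123d/Overview.html
        commit, _, path = self.path.lstrip("/").partition("/")
        try:
            # `git show <commit>:<path>` emits the file as it existed
            # at that commit, without checking anything out on disk.
            body = subprocess.check_output(
                ["git", "--git-dir", REPO, "show", f"{commit}:{path}"])
        except subprocess.CalledProcessError:
            self.send_error(404, "No such commit or file")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # A snapshot at a fixed commit never changes, so it can be
        # cached indefinitely, which addresses the resource concern
        # raised in the thread.
        self.send_header("Cache-Control", "max-age=31536000, immutable")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8000), CommitSnapshotHandler).serve_forever()
```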
Received on Wednesday, 30 October 2013 02:39:32 UTC