
Re: W3c Link Checker

From: Andrew Cates <Andrew@sos-uk.org.uk>
Date: Thu, 21 Feb 2008 08:55:12 -0600
Message-ID: <80D53D6BB794D44AA427979D2B2BC66CBA944B@SOSSERVER.local.soschildren.org.uk>
To: <site-comments@w3.org>, <Gary@RiddellHouse.Net>
Hi Gary, 
The problem is that the link checker's output pages really do have to be
forbidden to robots, especially the W3C robot that checks the links
itself.
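For illustration, the exclusion can be as simple as a robots.txt rule on the checker's host (the path shown here is hypothetical, not necessarily the checker's real one):

```
User-agent: *
Disallow: /checklink
```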
People often link directly to the checker's output page from the very
web page that has been link-checked.
If all the search engine robots were allowed to follow all those links
they would trigger a recalculation of the link check for each page every
time they followed the link. 
Worse still, the poor W3 robot would get into a circular loop when it
saw a page linking to the linkcheck page for itself; it would try the
linkcheck link and automatically call on itself to retest all the links
in the page (including adding another linkcheck link to its wait queue)
ad nauseam.
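That loop can be sketched in a few lines. This is a hypothetical toy model (the checker URL, page URL, and depth cap are all illustrative, not the real checker's behaviour): a page links to the link-check results for itself, so a checker that follows its own output URLs keeps re-checking the same page, and only an artificial depth cap stops the demo.

```python
# Hypothetical sketch of the circular loop: a page links to the
# link-checker results page for itself.
CHECKER = "https://validator.example/checklink?uri="  # illustrative URL

# Toy web: page -> outgoing links. The page links to its own check results.
PAGES = {
    "https://example.org/page": [CHECKER + "https://example.org/page"],
}

def check_links(url, respect_robots, depth=0, max_depth=10):
    """Return how many link checks are triggered starting from `url`."""
    if depth > max_depth:      # artificial cap so the demo terminates
        return 1
    checks = 1
    for link in PAGES.get(url, []):
        if link.startswith(CHECKER):
            if respect_robots:
                continue       # checker output is robots-forbidden: skip it
            # Following the link re-runs the check on the target page,
            # which links back to its own check results... ad nauseam.
            target = link[len(CHECKER):]
            checks += check_links(target, respect_robots, depth + 1, max_depth)
    return checks

print(check_links("https://example.org/page", respect_robots=True))   # 1
print(check_links("https://example.org/page", respect_robots=False))  # 12
```

With the robots exclusion honoured, a single check suffices; without it, the recursion is unbounded and stops only at the depth cap.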
Does that make sense?
Andrew Cates
http://catesfamily.org.uk
Received on Thursday, 21 February 2008 14:55:21 UTC
