- From: <bugzilla@wiggum.w3.org>
- Date: Tue, 18 Jul 2006 16:32:20 +0000
- To: www-validator-cvs@w3.org
http://www.w3.org/Bugs/Public/show_bug.cgi?id=2346

------- Comment #4 from Otto.Stolz@uni-konstanz.de 2006-07-18 16:32 -------

How could a robots.txt entry influence the link checker's handling of recursive requests at all? Under normal circumstances, a link checker will find many identical links in its input, so it will certainly keep a list of links already checked; no recursive link structure can drive it into infinite recursion or an endless loop.

What I am asking for has nothing to do with the size of the link checker's task; it simply tells the link checker not to balk at links (from client pages) to the link checker. Note that your own documentation recommends placing such links in the client pages -- yet your link checker balks at them.

If you are concerned about links pointing into your pages beyond the link checker itself, you can certainly disallow link checking into your private directories.
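
To illustrate the point about already-checked links, here is a minimal sketch (in Python; this is not the actual W3C checklink code) of the visited-set bookkeeping that makes infinite recursion impossible. The `fetch_links` callable is a hypothetical stand-in for the HTTP fetch and HTML parsing:

    from collections import deque
    from urllib.parse import urljoin

    def check_links(start_url, fetch_links, max_depth=3):
        # `fetch_links(url)` is a hypothetical callable returning the
        # link targets found on the page at `url`.
        visited = set()
        queue = deque([(start_url, 0)])
        while queue:
            url, depth = queue.popleft()
            if url in visited or depth > max_depth:
                continue  # already checked: recursive structures cannot loop
            visited.add(url)
            for link in fetch_links(url):
                queue.append((urljoin(url, link), depth + 1))
        return visited

    # Two pages linking to each other (A -> B -> A) terminate after
    # each page has been checked exactly once:
    pages = {"http://a/": ["http://b/"], "http://b/": ["http://a/"]}
    print(check_links("http://a/", lambda u: pages.get(u, [])))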
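And to illustrate the last point, a sketch of such a robots.txt entry. The user-agent name "W3C-checklink" is the token the W3C link checker is commonly reported to send, but treat it as an assumption here:

    # Keep the link checker out of a private directory
    # ("W3C-checklink" assumed as the checker's user-agent token)
    User-agent: W3C-checklink
    Disallow: /private/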
Received on Tuesday, 18 July 2006 16:33:30 UTC