
checklink & output

From: Eric Gorr <mailist@ericgorr.net>
Date: Tue, 22 Mar 2005 16:29:48 -0500
To: www-validator@w3.org
Message-id: <42408E4C.9030308@ericgorr.net>

I am running checklink with:

   checklink -q -b -s -r -u

basically trying to suppress as much output as I can, since I only want 
to see the broken links.

However, I am still getting output that looks like:

  http://iplab.com/resources/StdCommLib1.js       Line: 15
  Code: (N/A) Forbidden by robots.txt
  To do: The link was not checked due to robots exclusion rules.
  Check the link manually.

Is there any way to suppress this output as well?

Ideally, what I would like to see is output that looks something like:

   BROKEN: <web page> <broken link>
   BROKEN: <web page> <broken link>
   BROKEN: <web page> <broken link>

which is what the script:

   http://world.std.com/~swmcd/steven/perl/pm/lc/linkcheck.html

does, but checklink seems to be a far better tool in every other respect.

Rather than a lot of output I don't care about, like:

   Processing      <web page>

   This may take some time if the document has many links to check.
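In the meantime, one workaround is to post-process checklink's report with awk. This is only a sketch against the 2005-era plain-text format quoted above (a URL line followed by a "Code:" line); the sample report lines and the 404 entry are hypothetical, and the patterns may need adjusting for other checklink versions:

```shell
# Hypothetical checklink report in the format quoted above; normally this
# would come from a pipe: checklink -q -b -s -r -u <web page> | awk ...
report='  http://iplab.com/resources/StdCommLib1.js       Line: 15
  Code: (N/A) Forbidden by robots.txt
  http://example.org/missing.html       Line: 20
  Code: 404 Not Found'

printf '%s\n' "$report" | awk '
  /^[[:space:]]*http/ { link = $1 }                       # remember the last URL seen
  /^[[:space:]]*Code:/ && !/robots\.txt/ { print "BROKEN: " link }  # skip robots exclusions
'
```

This prints only "BROKEN: http://example.org/missing.html", dropping the robots.txt entry; extending the print to include the parent page would need the "Processing" lines that -q suppresses.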


Should I make this a feature request and enter it into the Bugzilla 
database?
Received on Wednesday, 23 March 2005 01:21:02 GMT
