
automated validator test suite

From: olivier Thereaux <ot@w3.org>
Date: Wed, 27 Feb 2008 16:33:49 -0500
Message-Id: <0DEA5B26-521B-49D4-B1B4-C314D8FE37EB@w3.org>
To: W3C tools hacking list <public-qa-dev@w3.org>


I have just committed to CVS some of the work I've been doing in the  
past couple of days: I took our code for the link test suite and  
turned it into an automated test suite for the markup validator.

This completes an old action item of mine to automate what we have at:

The test suite is at (CVS repository):

As the commit messages state, there are a number of changes from the
link test suite:

* instead of a nested directory structure, the test suite is described  
in a single XML file
* comparison of actual and expected results is more flexible, e.g. it
is possible to specify
   "results should have some warning(s)" rather than
   "results should have 12 warnings exactly"
* easy selection of the tested validator instance from the command line
* possibility of running only a subset of the test suite with a  
commandline option
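To illustrate the flexible comparison point above, here is a minimal sketch of how "some warnings" versus an exact count might be checked. The function name and the "some"/"none" keywords are assumptions for illustration, not the actual harness code:

```python
# Hypothetical sketch: flexible comparison of actual vs. expected warning
# counts. "some" passes for any non-zero count, "none" requires zero, and
# an integer string requires an exact match. Names are illustrative only.

def check_warnings(expected, actual_count):
    """Compare an actual warning count against an expected value.

    expected may be "some" (at least one warning), "none" (zero
    warnings), or an integer string (exact count).
    """
    if expected == "some":
        return actual_count > 0
    if expected == "none":
        return actual_count == 0
    return actual_count == int(expected)

print(check_warnings("some", 12))   # True: any positive count matches
print(check_warnings("none", 0))    # True
print(check_warnings("12", 11))     # False: exact match required
```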

The main script for the test suite is harness.py; its documentation is
displayed if you run it without options or arguments. Most of the
feature code, however, lives in the Python classes under lib/.
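For readers curious what a catalog-driven harness loop could look like, here is a rough sketch. The XML element and attribute names are invented for illustration; the real catalog.xml almost certainly differs:

```python
# Hypothetical sketch of a catalog-driven test loop. The element and
# attribute names below are invented; the real catalog.xml may differ.
import xml.etree.ElementTree as ET

CATALOG = """
<testsuite>
  <testcase id="t001" title="valid HTML 4.01 document">
    <expect validity="valid" warnings="none"/>
  </testcase>
  <testcase id="t002" title="unknown attribute">
    <expect validity="invalid" warnings="some"/>
  </testcase>
</testsuite>
"""

def load_cases(xml_text):
    """Yield one dict per test case described in the catalog."""
    root = ET.fromstring(xml_text)
    for case in root.findall("testcase"):
        expect = case.find("expect")
        yield {
            "id": case.get("id"),
            "title": case.get("title"),
            "validity": expect.get("validity"),
            "warnings": expect.get("warnings"),
        }

for case in load_cases(CATALOG):
    print(case["id"], case["validity"], case["warnings"])
```

A single XML file like this is easy to diff and to transform (e.g. into an HTML index) compared with a nested directory structure.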

Anything I forgot? Any big flaw? The design is still fairly flexible  
at this point and nothing has been announced (this mail doesn't count)  
so there is room for change and improvement.

Things that remain to be done/completed (and for which help would be
welcome):
* add a way to document normative references to the expected results
of the test suite
* finish editing the catalog.xml test suite file.
   That means mostly giving each test case a title and expected results
* write some code to generate the html index from the metadata
   (as in http://dev.w3.org/cvsweb/2008/link-testsuite/harness/lib/Documentation.py 
  or maybe some XSLT? )
* write handlers for other validators (validator.nu has its API,  
others will be screen-scraped I suppose)
* add more tests
* write code to save the results of a test run
* allow annotations of test run results (why is this failing, etc)
* make pretty html reports from test run results
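On the "handlers for other validators" item, one possible shape for a handler interface is sketched below. The class and method names are assumptions, not the harness's real API; the fake handler stands in for an API-based or screen-scraping implementation:

```python
# Hypothetical sketch: one handler class per validator, each returning a
# normalized (validity, warning_count) result so the harness can compare
# against expectations uniformly. All names here are assumptions.

class ValidatorHandler:
    """Base interface: submit a document, return a normalized result."""
    def check(self, document_url):
        raise NotImplementedError

class FakeHandler(ValidatorHandler):
    """Canned-result handler, useful for testing the harness itself.

    A real subclass would instead POST the document to a validator's
    API (as validator.nu allows) or screen-scrape its results page.
    """
    def __init__(self, results):
        self.results = results  # maps URL -> (validity, warning_count)

    def check(self, document_url):
        return self.results[document_url]

handler = FakeHandler({"http://example.org/": ("valid", 0)})
print(handler.check("http://example.org/"))  # ('valid', 0)
```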

Received on Wednesday, 27 February 2008 21:33:59 UTC
