
Re: Automated Test Runner

From: James Graham <jgraham@opera.com>
Date: Tue, 16 Nov 2010 11:21:55 +0100
Message-ID: <4CE25B43.3090306@opera.com>
To: Kris Krueger <krisk@microsoft.com>
CC: Anne van Kesteren <annevk@opera.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>, "Jonas Sicking (jonas@sicking.cc)" <jonas@sicking.cc>
On 11/16/2010 01:39 AM, Kris Krueger wrote:
> +Jonas
> Jonas are you still interested in helping with this?
> -----Original Message-----
> From: public-html-testsuite-request@w3.org [mailto:public-html-testsuite-request@w3.org] On Behalf Of Anne van Kesteren
> Sent: Monday, November 15, 2010 4:38 AM
> To: public-html-testsuite@w3.org
> Subject: Automated Test Runner
> Hi,
> I thought this would be worth sharing. I put up a sketch of what an automated test runner could be like here:
> http://tc.labs.opera.com/apis/EventSource/testrunner.htm
> http://tc.labs.opera.com/apis/XMLHttpRequest/testrunner.htm
> (Now these are not (well, no longer) part of HTML5, but they use the infrastructure we agreed to use for HTML5 so they work for illustrating the concept, I think.)
> All it requires is tests to be written using testharness.js as well as linking to testharnessreport.js. See the individual files for details.
> testharnessreport.js can be very simple:
> http://tc.labs.opera.com/resources/testharnessreport.js
> The problem with this test runner is that the number of tests is not known upfront. We only know the number of files. So if we make a test file manifest it will have to include data on how many tests are in a given file to give accurate reporting. The reporting itself could also be improved.
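
(As an aside, to make "testharnessreport.js can be very simple" concrete: the sketch below shows the kind of reporting hook such a file could contain. add_completion_callback is the hook testharness.js exposes; the stub, the summary shape, and the status codes here are illustrative assumptions so the sketch runs on its own, not the actual Opera file.)

```javascript
// Stub standing in for testharness.js, which provides add_completion_callback
// for a reporting script like testharnessreport.js to hook into.
var completionCallbacks = [];
function add_completion_callback(cb) { completionCallbacks.push(cb); }

// Summarize a list of test results; status 0 == PASS, 1 == FAIL is an
// assumption mirroring testharness.js's conventions.
function summarize(tests) {
  return {
    passed: tests.filter(function (t) { return t.status === 0; }).length,
    total: tests.length,
  };
}

// A "very simple" testharnessreport.js could just hand a summary to
// whatever runner page embeds the test (here we log it instead).
add_completion_callback(function (tests, harnessStatus) {
  console.log(JSON.stringify(summarize(tests)));
});

// Simulate the harness completing with one pass and one failure.
completionCallbacks.forEach(function (cb) {
  cb([{ name: "t1", status: 0 }, { name: "t2", status: 1 }], { status: 0 });
});
```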

FWIW I have some plans in this area. I even have a little code, but it 
doesn't do anything useful yet :) (I also note that Ms2ger does have 
some code that does do something useful).

As part of my plan, I would like to add per-directory metadata to the 
test system. I think this has the advantage over global metadata that it 
is closer to the tests and so more likely to be kept up to date when 
tests change. In particular I would expect it to be owned by the test 
owner rather than someone coordinating the testsuite as a whole. It has 
the advantage over per-file metadata that it doesn't affect the test 
itself. Concretely, I propose having a JSON manifest file with a 
well-known name like "manifest.json" in each directory containing tests. 
The file would have a structure like this (missing some syntax for ease 
of reading; the file names are placeholders):

  "some_test.html": {type: "javascript",
                     flags: [],
                     expected_results: 10},
  ...
  subdirs: ["more_tests"]

type is "javascript", "reftest", or "manual".
flags indicates specific optional features required by the test or other 
unusual dependencies.
expected_results (default if missing: 1) indicates the number of tests in 
that file.
top_level_browsing_context (default if missing: false) indicates that the 
test needs to run in a top-level browsing context (e.g. for tests that 
cannot run inside an iframe).
subdirs is a list of subdirectories in the current directory that should 
be checked for tests.
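
To show how this would address the "number of tests not known upfront" problem from Anne's mail, here is a sketch that totals expected_results across a directory tree by following subdirs. Only the field names come from the proposal above; the manifest contents, file names, and directory layout are invented for illustration.

```javascript
// Hypothetical manifests keyed by directory path, using the proposed
// fields (type, flags, expected_results, subdirs). All names are made up.
var manifests = {
  "tests": {
    files: {
      "events.html": { type: "javascript", flags: [], expected_results: 3 },
      "parsing.html": { type: "javascript", flags: [] }, // defaults to 1
    },
    subdirs: ["more_tests"],
  },
  "tests/more_tests": {
    files: {
      "focus.html": { type: "manual", flags: [], expected_results: 2 },
    },
    subdirs: [],
  },
};

// Total the number of individual tests upfront, applying the proposed
// default of 1 when expected_results is missing, and recursing into
// each directory listed in subdirs.
function countTests(dir) {
  var manifest = manifests[dir];
  var total = 0;
  for (var file in manifest.files) {
    var entry = manifest.files[file];
    total += entry.expected_results !== undefined ? entry.expected_results : 1;
  }
  manifest.subdirs.forEach(function (sub) {
    total += countTests(dir + "/" + sub);
  });
  return total;
}

console.log(countTests("tests")); // 3 + 1 + 2 = 6
```

With per-file counts in the manifest, a runner can report accurate progress ("test 4 of 6") before any test file has loaded.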

Does this sound reasonable? Did I miss anything obvious?
Received on Tuesday, 16 November 2010 10:22:42 UTC
