From: James Graham <james@hoppipolla.co.uk>
Date: Thu, 12 Dec 2013 13:30:52 +0000
To: public-webapps@w3.org, "public-test-infra@w3.org" <public-test-infra@w3.org>
Redirecting this conversation to public-test-infra.
On 12/12/13 13:01, Arthur Barstow wrote:
> On 12/12/13 7:31 AM, ext Simon Pieters wrote:
>> First I ran the tests using
>> https://bitbucket.org/ms2ger/test-runner/src on a local server, but
>> then I couldn't think of a straightforward way to put the results in
>> the wiki so I just ran the tests manually, too. :-( Since most tests
>> are automated it's silly to run them manually and edit a wiki page. Is
>> there a better way?
>
> Re automated running, there is <http://w3c-test.org/framework/app/suite>
> but I think it is considered obsolete (and isn't maintained). Test
> automation is/was on Tobie's ToDo list. I'll follow up separately about
> the status on public-test-infra.
Ms2ger has a simple in-browser runner which we could adapt to use a
top-level browsing context rather than an iframe, and to use the
manifest file generated by the script in review at [1].
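To make that concrete, the adapted runner might look something along these lines. This is purely an illustrative sketch, not how Ms2ger's runner actually works: the JSON manifest shape and the assumption that each test page posts its results back to the opener are both made up for the example.

  // Illustrative sketch only; assumes a JSON manifest of test URLs and that
  // each test page posts its results to the opener when it finishes.
  interface ManifestEntry {
    url: string;
  }

  function waitForCompletion(win: Window | null): Promise<unknown> {
    return new Promise(resolve => {
      window.addEventListener("message", function handler(e) {
        if (e.source === win) {
          window.removeEventListener("message", handler);
          resolve(e.data);
        }
      });
    });
  }

  async function runTests(manifestUrl: string) {
    const manifest: ManifestEntry[] = await (await fetch(manifestUrl)).json();
    for (const entry of manifest) {
      // Open each test in a top-level browsing context rather than an iframe.
      const win = window.open(entry.url, "test-window");
      const result = await waitForCompletion(win);
      console.log(entry.url, result);
      win?.close();
    }
  }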
> (Re using a wiki for the implementation report, to produce the Web
> Messaging and Web Sockets implementation reports, I created a script
> that merges tests results from individual runs and outputs the wiki
> table syntax.)
Yeah, so I foresee this taking longer to output than to actually do the
run (which I can fully automate for gecko). We should agree on a simple
format that can be produced by any kind of automated runner and make a
tool that can turn that format into an implementation report. Something like
[{test_id:string|list, status:string, subtests:[{name:string,
status:string}]}]
Seems like it would work fine. The test id would either be the URL of
the top-level test file or the list [test_url, cmp, ref_url] for
reftests. The harness status would be something like OK|TIMEOUT|ERROR
and the subtest statuses would be something like PASS|FAIL|TIMEOUT|NOTRUN.
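For concreteness, a result file in that shape might look something like this. The key names and the exact status values are just illustrative, and the test URLs are made up:

  // Sketch of the proposed shape; nothing here is final.
  interface Result {
    // URL of the test, or [test_url, cmp, ref_url] for a reftest.
    test_id: string | [string, string, string];
    // Harness status, e.g. "OK" | "TIMEOUT" | "ERROR".
    status: string;
    // Subtest statuses, e.g. "PASS" | "FAIL" | "TIMEOUT" | "NOTRUN".
    subtests: { name: string; status: string }[];
  }

  // Hypothetical results for one made-up testharness test and one reftest.
  const results: Result[] = [
    {
      test_id: "/example/example-test.html",
      status: "OK",
      subtests: [{ name: "first subtest", status: "PASS" }],
    },
    {
      // How reftest pass/fail maps onto the status field is one of the
      // details we would still need to pin down.
      test_id: ["/example/reftest.html", "==", "/example/reftest-ref.html"],
      status: "OK",
      subtests: [],
    },
  ];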
If we do something like this, I can likely organise for such output to be
automatically generated for every test run on gecko, so producing an
implementation report for any feature would just be a matter of
importing the data from the latest nightly build.
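And on the report side, the tool that turns that format into an implementation report could be pretty small. Here's a rough sketch of what I have in mind; no such tool exists yet, and the file naming, the pass/fail summarisation and the reftest handling are all just assumptions to illustrate the idea:

  // Rough sketch: merge per-browser result files in the proposed format and
  // emit MediaWiki table syntax for an implementation report.
  // Hypothetical invocation: node report.js gecko.json blink.json webkit.json
  import { readFileSync } from "node:fs";

  interface Result {
    test_id: string | string[];
    status: string;
    subtests: { name: string; status: string }[];
  }

  const files = process.argv.slice(2);
  const browsers = files.map(f => f.replace(/\.json$/, ""));
  const runs: Result[][] = files.map(f => JSON.parse(readFileSync(f, "utf8")));

  // Merge results from each run, keyed by test id (lists joined for reftests).
  const key = (id: string | string[]) => (Array.isArray(id) ? id.join(" ") : id);
  const table = new Map<string, string[]>();
  runs.forEach((run, i) => {
    for (const result of run) {
      const row = table.get(key(result.test_id)) ?? browsers.map(() => "NOTRUN");
      // Summarise a test as PASS if the harness status is OK and every subtest
      // passed; how reftest pass/fail should feed into this is glossed over.
      const passed =
        result.status === "OK" && result.subtests.every(s => s.status === "PASS");
      row[i] = passed ? "PASS" : "FAIL";
      table.set(key(result.test_id), row);
    }
  });

  // Emit MediaWiki table syntax, one row per test.
  console.log('{| class="wikitable"');
  console.log("! Test !! " + browsers.join(" !! "));
  for (const [test, statuses] of table) {
    console.log("|-");
    console.log("| " + test + " || " + statuses.join(" || "));
  }
  console.log("|}");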
[1] https://critic.hoppipolla.co.uk/r/440
Received on Thursday, 12 December 2013 13:31:18 UTC