Re: Writing tests where browsers are known to not be conforming

On 12/06/14 15:31, Patrik Höglund wrote:
> Hi!
> 
> Posting here by request of dom@w3.org.
> 
> I'm writing some testharness.js-based conformance tests for the getUserMedia
> spec <http://dev.w3.org/2011/webrtc/editor/getusermedia.html>. I was
> planning to check them in here
> <https://github.com/w3c/web-platform-tests/tree/master/webrtc>. We have a
> mechanism for Chromium/Blink which can run these tests continuously so we
> know we don't regress. However, since the getUserMedia spec is quite new
> and still evolving, Chrome and Firefox fail a number of the test cases
> (e.g. attributes aren't in the right place, methods aren't implemented
> yet, etc.).
> 
> Since we don't want the continuously running tests to fail all the time,
> is there an established way of "disabling" these tests in continuous
> integration? For example, could we pass a parameter
> ?dont_run_known_failing=true and keep a list of known-broken test cases
> in the test file for each browser?

I don't know how Blink is planning to integrate web-platform-tests into
its CI. However, for integration with Mozilla infrastructure I have
created the wptrunner tool [1], which turns out to be fairly
browser-neutral (you can run the tests in Chrome via WebDriver, for
example) and suitable for running the tests locally.
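
For instance, invoking it against a local Firefox build looks roughly like
this (the paths are placeholders; see the documentation at [2] for the
full set of options):

  wptrunner --metadata=/path/to/metadata/ \
            --tests=/path/to/web-platform-tests/ \
            --binary=/path/to/firefox

A different browser can be selected with the --product option, which is
how the WebDriver-based Chrome runs mentioned above are chosen.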

To deal with the problem you describe, this tool can take a directory
tree of expectation manifest files. These are files in an ini-like
format which record the expected results for tests where that result
isn't "pass". Then, for each test, the actual result and the expected
result are compared and a problem is only reported if they differ. This
doesn't require any changes in the tests or in testharness.js.
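
For example, the manifest for a test file getusermedia.html lives in a
parallel metadata tree as getusermedia.html.ini. A sketch (the subtest
names and the condition here are invented for illustration; see [2] for
the exact syntax):

  [getusermedia.html]
    [navigator.getUserMedia is a function]
      expected: FAIL

    [Video constraints are honoured]
      expected:
        if product == "chrome": FAIL

Tests and subtests that appear in no manifest are simply expected to
pass, so the metadata only grows with the set of known failures.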

More documentation is available at [2].

[1] https://github.com/w3c/wptrunner/tree/jgraham/initial
[2] http://wptrunner.readthedocs.org/en/latest/
