
Re: Writing tests where browsers are known to not be conforming

From: Dirk Pranke <dpranke@chromium.org>
Date: Thu, 12 Jun 2014 09:00:00 -0700
Message-ID: <CAEoffTBL-hZuJWT2_YpnxfRBa_xsKk5DTOij=xi8va64b=g9Wg@mail.gmail.com>
To: James Graham <james@hoppipolla.co.uk>
Cc: public-test-infra <public-test-infra@w3.org>, Harald Alvestrand <hta@google.com>, Henrik Kjellander <kjellander@google.com>
On Thu, Jun 12, 2014 at 7:56 AM, James Graham <james@hoppipolla.co.uk> wrote:

> On 12/06/14 15:31, Patrik Höglund wrote:
> > Hi!
> >
> > Posting here by request of dom@w3.org.
> >
> > I'm writing some testharness.js-based conformance tests for the
> > getUserMedia spec <http://dev.w3.org/2011/webrtc/editor/getusermedia.html>.
> > I was planning to check those in here
> > <https://github.com/w3c/web-platform-tests/tree/master/webrtc>. We have a
> > mechanism for chromium/blink which can run these tests continuously so we
> > know we don't regress. However, since the getUserMedia spec is quite new
> > and evolving, Chrome and Firefox fail a bunch of the test cases (e.g.
> > that attributes aren't in the right place, methods aren't implemented
> > yet, etc).
> >
> > Since we want the tests running continuously to not fail all the time, is
> > there some established way of "disabling" these tests in continuous
> > integration? Like, could we pass a parameter ?dont_run_known_failing=true
> > where we keep a list of known broken test cases in the test file for each
> > browser?
>
> I don't know how blink are planning to integrate web-platform-tests in
> their CI.

This is actually documented at
http://www.chromium.org/blink/importing-the-w3c-tests , if anyone is
interested.

As part of the import process, we maintain a blacklist of things not to
import (checked in to Blink), so skipping broken tests is already supported.

I thought Patrik was asking whether we should have a similar list actually
checked into web-platform-tests, but perhaps I misunderstood him.

I have not looked at the wptrunner infrastructure in any detail yet. It
does sound interesting and could be useful to us in some situations.

- Dirk

> However for integration with Mozilla infrastructure I have
> created the wptrunner tool [1], which actually turns out to be fairly
> browser neutral (you can run the tests in Chrome using WebDriver, for
> example) and to be suitable for local running of the tests.
>
> To deal with the problem you describe, this tool can take a directory
> tree of expectation manifest files. These are files in an ini-like
> format which record the expected results for tests where that result
> isn't "pass". Then, for each test, the actual result and the expected
> result are compared and a problem is only reported if they differ. This
> doesn't require any changes in the tests or in testharness.js.
>
> More documentation is available at [2].
>
> [1] https://github.com/w3c/wptrunner/tree/jgraham/initial
> [2] http://wptrunner.readthedocs.org/en/latest/
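
[Editor's note: for readers unfamiliar with the expectation manifests James describes, they nest an ini-style section per test, with subsections per subtest, recording any result other than "pass". A rough sketch (the test and subtest names here are invented for illustration):]

```ini
[getusermedia-attributes.html]
  [navigator.getUserMedia attribute exists]
    expected: FAIL
  [constraints argument is required]
    expected: FAIL
```

Any test or subtest not listed is simply expected to pass, so the manifest only grows with the known-failure list.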
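[Editor's note: the comparison step James describes — report a problem only when the actual result differs from the recorded expectation, defaulting to "PASS" — can be sketched in a few lines of Python. This is an illustration of the idea, not wptrunner's actual code:]

```python
def unexpected_results(actual, expected):
    """Return {test: (actual, expected)} for every test whose actual
    result differs from its recorded expectation (default: PASS)."""
    problems = {}
    for test, result in actual.items():
        # Tests absent from the expectation manifests are expected to pass.
        exp = expected.get(test, "PASS")
        if result != exp:
            problems[test] = (result, exp)
    return problems

# Example: one known failure (suppressed) and one unexpected failure.
actual = {"getusermedia/args.html": "FAIL",
          "getusermedia/attributes.html": "FAIL"}
expected = {"getusermedia/args.html": "FAIL"}  # known failure
print(unexpected_results(actual, expected))
# → {'getusermedia/attributes.html': ('FAIL', 'PASS')}
```

A known failure that starts passing would also show up here as a difference, which is how such a setup catches fixes as well as regressions.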
Received on Thursday, 12 June 2014 16:00:50 UTC
