- From: fantasai <fantasai.lists@inkedblade.net>
- Date: Thu, 21 Mar 2013 15:03:32 -0700
- To: Tobie Langel <tobie@w3.org>
- CC: James Graham <jgraham@opera.com>, Robin Berjon <robin@w3.org>, Dirk Pranke <dpranke@chromium.org>, public-test-infra <public-test-infra@w3.org>, Kris Krueger <krisk@microsoft.com>
On 03/21/2013 07:28 AM, Tobie Langel wrote:
> On Thursday, March 21, 2013 at 2:11 PM, James Graham wrote:
>> Many vendors' systems are designed around the assumption that "all tests
>> must pass" and, for the rare cases where tests don't pass, one is expected
>> to manually annotate the test as failing. This is problematic if you
>> suddenly import 10,000 tests for a feature that you haven't implemented
>> yet. Or even 100 tests of which 27 fail. I don't have a good solution for
>> this other than "don't design your test system like that" (which is rather
>> late). I presume the answer will look something like a means of
>> auto-marking tests as expected-fail on their first run after import.
>
> Afaik, this is what Mozilla does now with CSS ref tests. Fantasai, please
> correct me if I'm wrong.
>
> Tests are batch imported (monthly) and run. New tests which fail are marked
> as such in a manifest file (that Mozilla hosts) and get skipped in test
> runs. Not sure what happens to existing tests which were previously passing
> and now fail, but they're probably either skipped too or investigated.

The only correction I have is that we don't actually do this regularly,
automatically, or systematically. Tests are imported ad hoc at the moment.

~fantasai
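[Editor's note: a minimal sketch of the import-time auto-annotation step James describes, i.e. run freshly imported tests once and record failures in an expectations manifest so later runs treat them as expected-fail. The script, the `run-single-test` runner, and the manifest format are hypothetical placeholders, not Mozilla's actual reftest tooling or manifest syntax.]

```python
#!/usr/bin/env python3
"""Hypothetical sketch: after importing a batch of tests, run them once and
append any failures to an expectations manifest so subsequent runs mark them
expected-fail instead of breaking the build."""

import subprocess
import sys

MANIFEST = "imported-test-expectations.txt"  # assumed manifest location


def run_test(test_path):
    """Run a single test; True means it passed.
    `run-single-test` stands in for the vendor's real test runner."""
    result = subprocess.run(["run-single-test", test_path],
                            capture_output=True, text=True)
    return result.returncode == 0


def mark_expected_failures(imported_tests):
    """First run after import: append an 'expected-fail' line to the
    manifest for every test that does not pass."""
    failures = [t for t in imported_tests if not run_test(t)]
    with open(MANIFEST, "a") as manifest:
        for test in failures:
            manifest.write("expected-fail %s\n" % test)
    return failures


if __name__ == "__main__":
    newly_imported = sys.argv[1:]  # paths of freshly imported tests
    failed = mark_expected_failures(newly_imported)
    print("%d of %d imported tests marked expected-fail"
          % (len(failed), len(newly_imported)))
```

On later runs the harness would read the manifest and report listed tests as expected failures, so only new regressions surface.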
Received on Thursday, 21 March 2013 22:04:05 UTC