- From: Tobie Langel <tobie@w3.org>
- Date: Thu, 21 Mar 2013 15:28:24 +0100
- To: James Graham <jgraham@opera.com>
- Cc: Robin Berjon <robin@w3.org>, Dirk Pranke <dpranke@chromium.org>, public-test-infra <public-test-infra@w3.org>, fantasai.lists@inkedblade.net, Kris Krueger <krisk@microsoft.com>
On Thursday, March 21, 2013 at 2:11 PM, James Graham wrote:

> Many vendors' systems are designed around the assumption that "all tests
> must pass" and, for the rare cases where tests don't pass, one is expected
> to manually annotate the test as failing. This is problematic if you
> suddenly import 10,000 tests for a feature that you haven't implemented
> yet. Or even 100 tests of which 27 fail. I don't have a good solution for
> this other than "don't design your test system like that" (which is rather
> late). I presume the answer will look something like a means of
> auto-marking tests as expected-fail on their first run after import.

Afaik, this is what Mozilla does now with CSS reftests. Fantasai, please correct me if I'm wrong. Tests are batch imported (monthly) and run. New tests that fail are marked as such in a manifest file (which Mozilla hosts) and are skipped in subsequent test runs. I'm not sure what happens to existing tests that were previously passing and now fail, but they're probably either skipped as well or investigated.

It would be great to understand how WebKit (or would that be vendor-specific?) and Microsoft plan to proceed with this.

--tobie
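P.S. For concreteness, a rough sketch of the auto-marking step James describes could look something like the following. The manifest format and the names (run_test, expected-failures.json) are made up for illustration only; this isn't how Mozilla's or anyone else's harness actually works.

    import json

    def auto_mark_new_tests(new_tests, run_test, manifest_path="expected-failures.json"):
        """Run each newly imported test once; record failures as expected-fail."""
        try:
            with open(manifest_path) as f:
                expected_failures = set(json.load(f))
        except FileNotFoundError:
            expected_failures = set()

        for test in new_tests:
            if run_test(test) != "PASS":      # run_test assumed to return "PASS" or "FAIL"
                expected_failures.add(test)   # mark as expected-fail on first run after import

        with open(manifest_path, "w") as f:
            json.dump(sorted(expected_failures), f, indent=2)

The point is just that the first post-import run records failures instead of reporting them, and later runs compare against that record so only new failures show up as regressions.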
Received on Thursday, 21 March 2013 14:28:37 UTC