- From: Geoffrey Sneddon <gsneddon@opera.com>
- Date: Fri, 25 Feb 2011 18:28:09 +0000
- To: Aryeh Gregor <Simetrical+w3c@gmail.com>
- CC: "L. David Baron" <dbaron@dbaron.org>, James Graham <jgraham@opera.com>, Kris Krueger <krisk@microsoft.com>, Anne van Kesteren <annevk@opera.com>, "public-html-testsuite@w3.org" <public-html-testsuite@w3.org>, "Jonas Sicking (jonas@sicking.cc)" <jonas@sicking.cc>
On 20/02/11 00:42, Aryeh Gregor wrote:

> I get the impression that Opera wants to have a fixed number of tests
> because their internal test runner expects that. Maybe someone from
> Opera can clarify this. At this point I see no particular reason that
> we'd need to always run the same number of tests.

So fundamentally the issue here is that a test that is present in one build but not present in another does not show up as a regression; for a variety of reasons, it's not overly practical for us to change this behaviour. As such, things that implicitly fail by not reporting back a result don't work for us.

--
Geoffrey Sneddon — Opera Software
<http://gsnedders.com> <http://opera.com>
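[Editorial note: to make the constraint concrete, here is a minimal sketch, assuming a hypothetical runner that diffs named results between builds (not Opera's actual tooling). Only tests that report a result in both runs can flip from pass to fail; a test that silently stops reporting produces no regression entry.]

    # Hypothetical regression diff: flag tests that reported PASS in the old
    # build and FAIL in the new one. Tests absent from either run are ignored.
    def regressions(old_results, new_results):
        return [name for name, status in new_results.items()
                if status == "FAIL" and old_results.get(name) == "PASS"]

    old = {"canvas-arc": "PASS", "video-seek": "PASS"}
    new = {"canvas-arc": "PASS"}  # video-seek never reported a result this build

    print(regressions(old, new))  # [] : the silently missing test is not flagged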
Received on Friday, 25 February 2011 18:28:57 UTC