Re: Test timeouts

On Monday, September 16, 2013 at 4:00 PM, James Graham wrote:
> To fix this issue and, for the moment, this issue alone, I have made a
> patch to move timeout specification from setup() to a <meta> element:
> 
> <meta name="timeout" content="test-timeout-in-ms">
> 
> There is a review for this at [1].
Moving the timeout value to a <meta> tag seems like a reasonable change. How do you plan to handle existing content that relies on the previous setup()-based API?
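
For concreteness, a sketch of the two styles as I understand them (the
20000ms value is purely illustrative):

    <!-- previous style: timeout passed to setup() in the test script -->
    <script src="/resources/testharness.js"></script>
    <script>
      setup({timeout: 20000});  // per-file timeout in ms
    </script>

    <!-- proposed style: timeout declared in the document head -->
    <meta name="timeout" content="20000">
    <script src="/resources/testharness.js"></script>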
> 
> There is the further question of how much control authors should have 
> over the timeouts of tests. Opinions on this so far have varied from 
> "none at all" through "normal or slow" to "full control". I now have a 
> little empirical data from the repository to guide us here. [2] 
> shows preliminary results of the testharness in gecko with all tests 
> fixed to a 20s timeout. This shows around 100 timeouts from around 4000 
> top-level test files. A number of these are due to missing features, 
> and more are due to the lack of PHP support in the server that I'm using. 
> However we also see legitimate tests that are just slow to run; all of 
> those that show some child test results before getting "timeout" are 
> examples of this. In my opinion the fact that we have both tests that 
> timeout due to implementation bugs and tests that timeout due to 
> slowness is enough to scupper the zero-timeout approach. It's not really 
> clear whether a dual-timeout approach would work, e.g. 5s for most 
> tests (on desktop hardware) but <meta name=timeout content=long> to 
> increase the timeout to 60s or longer for those tests. It at least seems 
> plausible that this could cover a lot of cases although I suspect that 
> we will still find edge cases where we want even longer or more finely 
> controlled timeouts for certain tests. I also don't know if the 
> performance impact of waiting 60s for a test that typically should 
> finish in 6s is prohibitive.

I prefer this approach. Allowing test developers to set timeouts in ms is only really meaningful if all tests run on similar machines. Also, performance evolves over time: faster hardware would call for shorter timeouts, at least until processors get fast enough that a whole new generation of devices can run browsers, resetting the baseline.
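
A rough sketch of how a runner could turn labels into concrete values
while accounting for device speed (the names and the multiplier knob
are hypothetical; the 5s/60s figures are the ones floated above):

    // Hypothetical runner-side logic, not the actual patch.
    var BASE_TIMEOUTS = {
      normal: 5000,  // most tests, on desktop hardware
      long: 60000    // tests flagged with <meta name=timeout content=long>
    };

    // deviceMultiplier would be 1 on desktop, higher on slower devices.
    function effectiveTimeout(label, deviceMultiplier) {
      var base = BASE_TIMEOUTS.hasOwnProperty(label) ?
          BASE_TIMEOUTS[label] : BASE_TIMEOUTS.normal;
      return base * deviceMultiplier;
    }

    effectiveTimeout("long", 1);  // 60000ms on desktop
    effectiveTimeout("long", 2);  // 120000ms on 2x-slower hardware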

We can always fine-tune the approach at a later stage if needed.

Both approaches (this one and the one based on a device multiplier) require informing testharness.js of the timeout the runner is using, right? How do you propose to do that?
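
To make the question concrete, one hypothetical mechanism would be for
the runner to inject the value before the harness script loads (the
__runner_timeout_ms global is invented for this sketch, not an existing
testharness.js API):

    <!-- hypothetical: the runner writes its effective timeout into the
         page before testharness.js loads -->
    <script>
      window.__runner_timeout_ms = 10000;
    </script>
    <script src="/resources/testharness.js"></script>

The harness would then prefer that value over its own default when
arming the overall timeout. But maybe you have something else in mind.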

Thanks,

--tobie 
