Re: Test timeouts

On 17/09/13 12:58, Tobie Langel wrote:
> On Tuesday, September 17, 2013 at 10:55 AM, James Graham wrote:
>> On 16/09/13 18:37, James Graham wrote:
>>
>>> Good question. The easiest (hacky) way is to regenerate the local
>>> testharnessreport.js file for the testrun. However that only works if
>>> the multiplier (or equivalent) is fixed for the entire run. Supporting
>>> per-test overrides in this way would work in the absence of
>>> parallelisation, or with one test server per instance, but neither of
>>> those sound ideal. However the only way I can think of to pass in the
>>> timeout so that it is available before any script has loaded is via a
>>> query parameter, which is something that I have traditionally resisted
>>> commandeering.
>>
>
> In general, this doesn't bother me as much as it bothers you; I see query params as the equivalent of params in a CLI.
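
For what it's worth, the query-param route could be as small as a sketch
like this (the parameter name `timeout_multiplier` and the function name
are just illustrations, not anything testharness.js actually defines):

```javascript
// Hypothetical sketch: read a per-test timeout multiplier from a query
// parameter, e.g. test.html?timeout_multiplier=4. Runs before any other
// script, so the multiplier is known as soon as the harness loads.
function parseTimeoutMultiplier(search) {
  // `search` is a location.search-style string, e.g. "?timeout_multiplier=4"
  var match = /[?&]timeout_multiplier=(\d+(?:\.\d+)?)/.exec(search);
  // Default to 1 (no scaling) when the parameter is absent
  return match ? parseFloat(match[1]) : 1;
}
```

In a page this would be called as `parseTimeoutMultiplier(location.search)`;
the downside, as noted above, is that it commandeers a query parameter for
every test URL.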

Having thought about it more, I still need some mechanism to pass in a 
per-test timeout multiplier; although I could disable these entirely, that 
would have bad effects if one test in a large set is badly behaved, since 
it could prevent the run from completing. I think that with the marionette 
(i.e. WebDriver)-based harness I can pass this in as a separate parameter 
when I specify the test to run and, in testharnessreport.js, read it from 
the opener window (in the harness, webdriver controls one window which 
itself window.opens a window for the actual tests to run in).
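
A rough sketch of what the testharnessreport.js side could look like (the
property name `timeout_multiplier` on the opener window is an assumption,
as is the function name; nothing here is actual testharness.js API):

```javascript
// Hypothetical sketch: the harness window sets a timeout_multiplier
// property on itself before window.open()-ing the test window; the test
// window reads it back via its opener, falling back to 1 (no scaling)
// when run standalone or when the opener is inaccessible.
function getTimeoutMultiplier(win) {
  try {
    if (win.opener &&
        typeof win.opener.timeout_multiplier === "number") {
      return win.opener.timeout_multiplier;
    }
  } catch (e) {
    // Cross-origin opener: treat the test as standalone
  }
  return 1;
}
```

In testharnessreport.js this would be called as
`getTimeoutMultiplier(window)` and the result multiplied into the
harness timeout, which avoids touching the test URL at all.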

Received on Tuesday, 17 September 2013 12:08:28 UTC