
Re: [service-workers] A pattern for preventing worker termination indefinitely

From: Andrew Sutherland <asutherland@asutherland.org>
Date: Tue, 14 Mar 2017 13:16:35 -0400
To: public-webapps@w3.org
Message-ID: <d40fe8c3-f4e8-ea23-128d-b3ee6db8ed8a@asutherland.org>
On 03/14/2017 12:26 PM, Mike Pennisi wrote:
> Unfortunately, this pattern isn't consistently applied (see, for
> example, [1]).

To be clear, the waitUntil(Promise) case is usually used in tests where 
the page wants to control when the SW advances to a different state, not 
as a means of keeping the SW alive.
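The pattern being discussed can be sketched with plain promises. `makeDeferred` is a hypothetical helper name, and the service-worker lines are shown as comments because they only run in a worker context; the idea is that the test page holds the resolver, so it alone decides when the worker advances:

```javascript
// Sketch of the waitUntil(promise) test pattern: create a "deferred"
// whose resolution the test controls, hand the promise to
// event.waitUntil(), and resolve it when the test wants to advance.
function makeDeferred() {
  let resolve, reject;
  const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
  return { promise, resolve, reject };
}

// In a hypothetical service worker script this would read:
//   const gate = makeDeferred();
//   self.addEventListener('install', (event) => event.waitUntil(gate.promise));
//   self.addEventListener('message', () => gate.resolve()); // page says "go"
```

The point is that the deferred gates a specific state transition the test is observing, not that it keeps the worker alive forever.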

I get that that test is holding state in its global scope without a 100% 
guarantee it won't be terminated, but are you actually seeing that test 
fail in browsers because the SW is being terminated?  See next point.

> I'm hoping that this behavior can be built into the tooling (with some
> opt-out mechanism) so that we can address this consistently and so that
> future contributors don't unknowingly create new race conditions.

> See [1] for an example. Service worker termination is a variable that
> I'd like to control for in these tests. In well-structured application
> contexts, this detail shouldn't matter.

I agree that the tests shouldn't have to worry about SW termination.  
However, I think this is something best left to each browser engine's 
test runner, which can ensure the browser's termination-timeout 
preferences are set reasonably for the hardware's performance.  For 
example, Firefox runs (or used to run) a bunch of tests on extremely 
slow Android emulators.  In that situation, we apply a multiplication 
factor to all timeouts, consistent with the slowdown compared to 
reasonable hardware.
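The multiplier approach above can be sketched in a few lines. The factor and function name here are illustrative assumptions, not Firefox's actual configuration:

```javascript
// Hypothetical sketch of per-platform timeout scaling: the harness picks
// one multiplier for the whole run (e.g. a large one on a slow Android
// emulator) and applies it uniformly to every base timeout.
const TIMEOUT_MULTIPLIER = 12; // illustrative value for a slow emulator

function scaledTimeoutMs(baseMs) {
  // Scale the test's nominal timeout by the platform slowdown factor.
  return baseMs * TIMEOUT_MULTIPLIER;
}
```

Scaling every timeout by one factor keeps the relative ordering of timeouts intact while absorbing the platform's overall slowdown.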

> In the tests, though, I'd like to be able to track a single worker
> through its lifecycle. I'm hoping this will help to catch bugs that you
> wouldn't necessarily be able to observe through the `statechange` API
> alone (such as events firing on redundant workers, etc.).

I get where you're coming from on this.  In a prior life I worked on 
Thunderbird, where all kinds of crazy things could interact in 
frustrating emergent behaviors, and I made test logic perform as many 
invariant checks as possible because they can and do catch unforeseen 
bugs.

I don't think that's particularly helpful in this case.  Namely:

* We really only need one test that covers breakage.  And it's helpful 
for that test to be explicit about what it's checking, rather than 
having the checking be a side effect of test-helper infrastructure code.

* Although there's a ton of complexity under the hood, the exposure 
points of the SW and fetch APIs are relatively clean.

* Debugging sanity.  It's much easier for me to set breakpoints and 
understand what's going on if the only logic running is the test logic.  
I don't have to figure out whether a breakpoint is tripping because of 
actual test code or infrastructure code, or worry about IPC traces and 
other logs filling up with keepalive heartbeats or assertions the test 
didn't directly ask for.

* As a corollary, those extra checks do have potential performance 
overhead.  This really starts to matter when running things under a tool 
like rr (http://rr-project.org/), which has potentially non-trivial 
overhead.  (rr in particular serializes all execution down to a single 
thread, which makes the cost of everything more prominently felt.)

Having said all that, I'm just one SW dev.  If you want to talk more 
about this and get better visibility, I'd create an issue in 
https://github.com/w3c/ServiceWorker/issues, which is where all the fun 
stuff happens.

Also relatedly, if you're interested in the boundary between WPT tests 
and browser-specifics, 
https://github.com/w3c/web-platform-tests/issues/4751 may be of 
interest.  What I propose there doesn't line up with termination 
testing, however.  That's something that will probably remain in 
Firefox's Firefox-specific tests.

Received on Tuesday, 14 March 2017 17:17:03 UTC
