- From: Michael Nordman <michaeln@google.com>
- Date: Wed, 10 Jun 2009 14:11:20 -0700
On Wed, Jun 10, 2009 at 1:46 PM, Jonas Sicking <jonas@sicking.cc> wrote:

> On Tue, Jun 9, 2009 at 7:07 PM, Michael Nordman <michaeln@google.com> wrote:
>
> >> This is the solution that Firefox 3.5 uses. We use a pool of relatively
> >> few OS threads (5 or so iirc). This pool is then scheduled to run worker
> >> tasks as they are scheduled. So for example if you create 1000 worker
> >> objects, those 5 threads will take turns to execute the initial scripts
> >> one at a time. If you then send a message using postMessage to 500 of
> >> those workers, and the other 500 call setTimeout in their initial
> >> script, the same threads will take turns to run those 1000 tasks (500
> >> message events, and 500 timer callbacks).
> >>
> >> This is somewhat simplified, and things are a little more complicated
> >> due to how we handle synchronous network loads (during which we freeze
> >> an OS thread and remove it from the pool), but the above is the basic
> >> idea.
> >>
> >> / Jonas
>
> > That's a really good model. Scalable and degrades nicely. The only
> > problem is with very long-running operations where a worker script
> > doesn't return in a timely fashion. If enough of them do that, all
> > others starve. What does FF do about that, or in practice do you
> > anticipate that not being an issue? WebKit dedicates an OS thread per
> > worker. Chrome goes even further (for now at least) with a process per
> > worker. The 1:1 mapping is probably overkill, as most workers will
> > probably spend most of their life asleep, just waiting for a message.
>
> We do see it as a problem, but not a big enough problem that we needed to
> solve it in the initial version.
>
> It's not really a problem for most types of calculations: as long as the
> number of threads is larger than the number of cores, we'll still finish
> all tasks as quickly as the CPU is able to. Even for long-running
> operations, if it's work the user wants anyway, it doesn't really matter
> whether the jobs all run in parallel or staggered after each other, as
> long as you're keeping all CPU cores busy.
>
> There are some scenarios it doesn't work so well for. For example, a
> worker that runs more or less infinitely and produces more and more
> accurate results the longer it runs. Or something like a folding@home
> website which performs calculations as long as the user is on the site
> and submits them to the server. If enough of those workers are scheduled,
> they will block everything else.
>
> This is all solvable, of course; there's a lot of tweaking we can do. But
> we figured we wanted to get some data on how people use workers before
> spending too much time developing a perfect scheduling solution.

I never did like the Gears model (a 1:1 mapping with a thread). We were stuck with strong thread affinity due to other constraints (script engines, COM/XPCOM), but we could have allowed multiple workers to reside in a single thread: a thread-pool (perhaps per-origin) sort of arrangement, where once a worker was put on a particular thread it stayed there until end-of-life. Your FF model has more flexibility: give a worker a slice (well, where slice == run-to-completion) on any thread in the pool, with no thread affinity whatsoever (if I understand correctly).

> / Jonas
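The scheduling model described above can be sketched in a few lines of Python. This is purely illustrative (nothing here is from Gecko, WebKit, or Chrome source; the 5-thread/1000-worker numbers just mirror Jonas's example): a small fixed pool of OS threads pulls run-to-completion tasks posted on behalf of many logical workers, with no thread affinity — any pool thread may run any worker's next task.

```python
# Sketch of the Firefox 3.5-style worker scheduling model: a pool of a few
# OS threads takes turns running tasks (message events, timer callbacks)
# for a much larger number of logical workers. All names are illustrative.
import queue
import threading

POOL_SIZE = 5        # "relatively few OS threads (5 or so)"
NUM_WORKERS = 1000   # many logical workers share the pool

task_queue = queue.Queue()   # pending run-to-completion tasks
results = []
results_lock = threading.Lock()

def pool_thread():
    # Each pool thread repeatedly pulls the next task for *any* worker:
    # a task runs to completion on whichever thread happens to pick it up.
    while True:
        worker_id, task = task_queue.get()
        if task is None:          # shutdown sentinel
            task_queue.task_done()
            break
        result = task(worker_id)
        with results_lock:
            results.append((worker_id, result))
        task_queue.task_done()

threads = [threading.Thread(target=pool_thread) for _ in range(POOL_SIZE)]
for t in threads:
    t.start()

# Post one "message event" task per logical worker; the 5 pool threads
# take turns draining all 1000 of them.
for worker_id in range(NUM_WORKERS):
    task_queue.put((worker_id, lambda wid: wid * 2))

task_queue.join()                 # wait for every task to complete
for _ in threads:
    task_queue.put((0, None))     # one sentinel per pool thread
for t in threads:
    t.join()

print(len(results))  # 1000
```

The starvation concern raised in the thread shows up directly here: if enough of the posted tasks loop forever, all five pool threads get stuck and every other worker's pending tasks sit in the queue indefinitely — which is what the 1:1 thread-per-worker (WebKit) and process-per-worker (Chrome) designs avoid at higher cost.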
Received on Wednesday, 10 June 2009 14:11:20 UTC