- From: Rob Ennals <rob.ennals@gmail.com>
- Date: Fri, 6 Nov 2009 18:17:44 -0800
[this is Rob Ennals from Intel]

I assume the use case for this is to allow parallel processing of a potentially large set of data?

Maybe what we really want here is some kind of parallel map operation, where we give the user agent an array and say "call this function on each element, using as many threads as you deem appropriate given the resources available". Each function call would logically execute in its own worker context, but to keep the semantics transparent we might declare that such workers are not allowed to send messages (other than a final result) and so could not tell how many parallel workers had actually been created. In the single-core case this reduces to creating a single worker and executing each function call in sequence. On the 1000-way cores of the not-so-distant future it would schedule tasks as appropriate. (A rough sketch of what such a primitive could look like, layered over the existing Worker API, appears after the quoted thread below.)

I'm not convinced that it is a good idea to expose details like the number of processor cores to the user. Such numbers can be messy. What if it changes? What if some cores are faster than others? What if another app is using some of them? What if they have different instruction sets? What if we'd like to keep some powered off? What if some are expensive to communicate with? What if we don't have enough memory to run them all in parallel? What if they can't all share a cache? What if some jobs complete earlier than others?

I don't think we should trust user code to know how many threads it should create. If what they want is to do a parallel map over a set of items, making optimal use of the available resources, then we should give them a primitive that does exactly that. If we felt clever we might want to allow shuffle and reduce as well.

Thoughts?

-Rob

On Nov 6, 2009, at 2:41 PM, David Bruant <bruant at enseirb-matmeca.fr> wrote:

> ben turner wrote:
>> I think it's important to note that there is no guarantee that each
>> worker is tied to an actual OS-level thread. Firefox, for instance,
>> will schedule workers on a limited number of OS threads to prevent
>> resource swamping. Other implementations (Chromium only?) create new
>> processes to run worker code. The only guarantee is that code executed
>> in a worker will not block the main thread.
>>
> I didn't know about the differences between the current Web Workers
> implementations, and that's interesting.
> The problem with developing JS code using Web Workers in a delegation
> use case is that you cannot predict the hardware, the OS and the
> browser your code will run on. So what is the "right", "best"
> number of workers to use? 1, 10, 16, 1000? It is not a fixed number;
> it depends on the hardware, the OS and the browser (or any user agent, of course).
>
> My point is to give this information to the developer.
>
> If FF decides that all the workers will run on 3 OS threads even if
> you're on a 16-core machine, my number is 3. If you're in Chrome and your OS
> allows you to create only one more process, this number is 1. If you're
> in Chrome and your OS allows you to create "as many processes as you
> want" on a quad-core, this number is 4. In each case, the web
> browser can ask the OS for this information (once when you install it?
> each time you open your browser? dynamically?).
>
> This information is available (and shouldn't be that hard to retrieve!)
> and can be given to the web developer.
>
> David
>
>> -Ben
>>
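For concreteness, here is a minimal sketch of how a parallelMap of the kind described above could be shimmed over the existing Worker API. Everything specific here is an assumption made for the example: the worker script name (map-worker.js), shipping the function as source text via fn.toString()/eval, posting plain objects (which assumes structured-clone support), and the hard-coded pool size of 4. The point of a native primitive is precisely that the user agent, not the page, would pick the degree of parallelism.

// map-worker.js (hypothetical worker script, assumed for this sketch):
// applies a function, received as source text, to a chunk of items and
// posts the results back. Assumes the function closes over nothing.
onmessage = function (e) {
  var fn = eval("(" + e.data.fnSource + ")");
  var out = [];
  for (var i = 0; i < e.data.items.length; i++) {
    out.push(fn(e.data.items[i]));
  }
  postMessage({ index: e.data.index, results: out });
};

// Main page: split the array into chunks and farm them out to a small
// worker pool. The pool size is a hard-coded guess, which is exactly
// the problem a native parallelMap primitive would remove.
function parallelMap(items, fn, callback) {
  if (items.length === 0) { callback([]); return; }
  var poolSize = 4;                                   // arbitrary guess
  var chunkSize = Math.ceil(items.length / poolSize);
  var results = new Array(items.length);
  var pending = 0;
  for (var c = 0; c * chunkSize < items.length; c++) {
    var worker = new Worker("map-worker.js");
    pending++;
    worker.onmessage = function (e) {
      // copy this chunk's results back into place
      for (var i = 0; i < e.data.results.length; i++) {
        results[e.data.index * chunkSize + i] = e.data.results[i];
      }
      if (--pending === 0) { callback(results); }
    };
    worker.postMessage({
      index: c,
      items: items.slice(c * chunkSize, (c + 1) * chunkSize),
      fnSource: fn.toString()
    });
  }
}

// Example use:
//   parallelMap([1, 2, 3, 4], function (x) { return x * x; },
//               function (results) { /* results === [1, 4, 9, 16] */ });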
Received on Friday, 6 November 2009 18:17:44 UTC