[whatwg] Workers

I've updated the Workers specification in response to feedback. The 
proposal I sent recently contains a summary of the changes I made:

   http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2008-August/015853.html


On Sat, 9 Aug 2008, Martin Ellis wrote:
> 
> Could it not be set that there is a maximum execution time for any 
> workers that are still active, definable by the browser but with a 
> suggested value of, say, 1000 milliseconds in the spec; any processing 
> that takes longer than this is killed, but this gives the option for 
> well-built scripts and cleanup processes to run gracefully.

That's basically what the spec does now.


On Sat, 9 Aug 2008, Ojan Vafai wrote:
> On Thu, Aug 7, 2008 at 1:01 AM, Ian Hickson <ian at hixie.ch> wrote:
> > On Wed, 6 Aug 2008, Aaron Boodman wrote:
> > > I am opposed to the utils object. I don't see any precedent for this 
> > > anywhere, and it just feels ugly to me. I liked it the way you had 
> > > it before, with these APIs in a shared base interface.
> >
> > Ok. I don't have an opinion on this. Jonas?
> >
> > In the absence of any arguments either way, my default would be put it 
> > all on the global object; clashes are manageable, the Window object 
> > does it that way, and there are enough things that we kinda want to 
> > put on the global scope anyway (the core worker stuff) that it's not 
> > clear that the gain is huge.
> 
> I don't see why it makes any sense to pollute the global scope. Yes the 
> Window object does it that way, but that's an accident of history that 
> causes all sort of confusion and bugs. It complicates both 
> implementation for browser vendors and expected behavior from a 
> developer point of view.

I generally agree with this, and indeed it didn't take much to convince 
me to use a utils object in the first place. If Jonas and Aaron agree, 
I'd be happy to go back to a utils object. It's a trivial change to the 
spec.


> That all said, I think calling it a "utils" object does seem messy in 
> its genericness. Maybe just call it the "worker" object, in the same 
> way that there is a "window" object?

That would be confusing, because there already is a 'worker' object: the 
global scope. (Whatever we do, _some_ of the API surface will end up on 
the global scope, and that's the 'worker'.)


On Sun, 10 Aug 2008, Shannon wrote:
>
> I've been following the WebWorkers discussion for some time trying to 
> make sense of the problems it is trying to solve. I am starting to come 
> to the conclusion that it provides little not already provided by:
> 
> setTimeout(mainThreadFunc,1)
> setTimeout(workThreadFunc,2)
> setTimeout(workThreadFunc,2)
> ....

The key things that it provides are:

 * Ability to do long computations without breaking them up into blocks.

 * Ability to do synchronous I/O without blocking the UI.

 * Ability to use multiple cores on modern CPUs.

Scripts run using setTimeout() all run in series, on one thread, so they 
can't do any of the above.
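
For instance, here is a rough sketch of a worker doing an unbounded 
computation while the page stays responsive (the script name, message 
format, and event property names are just illustrative, following the 
style of the example later in this mail):

  // main.js
  var worker = new Worker('primes.js');   // illustrative script name
  worker.onmessage = function (e) {
    // the page's own thread never blocks while the worker grinds away
    document.title = 'highest prime so far: ' + e.message;
  };

  // primes.js -- loops forever without ever yielding back to the page
  var n = 1;
  while (true) {
    n += 1;
    var isPrime = true;
    for (var i = 2; i < n; i += 1) {
      if (n % i == 0) { isPrime = false; break; }
    }
    if (isPrime)
      postMessage(n);
  }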


> Obviously WebWorkers would make all this clearer and possibly easier but 
> surely any number of free JS libraries could do that too.

Given the single-threaded model of JavaScript, I don't see how you could 
emulate the Workers API today.


> Another issue with eliminating threads is that they are very desirable 
> to developers. Because they are desirable, it's likely that one or more 
> browser vendors may go ahead and implement them anyway, essentially 
> "embracing and extending" HTML5 and ECMAScript.

It's browser vendors who have asked for the no-sharing model, so this 
seems unlikely. If browser vendors wanted a shared-data model with 
mutexes, etc., I would likely oblige, but so far they seem to agree that 
exposing Web authors to such constructs is a bad idea.


On Mon, 11 Aug 2008, Shannon wrote:
> 
> I think Lua co-routines solve every issue you raise. I hope WebWorkers 
> will follow this model because I know from experience they are very easy 
> to use.

Co-routines don't provide the three things described above. They only use 
one core, they don't intrinsically yield on I/O unless you do some pretty 
fancy changes to the libraries, and they definitely don't handle doing 
long computations without chopping the calculation into blocks, since the 
whole point of coroutines is chopping things up. :-)


On Wed, 13 Aug 2008, Shannon wrote:
> 
> Actually I was referring to the browser forcefully interleaving the 
> callback execution so they appear to run simultaneously. I was under the 
> impression this is how they behave now. I don't see how Javascript 
> callbacks can be cooperative since they have no yield statement or 
> equivalent.

They just run one after the other. There's no concurrency.


On Thu, 14 Aug 2008, Shannon wrote:
> 
> What I really don't understand is how the WebWorkers proposal solves 
> this. As far as I can tell it does some hand-waving with MessagePorts to 
> pretend it goes away but what happens when you absolutely DO need 
> concurrent access to global variables - say for example the DOM - from 
> multiple threads? How do you perform any sort of synchronisation?

If you need to obtain a lock on some global state that is used across 
multiple workers, then have an object that receives messages and simply 
queues up the requests, sending the value (and ownership) to each script 
in turn, never sending the value to a new script until the previous one 
has closed or relinquished the lock.


So for example instead of this:

> --- worker.js ---
> updateGlobalLa = function (e) {
>   var localLa = someLongRunningFunction( e );
>   workerGlobalScope.port.postMessage("set la = "+ localLa);
> }
> workerGlobalScope.port.AddEventListener("onmessage", updateGlobalLa, false);
> workerGlobalScope.port.postMessage("get la");
>
> --- main.js ---
> // global object or variable
> var la = 0;
> 
> handleMessage = function(e) {
>   if (typeof e.match("set la"))
>      la = parseInt(e.substr(3));
>   } else if (typeof e.match("get la")) {
>      worker.postMessage(la.toString());
>   }
> }
> var worker = new Worker("worker.js");
> worker.AddEventListener("onmessage", handleMessage, false);

You would have:

  // worker.js
  var laPort;
  addEventListener('message', function (e) {
    if (e.message == 'la') {
      // received a handle to the la manager
      laPort = e.port;
    }
  }, false);
  function useLa() {
    laPort.postMessage('get');
    laPort.onmessage = function (e) {
      // compute the new value and send it back, releasing the lock
      var localLa = someLongRunningFunction(parseInt(e.message));
      laPort.postMessage(localLa);
    };
  }
  ...
  onmessage = function (e) {
    if (e.message == 'use la')
      useLa();
  };

  // main.js -- implementation of la
  var la = 0;
  var laQueue = [];
  var current;
  function registerLa(port) {
    port.onmessage = function (e) {
      if (e.message == 'get') {
        if (current) {
          // someone has the lock, so queue this up
          laQueue.push(port);
        } else {
          // nobody has the lock, so grant it and send the data
          current = port;
          port.postMessage(la);
        }
      } else {
        if (port == current) {
          // release the lock
          la = e.message;
          current = null;
          // if the queue is not empty, post to the next one
          if (laQueue.length > 0) {
            current = laQueue.shift();
            current.postMessage(la);
          }
        }
      }
    };
  }

  // main.js -- worker creation code
  var worker = new Worker('worker.js');
  registerLa(worker.startConversation('la'));
  ...
  worker.postMessage('use la');


However, I have yet to come across a case where a Web app would need a 
global locked object shared across multiple threads in this way, so I 
don't think it's a big deal.


> I don't think I can stress enough how many important properties and 
> functions of a web page are ONLY available as globals. DOM nodes, style 
> properties, event handlers, window.status ... the list goes on.

Why would a worker ever deal with any of those? The whole point of workers 
is to do non-UI-related work.


> Without direct access to these the only useful thing a worker can do is 
> "computation" or more precisely string parsing and maths.

And network I/O and database I/O, yes.


> I've never seen a video encoder, physics engine, artificial intelligence 
> or gene modeller written in javascript and I don't really think I ever 
> will.

I've seen a video encoder and a physics engine in JS, and I expect to see 
more, especially with Workers. However, those aren't the target. Things 
like synchronising databases in the background are more the kind of thing 
we are imagining here.
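
As a very rough illustration of that kind of background synchronisation 
(the URL, message format, and worker script name are invented; all 
that's assumed is the ability to do synchronous network I/O from inside 
a worker, as described above):

  // sync-worker.js -- illustrative only
  onmessage = function (e) {
    // a synchronous request would freeze a page, but here it only
    // blocks this background thread
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/changes?since=' + e.message, false);
    xhr.send(null);
    // hand the fetched changes back to the page to apply locally
    postMessage(xhr.responseText);
  };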


On Tue, 12 Aug 2008, Shannon wrote:
> 
> If a WebWorker object is assigned to local variable inside a complex 
> script then it cannot be seen or stopped by the calling page. Should the 
> specification offer document.workers or getAllWorkers() as a means to 
> iterate over all workers regardless of where they were created?

What's the use case? It seems that it would be easy for the author to keep 
track of workers.


> Is it wise to give a web application more processing power than a single 
> CPU core (or HT thread) can provide? What stops a web page hogging ALL 
> cores (deliberately or not) and leaving no resources for the UI mouse or 
> key actions required to close the page? (This is not a contrived 
> example, I have seen both Internet Explorer on Win32 and Flash on Linux 
> consume 100% CPU on several occasions). I know it's a "vendor issue" but 
> should the spec at least recommend UAs leave the last CPU/core free for 
> OS tasks?

That's an implementation detail; I don't think we should comment on it.


> Can anybody point me to an existing Javascript-based web service that 
> needs more client processing power than a single P4 core?

Since it's not possible to do so right now, I don't think any exist. 
However, video and 3D manipulation both require lots of CPU and aren't 
really possible on the Web yet.


> Shouldn't an application that requires so much grunt really be written 
> in Java or C as an applet, plug-in or standalone application?

Those don't work in the browser without the user having to download binary 
code.


> If an application did require that much computation isn't it also likely 
> to need a more efficient inter-"thread" messaging protocol than passing 
> Unicode strings through MessagePorts? At the very least wouldn't it 
> usually require the passing of binary data, complex objects or arrays 
> between workers without the additional overhead of a string 
> encode/decode?

Yeah, we'll add that in due course.


> Is the resistance to adding threads to Javascript an issue with the 
> language itself, or a matter of current interpreters being 
> non-threadsafe?

Neither; locks are just a hard-to-use programming model. They are 
frequently a source of bugs even for experienced programmers, so putting 
them into the hands of Web authors seems highly ill-advised.


> The draft spec says "protected" workers are allowed to live for a 
> "user-agent-defined amount of time" after a page or browser is closed. 
> I'm not really sure what possible value this could have since as an 
> author we won't know whether the UA allows _any_ time and if so whether 
> that time will be enough to complete our cleanup (given a vast 
> discrepancy in operations-per-second across UAs and client PCs). If our 
> cleanup can be arbitrarily cancelled then isn't it likely that we might 
> actually leave the client or server in a worse state than if we hadn't 
> tried at all? Won't this cause difficult-to-trace sporadic bugs caused 
> by browser differences in what could be a rare event (a close during 
> operation Y instead of during X)?

This is the same as the script execution time limit on unload today.


On Tue, 12 Aug 2008, Aaron Boodman wrote:

> There is currently no API to stop a worker from the outside, only from 
> the inside. Providing an API to kill a worker from the outside is a 
> little weird because we also want to be able to share workers between 
> multiple pages. If we did both of these things then one page could kill 
> a worker that other pages are relying on. Not saying this is 
> unreasonable, just something to think about.

I've allowed killing of workers but only non-shared ones.


On Wed, 20 Aug 2008, Michael Nordman wrote:
> >
> > [Worker]
> >  void close();
> 
> Is the shutdown sequence initiated by this method call different than 
> the shutdown sequence initiated by a call to self.close() from within 
> the worker itself?  The comment hints that it is... if so why?

The close() method kills a worker immediately, without cleanup. The idea 
is that this would be used for cases like doing a search across a large 
amount of data: you could fire off ten workers, each searching within 
one tenth of the data, and then as soon as any of them replies with a 
hit, you kill all of them.
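
A rough sketch of that pattern (the script name, message contents, and 
reportResult() callback are invented; only the close() method described 
above is assumed):

  // main.js -- fan out a search and kill all the workers on first hit
  var workers = [];
  for (var i = 0; i < 10; i += 1) {
    var w = new Worker('search.js');       // illustrative script name
    w.onmessage = function (e) {
      // first answer wins; tear the rest down without cleanup
      for (var j = 0; j < workers.length; j += 1)
        workers[j].close();
      reportResult(e.message);             // hypothetical callback
    };
    w.postMessage('search slice ' + i);    // tell it which tenth to scan
    workers.push(w);
  }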


> Is it possible for a worker (shared or dedicated) to reload itself?

Not currently.


> How do workers and appCaches interact?

Workers are associated with browsing contexts, so their loads go through 
the normal application cache networking changes. This probably interacts 
badly with shared workers used from different app caches. We should 
probably study this more.

Aaron, Maciej, others, do you have opinions on how these should interact?


On Wed, 20 Aug 2008, Jonas Sicking wrote:
> 
> If we keep the constructors (see first paragraph), I would prefer the 
> syntax "new Worker" over "new DedicatedWorker".

Done.


On Thu, 21 Aug 2008, Jonas Sicking wrote:
> > >
> > > Do we really need the SharedWorker interface. I.e. couldn't we just 
> > > return a MessagePort?
> > 
> > We could. It would mean either putting onerror on all message ports, 
> > or not reporting error information for shared workers. Actually even 
> > if we did put onerror on ports, it would be difficult to define which 
> > ports actually get the error events.
> > 
> > It would also mean not using a constructor, but that's fine.
> > 
> > Finally, it would make extending the interface later much harder if we 
> > found that it was useful to allow different users of the shared worker 
> > to control something.
> > 
> > Anyone else have any opinions on this?
> 
> Yeah, I don't feel strongly either way here. It feels "cleaner" to not 
> have separate SharedWorker instances for each user of a shared worker, 
> but there are also downsides with not doing that (like onerror and other 
> future properties that we might want).
> 
> So I'm fine either way.

I've stuck with having a SharedWorker instance; it seems safer in the 
long run, and allows us to have onerror and onclose, which might be 
useful.
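
To illustrate what the per-instance approach buys, a sketch of the sort 
of thing onerror enables (the constructor arguments and the port 
attribute follow the shape being discussed here, and the handler body is 
invented):

  var shared = new SharedWorker('cache.js', 'cache');   // illustrative
  shared.onerror = function (e) {
    // each page gets its own SharedWorker instance, so each can react
    // to failures in the shared script independently
    showOfflineWarning();                                // hypothetical
  };
  shared.port.postMessage('hello');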


On Thu, 21 Aug 2008, Jonas Sicking wrote:
>
> There is a race condition in proposed new Web Workers spec. The return 
> value from postMessage can in some cases be true, even if the worker 
> never receives the message.
> 
> For example:
> 
> main.js:
> 
> w = new Worker("worker.js");
> if (w.postMessage("hello there")) {
>   alert("success!!");
> }
> 
> 
> worker.js:
> 
> close();
> 
> 
> If the postMessage is called before the worker has executed the 
> 'close()' call the function will return true. But if the worker starts 
> up fast enough and calls close(), the function will return false.
> 
> To put it another way. Even if the worker is currently up and running 
> when postMessage is called, there is no guarantee that the worker will 
> run long enough to actually get to process the message.
> 
> The only solution I can see is making postMessage return void. What use 
> cases require knowing that the postMessage succeeds?

I've made postMessage() return void throughout. This means that if you 
want to know whether there was a problem, you should register for 
onclose on your port or worker, and assume that anything you sent 
recently failed.
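
A sketch of that pattern (the onclose handler follows the draft being 
discussed; the bookkeeping and the handleUndelivered() hook are 
invented):

  var pending = [];                      // messages we can't confirm yet
  var w = new Worker('worker.js');
  w.onclose = function () {
    // the worker is gone; assume anything sent recently never arrived
    for (var i = 0; i < pending.length; i += 1)
      handleUndelivered(pending[i]);     // hypothetical recovery hook
    pending = [];
  };
  function send(msg) {
    pending.push(msg);
    w.postMessage(msg);                  // now returns void
  }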

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
