
Re: Opening discussion on StreamWorker

From: Andrew Wilson <atwilson@google.com>
Date: Fri, 18 Nov 2011 17:35:14 -0800
Message-ID: <CAArhhiv3Q8qSr7xsjMU9fkUjejKmWtEymh5vx=_U6OzKo6=FBg@mail.gmail.com>
To: Charles Pritchard <chuck@jumis.com>
Cc: Charles Pritchard <chuck@visc.us>, "public-webapps@w3.org" <public-webapps@w3.org>
On Thu, Nov 17, 2011 at 7:30 PM, Charles Pritchard <chuck@jumis.com> wrote:

> On 11/17/2011 4:52 PM, Charles Pritchard wrote:
>> Currently, Web Workers provides a "heavy" scope for multithreaded Web
>> Apps to handle heavy data processing.
>> I'd like to draw on those specs and create a new lightweight scope useful
>> for various data processing tasks typically associated with stream
>> processing and GPUs.
>> CSS/FX is looking at custom filter tags using WebGL. I think we can
>> implement these in Workers as well. The most important constraint is that
>> the data is opaque: no shared storage allowed.
>> There are many examples of using web workers to apply effects to 32bit
>> pixel data. Those could be easily applied to CSS pixel filters just as
>> WebGL shaders are.
>> River Trail and W16 are showing us ways in which tight for loops can take
>> advantage of multiple cores.
>> Let's look at these use cases and consider a new lightweight worker
>> scope. Nothing but the bare bones, designed and expected to be used for a
>> very specific type of task.
>> Existing CanvasPixelArray processing scripts are a great place to start.
>> I suspect we'll be able to handle other cases, such as stream ciphers.
>> I'm still trying to bikeshed a name on this... StreamWorker,
>> OpaqueWorker, SimpleWorker, DataWorker etc.
>> Please join me in the discussion. I think we can make rapid progress here
>> now that Transferable has matured and we have two moderately-parallel JS
>> implementations.
> To be more clear: here is some in-the-wild code that is similar to what
> I'd expect to produce and consume with StreamWorker:
> http://code.google.com/p/chromabrush/source/browse/frontend/js/filter.blur.js
> Pseudo-code:
>   onmessage = function(e) { var data = e.data;
>     for (var i = 0; i < data.length; i++) { data[i] *= fancyness; }
>     postMessage(data); };
> In doing this, it could attach to CSS such as:
>   img { filter: custom(url('basicpixelworker.js')); }
> The worker may only use postMessage once, and it must send back an array
> of the same size.
> There are no other options, no ways to pass a message to other contexts,
> no File or IDB or other APIs.
> The concept here is to be very restrictive. That way, no data is leaked,
> and it behaves more like a WebGL shader (think GPGPU) than our existing web
> worker context.
> If it's rigid, we can get very good performance, high parallelism, and
> modularity. We can also get quick implementation from vendors.
> And they can decide when they want to optimize.
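[One way to picture the proposed restriction is as a host-side harness (entirely hypothetical, not an API from any spec) that invokes the filter once per input and rejects anything but a single, same-length reply:]

```javascript
// Hypothetical host-side enforcement of the proposed contract:
// one input array in, exactly one same-length array out.
function runRestricted(filter, input) {
  var replies = [];
  // postMessage is the only capability exposed to the filter.
  filter(input.slice(), function postMessage(out) {
    replies.push(out);
  });
  if (replies.length !== 1) {
    throw new Error("filter must call postMessage exactly once");
  }
  if (replies[0].length !== input.length) {
    throw new Error("filter must return an array of the same size");
  }
  return replies[0];
}
```

Because the filter sees nothing but its input and a single reply channel, the host is free to run many instances in parallel without worrying about shared state.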

Can you clarify what optimizations are enabled by these workers? It's not
clear to me that removing APIs makes starting up a worker any more
efficient, and I don't think significant efficiencies are enabled by
restricting workers to sending/receiving only a single message per
invocation.

> As a completely different use case, such a simple worker could provide
> stream encryption, or perhaps some other kind of basic but heavy number
> crunching. Since it's just a simple in-out routine, it can be highly
> optimized and easily added into API pipelines.
> These workers would still be backward compatible: they could still be used
> as normal web workers. But in their more basic state they are more
> lightweight and free of side effects, and so more appropriate for massive
> parallelism.
> -Charles
Received on Saturday, 19 November 2011 01:35:42 UTC
