W3C home > Mailing lists > Public > whatwg@whatwg.org > January 2011

[whatwg] WebWorkers and images

From: Jorge <jorge@jorgechamorro.com>
Date: Thu, 13 Jan 2011 13:24:11 +0100
Message-ID: <B371C96B-621A-48D1-B008-04E4D27C41A9@jorgechamorro.com>
On 13/01/2011, at 11:35, Glenn Maynard wrote:
> On Thu, Jan 13, 2011 at 5:08 AM, Berend-Jan Wever <skylined at chromium.org> wrote:
>> I ended up creating a PageWorker object, which is constructed in the page
>> rather than in a WebWorker. It uses setInterval to repeatedly run a function
>> in the background to do the image processing directly on the canvas
>> ImageData. To reduce overhead, each interval runs the function in a small loop
>> for a certain number of ms. After each interval, the browser gets some time
>> to do UI updating. This seems to work well in my Mandelbrot fractal
>> renderer; the browser remains responsive:
>> http://skypher.com/SkyLined/demo/FractalZoomer/Mandel.html
> 
> That's exactly the cumbersome, problematic programming model that web
> workers specifically seek to eliminate.
> 
> (I loaded that, and the browser became painfully unresponsive in FF
> 3.6; opening a menu took about a quarter second.  In order to keep the
> browser properly responsive, you'd need to return so often that the
> 10ms minimum timer duration in most browsers

Not too long ago, browsers did allow timeouts of less than 10 ms. Why was the >= 10 ms minimum timer duration spec'ed this way?

> will cause the algorithm
> to take notably longer.)
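To make the pattern under discussion concrete, here's a minimal sketch of the time-sliced setInterval approach (my own illustration, not the actual Mandel.html code):

```javascript
// Time-sliced "PageWorker"-style loop: each tick burns at most `budgetMs`
// of CPU, then returns so the browser can repaint and handle input.
function makeSlicedTask(step, budgetMs) {
  return function tick() {
    var deadline = Date.now() + budgetMs;
    var more = true;
    while (more && Date.now() < deadline) {
      more = step();           // one small unit of work, e.g. one pixel
    }
    return more;               // false once the whole job is done
  };
}

// Example unit of work: count Mandelbrot iterations for one point.
function mandelIterations(cr, ci, maxIter) {
  var zr = 0, zi = 0, n = 0;
  while (n < maxIter && zr * zr + zi * zi <= 4) {
    var t = zr * zr - zi * zi + cr;
    zi = 2 * zr * zi + ci;
    zr = t;
    n++;
  }
  return n;
}

// Wiring (browser only): with a >= 10 ms clamp on setInterval, a 5 ms
// budget per tick means at most ~1/3 of wall-clock time does useful work.
//   var tick = makeSlicedTask(stepFn, 5);
//   var id = setInterval(function () { if (!tick()) clearInterval(id); }, 0);
```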
> 
> Note that if your computational work is entirely working with
> ImageData, you can send the ImageData to a thread.  It's limiting (you
> can't blit images to the canvas that way, since you don't have the
> Canvas interface), but it may be enough for your case.

I once tried to improve a full-screen animation that way, and found that the cost of passing the data back and forth to the worker was so high that the worker-based version was in fact slower (fewer fps), and on top of that CPU usage skyrocketed. A complete FAIL.

That was with the objects serialized as text messages; with structured clones, the situation may have improved a bit.
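For reference, here's a minimal sketch of the ImageData round-trip Glenn describes. The kernel is a plain function over the flat RGBA byte array, so it's easy to test outside a browser; the page/worker wiring in the comments is illustrative, and note that every postMessage is a full copy:

```javascript
// Worker-side pixel kernel: invert RGB, leave alpha untouched.
function invertPixels(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i]     = 255 - data[i];     // R
    data[i + 1] = 255 - data[i + 1]; // G
    data[i + 2] = 255 - data[i + 2]; // B
  }
  return data;
}

// Page side (browser only); each frame is copied twice by structured clone:
//   var img = ctx.getImageData(0, 0, w, h);
//   worker.postMessage(img);              // copy #1: page -> worker
//   worker.onmessage = function (e) {
//     ctx.putImageData(e.data, 0, 0);     // copy #2 happened worker -> page
//   };
// Worker script:
//   onmessage = function (e) {
//     invertPixels(e.data.data);
//     postMessage(e.data);
//   };
```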

But I think that workers desperately need a mechanism that permits passing objects *quickly*, and *quickly* most likely means by reference, not by copy.

To preserve shared-nothingness, the passed object (and the object's children) could be made unreachable (somehow, don't ask me) in the sending context as soon as it is passed to the worker. Other constraints might be needed too, e.g. perhaps no methods allowed in these objects.

This would make the transfers lightning fast, especially for heavy objects like images.

This would allow a threaded program to spend its time where it counts, doing useful work, instead of copying data over and over, as happens now.
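To illustrate, here's a toy simulation of the "neuter on send" semantics I have in mind; the transfer-list postMessage syntax in the final comment is purely hypothetical, nothing like it exists in the spec today:

```javascript
// Toy model of hand-over-by-reference: after transfer, the sender's
// handle is emptied, so shared-nothing semantics hold without a byte copy.
function transfer(holder) {
  var payload = holder.buffer;
  holder.buffer = null;          // sender loses access immediately
  return payload;                // receiver gets the only live reference
}

var image = { buffer: new Array(4 * 1024 * 1024) }; // ~4M "pixels"
var received = transfer(image);
// image.buffer is now null; received.length is 4194304; no copy was made.

// Hypothetical worker API in the same spirit:
//   worker.postMessage(img, [img]);  // transfer img by reference, neutering
//                                    // it in this context
```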

Is it possible to achieve something like that?
-- 
Jorge.
Received on Thursday, 13 January 2011 04:24:11 UTC
