
[whatwg] Sending large structured data between threads without compromising responsiveness (was asynchronous JSON.parse)

From: David Rajchenbach-Teller <dteller@mozilla.com>
Date: Sun, 10 Mar 2013 21:17:20 +0100
Message-ID: <513CEA50.4050507@mozilla.com>
To: Glenn Maynard <glenn@zewt.org>
Cc: whatwg@whatwg.org, Tobie Langel <tobie.langel@gmail.com>
On 3/9/13 1:14 AM, Glenn Maynard wrote:
> By the way, I'd recommend keeping sample benchmarks as minimal and
> concise as possible.  It's always tempting to make things configurable
> and dynamic and output lots of stats, but everyone interested in the
> results of your benchmark needs to read the code, to verify it's correct.

Well noted. I will try to remove the dynamic/configurable parts and move
the stats code out of the main JS file.

> I don't think making a call asynchronous is really going to help much,
> at least for serialization.  You'd have to make a copy of the data
> synchronously, before returning to the caller, in order to guarantee
> that changes made after the call returns won't affect the result.  This
> would probably be more expensive than the JSON serialization itself,
> since it means allocating lots of objects instead of just appending to a
> string.
>
> If it's possible to make that copy quickly, then that should be done for
> postMessage itself, to make postMessage return quickly, instead of doing
> it for a bunch of individual computationally-expensive APIs.
> 
> (Also, remember that "returns quickly and does work asynchronously"
> doesn't mean the work goes away; the CPU time still has to be spent. 
> Serializing the complete state of a large system while it's running and
> trying to maintain 60 FPS doesn't sound like a good approach in the
> first place.)

I concur with your points:
- copying synchronously just to allow asynchronous transfer would be a
performance killer;
- making an operation asynchronous while the work still runs on the
performance-critical thread is no magic bullet: the CPU time still has
to be spent.

Both points seem to indicate that the API should not be of the
fire-and-forget style, but should instead give the developer
fine-grained control, to ensure that it does not eat into performance.

Hypothetically, this could be solved by an API with primitives to:
1. enqueue data to be sent;
2. allocate x milliseconds to processing/sending the data;
3. cancel sending some of the data;
4. cancel complete communication.

The idea is that all operations come with (soft) time-bound guarantees,
so the user can interleave them with a |requestAnimationFrame| loop.
Also, data is never copied, so if this data changes before
communication is complete, API clients need to handle invalidation on
their own.

Now, this sounds very much like something that can be implemented as a
pure JS library.
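For instance, the four primitives above could be sketched roughly as
follows (all names are hypothetical; a real library would also need to
serialize each large entry incrementally rather than per-entry, but the
shape of the API is the same):

```javascript
// Hypothetical sketch of a time-budgeted send queue (names invented).
// Data is never copied on enqueue: if an enqueued object mutates before
// it is processed, the caller must handle invalidation itself.
class ChunkedSender {
  constructor(send) {
    this.send = send;   // callback receiving each serialized chunk
    this.queue = [];    // pending entries: { id, data }
    this.nextId = 0;
  }

  // 1. Enqueue data to be sent; returns a handle for cancellation.
  enqueue(data) {
    const id = this.nextId++;
    this.queue.push({ id, data });
    return id;
  }

  // 2. Spend at most |budgetMs| milliseconds serializing and sending.
  //    Returns true once the queue has been drained.
  process(budgetMs) {
    const deadline = Date.now() + budgetMs;
    while (this.queue.length > 0 && Date.now() < deadline) {
      const { data } = this.queue.shift();
      this.send(JSON.stringify(data));
    }
    return this.queue.length === 0;
  }

  // 3. Cancel sending some of the data.
  cancel(id) {
    this.queue = this.queue.filter((entry) => entry.id !== id);
  }

  // 4. Cancel the complete communication.
  cancelAll() {
    this.queue.length = 0;
  }
}
```

A client aiming for 60 FPS could then call, say, |process(4)| from its
|requestAnimationFrame| callback, leaving the rest of the ~16 ms frame
budget for rendering.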

>     Seriously?
>     FirefoxOS [1, 2] is a mobile operating system in which all applications
>     are written in JavaScript, HTML, CSS. This includes the browser itself.
[...]
> That doesn't sound like a good idea to me at all, but in any case that's
> a system platform, not the Web.
[...]

If you do not mind, I will not pursue this part of the conversation, as
I believe that the core of the discussion has shifted anyway to the
more general issue of sending large structured data between threads
without compromising their responsiveness.

Best regards,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla
Received on Sunday, 10 March 2013 20:17:47 GMT
