- From: Brendan Eich <brendan@mozilla.org>
- Date: Tue, 04 Sep 2012 13:59:59 -0700
- To: David Bruant <bruant.d@gmail.com>
- CC: Jonas Sicking <jonas@sicking.cc>, "public-webapps@w3.org" <public-webapps@w3.org>, Alon Zakai <azakai@mozilla.com>
David Bruant wrote:
> I can imagine, it sounds hard indeed. Do you have numbers on how it
> affects performance? Or an intuition on these numbers? I don't need to
> be convinced that it affects performance significantly, but just to get
> an idea.

This is not going to be easy to estimate, but you might benchmark generator vs. non-generator code in the latest SpiderMonkey.

I don't think we need quantification, though. Alon's right: the optimizing VMs are not focused on uncommon code, other than what's in the dopey industry-standard benchmarks.

> I remember that at some point (your JSConf.eu talk last October), in
> order to be able to compile through Emscripten, the source codebase (in
> C/C++) had to be manually tweaked sometimes. Is it still the case? If
> it's an acceptable thing to ask to authors, then would there be easy
> ways for authors to make their IO blocking code more easily translated
> to async JS code? I'm pessimistic, but it seems like an interesting
> question to explore.

BananaBread required zero Cube 2 changes, IIRC. Other Emscripten examples are also pure compilation.

Forget it. Inversion of control flow is hard enough, and error-prone enough, that developers won't do it. It's the #1 reason Mozilla's Electrolysis project is paused indefinitely. The SuperSnappy work (threads, not processes) preserves most execution-model compatibility and spares programmers writing Firefox XUL front-end and add-on code from having to manually callback-CPS their code (on every DOM access!).

/be
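For illustration only, a minimal sketch of the manual callback-CPS inversion described above. The helper names are hypothetical, not code from Electrolysis, Gecko, or this thread; the point is how straight-line logic turns into nested callbacks with hand-threaded error handling.

    // Straight-line, "blocking" style: what application code wants to write.
    function renameProfile(newName) {
      var user = loadUser();       // hypothetical synchronous helper
      user.name = newName;
      saveUser(user);              // hypothetical synchronous helper
      updateTitle(user.name);      // e.g. a DOM update
    }

    // The same logic after manual CPS inversion: each potentially blocking
    // step becomes a callback, and errors must be propagated explicitly.
    function renameProfileCPS(newName, onDone, onError) {
      loadUserAsync(function (err, user) {
        if (err) return onError(err);
        user.name = newName;
        saveUserAsync(user, function (err) {
          if (err) return onError(err);
          updateTitleAsync(user.name, function () {
            onDone();
          });
        });
      });
    }

Even in this toy case the control flow inverts completely; doing the same by hand across a large XUL front end, on every DOM access, is the burden referred to above.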
Received on Tuesday, 4 September 2012 21:00:26 UTC