- From: Kristof Zelechovski <giecrilj@stegny.2a.pl>
- Date: Tue, 12 Aug 2008 17:24:02 +0200
A very interesting post; too bad nobody from Google bothered to give an exhaustive reply. Just a couple of thoughts below.

I think the problem workers are trying to solve is quite practical. It is not about processing power (desktop applications rarely use threads for that), and it is not about using all available computing resources. Using workers makes sense even if the browser is allowed to use only one CPU. In the absence of an existing implementation, developers try to get similar functionality with setTimeout (which requires stackless code; see the sketch appended below the quoted message) and with hidden windows. The current specification is an attempt to make this cleaner and more robust.

Problem statement: the browser does not respond while it runs script code; it does not even update the content area, so it is not possible to indicate progress. Some useful scripts can work for a long time, either because they are waiting for data (directly, or by periodically examining the document) or because they are computationally intensive. The latter is a much rarer phenomenon, but it can still happen in interactive scientific publications, e.g. plotting a graph with parameters the user can adjust, or interactive fractals.

Aside: scientists do not call for threads in HTML because they are hardly acquainted with HTML, let alone its bleeding edge. However, things may change when they see the tool is right there and easy to use. Admittedly, Java can be used for the purpose, but it has the didactic disadvantage of being a compiled language, so tweaking anything would require the JDK. Besides, Java applets do not run everywhere: they do not run in 64-bit Firefox for Linux (unless they are very old).

The question about native threads in JavaScript would be much better asked in a JavaScript forum. Workers are an external mechanism provided by the host, not by the interpreter.

If a Web browser, being a desktop application, uses all processing resources of the operating system, then either the operating system is at fault because it should not have allowed this, or the application uses a driver or a service in an unsupported way, which probably means the driver or service in question is to blame. I do not think leaving one CPU free would help much in this case, because there is no guarantee the system process would use it anyway (it could end up being used by something else).

Chris

_____

From: whatwg-bounces@lists.whatwg.org [mailto:whatwg-bounces@lists.whatwg.org] On Behalf Of Shannon
Sent: Tuesday, August 12, 2008 1:50 PM
To: WHAT working group
Subject: [whatwg] WebWorker questions

A few questions and thoughts on the WebWorkers proposal:

Is it wise to give a web application more processing power than a single CPU core (or HT thread) can provide? What stops a web page hogging ALL cores (deliberately or not) and leaving no resources for the UI mouse or key actions required to close the page? (This is not a contrived example; I have seen both Internet Explorer on Win32 and Flash on Linux consume 100% CPU on several occasions.) I know it's a "vendor issue", but should the spec at least recommend that UAs leave the last CPU/core free for OS tasks?

Can anybody point me to an existing JavaScript-based web service that needs more client processing power than a single P4 core? Shouldn't an application that requires so much grunt really be written in Java or C as an applet, plug-in or standalone application?
If an application did require that much computation, isn't it also likely to need a more efficient inter-"thread" messaging protocol than passing Unicode strings through MessagePorts? At the very least, wouldn't it usually require passing binary data, complex objects or arrays between workers without the additional overhead of a string encode/decode?
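Appended sketch of the setTimeout workaround mentioned above: the long-running loop is broken into short slices so that control returns to the browser between slices and the page stays responsive. This is only an illustration; the function names and the 50 ms slice budget are my own choices, not from any specification.

// A long loop rewritten "stacklessly": do a small slice of work, then
// setTimeout(step, 0) to yield to the browser so the UI can update.
function computeInChunks(items, processItem, onDone) {
  var index = 0;
  function step() {
    var deadline = new Date().getTime() + 50; // roughly 50 ms per slice
    while (index < items.length && new Date().getTime() < deadline) {
      processItem(items[index]);
      index = index + 1;
    }
    if (index < items.length) {
      setTimeout(step, 0); // yield, then resume where we left off
    } else if (onDone) {
      onDone();
    }
  }
  step();
}

The price is that the computation must be turned inside out into resumable slices, which is exactly the awkwardness that workers are meant to remove.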
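On the last question about passing Unicode strings through MessagePorts: with a string-only postMessage, structured data has to be serialized on one side and parsed on the other. The sketch below assumes a worker API roughly along the lines of the draft (a Worker constructor, postMessage, and a message event with a data property) and assumes a JSON implementation is available on both sides; the exact names in the draft may differ, and native JSON support is not yet universal in today's browsers.

// worker.js (hypothetical): computes points for a plot and returns them
// as a JSON string, because postMessage only carries strings.
onmessage = function (event) {
  var params = JSON.parse(event.data);            // decode the request
  var points = [];
  for (var x = params.from; x <= params.to; x += params.step) {
    points.push({ x: x, y: Math.sin(x) });        // stand-in for real work
  }
  postMessage(JSON.stringify(points));            // encode the reply
};

// main page (hypothetical)
var worker = new Worker('worker.js');
worker.onmessage = function (event) {
  var points = JSON.parse(event.data);            // decode once more
  // ... plot the points ...
};
worker.postMessage(JSON.stringify({ from: 0, to: 10, step: 0.01 }));

Every round trip pays for a stringify and a parse, which is the encode/decode overhead asked about above; binary data would additionally need something like base64 on top of that.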
Received on Tuesday, 12 August 2008 08:24:02 UTC