- From: Alistair Macdonald <al@bocoup.com>
- Date: Wed, 2 Feb 2011 05:54:52 -0500
- To: srikumarks@gmail.com
- Cc: Silvia Pfeiffer <silviapfeiffer1@gmail.com>, Chris Rogers <crogers@google.com>, "Tom White (MMA)" <lists@midi.org>, public-xg-audio@w3.org
I have personally run builds of Firefox that do allow you to spawn and
process audio data in JavaScript workers, Kumar, but you're right that it
is not something Mozilla has made a part of the spec. The ideas have had
*some* testing at least.

The workers in this instance were run from a file like "myworker.js", and
attached to the audio *before* anything reached the active DOM. This way
the output of data into the worker does not go through the DOM first, yet
it is still processed using JavaScript in a Worker thread.

It worked well for me, but one issue comes to mind... My take-away from
that experience was that there is a performance hit whenever that code
interacts with the DOM *after* the initial instantiation. If I am
understanding things correctly (and I fully accept that I may not), this
is partly due to the nature of the threaded programming model, and similar
performance issues would arise with this type of architecture regardless
of the browser in which it is implemented. Perhaps others can speak to
this?

But with regards to running and listening to threaded audio code that does
not touch the DOM: I have tried this personally, and it seems to perform
very well for a certain kind of use-case.

Al

On Wed, Feb 2, 2011 at 3:48 AM, Kumar <srikumarks@gmail.com> wrote:
>
> On Wed, Feb 2, 2011 at 5:43 AM, Silvia Pfeiffer
> <silviapfeiffer1@gmail.com> wrote:
>>
>> > * all audio processing is done in JavaScript which, although fast
>> >   enough for some applications, is too slow for others
>> > * has difficulty reliably achieving low latency, thus there's a delay
>> >   heard between mouse / key events and sounds being heard
>> > * more prone to audio glitches / dropouts
>>
>> I agree - these are indeed the disadvantages of writing your own audio
>> sample handling in JavaScript. But they are best effort and often
>> enough completely sufficient for many applications.
> The "start minimal" thinking behind the audio data API is useful to get
> things going and figure out what people would actually want to do with
> the API, but it is hard to declare it *the* approach as it stands.
> Consider the possibility of support on the mobile device front. Chris'
> approach with the web-audio API -- that of implementing some units and
> the pipeline natively -- is likely to be better in that scenario in two
> ways: a) just plain speed (and, by implication, power consumption), and
> b) improvements in the audio pipeline benefit everybody.
>
> Also, low latency and glitch-free audio are near and dear to quite a few
> interested in this space, I think (you're looking at one who switched for
> those specific reasons), but the audio data API has yet to address them
> satisfactorily. They are critical to a good online gaming experience, for
> example. Granted that improved JavaScript performance may bring that
> closer to practicality, there is still the single-threaded model standing
> in the way as far as I can tell.
>
> It looks like web workers might offer a way out by letting an audio
> worker run uninterrupted. ... but wait! Workers can't access the DOM, and
> the data API ties into <audio> using the same DOM class. So it looks like
> audio data API based code can't run in workers *by design* (at least in
> the current implementation). An orthogonal API, though, stands a chance
> of being able to run in a worker (at least in the future if not right
> now), with the main thread dedicated to visuals (ex: WebGL).
>
> Meanwhile, let's hope JS performance approaches light speed - I mean
> "C" :)
>
> Regards,
> -Srikumar K. S.
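For readers unfamiliar with the setup Al describes, a DOM-free audio worker
might look roughly like the sketch below. This is an illustration under
assumptions, not code from the Firefox builds mentioned above: the file name
"myworker.js" comes from the mail, but the function name `fillSine`, the
message shape, and the block-based wiring are all hypothetical.

```javascript
// myworker.js -- hypothetical audio worker (names are illustrative).
// The worker never touches the DOM; it only fills sample buffers and
// posts them back to the main thread, which owns the <audio> element.

// DOM-free DSP kernel: fill a Float32Array with a sine tone.
// Returns the updated phase so consecutive blocks join without clicks.
function fillSine(buffer, freq, sampleRate, phase) {
  var step = 2 * Math.PI * freq / sampleRate;
  for (var i = 0; i < buffer.length; i++) {
    buffer[i] = Math.sin(phase);
    phase += step;
  }
  return phase;
}

// Inside a browser Worker, the wiring would look something like:
//
//   var phase = 0;
//   self.onmessage = function (e) {
//     var buf = new Float32Array(e.data.blockSize);
//     phase = fillSine(buf, 440, e.data.sampleRate, phase);
//     self.postMessage(buf); // main thread writes buf to the audio element
//   };
```

The point of the split is that the main thread keeps all DOM interaction
(instantiating the worker, feeding samples to the audio element) while the
worker does nothing but number-crunching on typed arrays, which matches
Al's observation that trouble only starts when worker-related code touches
the DOM after instantiation.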
Received on Wednesday, 2 February 2011 10:55:25 UTC