Re: Web Audio API is now available in Chrome

On Wed, Feb 2, 2011 at 7:48 PM, Kumar <srikumarks@gmail.com> wrote:
>
> On Wed, Feb 2, 2011 at 5:43 AM, Silvia Pfeiffer <silviapfeiffer1@gmail.com>
> wrote:
>>
>> > * all audio processing is done in JavaScript, which, although fast
>> > enough for some applications, is too slow for others
>> > * has difficulty reliably achieving low latency, so there's an audible
>> > delay between mouse / key events and the resulting sound
>> > * more prone to audio glitches / dropouts
>>
>> I agree - these are indeed the disadvantages of writing your own audio
>> sample handling in JavaScript. But such processing is best effort, and
>> that is completely sufficient for many applications.
>
> The "start minimal" thinking behind the audio data API is useful
> to get things going to figure out what people would actually want to do
> with the API, but it is hard to declare it as *the* approach as it stands.

I agree. As I said: I'd like to see both move forward.


> Consider the possibility of support on the mobile device front.
> Chris' approach with the web-audio api -- that of implementing some
> units and the pipeline natively -- is likely to be better in that scenario
> in two ways - a) just plain speed (and, by implication, power consumption)
> and b) improvements in the audio pipeline benefit everybody.
> Also, low latency and glitch-free audio are near and dear to
> quite a few interested in this space I think (you're looking at one
> who switched for those specific reasons), but the audio data api is
> yet to address them satisfactorily. They are critical to a good online
> gaming experience for example. Granted, improved JavaScript
> performance may bring that closer to practicality, but there is still
> a single-threaded model standing in the way as far as I can tell.
> It looks like web workers might offer a way out by letting
> an audio worker run uninterrupted. ... but wait! Workers can't
> access the DOM and the data api ties into <audio> using the
> same DOM class. So it looks like audio data API based code
> can't run in workers *by design* (and at least in the current
> implementation).

You can hand the audio data over to a thread (i.e. a web worker) and
have the worker do all the hard work on it. That's totally possible -
I've implemented face segmentation that way. However, it does
introduce lag and asynchronicity that may not be desirable.
desirable. We've already had a discussion in the WHATWG about whether
it might make sense to give web workers direct access to image and
video data. This probably also makes sense for audio. I think we just
have to show some use cases / good examples and that can move forward.
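To make the hand-off concrete, here's a rough sketch. The event and
field names (MozAudioAvailable, frameBuffer, mozWriteAudio) are from
the current Firefox Audio Data API implementation and may well change;
the worker wiring is shown in comments since it only runs in a
browser, while the processing function itself is plain JavaScript:

```javascript
// Sketch: offloading Audio Data API samples to a web worker.
// Main-thread wiring (browser-only, shown as comments):
//
//   var worker = new Worker('gain-worker.js');
//   audioElement.addEventListener('MozAudioAvailable', function (e) {
//     worker.postMessage(e.frameBuffer);  // Float32Array of samples
//   }, false);
//   worker.onmessage = function (e) {
//     // e.data arrives back asynchronously, i.e. with some lag
//     outputElement.mozWriteAudio(e.data);
//   };
//
// gain-worker.js would then call a pure processing function, e.g.:
//   onmessage = function (e) { postMessage(applyGain(e.data, 0.5)); };

// A trivial stand-in for "the hard work": scale every sample.
function applyGain(samples, gain) {
  var out = new Float32Array(samples.length);
  for (var i = 0; i < samples.length; i++) {
    out[i] = samples[i] * gain;
  }
  return out;
}
```

The point being: the DSP part runs fine in a worker; it's only the
round trip through postMessage that adds the lag mentioned above.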


> An orthogonal api, though, stands a chance of
> being able to run in a worker (at least in the future if not right now)
> with the main thread dedicated to visuals (ex: WebGL).

Don't misunderstand me: I basically want "all of the above". I
dislike, however, how the Audio Data API is being dissed as
unacceptable when it is in fact completely adequate for specific
problems.

> Meanwhile, let's hope JS performance approaches light speed - I mean "C" :)

Hehe! :-)

> Regards,
> -Srikumar K. S.

Silvia.

Received on Wednesday, 2 February 2011 09:34:39 UTC