
[Bug 17415] (JSWorkers): JavaScriptAudioNode processing in workers

From: <bugzilla@jessica.w3.org>
Date: Fri, 27 Jul 2012 06:42:07 +0000
Message-Id: <E1SueFT-0008Dw-I4@jessica.w3.org>
To: public-audio@w3.org

--- Comment #59 from Marcus Geelnard (Opera) <mage@opera.com> 2012-07-27 06:42:06 UTC ---
(In reply to comment #51)
> Option 1 does not make the situation for gapless audio any better here. We're
> just making it harder to push out audio. The browser knows best when to fire
> audio refills. Forcing the JS code to schedule audio will make audio buffering
> and drop outs worse.

It seems to me that you're not really interested in doing audio *processing* in
the audio callback (which is what it was designed for). Am I right in assuming
that you're looking for some combination of an audio data push mechanism and a
reliable event mechanism for guaranteeing that you push often enough?

AFAICT, the noteOn & AudioParam interfaces were designed to make it possible
to schedule sample-accurate audio actions ahead of time. I think it *should*
be possible to use them for gapless audio playback (typically by queuing a few
AudioBuffers in a multi-buffering manner and scheduling them with
AudioBufferSourceNodes). The problem, as it seems, is that you need to
accommodate possible jitter and event drops, perhaps by introducing a latency
(e.g., would it work if you forced a latency of 0.5 s?).
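The multi-buffering idea could be sketched roughly as below. The helper names
(chunkStartTime, scheduleChunks) are hypothetical, and noteOn is the interface
name of the time (later renamed to start()):

```javascript
// Compute the context time at which the n-th chunk should start,
// given a fixed safety latency and a constant chunk duration.
function chunkStartTime(baseTime, latency, chunkDuration, n) {
  return baseTime + latency + n * chunkDuration;
}

// Schedule a queue of pre-filled AudioBuffers back to back, each on
// its own AudioBufferSourceNode, with a safety latency up front.
// Assumes `ctx` is a Web Audio AudioContext.
function scheduleChunks(ctx, buffers, latency) {
  var base = ctx.currentTime;
  buffers.forEach(function (buffer, n) {
    var src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination);
    src.noteOn(chunkStartTime(base, latency, buffer.duration, n));
  });
}
```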

Would the following be a correct conclusion?

- Audio processing in JavaScript should be done in workers.
- We need a reliable main-context event system for scheduling audio actions
(setInterval is not up to it, it seems).
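One mitigation when setInterval alone is too jittery would be a lookahead
scheduler: poll coarsely on the main context, but schedule each chunk against
the audio clock well before it is due. A sketch under assumed names
(scheduleDue, startScheduler, and the 0.5 s lookahead are illustrative, not
from the spec):

```javascript
// Schedule every not-yet-scheduled chunk whose start time falls before
// `horizon`, and return the advanced cursor. `startTimes` is assumed
// sorted ascending. Timer jitter only causes dropouts if it ever
// exceeds the lookahead window.
function scheduleDue(startTimes, next, horizon, scheduleChunk) {
  while (next < startTimes.length && startTimes[next] < horizon) {
    scheduleChunk(next);
    next++;
  }
  return next;
}

// Main-context loop: a coarse 100 ms setInterval is enough, because
// the precise timing comes from ctx.currentTime, not from the timer.
function startScheduler(ctx, startTimes, scheduleChunk) {
  var LOOKAHEAD = 0.5; // seconds of safety margin
  var next = 0;
  return setInterval(function () {
    next = scheduleDue(startTimes, next,
                       ctx.currentTime + LOOKAHEAD, scheduleChunk);
  }, 100);
}
```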

Received on Friday, 27 July 2012 06:42:27 UTC
