Re: [web-audio-api] (JSWorkers): ScriptProcessorNode processing in workers (#113)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17415#58) by Marcus Geelnard (Opera) on W3C Bugzilla. Fri, 27 Jul 2012 06:42:06 GMT

(In reply to [comment #51](#issuecomment-24244611))
> Option 1 does not make the situation for gapless audio any better here. We're
> just making it harder to push out audio. The browser knows best when to fire
> audio refills. Forcing the JS code to schedule audio will make audio buffering
> and drop outs worse.

It seems to me that you're not really interested in doing audio *processing* in the audio callback (which is what it was designed for). Am I right in assuming that you're looking for some kind of combination of an audio data push mechanism and a reliable event mechanism for guaranteeing that you push often enough?

AFAICT, the noteOn & AudioParam interfaces were designed to make it possible to schedule sample-accurate audio actions ahead of time. I think it *should* be possible to use them for gapless audio playback (typically using a few AudioBuffers in a multi-buffering manner and scheduling them with AudioBufferSourceNodes). The problem, as it seems, is that you need to accommodate possible jitter and event drops, probably by introducing some latency (e.g., would it work if you forced a latency of 0.5 s?).
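For concreteness, a rough sketch of that multi-buffering idea (an illustration, not a tested implementation): it assumes an AudioContext `ctx`, a hypothetical `fillBuffer()` that produces the next chunk of samples, and uses the `start()` naming (`noteOn()` in the 2012-era API). The `LATENCY` constant is the safety margin mentioned above; any timer jitter smaller than that margin should be absorbed.

```js
var ctx = new AudioContext();
var CHUNK_SECONDS = 0.1;   // length of each AudioBuffer
var LATENCY = 0.5;         // scheduling safety margin (see above)
var nextTime = ctx.currentTime + LATENCY;

function scheduleChunk() {
  var frames = Math.round(CHUNK_SECONDS * ctx.sampleRate);
  var buf = ctx.createBuffer(2, frames, ctx.sampleRate);
  fillBuffer(buf);                 // hypothetical: write the next chunk of samples
  var src = ctx.createBufferSource();
  src.buffer = buf;
  src.connect(ctx.destination);
  src.start(nextTime);             // noteOn(nextTime) in the older API
  nextTime += CHUNK_SECONDS;
}

// Keep enough chunks queued ahead of the playback position; the coarse
// setInterval timing only has to stay within the LATENCY margin.
setInterval(function () {
  while (nextTime - ctx.currentTime < LATENCY) {
    scheduleChunk();
  }
}, 50);
```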

Would the following be a correct conclusion?

- Audio processing in JavaScript should be done in workers.
- We need a reliable main-context event system for scheduling audio actions (setInterval is not up to it, it seems).

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/113#issuecomment-24244648

Received on Wednesday, 11 September 2013 14:30:53 UTC