W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: [web-audio-api] (JSWorkers): ScriptProcessorNode processing in workers (#113)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:30:11 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/113/24244713@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17415#68) by Grant Galitz on W3C Bugzilla. Fri, 27 Jul 2012 16:29:16 GMT

(In reply to [comment #59](#issuecomment-24244648))
> (In reply to [comment #51](#issuecomment-24244611))
> > Option 1 does not make the situation for gapless audio any better here. We're
> > just making it harder to push out audio. The browser knows best when to fire
> > audio refills. Forcing the JS code to schedule audio will make audio buffering
> > and drop outs worse.
> It seems to me that you're not really interested in doing audio *processing* in
> the audio callback (which is what it was designed for). Am I right in assuming
> that you're looking for some kind of combination of an audio data push
> mechanism and a reliable event mechanism for guaranteeing that you push often
> enough?
> AFAICT, the noteOn & AudioParam interfaces were designed for making it possible
> to schedule sample accurate audio actions ahead of time. I think that it
> *should* be possible to use it for providing gap-less audio playback (typically
> using a few AudioBuffers in a multi-buffering manner and scheduling them with
> AudioBufferSourceNodes). The problem, as it seems, is that you need to
> accommodate possible jitter and event drops, perhaps by introducing a
> latency (e.g., would it work if you forced a latency of 0.5 s?).
No, 0.5 seconds is far too much. Latency needs to be no worse than 100 ms, or the delay is clearly audible to the user.
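For reference, the multi-buffering approach described in the quoted comment can be sketched roughly as follows. This is only an illustration, not code from the thread: the names `nextStartTime`, `scheduleChunk`, and the `state` object are hypothetical, and the browser-dependent part assumes a standard Web Audio `AudioContext`.

```javascript
// Pure helper: given the time up to which audio is already scheduled and
// the context's current audio-clock time, pick a start time for the next
// chunk so playback stays gapless (and never schedules in the past).
function nextStartTime(scheduledUntil, currentTime) {
  return Math.max(scheduledUntil, currentTime);
}

// Browser-only part (hypothetical; assumes a Web Audio AudioContext).
// Wraps a chunk of samples in an AudioBuffer, plays it via an
// AudioBufferSourceNode, and advances the scheduling frontier.
function scheduleChunk(ctx, state, samples) {
  const buffer = ctx.createBuffer(1, samples.length, ctx.sampleRate);
  buffer.getChannelData(0).set(samples);
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  const start = nextStartTime(state.scheduledUntil, ctx.currentTime);
  src.start(start);
  state.scheduledUntil = start + buffer.duration;
}
```

Scheduling each chunk at `max(scheduledUntil, currentTime)` is what makes the buffers back-to-back: as long as refills arrive before the queue drains, consecutive sources start exactly where the previous one ends.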
> Would the following be a correct conclusion?:
> - Audio processing in JavaScript should be done in workers.
> - We need a reliable main-context event system for scheduling audio actions
> (setInterval is not up to it, it seems).
The main thread needs access to the audio clock to drive audio correctly. Driving audio generation off the Date object is bad design; we need actual feedback from the browser that it just played x samples. Making me generate multiple buffers to schedule, with no regard for the machine's actual buffering, seems bound to fail.
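A minimal sketch of what clock-driven refilling could look like, assuming the multi-buffering setup above. Everything here is illustrative (`chunksNeeded`, `pump`, `enqueueChunk`, and the 100 ms / 20 ms figures are assumptions, with 100 ms taken from the latency bound mentioned earlier); the key point is that the decision to refill is based on `ctx.currentTime` (the audio clock), not on `Date`.

```javascript
// Pure helper: how many whole chunks of `chunkDuration` seconds must be
// queued now so that `scheduledUntil` stays at least `targetLatency`
// seconds ahead of the current audio-clock time?
function chunksNeeded(currentTime, scheduledUntil, targetLatency, chunkDuration) {
  const deficit = (currentTime + targetLatency) - scheduledUntil;
  return Math.max(0, Math.ceil(deficit / chunkDuration));
}

// Browser-only loop (hypothetical; assumes `ctx` is an AudioContext and
// `enqueueChunk` schedules one 20 ms chunk and advances
// state.scheduledUntil). Polls with setTimeout only because no reliable
// audio-clock event exists -- which is exactly the gap being complained
// about in this thread.
function pump(ctx, state, enqueueChunk) {
  const n = chunksNeeded(ctx.currentTime, state.scheduledUntil, 0.1, 0.02);
  for (let i = 0; i < n; i++) enqueueChunk();
  setTimeout(() => pump(ctx, state, enqueueChunk), 10);
}
```

Note that the polling interval only affects how promptly the queue is topped up; correctness comes from comparing against the audio clock, so jitter in the timer shows up as queue depth variation rather than as gaps.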

Received on Wednesday, 11 September 2013 14:31:52 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:24 UTC