- From: Olivier Thereaux <notifications@github.com>
- Date: Wed, 11 Sep 2013 07:30:02 -0700
- To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17415#49) by Philip Jägenstedt on W3C Bugzilla. Thu, 26 Jul 2012 12:57:57 GMT

Grant, it seems to me that there are at least two options for main-thread audio generation even if there's no JavaScriptAudioNode.

1. Generate your audio into AudioBuffers and schedule these to play back-to-back with AudioBufferSourceNodes, as in the first sketch below. (I haven't tried whether the WebKit implementation handles this gaplessly, but I don't see why we shouldn't support this in the spec.)

2. Generate your audio into AudioBuffers and postMessage these to a WorkerAudioNode; see the transfer sketch below. If ownership of the buffer is transferred, it should be cheap, and there's no reason why this should incur a large delay, particularly not half a second like you've seen. That sounds like a browser bug to be fixed.

In both cases one will have one new object per buffer to GC: in the first case it's an AudioBufferSourceNode, and in the second case it's the event object on the worker side.

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/113#issuecomment-24244602
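A minimal sketch of the first option, using current API names (`AudioContext`, `start()`; 2013-era WebKit spelled these `webkitAudioContext` and `noteOn()`). The chunk size, lookahead amount, and `fillBuffer()` synthesis callback are illustrative assumptions, not from the original comment:

```js
// Option 1: generate audio on the main thread, schedule chunks back-to-back.
const ctx = new AudioContext();
const CHUNK_FRAMES = 4096;               // frames per generated chunk (assumed)
let nextStart = ctx.currentTime + 0.1;   // small initial cushion before playback

function fillBuffer(buffer) {
  // Hypothetical synthesis: write samples into the channel data.
  const ch = buffer.getChannelData(0);
  for (let i = 0; i < ch.length; i++) {
    ch[i] = Math.random() * 2 - 1;       // placeholder: white noise
  }
}

function scheduleChunk() {
  const buffer = ctx.createBuffer(1, CHUNK_FRAMES, ctx.sampleRate);
  fillBuffer(buffer);
  const src = ctx.createBufferSource();  // one new node per chunk, as noted above
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start(nextStart);                  // start exactly where the last chunk ends
  nextStart += CHUNK_FRAMES / ctx.sampleRate;
}

// Keep ~200 ms of audio queued ahead of the playhead.
setInterval(() => {
  while (nextStart < ctx.currentTime + 0.2) scheduleChunk();
}, 50);
```

Scheduling each source at the exact end time of the previous one is what makes gapless playback possible in principle; whether an implementation actually renders it without clicks is the open question the comment raises.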
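For the second option, WorkerAudioNode was only a proposal and never shipped, so the following shows just the ownership-transfer step, which is standard `postMessage` with a transfer list. The worker script name is hypothetical:

```js
// Main thread: hand generated samples to a worker without copying them.
const worker = new Worker('audio-worker.js'); // hypothetical worker script
const samples = new Float32Array(4096);
// ... fill `samples` with generated audio here ...
// Listing samples.buffer as a transferable moves ownership to the worker,
// so the cost is constant regardless of buffer size.
worker.postMessage(samples, [samples.buffer]);
// After the transfer, samples.buffer is detached on this side.

// audio-worker.js: receives the Float32Array without a copy. The event
// object `e` is the per-buffer garbage the comment mentions.
onmessage = (e) => {
  const received = e.data;
  // ... hand `received` to the (proposed) worker-side audio rendering ...
};
```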
Received on Wednesday, 11 September 2013 14:37:29 UTC