- From: Olivier Thereaux <notifications@github.com>
- Date: Wed, 11 Sep 2013 07:30:27 -0700
- To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17389#1) by Srikumar Subramanian (Kumar) on W3C Bugzilla. Wed, 17 Oct 2012 12:28:29 GMT
>
> I just realized that, from a resource perspective, it might be much better for an offline audio context to provide periodic JS callbacks with buffers of a fixed duration rather than deliver the whole render via a single oncomplete callback - sort of like a JS audio destination node. This would let us stream the rendered audio to a file using the local file system API instead of holding it all in memory, or send it to a WebRTC encode+transmit pipe. (I confess I haven't used the current prototype in WebKit and therefore may have some misunderstandings.)
>
> ---
> Reply to this email directly or view it on GitHub:
> https://github.com/WebAudio/web-audio-api/issues/222#issuecomment-24244858
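A rough sketch of the contrast being proposed. The first part uses the spec's `OfflineAudioContext` constructor, `oncomplete`, and `startRendering()` as they exist; the `onaudiochunk` event and `writeChunkToFile` helper in the second part are hypothetical names standing in for the periodic-callback idea, not anything in the spec.

```js
// Current shape: one offline render, one oncomplete callback, and the
// entire result held in memory as a single AudioBuffer.
var ctx = new OfflineAudioContext(2, 44100 * 600, 44100); // 10 min, stereo
// ... build a graph into ctx.destination ...
ctx.oncomplete = function (e) {
  var wholeRender = e.renderedBuffer; // roughly 200 MB of float samples
  // encode / save the full buffer here
};
ctx.startRendering();

// Sketch of the proposal (hypothetical API): the context emits fixed-size
// chunks as it renders, so each chunk can be streamed out (e.g. appended
// to a file via the file system API, or fed to an encoder) and discarded.
ctx.onaudiochunk = function (e) {       // hypothetical periodic callback
  writeChunkToFile(e.renderedBuffer);   // hypothetical helper: append, then drop
};
```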
Received on Wednesday, 11 September 2013 14:37:53 UTC