- From: Olivier Thereaux <notifications@github.com>
- Date: Wed, 11 Sep 2013 07:27:41 -0700
- To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
- Message-ID: <WebAudio/web-audio-api/issues/21@github.com>
> Originally reported on W3C Bugzilla [ISSUE-21311](https://www.w3.org/Bugs/Public/show_bug.cgi?id=21311) Sat, 16 Mar 2013 17:58:03 GMT
> Reported by Joe Berkovitz / NF
> Assigned to

Reference from mailing list:
post: http://lists.w3.org/Archives/Public/public-audio/2013JanMar/0395.html
author: Russell McClellan <russell@motu.com>

"[OfflineAudioContext] really should provide some way to receive data block-by-block rather than in a single "oncomplete" callback. Otherwise, the memory footprint grows quite quickly with the rendering time. I don't think this would be a major burden to implementors, and it would make the API tremendously more useful. Currently it's just not feasible to mix down even a minute or so. If this is ever going to be used for musical applications, this has to change."

Chris Rogers stated in the teleconference of 14 Mar 2013 that it is in fact feasible to mix down typical track lengths of several minutes with the single oncomplete call. A discussion of block size suggested that any breaking of the audio rendering into chunks should use fairly large chunks, to avoid the overhead of switching threads and passing data.

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/21
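To put rough numbers on the memory-footprint concern: a single-callback render must hold the entire result in one AudioBuffer, whose size is linear in render length. The sketch below is a back-of-the-envelope estimate only (not part of the Web Audio API), assuming 32-bit float samples as used by AudioBuffer; the function name is illustrative.

```javascript
// Rough footprint of the AudioBuffer delivered by a single
// "oncomplete" callback: one Float32 (4 bytes) per sample per channel.
function renderFootprintBytes(seconds, sampleRate, channels) {
  return seconds * sampleRate * channels * 4;
}

const oneMinute = renderFootprintBytes(60, 44100, 2);
const fiveMinutes = renderFootprintBytes(5 * 60, 44100, 2);

console.log((oneMinute / 1048576).toFixed(1) + " MiB");   // one stereo minute
console.log((fiveMinutes / 1048576).toFixed(1) + " MiB"); // five stereo minutes
```

At 44.1 kHz stereo this comes to roughly 20 MiB per minute, so a several-minute mixdown stays in the low hundreds of megabytes — consistent with the position that typical track lengths are feasible in one buffer, while much longer renders are where block-by-block delivery would matter.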
Received on Wednesday, 11 September 2013 14:29:01 UTC