
[web-audio-api] OfflineAudioContext needs a way to handle audio of arbitrary duration (#21)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:27:41 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/21@github.com>
> Originally reported on W3C Bugzilla [ISSUE-21311](https://www.w3.org/Bugs/Public/show_bug.cgi?id=21311) Sat, 16 Mar 2013 17:58:03 GMT
> Reported by Joe Berkovitz / NF
> Assigned to 

Reference from mailing list:
  post: http://lists.w3.org/Archives/Public/public-audio/2013JanMar/0395.html
  author: Russell McClellan <russell@motu.com> 

"[OfflineAudioContext] really should provide some way to receive data block-by-block rather than in a single "oncomplete" callback.  Otherwise, the memory footprint grows quite quickly with the rendering time.  I don't think this would a major burden to implementors, and it would make the API tremendously more useful.  Currently it's just not feasible to mix down even a minute or so.  If this is ever going to be used for musical applications, this has to change."

Chris Rogers stated in the teleconference of 14 Mar 2013 that it is in fact feasible to mix down typical track lengths of several minutes with the single oncomplete call. A discussion of block size suggested that any chunks into which rendering is broken should be fairly large, to avoid the overhead of switching threads and passing data.

Received on Wednesday, 11 September 2013 14:29:01 UTC