On Tue, Jun 18, 2013 at 6:55 AM, Joe Berkovitz <joe@noteflight.com> wrote:
>
> Actually, as co-editor of the use case document I am very interested in
> understanding why the arbitrary concatenation of buffers is important. When
> would this technique be used by a game? Is this for stitching together
> prerecorded backgrounds?
>
The vast majority of video games that render audio natively do so either by
polling the current position of the read head in the audio output driver, or
by receiving and acting on a callback fired each time the driver consumes a
frame of audio.
In other words, "arbitrary concatenation of buffers" is exactly what every
pre-existing piece of game audio technology does, and to integrate Web Audio
with any of them, you'll need an object that supports it.
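
To make that concrete, here is a minimal TypeScript sketch of the contract
most engines are built around. Every name in it (StreamingVoice,
RingBufferVoice, onBlockConsumed) is my own illustration, not any particular
engine's API:

// Sketch of the usual game-audio contract: the mixer fills blocks behind a
// write head, and the output driver drains them past a read head, reporting
// consumption through a callback.

interface StreamingVoice {
  // Called by the output driver each time it consumes one block of frames;
  // the engine must fill `out` with the next block of mixed audio.
  onBlockConsumed(out: Float32Array): void;
}

class RingBufferVoice implements StreamingVoice {
  private buffer: Float32Array;
  private readHead = 0;   // frames consumed by the driver so far
  private writeHead = 0;  // frames produced by the mixer so far

  constructor(private blockFrames: number, blocks: number) {
    // Two blocks gives classic double buffering; three gives more headroom.
    this.buffer = new Float32Array(blockFrames * blocks);
  }

  // Engine side: append freshly mixed samples behind the write head.
  write(samples: Float32Array): void {
    for (let i = 0; i < samples.length; i++) {
      this.buffer[this.writeHead++ % this.buffer.length] = samples[i];
    }
  }

  // Driver side: drain one block past the read head.
  onBlockConsumed(out: Float32Array): void {
    for (let i = 0; i < this.blockFrames; i++) {
      out[i] = this.buffer[this.readHead++ % this.buffer.length];
    }
  }
}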
Integrating pre-existing audio engines with Web Audio will therefore require
a Web Audio object that double- or triple-buffers audio input and that, each
time a particular frame of data is consumed, either fires a callback or, at
the absolute minimum, reports the current state of its read and write heads.
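
Sketched in TypeScript, the shape of that object might look like the
following; StreamingSourceNode and every member on it are hypothetical, a
description of what is needed rather than anything in the current spec:

// Hypothetical interface only; nothing like this exists in the spec as
// designed. It captures the two integration options named above.

interface StreamingSourceNode /* extends AudioNode */ {
  // Double/triple buffering: the node owns N blocks and cycles through them.
  readonly bufferCount: number;   // e.g. 2 or 3
  readonly blockFrames: number;   // frames per block

  // Option A: a callback fired when the node finishes consuming a block,
  // handing the engine an empty block to refill.
  onBufferConsumed: ((block: Float32Array) => void) | null;

  // Option B (the absolute minimum): expose the read and write heads so the
  // engine can poll and top the buffer up itself.
  readonly readHead: number;  // frames consumed so far
  writeHead: number;          // frames the engine has filled so far
}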
As designed, neither the AudioBufferSourceNode interface nor the
MediaElementAudioSourceNode interface appears to handle this use case.
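
The closest approximation I can see with AudioBufferSourceNode alone is to
schedule one-shot buffers back to back and poll currentTime to guess when to
queue the next one. A sketch, with illustrative scheduling constants, that
shows what is missing: there is no consumption callback, only the clock.

const context = new AudioContext();
const BLOCK_FRAMES = 4096;                      // illustrative block size
let nextStartTime = context.currentTime + 0.1;  // small priming latency

function mixNextBlock(): AudioBuffer {
  const buf = context.createBuffer(1, BLOCK_FRAMES, context.sampleRate);
  const data = buf.getChannelData(0);
  // ... fill `data` from the game engine's mixer (placeholder: silence) ...
  return buf;
}

function enqueueBlock(): void {
  const source = context.createBufferSource();
  source.buffer = mixNextBlock();
  source.connect(context.destination);
  source.start(nextStartTime);  // splice this block onto the chain
  nextStartTime += BLOCK_FRAMES / context.sampleRate;
}

// Poll: keep roughly two blocks scheduled ahead of the read position,
// because nothing tells us when a block has actually been consumed.
setInterval(() => {
  while (nextStartTime - context.currentTime <
         2 * (BLOCK_FRAMES / context.sampleRate)) {
    enqueueBlock();
  }
}, 10);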
--
---
John Byrd
Gigantic Software
2102 Business Center Drive
Suite 210-D
Irvine, CA 92612-1001
http://www.giganticsoftware.com
T: (949) 892-3526 F: (206) 309-0850