
Re: How to play back synthesized 22kHz audio in a glitch-free manner?

From: Chris Rogers <crogers@google.com>
Date: Tue, 18 Jun 2013 10:31:17 -0700
Message-ID: <CA+EzO0n5qwnDjaC5_f1Q0G2pW+uqwR5Sx4vd2==Cj4SQrin+TA@mail.gmail.com>
To: John Byrd <jbyrd@giganticsoftware.com>
Cc: Joe Berkovitz <joe@noteflight.com>, Kevin Gadd <kevin.gadd@gmail.com>, "Robert O'Callahan" <robert@ocallahan.org>, Jukka Jylänki <jujjyl@gmail.com>, "public-audio@w3.org" <public-audio@w3.org>
On Tue, Jun 18, 2013 at 9:40 AM, John Byrd <jbyrd@giganticsoftware.com>wrote:

>
> On Tue, Jun 18, 2013 at 6:55 AM, Joe Berkovitz <joe@noteflight.com> wrote:
>
>>
>> Actually, as co-editor of the use case document I am very interested in
>> understanding why the arbitrary concatenation of buffers is important. When
>> would this technique be used by a game? Is this for stitching together
>> prerecorded backgrounds?
>>
>
> The vast majority of video games that natively render audio do so by
> polling the current state of the read head in the audio output driver, or
> by broadcasting and acting on callbacks when a frame of audio has been
> consumed by the audio output driver.
>
> In other words, "arbitrary concatenation of buffers" is done by every
> pre-existing bit of game audio tech out there, and to integrate Web Audio
> with any of them, you'll need an object which supports it.
>
> Integrating pre-existing audio engines with Web Audio will require the
> existence of a Web Audio object that double or triple buffers audio input
> and sends callbacks or, at the absolute minimum, information on the current
> state of read and write heads, when a particular frame of data is consumed.
>

The AudioBufferSourceNode *is* designed to stitch together snippets of audio
with sample accuracy *if* the sample-rate/playback-rate of the sample data
is the same as the context's.
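
A minimal sketch of that stitching approach (the helper and all names here are illustrative, not part of the spec; it assumes each buffer's sample rate already matches the context's):

```javascript
// Schedule AudioBufferSourceNodes back to back so each snippet starts
// exactly where the previous one ends. Sample accuracy holds when
// buffer.sampleRate === context.sampleRate, so no resampling occurs.

// Pure helper: compute each snippet's start time from its frame count.
function stitchStartTimes(firstStart, frameCounts, sampleRate) {
  const starts = [];
  let t = firstStart;
  for (const frames of frameCounts) {
    starts.push(t);
    t += frames / sampleRate;
  }
  return starts;
}

// Browser-only wiring (AudioContext exists only in the browser):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  // `buffers` is an assumed array of AudioBuffers at ctx.sampleRate.
  function playStitched(buffers, when) {
    const starts = stitchStartTimes(
      when, buffers.map((b) => b.length), ctx.sampleRate);
    buffers.forEach((buf, i) => {
      const src = ctx.createBufferSource();
      src.buffer = buf;
      src.connect(ctx.destination);
      src.start(starts[i]); // sample-accurate scheduled start
    });
  }
}
```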

Also, the ScriptProcessorNode (with a suitably large buffer size) *is* a
callback-based system which delivers semi-regular callbacks.
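
For instance, a game engine could treat each onaudioprocess callback as the "frame consumed" notification John describes, pulling from a ring buffer the engine writes into. A hedged sketch (the RingBuffer class and buffer sizes are illustrative assumptions, not part of the API):

```javascript
// Minimal single-channel ring buffer with explicit read/write heads.
class RingBuffer {
  constructor(capacity) {
    this.data = new Float32Array(capacity);
    this.readHead = 0;
    this.writeHead = 0;
  }
  write(samples) {
    for (const s of samples) {
      this.data[this.writeHead] = s;
      this.writeHead = (this.writeHead + 1) % this.data.length;
    }
  }
  readInto(out) {
    for (let i = 0; i < out.length; i++) {
      if (this.readHead !== this.writeHead) {
        out[i] = this.data[this.readHead];
        this.readHead = (this.readHead + 1) % this.data.length;
      } else {
        out[i] = 0; // underrun: output silence rather than glitch
      }
    }
  }
}

// Browser-only wiring: the callback drains the ring each audio quantum.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const ring = new RingBuffer(4 * 4096); // a few quanta of headroom
  const node = ctx.createScriptProcessor(4096, 1, 1); // large buffer size
  node.onaudioprocess = (e) => {
    ring.readInto(e.outputBuffer.getChannelData(0));
    // The engine refills `ring` here or on its own timer.
  };
  node.connect(ctx.destination);
}
```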

There is another API called the Media Source API, which allows audio data
to be pushed into an <audio> or <video> element.
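
The Media Source flow, roughly sketched (browser-only; the MIME type and chunk source are illustrative assumptions, and it omits the error handling a real player needs):

```javascript
// Attach a MediaSource to an <audio> element and pump encoded chunks
// into its SourceBuffer as each append completes.
function attachMediaSource(audioEl, mimeType, nextChunk) {
  const ms = new MediaSource();
  audioEl.src = URL.createObjectURL(ms);
  ms.addEventListener("sourceopen", () => {
    const sb = ms.addSourceBuffer(mimeType);
    const pump = () => {
      const chunk = nextChunk(); // ArrayBuffer of encoded audio, or null
      if (chunk) sb.appendBuffer(chunk);
    };
    sb.addEventListener("updateend", pump); // append finished: push more
    pump();
  });
}

// Usage (browser, with a hypothetical dequeueEncodedChunk source):
// attachMediaSource(document.querySelector("audio"),
//   'audio/webm; codecs="opus"', dequeueEncodedChunk);
```

Note this pushes *encoded* media, not raw PCM, so it suits streaming prerecorded content more than synthesized sample buffers.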

There has been some discussion of allowing an AudioContext to be created at
a specific sample-rate, which can handle some of these sample-rate cases
when used in conjunction with AudioBufferSourceNode.
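
A sketch of how that could look for the 22 kHz case in this thread (the constructor-options form is illustrative; the feature was only under discussion at the time of this message):

```javascript
// Pure helper: stitching via AudioBufferSourceNode is sample-accurate
// only when the buffer's rate matches the context's rate.
function canStitchSampleAccurately(bufferRate, contextRate) {
  return bufferRate === contextRate;
}

if (typeof AudioContext !== "undefined") {
  // Illustrative: request a 22.05 kHz context so 22 kHz buffers need
  // no resampling before sample-accurate stitching.
  const ctx = new AudioContext({ sampleRate: 22050 });
  console.log(canStitchSampleAccurately(22050, ctx.sampleRate));
}
```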

Chris



>
> As designed, neither the AudioBufferSourceNode interface nor the
> MediaElementAudioSourceNode interface seems to handle this use case.
>
> --
> ---
>
> John Byrd
> Gigantic Software
> 2102 Business Center Drive
> Suite 210-D
> Irvine, CA   92612-1001
> http://www.giganticsoftware.com
> T: (949) 892-3526 F: (206) 309-0850
>
Received on Tuesday, 18 June 2013 17:31:45 UTC
