
Re: AudioBuffer mutability

From: Joseph Berkovitz <joe@noteflight.com>
Date: Tue, 30 Oct 2012 18:38:29 -0400
Cc: Joseph Berkovitz <joe@noteflight.com>, Ehsan Akhgari <ehsan.akhgari@gmail.com>, Chris Rogers <crogers@google.com>, public-audio@w3.org
Message-Id: <C7F9DC36-DB1D-437E-BBC7-EC1783E782AD@noteflight.com>
To: robert@ocallahan.org

On Oct 30, 2012, at 4:44 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Wed, Oct 31, 2012 at 1:49 AM, Joseph Berkovitz <joe@noteflight.com> wrote:
> 1. Load an AudioBuffer from a media file (note that we didn't generate it so we don't have the samples on hand somewhere else)
> 2. Play the AudioBuffer via various AudioBufferSourceNodes
> 3. During live playback, dynamically determine that we need to play a programmatic transformation of the originally loaded media
> 4. Copy the contents of the AudioBuffer to a new AudioBuffer, applying said programmatic transform to the data
> 5. Use the new AudioBuffer to generate the transformed output via other AudioBufferSourceNodes
> OK. My latest proposal would handle this efficiently. You wouldn't need to create an extra AudioBuffer; you would modify the data and already-initialized AudioBufferSourceNodes would not be affected.
> By the way, why are you doing the programmatic transformation with JS instead of Web Audio itself?

I assume you mean, "why are you not doing the transformation with JS in a ScriptProcessorNode?" Because the transformation yields a new sound that is itself to be played repeatedly in many identical copies, much like the original unmodified AudioBuffer. This is not a pipeline-style transformation like a reverb effect, but a one-time modification yielding a new reusable waveform, which may or may not require further modification later.
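Concretely, the copy-and-transform in steps 4–5 looks something like the sketch below. This is only illustrative: applyGain stands in for the actual programmatic transform (the thread doesn't specify one), and modern API names (AudioContext, start) are used rather than the prefixed 2012 forms.

```javascript
// One-time transform: produces a new, reusable set of samples.
// (applyGain is a placeholder -- the real transform could be anything.)
function applyGain(samples, gain) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] * gain;
  }
  return out;
}

// In a browser, the transformed data backs a fresh AudioBuffer that can
// then feed many AudioBufferSourceNodes, just like the original buffer.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  // originalBuffer would really come from decodeAudioData() in step 1;
  // here a one-second silent buffer stands in for it.
  const originalBuffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
  const transformed = ctx.createBuffer(
    originalBuffer.numberOfChannels,
    originalBuffer.length,
    originalBuffer.sampleRate
  );
  for (let ch = 0; ch < originalBuffer.numberOfChannels; ch++) {
    transformed.getChannelData(ch).set(
      applyGain(originalBuffer.getChannelData(ch), 0.5)
    );
  }
  const src = ctx.createBufferSource(); // one of many identical players
  src.buffer = transformed;
  src.connect(ctx.destination);
  src.start(0);
}
```

The point is that the transform runs once, off the audio graph, and its result is then shared by any number of source nodes, exactly as the original decoded buffer is.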

Received on Tuesday, 30 October 2012 22:38:59 UTC
