Re: New proposal for fixing race conditions

On Wed, Jul 24, 2013 at 4:06 AM, Jussi Kalliokoski <
jussi.kalliokoski@gmail.com> wrote:

> On Tue, Jul 23, 2013 at 11:10 PM, Chris Wilson <cwilso@google.com> wrote:
>
>> OK.  I want to load an audio file, perform some custom analysis on it
>> (e.g. determine average volume), perform some custom (offline) processing
>> on the buffer based on that analysis (e.g. soft limiting), and then play
>> the resulting buffer.
>>
> This is a symptom of another problem with the API. In this scenario your
> biggest problem is not the copy happening here; it is that the method for
> decoding audio has the wrong input and output for most cases. What
> decodeAudioData currently assumes is that you have a binary buffer
> containing the encoded audio data and want a high-level construct
> representing the audio data (an AudioBuffer) out of it. Your case (a
> common case anyway), however, is that you have a URL to an audio resource
> and you want a list of Float32Arrays out. Why does decodeAudioData
> (async) return an AudioBuffer in the first place?
>

Hmm.  Well, handling the channels as an array of Float32Arrays would be
less structurally obvious.  Since the data is resampled to the
AudioContext rate anyway, the metadata is less interesting - although I'd
ideally like our decoding to expose more metadata about the internals,
rather than less (e.g. original sample rate, any tempo tags, etc.).
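For what it's worth, the analyze-then-process part of the scenario above works
the same whether decodeAudioData hands back an AudioBuffer (via
getChannelData()) or raw Float32Arrays. A rough sketch, with hypothetical
helper names - averageVolume and softLimit are illustrations, not anything in
the API:

```javascript
// Average volume of one channel, measured as RMS over the samples.
function averageVolume(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}

// Simple soft limiter: run samples through tanh scaled by a threshold,
// so peaks are squashed smoothly instead of hard-clipped.
function softLimit(samples, threshold) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = threshold * Math.tanh(samples[i] / threshold);
  }
  return out;
}

// The channel data would come from decodeAudioData (or a raw-array
// variant of it); a made-up buffer stands in here.
const channel = new Float32Array([0.1, -0.9, 0.5, 1.2]);
const limited = softLimit(channel, 2 * averageVolume(channel));
```

The point being that none of this math cares about the AudioBuffer wrapper;
only the playback step does.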

Received on Friday, 26 July 2013 16:23:39 UTC