On Tue, Jul 23, 2013 at 1:10 PM, Chris Wilson <cwilso@google.com> wrote:
> On Tue, Jul 23, 2013 at 11:00 AM, Marcus Geelnard <mage@opera.com> wrote:
>
>> If you're talking about pre-rendering sound into an AudioBuffer (in a way
>> that can't be done using an OfflineAudioContext), I doubt that memcpy will
>> do much harm. Again (if this is the case), could you please provide an
>> example?
>>
>
> OK. I want to load an audio file, perform some custom analysis on it
> (e.g. determine average volume), perform some custom (offline) processing
> on the buffer based on that analysis (e.g. soft limiting), and then play
> the resulting buffer.
>
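For reference, that pipeline could look roughly like the sketch below. It is
only a sketch: the RMS "average volume" analysis and the softLimit() helper
are illustrative, not from any spec, and whether getChannelData() hands back
a view or a copy is exactly the point under discussion here.

  var ctx = new AudioContext();

  function softLimit(x) {
    // Illustrative soft limiter, nothing more.
    return Math.tanh(x);
  }

  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'clip.ogg', true);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    ctx.decodeAudioData(xhr.response, function (buffer) {
      // 1. Custom analysis: average (RMS) volume over all channels.
      var sumSquares = 0, total = 0;
      for (var ch = 0; ch < buffer.numberOfChannels; ch++) {
        var pcm = buffer.getChannelData(ch); // copy or view? that's the debate
        for (var i = 0; i < pcm.length; i++) sumSquares += pcm[i] * pcm[i];
        total += pcm.length;
      }
      var rms = Math.sqrt(sumSquares / total);

      // 2. Custom offline processing based on that analysis (soft limiting).
      var gain = rms > 0.25 ? 0.25 / rms : 1;
      for (var ch = 0; ch < buffer.numberOfChannels; ch++) {
        var pcm = buffer.getChannelData(ch);
        for (var i = 0; i < pcm.length; i++) pcm[i] = softLimit(pcm[i] * gain);
      }

      // 3. Play the processed buffer.
      var src = ctx.createBufferSource();
      src.buffer = buffer;
      src.connect(ctx.destination);
      src.start(0);
    });
  };
  xhr.send();
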
And I can add two more realistic use cases here:
* at any stage, analyze the resulting buffer and display it as a waveform
on screen, as an audio editor would
* generate AudioBuffer PCM data directly in JavaScript and play it back
using an AudioBufferSourceNode (sketched below)
Being required to copy large buffers of data is very inefficient.
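For the second case, something along these lines is what I have in mind
(just a sketch; the sine wave stands in for whatever a synth or editor would
actually generate). If the API forces a copy when the script fills the
buffer, and another when the engine takes it, the whole generated signal
gets memcpy'd one or two extra times under the proposals being discussed.

  var ctx = new AudioContext();
  var seconds = 2;
  var length = ctx.sampleRate * seconds;
  var buffer = ctx.createBuffer(1, length, ctx.sampleRate);

  // Fill the buffer with PCM data generated directly in JavaScript.
  var data = buffer.getChannelData(0);
  for (var i = 0; i < length; i++) {
    data[i] = Math.sin(2 * Math.PI * 440 * i / ctx.sampleRate);
  }

  // Play it back with an AudioBufferSourceNode.
  var src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start(0);
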
> If I understand it, under ROC's original proposal, this would result in
> the entire buffer being copied one extra time (other than the initial
> AudioBuffer creation by decodeAudioData), and under Jer's recent proposal
> I would have to copy it twice. "I doubt that memcpy will do much harm" is
> a bit of an odd statement in favor of copying - as you yourself said, I
> don't think that "it's usually not a problem" is a strong enough argument.
> I don't see the inherent raciness as a shortcoming we have to paper over;
> this isn't a design flaw, it's a memory-efficient design. The audio system
> should have efficient access to audio buffers, and it needs to function in
> a decoupled way in order to provide glitch-free audio when at all possible.