
Re: New proposal for fixing race conditions

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Wed, 24 Jul 2013 14:06:56 +0300
Message-ID: <CAJhzemXWpnXKeyLc-jZ14QokYFEygs0nWM7h59Yt3M_9gyhJ4g@mail.gmail.com>
To: Chris Wilson <cwilso@google.com>
Cc: Marcus Geelnard <mage@opera.com>, Ehsan Akhgari <ehsan.akhgari@gmail.com>, "Robert O'Callahan" <robert@ocallahan.org>, Jer Noble <jer.noble@apple.com>, Russell McClellan <russell@motu.com>, WG <public-audio@w3.org>
On Tue, Jul 23, 2013 at 11:10 PM, Chris Wilson <cwilso@google.com> wrote:

> OK.  I want to load an audio file, perform some custom analysis on it
> (e.g. determine average volume), perform some custom (offline) processing
> on the buffer based on that analysis (e.g. soft limiting), and then play
> the resulting buffer.
> If I understand it, under ROC's original proposal, this would result in
> the entire buffer being copied one extra time (other than the initial
> AudioBuffer creation by decodeAudioData), under Jer's recent proposal I
> would have to copy it twice.  "I doubt that memcpy will do much harm" is a
> bit of an odd statement in favor of - as you yourself said, I don't think
> that "it's usually not a problem" is a strong enough argument.  I don't see
> the inherent raciness as a shortcoming we have to paper over; this isn't a
> design flaw, it's a memory-efficient design.  The audio system should have
> efficient access to audio buffers, and it needs to function in a decoupled
> way in order to provide glitch-free audio when at all possible.
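
The analysis/processing step in the scenario above can be sketched directly against raw Float32Array channel data. The function names here are hypothetical; only the technique (RMS average-volume analysis, then soft limiting via tanh) is illustrated:

```javascript
// Average volume as the root-mean-square of the samples.
function averageVolume(channel) {
  let sumSquares = 0;
  for (let i = 0; i < channel.length; i++) {
    sumSquares += channel[i] * channel[i];
  }
  return Math.sqrt(sumSquares / channel.length);
}

// Soft limiter: drive the signal through tanh so peaks approach but
// never exceed full scale, with the drive derived from the measured
// average volume. This is one illustrative limiting curve, not the
// only possible one.
function softLimit(channel, avg) {
  const drive = 1 / Math.max(avg, 1e-6);
  const out = new Float32Array(channel.length);
  for (let i = 0; i < channel.length; i++) {
    out[i] = Math.tanh(channel[i] * drive) * avg;
  }
  return out;
}

const input = new Float32Array([0.1, -0.8, 0.4, -0.2]);
const limited = softLimit(input, averageVolume(input));
```

Note that nothing in this step needs an AudioBuffer at all; it only needs the raw channel arrays.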

This is a symptom of another problem with the API. In this scenario the
biggest problem is not the copy happening here; it is that the method for
decoding audio has the wrong input and output for most cases. decodeAudioData
currently assumes that what you have is a binary buffer containing the
encoded audio data and that you want a high-level construct representing the
decoded audio (an AudioBuffer) out of it. Your case (and a common case
anyway), however, is that you have a URL to an audio resource and you want a
list of Float32Arrays out. Why does decodeAudioData (async) return an
AudioBuffer in the first place?

Let's say decodeAudioData took in a URL and gave you an array of
Float32Arrays. If we then, as I've suggested, had a way of creating an
AudioBuffer from an array of Float32Arrays that neuters those arrays, there
would be no copy, it would take far fewer lines of code, and there would be
no need for the overhead of binary XHR, where the encoded audio data is
first copied to the JS thread as an ArrayBuffer only to be copied back to
another thread again. This approach would probably perform better than what
we have now.
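
The "neutering" the proposal relies on is the transfer semantics ArrayBuffers already have: moving the underlying memory to a new owner instead of copying it, leaving the source buffer detached. A minimal sketch of that behavior, using structuredClone with a transfer list (the proposed AudioBuffer-from-Float32Arrays constructor itself is hypothetical and not shown):

```javascript
// Sample data backed by an ArrayBuffer (values chosen to be exactly
// representable as 32-bit floats).
const samples = new Float32Array([0.5, -0.25, 0.75, -1]);
const source = samples.buffer;

// Transfer the buffer: ownership of the memory moves, no memcpy of the
// sample data takes place.
const moved = structuredClone(source, { transfer: [source] });

// The original buffer is now neutered (detached)...
console.log(source.byteLength); // 0

// ...while the data lives on, untouched, in the transferred buffer.
console.log(new Float32Array(moved)[2]); // 0.75
```

An AudioBuffer constructed this way could take sole ownership of the channel data without any copy, which is exactly the property the audio thread needs for race-free access.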

Received on Wednesday, 24 July 2013 11:07:24 UTC
