
Re: New proposal for fixing race conditions

From: Chris Wilson <cwilso@google.com>
Date: Tue, 23 Jul 2013 13:10:05 -0700
Message-ID: <CAJK2wqVZ23YvAR7tALi1g872SnzbAR8w5975URgXSF3P3JWu_g@mail.gmail.com>
To: Marcus Geelnard <mage@opera.com>
Cc: Ehsan Akhgari <ehsan.akhgari@gmail.com>, "Robert O'Callahan" <robert@ocallahan.org>, Jer Noble <jer.noble@apple.com>, Russell McClellan <russell@motu.com>, WG <public-audio@w3.org>
On Tue, Jul 23, 2013 at 11:00 AM, Marcus Geelnard <mage@opera.com> wrote:

> If you're talking about pre-rendering sound into an AudioBuffer (in a way
> that can't be done using an OfflineAudioContext), I doubt that memcpy will
> do much harm. Again (if this is the case), could you please provide an
> example?

OK.  I want to load an audio file, perform some custom analysis on it (e.g.
determine average volume), perform some custom (offline) processing on the
buffer based on that analysis (e.g. soft limiting), and then play the
resulting buffer.
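A minimal sketch of the kind of analysis-plus-processing step I mean. In a real page the samples would come from `audioBuffer.getChannelData(0)` after `decodeAudioData()`; here the buffer contents, the RMS measure, and the limiter threshold scaling are all hypothetical stand-ins, just to make the shape of the workflow concrete:

```javascript
// Custom analysis: RMS over the buffer as a simple "average volume" measure.
// `samples` stands in for the Float32Array returned by getChannelData().
function averageVolume(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// Custom offline processing: in-place soft limiting. Samples below the
// threshold pass through; samples above it are compressed with tanh so
// the output magnitude stays below 1.
function softLimit(samples, threshold) {
  for (let i = 0; i < samples.length; i++) {
    const s = samples[i];
    const mag = Math.abs(s);
    if (mag > threshold) {
      samples[i] = Math.sign(s) *
        (threshold +
         (1 - threshold) * Math.tanh((mag - threshold) / (1 - threshold)));
    }
  }
  return samples;
}

// Analyze first, then derive a limiter threshold from the analysis.
const samples = Float32Array.from([0.1, -0.5, 0.9, -1.2, 0.3]);
const rms = averageVolume(samples);
softLimit(samples, Math.min(0.8, rms * 4)); // hypothetical scaling factor
```

The point is that both passes read and write the raw sample data directly, so every extra defensive copy the API imposes is paid on the full length of the decoded buffer.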

If I understand it correctly, under ROC's original proposal this would result in the entire buffer being copied one extra time (beyond the initial AudioBuffer creation by decodeAudioData); under Jer's recent proposal I would have to copy it twice.  "I doubt that memcpy will do much harm" is a bit of an odd statement in favor of that extra copying; as you yourself said, I don't think that "it's usually not a problem" is a strong enough argument.  I don't see
the inherent raciness as a shortcoming we have to paper over; this isn't a
design flaw, it's a memory-efficient design.  The audio system should have
efficient access to audio buffers, and it needs to function in a decoupled
way in order to provide glitch-free audio when at all possible.
Received on Tuesday, 23 July 2013 20:10:32 UTC
