W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: New proposal for fixing race conditions

From: Jer Noble <jer.noble@apple.com>
Date: Mon, 22 Jul 2013 08:35:42 -0700
Cc: Russell McClellan <russell@motu.com>, WG <public-audio@w3.org>
Message-id: <FAE54244-9713-433F-A014-805935F05F79@apple.com>
To: Marcus Geelnard <mage@opera.com>

On Jul 21, 2013, at 2:56 AM, Marcus Geelnard <mage@opera.com> wrote:

> ...actually, after doing some more thinking, I figured that the AudioProcessingEvent does not have to provide AudioBuffers for the input/output. The information provided by the AudioBuffer is already known (sampleRate must be the same as AudioContext.sampleRate, and duration - well, you know the size of the buffer to process).
> Instead of using AudioBuffers, we could use typed arrays directly, like this:
> interface AudioProcessingEvent : Event {
>     readonly attribute double playbackTime;
>     readonly attribute sequence<Float32Array> input;
>     readonly attribute sequence<Float32Array> output;
> };
> Then I think that it would be possible to define the event based on ownership transfer, i.e. the input/output arrays are valid only during the event, and once the event handler finishes the arrays are neutered. That would eliminate the need for memcpy and provide performance that should be on par with the current (racy) solution.
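A minimal sketch of how a handler might look under the proposed event shape. Only `playbackTime`, `input`, and `output` come from the interface quoted above; the `makeEvent` dispatch harness and the gain handler are assumptions for illustration, since the proposal is not a shipping API.

```javascript
// Hypothetical stand-in for the engine's event construction: one
// Float32Array per channel for input and output, sized to the block.
// (In the proposal these arrays would be neutered after dispatch.)
function makeEvent(playbackTime, channels, blockSize) {
  return {
    playbackTime,
    input: Array.from({ length: channels }, () => new Float32Array(blockSize)),
    output: Array.from({ length: channels }, () => new Float32Array(blockSize)),
  };
}

// An onaudioprocess-style handler: apply a fixed 0.5 gain per channel,
// writing directly into the event-provided output arrays.
function onAudioProcess(e) {
  for (let ch = 0; ch < e.input.length; ch++) {
    const inp = e.input[ch];
    const out = e.output[ch];
    for (let i = 0; i < inp.length; i++) {
      out[i] = inp[i] * 0.5;
    }
  }
}

const e = makeEvent(0, 2, 128);
e.input[0].fill(1.0);
onAudioProcess(e);
// After the handler returns, the engine would transfer (neuter) the
// underlying buffers, so the script could no longer touch them.
```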

If the only thing your ScriptProcessorNode is doing is ownership-passing pre-rendered buffers, why aren't you using an AudioBufferSourceNode?

Basically, if your event handler is doing any computation, the results of that computation will need to be written into an output buffer. There should be no benefit to memcpy'ing into a local buffer and then ownership-passing it, over memcpy'ing directly into the output buffer.
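To illustrate the point (function names here are assumptions, not part of either proposal): whether the handler fills the event-provided output in place or fills a local buffer that would then be ownership-transferred back, each sample is written exactly once, so the transfer saves no copies.

```javascript
// Variant 1: write results straight into the output the engine provided.
function processInPlace(input, output) {
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i] * 0.5; // one write pass
  }
}

// Variant 2: compute into a local buffer, then hand it back.
// Still one allocation plus one write pass -- no copy is avoided.
function processViaLocal(input) {
  const local = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) {
    local[i] = input[i] * 0.5; // same single write pass
  }
  return local; // would be ownership-transferred to the engine
}
```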


Received on Monday, 22 July 2013 15:36:06 UTC
