Re: Proposal for fixing race conditions

Kumar:
> Large files (even 5mins?) would be unusable with such an editor

As pointed out earlier in this thread, 5 minutes is ~100MB, so the premise of
the scenario is already off. At some point (maybe still?) even the spec said
you shouldn't create long audio buffers. As far as I know, all major DAWs
stream media items on tracks; anything else would be almost certain death
(10 x 5-minute tracks would be 1GB!!! And some DAWs even use 64-bit mixing,
so it would be 2GB, although I believe the track itself would still be
represented in 32-bit). For the Web Audio case we already have a good
streaming audio source (MediaElementSourceNode). As for the presentation of
the track, that is hardly ever done in real time anyway; for example, Reaper
generates a ReaPeaks file every time you import a media file, for displaying
the peaks of the media item. Usually you don't want to display the data
as-is, nor do the reduction in real time (for 5 minutes of audio, calculating
the peaks with a good algorithm probably takes a few seconds on a reasonably
new computer).
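
To make the peak-file idea concrete, here's a rough sketch of the kind of
offline reduction I mean (names made up for illustration, this is not
Reaper's actual format): each block of samples collapses to a min/max pair
that the UI draws instead of ever touching the raw sample data again.

function computePeaks(samples, blockSize) {
  // samples: Float32Array of one channel; blockSize: e.g. 256 samples per peak
  var blocks = Math.ceil(samples.length / blockSize);
  var peaks = new Float32Array(blocks * 2); // [min0, max0, min1, max1, ...]
  for (var b = 0; b < blocks; b++) {
    var min = Infinity, max = -Infinity;
    var end = Math.min((b + 1) * blockSize, samples.length);
    for (var i = b * blockSize; i < end; i++) {
      var s = samples[i];
      if (s < min) min = s;
      if (s > max) max = s;
    }
    peaks[b * 2] = min;
    peaks[b * 2 + 1] = max;
  }
  return peaks;
}

Run that once in a worker when the file is imported and the main thread never
needs the full sample data just to draw the waveform.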

Ehsan:
> the only cases where the current proposals suggest blind copying are
AudioParam.setValueCurveAtTime and WaveShaperNode.curve.

I think that unless there's a very good reason not to (i.e. the neutering
issue Jer mentioned is proven to apply to all engines), we should just neuter
in these cases. If the user needs to keep the original, they can make an
explicit copy. This way we avoid copying unless the user actually wants a
copy, very much like JS's Array#sort(), which doesn't copy, but if you want
to preserve the original you can just do myArray.slice().sort(). Copying a
Float32Array is just myNewArray.set(myOldArray) anyway, which should boil
down to a memcpy, so it wouldn't impose a significant performance loss in the
copy case either.
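
For illustration, the explicit-copy pattern would look something like this
(assuming `shaper` is a WaveShaperNode and `myCurve` a Float32Array; the
neutering behaviour shown is the proposal, not what the spec says today):

var original = new Float32Array(myCurve.length);
original.set(myCurve);   // the explicit copy, effectively a memcpy
shaper.curve = myCurve;  // under this proposal, myCurve is neutered here
// `original` is still usable on the JS side, just like myArray.slice().sort()
// leaves myArray untouched.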

Jer:
> So how about a JavaScript-based Opus decoder that uses an
> AudioBuffer-backed ring buffer to do memcopy-free decoding and playback by
> decoding directly into a mutable AudioBuffer.

I'm not sure I understood you correctly here, but you seem to be implying
that the decoder would run on the main thread, which is quite suboptimal.
What we do in aurora.js (the JS audio decoding framework) is have a worker
handle the decoding and streaming, to avoid clogging the main thread and to
lower the risk of glitches; the worker uses transferable ArrayBuffers to
avoid copying when juggling data between threads. The main thread just feeds
that data into a ScriptProcessorNode and, IIRC, passes the ArrayBuffer back
to the worker. The worker at no point even knows what an AudioBuffer is. So
far this seems like the best solution, but it already involves one copy of
the data.
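
Stripped down to the essentials, the handoff looks roughly like this (an
illustration of the pattern, not aurora.js's actual code; decodeNextChunk and
the file name are made up):

// decoder-worker.js
var pcm = decodeNextChunk(); // Float32Array of decoded samples
// Transfer the underlying ArrayBuffer instead of copying it; it's
// neutered on the worker side after this call.
self.postMessage({ samples: pcm.buffer }, [pcm.buffer]);

// main thread
var context = new AudioContext();
var worker = new Worker('decoder-worker.js');
var queue = [];
worker.onmessage = function (e) {
  queue.push(new Float32Array(e.data.samples));
};
var node = context.createScriptProcessor(4096, 1, 1);
node.onaudioprocess = function (e) {
  var out = e.outputBuffer.getChannelData(0);
  var chunk = queue.shift();
  if (chunk && chunk.length >= out.length) {
    out.set(chunk.subarray(0, out.length)); // this is the one copy I mentioned
  } else {
    for (var i = 0; i < out.length; i++) out[i] = 0; // underrun: silence
  }
};
node.connect(context.destination);

(In the real thing the consumed ArrayBuffer gets transferred back to the
worker for reuse, but that doesn't change the picture.)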

Jer:
> but "except when creating an AudioBuffer" is a very large caveat.

I think this should be a case of neutering too. What I have in mind is that
creating an AudioBuffer out of a sequence of Float32Arrays would be an
asynchronous operation, where the arrays become usable again (de-neutered)
when the operation completes, e.g.
Promise AudioContext.createBuffer(sequence<Float32Array>, sampleRate)
Note that I also suggest the method be static; I don't see why AudioBuffers
need to be tied to a specific AudioContext, and that link makes things harder
for libraries, which then have to be handed the AudioContext instance just to
create an AudioBuffer.
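
In use it could look something like this (purely hypothetical API; the
signature above is just my suggestion, not anything in the spec):

AudioContext.createBuffer([leftChannel, rightChannel], 44100)
  .then(function (buffer) {
    // leftChannel and rightChannel were neutered while the buffer was being
    // constructed and are usable again by the time we get here.
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(0);
  });

The buffer itself isn't tied to `context`; only the source node that plays it
is.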

Cheers,
Jussi




On Thu, Jul 18, 2013 at 12:43 AM, Marcus Geelnard <mage@opera.com> wrote:

>
> On Wed, Jul 17, 2013 at 11:34 PM, Chris Rogers <crogers@google.com> wrote:
>
>>
>>
>>
>> On Wed, Jul 17, 2013 at 2:28 PM, Marcus Geelnard <mage@opera.com> wrote:
>>
>>> Just a few comments:
>>>
>>> 0) First, let me re-iterate that I think that it's unacceptable for us
>>> to move forward with a specification that allows for "shared mutable state
>>> without locks" (as Jens Nockert so concisely put it). I really think that
>>> we have to (and should be able to) find a solution to this.
>>>
>>> 1) memcpy is really, really fast on any modern CPU architecture (you'll
>>> find it's *the* most optimized routine, both in software and in hardware).
>>> Having hand optimized graphics rasterization loops in assembler for ARM
>>> I've learned that it's impossible to get even close to its speed even when
>>> only doing trivial stuff, such as adding a constant value to a buffer or so.
>>>
>>
>> Marcus, this isn't so much an issue of how fast memcpy() is, although
>> that could be a concern too.  It's about the overhead of the additional
>> memory footprint (caused by the extra mallocs).
>>
>>
> Even so, I'm not convinced that it's a real-world problem.
>
> In most cases, I think that an application will simply upload audio data
> to an AudioBuffer, and be done with it (i.e. the source data is GC:ed),
> meaning no extra memory usage (except during a brief period during the copy
> operation).
>
> In some cases you would like to keep a JS side copy around (e.g. for an
> audio editor), but then I think we're talking about an application that
> would require quite much memory to begin with, and a factor 2x in RAM
> consumption is not uncommon in content editing software (i.e, I don't
> really see the problem).
>
> /Marcus
>
>
