Re: New proposal for fixing race conditions

I've been hanging back from this discussion a bit, but I feel the need to
express my own take (since I come at the API from a very different
perspective than Chris).

I understand (and support) Robert's initial introduction of this issue
(first line of
http://lists.w3.org/Archives/Public/public-audio/2013AprJun/0644.html) - we
should avoid having internal implementation details affect observable
output.  However, that's not the same thing as "we must prevent any
possible race conditions" - in this case, the race condition is between the
Web Audio "thread" and the main execution thread.  This is not so much
about internal implementation details as it is about the fact that Web
Audio developers need to have their expectations set around interactions
with the audio "thread".

AFAICT, all the proposals made so far - Jer's included - put quite a heavy
weight on interacting with audio buffer data.  For the purposes of
synthesizing my own audio data, this will require a memcpy.  In mobile
scenarios, and in desktop scenarios with large buffers of data (e.g. a
DAW), this adds a significant memory and CPU burden just to play (and
likely record/process) audio.  That seems like an awfully big deal to me,
so I have to ask - what's the benefit?  As far as I know, it is not
required to avoid crashes or other potential security issues; the only
downside is that if an author modifies a playing audio buffer, they could
get differing playback results depending on precise timing.  That doesn't
seem any different, to me, from what happens today with small timing
differences in event delivery, or with scheduling audio times that are too
close to "now" - if you want the full power of the audio system, you have
to learn how to work closely with it and adapt to the environment.  As
Chris pointed out, there is some experience working with the API as it is
today, and I haven't heard of (or personally experienced) any problems
traced to this issue.
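
To make the timing dependence concrete, here's a minimal sketch of the
pattern I'm describing (the tone frequencies, buffer length, and the 250 ms
delay are purely illustrative, and it assumes a plain unprefixed
AudioContext):

  // The main thread fills an AudioBuffer and starts playback, then later
  // rewrites the same Float32Array while the audio thread may still be
  // reading from it.
  const ctx = new AudioContext();

  // Synthesize one second of a 440 Hz tone directly into an AudioBuffer.
  const buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
  const samples = buffer.getChannelData(0);
  for (let i = 0; i < samples.length; i++) {
    samples[i] = Math.sin((2 * Math.PI * 440 * i) / ctx.sampleRate);
  }

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();

  // While playback may still be in progress, rewrite the same samples.
  // Whether (and when) the audio thread picks up the new data depends on
  // how far playback has advanced - the timing-dependent output described
  // above.  A copy/neuter rule would remove that dependence, at the cost
  // of copying the whole buffer.
  setTimeout(() => {
    for (let i = 0; i < samples.length; i++) {
      samples[i] = Math.sin((2 * Math.PI * 880 * i) / ctx.sampleRate);
    }
  }, 250);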

Also, correct me if I'm mistaken, but I don't believe this amounts to
"browser x will operate differently than browser y" - timing is everything
in this scenario anyway, and even Jer's proposal could produce different
behavior across browsers/environments; it would just be replacing the
entire buffer instead of a portion.

I feel that designing the API around preventing race conditions everywhere
is 1) ultimately not going to be successful anyway, and 2) like wrapping
everything in bubble wrap.  It will prevent some minor bruises, but it will
also make it quite a bit more costly (in memory and time) to get the
necessary tasks done.

Olivier, to answer your question, I believe this would currently be an
Objection.

-Chris


On Tue, Jul 23, 2013 at 8:07 AM, Ehsan Akhgari <ehsan.akhgari@gmail.com> wrote:

> On Mon, Jul 22, 2013 at 6:44 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Tue, Jul 23, 2013 at 5:44 AM, Marcus Geelnard <mage@opera.com> wrote:
>>
>>> My guess is that this is very similar to the current solution in gecko
>>> (Ehsan?).
>>>
>>
>> It's close to what we do. We neuter and recycle the output buffers, but
>> currently we don't neuter and recycle the input buffers. I think that's a
>> good idea though.
>>
>
> We also lazily create the input buffers when JS first accesses the
> inputBuffer property, to optimize the case where the ScriptProcessorNode is
> only used for synthesis, not as a filter.
>
> Cheers,
> Ehsan
>
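
A minimal sketch of the synthesis-only case Ehsan describes - an
onaudioprocess handler that only ever writes to outputBuffer, so a lazily
created inputBuffer would never be materialized (the buffer size, channel
counts, and the tone generated are illustrative):

  const ctx = new AudioContext();
  // 0 input channels, 1 output channel: a pure synthesis node.
  const node = ctx.createScriptProcessor(2048, 0, 1);
  let phase = 0;

  node.onaudioprocess = (e: AudioProcessingEvent) => {
    const out = e.outputBuffer.getChannelData(0);
    for (let i = 0; i < out.length; i++) {
      out[i] = Math.sin(phase);
      phase += (2 * Math.PI * 440) / ctx.sampleRate;
    }
    // e.inputBuffer is never touched here.
  };

  node.connect(ctx.destination);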

Received on Tuesday, 23 July 2013 16:11:53 UTC