Re: Proposal for fixing race conditions

Chris, would you mind sharing a demo/benchmark that demonstrates your
performance concerns?  We have demos written by Marcus and Robert (and
perhaps others) that demonstrate the problem with the existing APIs, so
it would be really helpful if you could provide a demo that would be
impossible to make work efficiently using Robert's or Jer's proposals,
specifically one involving AudioParam.setValueCurveAtTime and
AudioBuffer.getChannelData.  Hopefully with that in hand we can make
progress here.
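(For concreteness, here is a minimal sketch of the kind of race in question, runnable in Node. A plain Float32Array stands in for AudioBuffer channel data, and the function names are illustrative, not Web Audio API calls: it contrasts "shared" semantics, where a main-thread write after playback starts is observable by the renderer, with "copy" semantics, where the renderer holds a snapshot and the output is deterministic.)

```javascript
// Hypothetical sketch: a Float32Array stands in for an AudioBuffer channel.
// Under "shared" semantics the renderer reads the live buffer; under
// "copy" semantics it reads a snapshot taken when playback started.

function makeChannel(length) {
  const data = new Float32Array(length);
  data.fill(1.0); // pretend this is decoded waveform data
  return data;
}

function startSource(channel, semantics) {
  // "copy": snapshot now, as copy-on-acquire semantics would behave.
  // "shared": keep a live reference, as a racy implementation would.
  const view = semantics === "copy" ? channel.slice() : channel;
  return { render: () => Array.from(view) };
}

const sharedChan = makeChannel(4);
const sharedSrc = startSource(sharedChan, "shared");
sharedChan[0] = 0; // main-thread write *after* start(): the race

const copiedChan = makeChannel(4);
const copiedSrc = startSource(copiedChan, "copy");
copiedChan[0] = 0; // same write, but the renderer holds a snapshot

console.log(sharedSrc.render()); // [0, 1, 1, 1] -- the write was observed
console.log(copiedSrc.render()); // [1, 1, 1, 1] -- output is deterministic
```

In a real implementation the "renderer" runs on the audio thread, so under shared semantics the observed value depends on timing, GC pauses, and machine speed rather than being reliably one outcome or the other.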

Thanks!
--
Ehsan
<http://ehsanakhgari.org/>


On Thu, Jul 4, 2013 at 6:04 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Fri, Jul 5, 2013 at 7:21 AM, Chris Rogers <crogers@google.com> wrote:
>
>> This is an extremely contrived example, and is not even close to
>> real-world calling patterns and use cases.
>>
>
> It's close; it just has some code in the wrong place. Experience shows
> that over time Web developers will write almost any code that appears to
> work for them. (Often by cutting and pasting code they don't understand.)
>
> What about the other example I posted?
>
>
>> With any of the racy APIs I mentioned above, there are dozens of ways to
>> write buggy code that make assumptions about timing and order of async
>> callbacks which produce unpredictable behavior.
>>
>
> Yes, we have problems with other APIs too.
>
>
>>  In the two years that the Web Audio API has been in use by many
>> developers large and small, the type of issue you're describing has simply
>> never come up.  We're talking about developer experience on a range of
>> devices, browsers, and multiple operating systems:
>> Mac/Windows/Linux/ChromeOS/Android/iOS.
>>
>
> We're also talking about early-adopter-type developers who are likely to
> be savvier.
>
>  The spec doesn't say what happens in this situation.
>>>
>>
>> First of all, I repeat that this is not a calling pattern that sensible
>> developers ever use, but what's the worst that can happen?  A jangled
>> audio stream emanating from the speakers?  Yes, this is true, but there
>> are uncountable ways that any API, including this one, can be misused to
>> create a mess of sound.
>>
>
> The worst that can happen is that code that usually works suddenly starts
> failing due to browser changes that should be perfectly innocuous.
>
>
>>
>>>  That's probably because with the Webkit/Blink implementation, as I
>>> understand it, almost anything can happen. On a fast-enough machine, most
>>> of the time, the sound will probably play as expected. On a slow machine,
>>> or if you hit a GC or a page fault, or if the memory subsystem is
>>> particularly lazy, you could get zeroes interspersed with the data you
>>> wanted. The unpredictability of this code (especially given it "usually
>>> works"), is a big problem.
>>>
>>
>> We're talking about real-time systems here.  Performance issues can come
>> up already with the ScriptProcessorNode and the Mozilla audio data API.
>> Depending on how fast the machine is and what other activities it's doing
>> (GC, etc.), there can be gaps, stutters, and glitches with small buffer
>> sizes.  Additionally, if you mix in setTimeout with a ScriptProcessorNode
>> or the audio data API, you can get all kinds of raciness and jankiness
>> with regard to the timing of musical events.  I consider that a misuse of
>> the APIs and a bad way to write audio code, but there's nothing stopping
>> developers from mixing these APIs together and creating these kinds of
>> messes.
>>
>
> I agree, and we shouldn't be creating new footguns.
>
> In their defense, ScriptProcessorNode and Audio Data were designed (in
> part) to satisfy the requirement that audio samples are generated on the
> main thread and must be played somehow --- e.g. Grant Galitz's Gamecube
> emulator. With that requirement you're pretty much stuck with the
> possibility of performance-induced main-thread glitching. When that
> requirement is not present, we shouldn't be using it as an excuse to
> introduce unnecessary failure modes.
>
> Also note that Gecko's ScriptProcessorNode adaptively increases latency to
> avoid glitching, so in the steady state (if there is one), we don't glitch.
>
> In the end, all the way down at the driver level, which all browsers must
>> talk to, a continuous audio stream is supposed to be delivered to the
>> hardware.  But there are potentially racy things that can happen here,
>> because we're dealing with producer/consumer models with DMA engines and
>> ring or double buffers, with client code feeding the buffer that the DMA
>> engine is right on the cutting edge of consuming.  Yes, glitches can
>> happen here too, and they can vary depending on system stress, memory
>> paging activity, etc.
>>
>
> Those are all quality-of-implementation issues, which are under the
> control of the browser and can be improved over time since an ideal
> behavior can be defined. (In Gecko, roughly, we take a snapshot of the Web
> Audio DOM state at each HTML5 stable state, and the ideal rendering is a
> function of the sequence of those snapshots with their timestamps.) With
> the racy-buffers problem, the ideal behavior has no workable definition. If
> you disagree, please try defining it.
>
> Like I said earlier, I do feel we're at an impasse. I feel like I'm
> wasting my time here, and probably yours too --- sorry about that. If
> someone else wants to carry on this conversation, go ahead.
>
> Rob
>
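(The snapshot model Robert describes above, where rendering is a function of the sequence of timestamped snapshots, can be sketched roughly as follows. This is an illustrative simulation in Node, not Gecko's actual code; all names here are invented for the example, and a shallow copy stands in for the real deep snapshot of graph state.)

```javascript
// Hypothetical sketch of rendering as a pure function of timestamped
// snapshots of graph state. A shallow copy suffices for this flat
// example; a real implementation would deep-copy the relevant state.

function snapshot(graphState, timestamp) {
  // Capture the mutable state at an HTML5 "stable state" so later
  // main-thread mutations cannot affect rendering retroactively.
  return { state: { ...graphState }, timestamp };
}

function render(snapshots) {
  // The ideal output is a deterministic function of the snapshot
  // sequence: here, just report each gain value in timestamp order.
  return [...snapshots]
    .sort((a, b) => a.timestamp - b.timestamp)
    .map((s) => s.state.gain);
}

const graph = { gain: 0.5 };
const snaps = [snapshot(graph, 0)];
graph.gain = 1.0;   // main-thread mutation after the first snapshot
snaps.push(snapshot(graph, 1));
graph.gain = 0.25;  // this write lands in no snapshot yet

console.log(render(snaps)); // [0.5, 1] -- later writes don't leak back
```

The point of the model is that an ideal rendering is well defined: any two runs over the same snapshot sequence produce the same output, which is exactly the property the racy shared-buffer semantics cannot offer.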

Received on Friday, 5 July 2013 20:19:56 UTC