Re: Proposal for fixing race conditions

I agree with Robert that those are real concerns. The current design of 
AudioBuffers:

1) Introduces unpredictability to the Web platform in ways that are 
unacceptable.
2) Makes the API hard to implement/optimize on certain architectures or 
using certain techniques.

As far as I have seen on this list, there is strong support for this 
standpoint from several individuals representing different vendors.

The only real concern I have seen so far for *not* fixing the API is 
performance. IMO that is of lower priority (e.g. compare it to not using 
atomic operations in multi-threaded C++ applications for the sake of 
speed), but of course it is still a valid concern.

In other words, I think that we *need* to find a solution to the racy 
parts of the API, but we have to make sure that:

* The solution imposes minimal performance regressions for critical use 
cases.
* The solution is easy to understand for a Web developer (which I think 
is NOT the case for the current API). See the sketch after this list 
for one possible direction.
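
To make this concrete, here is a rough sketch (purely illustrative, off 
the top of my head) of how the example from Robert's mail (quoted 
below) could look if AudioBuffer exposed a copying write method, called 
copyToChannel() here as a hypothetical name, instead of handing out a 
live view of its internal storage:

  var audioContext = new AudioContext();
  var audioBuffer =
      audioContext.createBuffer(1, 10000, audioContext.sampleRate);

  // Fill a plain Float32Array that only the main thread can see...
  var samples = new Float32Array(audioBuffer.length);
  for (var i = 0; i < samples.length; ++i) {
    samples[i] = Math.sin(2 * Math.PI * 440 * i / audioContext.sampleRate);
  }

  // ...then hand the data over in one well-defined step. The
  // implementation is free to copy (or transfer) the data here, so the
  // audio thread can never observe a half-written buffer.
  audioBuffer.copyToChannel(samples, 0);  // hypothetical method

  var audioBufferSourceNode = audioContext.createBufferSource();
  audioBufferSourceNode.buffer = audioBuffer;
  audioBufferSourceNode.connect(audioContext.destination);
  audioBufferSourceNode.start(audioContext.currentTime + 0.1);

The write happens at a single, well-defined point, which I believe is 
also easier for a Web developer to reason about than the current 
getChannelData() semantics.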

Can we agree on this?

/Marcus


2013-07-04 07:09, Robert O'Callahan skrev:
> On Thu, Jul 4, 2013 at 12:14 PM, Chris Rogers <crogers@google.com> wrote:
>
>     For any practical and real-world use cases using the API, there
>     aren't really any problems any more than already exist today with
>     raciness of asynchronous callbacks/event-handlers.  We already
>     live in a world where XHR completions, setInterval/setTimeout,
>     requestAnimationFrame, file/blob async requests, receiving
>     responses from web workers, events from HTMLMediaElement, and many
>     others can all occur in unpredictable orders and at unpredictable
>     times.  Even something like rendering the contents of a video
>     stream from <video> into a canvas and then reading the pixels back
>     is going to involve lots of raciness in terms of exactly what
>     frame you're reading at any given time.
>
>
> The nondeterminism arising from those APIs is limited in carefully 
> specified ways. For example, when drawing a <video> into a canvas, the 
> frame may be chosen nondeterministically, but you never get half of 
> one frame and half of the next, or whatever else happens to be in the 
> framebuffer while you're reading it. Likewise, the ordering of HTML5 
> task dispatch is nondeterministic, but also carefully constrained. In 
> particular, each task executes as an atomic unit and is not 
> interleaved with other tasks.
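>
> Concretely, the read path looks roughly like this (assuming video is a
> playing <video> element):
>
>   var canvas = document.createElement("canvas");
>   canvas.width = video.videoWidth;
>   canvas.height = video.videoHeight;
>   var c2d = canvas.getContext("2d");
>   c2d.drawImage(video, 0, 0);   // snapshots whichever frame is current
>   var pixels = c2d.getImageData(0, 0, canvas.width, canvas.height);
>
> and getImageData() always hands back the pixels of one complete frame;
> *which* frame you get is nondeterministic, but its contents are not.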
>
>     We should discuss very specific real-world use cases because I
>     think we're in pretty good shape here.
>
>
> Suppose someone writes the following code:
>
>   var audioBuffer =
>       audioContext.createBuffer(1, 10000, audioContext.sampleRate);
>   var audioBufferSourceNode = audioContext.createBufferSource();
>   audioBufferSourceNode.buffer = audioBuffer;
>   audioBufferSourceNode.connect(audioContext.destination);
>   audioBufferSourceNode.start(audioContext.currentTime + 0.1);
>   for (var i = 0; i < 10000; ++i) {
>     audioBuffer.getChannelData(0)[i] = ...;
>   }
>
> The spec doesn't say what happens in this situation. That's probably 
> because with the Webkit/Blink implementation, as I understand it, 
> almost anything can happen. On a fast-enough machine, most of the 
> time, the sound will probably play as expected. On a slow machine, or 
> if you hit a GC or a page fault, or if the memory subsystem is 
> particularly lazy, you could get zeroes interspersed with the data you 
> wanted. The unpredictability of this code (especially given that it 
> "usually works") is a big problem.
>
> Now suppose we want to implement a Web browser on a multiprocessor 
> architecture where there is no general-purpose shared memory (only 
> message passing), or general-purpose shared memory is very expensive. 
> Then it is very difficult to implement Web Audio in a way that the 
> above code could ever work. (This is only barely hypothetical, since 
> Mozilla is actually working on a browser with this architecture.) You 
> could say it's OK to break such poorly-written Web applications, but 
> browser development is all about getting poorly written applications 
> to work well.
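>
> (For comparison, workers already give us a pure message-passing model
> on the Web: an ArrayBuffer can be transferred rather than shared.
> Assuming worker is a Worker doing the audio processing, roughly:
>
>   var samples = new Float32Array(10000);
>   for (var i = 0; i < samples.length; ++i) {
>     samples[i] = Math.sin(2 * Math.PI * 440 * i / 44100);
>   }
>   // Transferring (rather than copying) the underlying ArrayBuffer
>   // detaches it on the sending side, so only one side can ever see
>   // the memory and there is nothing left to race on.
>   worker.postMessage(samples, [samples.buffer]);
>
> That kind of explicit hand-off is cheap to implement on such an
> architecture; concurrent writes into a buffer the audio thread is
> already reading are not.)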
>
> Alternatively, suppose you want the Javascript engine to allow 
> ArrayBuffer contents to be moved by a compacting garbage collector. 
> That is very difficult to do while the audio thread has concurrent 
> access to the ArrayBuffer contents, so you'll force the JS engine to 
> support pinning, which is a real pain.
>
> Against issues like these, the arguments for freely-shared memory seem 
> very weak to me. "So that Webkit/Blink can keep running legacy demos 
> with slightly higher performance and a lower-complexity 
> implementation", as far as I can tell.
>
> Perhaps, because these are hypothetical situations, you feel they 
> aren't "real-world use cases" and therefore don't matter. But 
> unfortunately we do have to design Web APIs with an eye to the future. 
> Saying "this design isn't currently causing me any pain, so let's lock 
> it in for eternity" has caused vast problems on the Web.
>
> Rob


-- 
Marcus Geelnard
Technical Lead, Mobile Infrastructure
Opera Software

Received on Thursday, 4 July 2013 08:04:23 UTC