- From: Marcus Geelnard <mage@opera.com>
- Date: Fri, 05 Jul 2013 09:54:05 +0200
- To: Chris Rogers <crogers@google.com>
- CC: Robert O'Callahan <robert@ocallahan.org>, Joseph Berkovitz <joe@noteflight.com>, "public-audio@w3.org" <public-audio@w3.org>, Dave Herman <dherman@ccs.neu.edu>, "Mark S. Miller" <erights@google.com>
- Message-ID: <51D67B9D.5090705@opera.com>
Chris, I still fail to see why we shouldn't try to remove the raciness
of AudioBuffers (I don't think that "it's usually not a problem" is a
strong enough argument). Could you please explain?
As I see it, we're currently developing a Web standard and we have every
opportunity to fix any shortcomings right now.
/Marcus
On 2013-07-04 21:21, Chris Rogers wrote:
>
>
>
> On Wed, Jul 3, 2013 at 10:09 PM, Robert O'Callahan
> <robert@ocallahan.org> wrote:
>
> On Thu, Jul 4, 2013 at 12:14 PM, Chris Rogers
> <crogers@google.com> wrote:
>
> For any practical, real-world use case of the API, there aren't
> really any more problems than already exist today with the
> raciness of asynchronous callbacks/event-handlers.
> We already live in a world where XHR completions,
> setInterval/setTimeout, requestAnimationFrame, file/blob async
> requests, receiving responses from web workers, events from
> HTMLMediaElement, and many others can all occur in
> unpredictable orders and at unpredictable times. Even
> something like rendering the contents of a video stream from
> <video> into a canvas and then reading the pixels back is
> going to involve lots of raciness in terms of exactly what
> frame you're reading at any given time.
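To make that concrete, here is a minimal sketch of the <video>-to-canvas
readback pattern mentioned above (assuming a same-origin video element is
already playing); exactly which frame you read depends on when drawImage()
happens to run:

  // Snapshot whatever frame the <video> element is currently showing.
  var video = document.querySelector('video');
  var canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0);
  // Repeated runs of this code can observe different frames, depending on timing.
  var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;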
>
>
> The nondeterminism arising from those APIs is limited in carefully
> specified ways. For example, when drawing a <video> into a canvas,
> the frame may be chosen nondeterministically, but you never get
> half of one frame and half of the next, or whatever else happens
> to be in the framebuffer while you're reading it. Likewise, the
> ordering of HTML5 task dispatch is nondeterministic, but also
> carefully constrained. In particular, each task executes as an
> atomic unit and is not interleaved with other tasks.
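A small sketch of that constraint (the URL below is just a placeholder): the
two callbacks may fire in either order, but each one runs to completion before
the other can start, so neither ever observes a half-updated value.

  var counter = { low: 0, high: 0 };
  setTimeout(function () {
    counter.low += 1;    // both fields are updated within a single task...
    counter.high += 1;
  }, 0);
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/ping');   // placeholder URL
  xhr.onload = function () {
    // ...so this always logs "0 0" or "1 1", never "1 0".
    console.log(counter.low, counter.high);
  };
  xhr.send();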
>
> We should discuss very specific real-world use cases because I
> think we're in pretty good shape here.
>
>
> Suppose someone writes the following code:
> var audioBuffer =
>     audioContext.createBuffer(1, 10000, audioContext.sampleRate);
> var audioBufferSourceNode = audioContext.createBufferSource();
> audioBufferSourceNode.buffer = audioBuffer;
> audioBufferSourceNode.connect(audioContext.destination);
> audioBufferSourceNode.start(audioContext.currentTime + 0.1);
> // The buffer is filled *after* playback has been scheduled, so the
> // rendering thread may read it before, during, or after this loop.
> for (var i = 0; i < 10000; ++i) {
>     audioBuffer.getChannelData(0)[i] = ...;
> }
>
>
> This is an extremely contrived example, and not even close to
> real-world calling patterns and use cases. With any of the racy APIs
> I mentioned above, there are dozens of ways to write buggy code that
> makes assumptions about the timing and order of async callbacks and
> produces unpredictable behavior.
>
> I would be much more concerned if the example you showed illustrated
> a calling pattern that anybody would actually use in the real world.
>
> In the two years that the Web Audio API has been used by developers
> large and small, the type of issue you're describing has simply never
> come up. We're talking about developer experience on a range of
> devices, browsers, and multiple operating systems:
> Mac/Windows/Linux/ChromeOS/Android/iOS.
>
>
> The spec doesn't say what happens in this situation.
>
>
> First of all, I repeat that this is not a sensible calling pattern
> that developers ever use. But what's the worst that can happen? A
> jangled audio stream emanating from the speakers? Yes, but there are
> countless ways that any API, including this one, can be misused to
> create a mess of sound.
>
> That's probably because with the WebKit/Blink implementation, as I
> understand it, almost anything can happen. On a fast enough
> machine, most of the time, the sound will probably play as
> expected. On a slow machine, or if you hit a GC or a page fault,
> or if the memory subsystem is particularly lazy, you could get
> zeroes interspersed with the data you wanted. The unpredictability
> of this code (especially given that it "usually works") is a big problem.
>
>
> We're talking about real-time systems here. Performance issues can
> already come up with the ScriptProcessorNode and the Mozilla audio
> data API. Depending on how fast the machine is and what else it's
> doing (GC, etc.), there can be gaps, stutters, and glitches with
> small buffer sizes. Additionally, if you mix setTimeout with a
> ScriptProcessorNode or the audio data API, you can get all kinds of
> raciness and jankiness with regard to the timing of musical events.
> I consider that to be a misuse of the APIs and a bad way to write
> audio code, but there's nothing stopping developers from mixing
> these APIs together and creating these kinds of messes.
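A sketch of the kind of mixed calling pattern being described (the code and
timing values here are only illustrative): the "note-on" moment is decided by
a main-thread setTimeout, so it lands wherever the event loop happens to be
rather than at an exact sample position, and the ScriptProcessorNode callback
itself also runs on the main thread and can glitch under load.

  var processor = audioContext.createScriptProcessor(256, 1, 1);
  var noteOn = false;
  processor.onaudioprocess = function (e) {
    var out = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < out.length; ++i) {
      // Cheap noise burst once the "note" has been triggered.
      out[i] = noteOn ? (Math.random() * 2 - 1) * 0.1 : 0;
    }
  };
  processor.connect(audioContext.destination);
  // "Musical" timing driven from the main thread.
  setTimeout(function () { noteOn = true; }, 500);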
>
> Actually, that's why the Web Audio API is mostly based around
> processing in its own thread using native code: to minimize the
> types of issues that occur with the ScriptProcessorNode and the
> Mozilla audio data API. It helps insulate the developer from GC and
> other unpredictable activity on the main JS thread. I think that's a
> strong aspect of its design.
>
> In the end, all the way down at the driver level, which all browsers
> must talk to, a continuous audio stream is supposed to be delivered
> to the hardware. But potentially racy things can happen here too,
> because we're dealing with producer/consumer models with DMA
> engines and ring or double buffers, with client code feeding the
> buffer that the DMA engine is right on the cutting edge of
> consuming. Yes, glitches can happen here as well, and they can vary
> depending on system stress, memory paging activity, etc.
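Schematically, and greatly simplified (the function names below are only
illustrative), that handoff looks like a writer staying just ahead of a reader
in a shared ring buffer; whatever the writer fails to refill in time gets
consumed anyway, typically as silence or stale samples.

  var ring = new Float32Array(4096);
  var writeIndex = 0;
  var readIndex = 0;

  // Client code feeding the buffer.
  function produce(samples) {
    for (var i = 0; i < samples.length; ++i) {
      ring[writeIndex] = samples[i];
      writeIndex = (writeIndex + 1) % ring.length;
    }
  }

  // Hardware-side drain, always running on its own clock.
  function consume(frameCount) {
    var out = new Float32Array(frameCount);
    for (var i = 0; i < frameCount; ++i) {
      out[i] = ring[readIndex];
      ring[readIndex] = 0;   // anything not refilled in time plays as silence
      readIndex = (readIndex + 1) % ring.length;
    }
    return out;
  }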
>
--
Marcus Geelnard
Technical Lead, Mobile Infrastructure
Opera Software
Received on Friday, 5 July 2013 07:55:15 UTC