Re: Exclusive access to audio hardware

On May 8, 2012, at 11:31 AM, Chris Rogers <crogers@google.com> wrote:

> Hi Jer, I'm not opposed to something like the proposed .renderState attribute and the event for state changes. I'm still trying to understand whether we need startRendering() or not. I understand that in some cases when a context is first created it may not be able to start playing, at least right away, because the device might be in the middle of a phone call, for example. I'm just wondering if there are any alternatives to a startRendering() method. Can't the developer query the .renderState attribute early on to know whether rendering will be able to start just after the AudioContext has been created? Maybe the startRendering() method is the best choice; I'm just trying to get a better understanding of why it's needed.

The purpose would be to give the developer more control over when to restart audio processing. Take, for example, a web-based game: during an interruption, the script may want to pause the game (if the audio is important to gameplay). It seems to be a common convention that, in this situation, the game will wait for user interaction before restarting. Rather than explicitly tearing down the node graph during the interruption and rebuilding it after resuming, it would be much easier to leave the graph in place and resume by calling startRendering(). A rough sketch of that pattern follows.
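For illustration only, here is how that might look from script, assuming the proposed .renderState attribute, a state-change event (spelled onrenderstatechange here), and a startRendering() method. The event name, the state value "interrupted", and the game/resumeButton objects are all hypothetical, not part of any current spec:

    // Build the graph once; leave it intact across interruptions.
    var context = new webkitAudioContext();
    var source = context.createBufferSource();
    source.connect(context.destination);

    context.onrenderstatechange = function () {
        // Hypothetical state value: rendering was interrupted,
        // e.g. by an incoming phone call.
        if (context.renderState == "interrupted")
            game.pause();
    };

    resumeButton.addEventListener("click", function () {
        // Resume the existing graph on explicit user interaction,
        // without tearing it down and rebuilding it.
        context.startRendering();
        game.resume();
    }, false);

The point of the sketch is the last handler: the graph built at the top is never torn down, and a single startRendering() call restarts processing after the user acts.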

-Jer

Received on Tuesday, 8 May 2012 21:22:04 UTC