W3C home > Mailing lists > Public > public-audio@w3.org > April to June 2012

Re: Exclusive access to audio hardware

From: Jer Noble <jer.noble@apple.com>
Date: Tue, 08 May 2012 14:21:28 -0700
Cc: "public-audio@w3.org" <public-audio@w3.org>
Message-id: <C48D1B79-A298-4BF7-837F-9C12AEF21CB5@apple.com>
To: Chris Rogers <crogers@google.com>

On May 8, 2012, at 11:31 AM, Chris Rogers <crogers@google.com> wrote:

> Hi Jer, I'm not opposed to something like the proposed .renderState attribute and the event for state changes. I'm still trying to understand if we need startRendering() or not. I understand that in some cases when a context is first created it may not be able to start playing, at least right away, because the device might be in the middle of a phone call, for example. I'm just wondering if there are any alternatives to a startRendering() method? Can't the developer query the .renderState attribute early on to know whether rendering will be able to start just after the AudioContext has been created? Maybe the startRendering() method is the best choice. I'm just trying to get a better understanding of why it's needed.

The purpose would be to give the developer more control over when to restart audio processing. Take, for example, a web-based game: during an interruption, the script may want to pause the game (if audio is important to gameplay). It seems to be a common convention that, in this situation, the game waits for user interaction before restarting. Rather than explicitly tearing down the node graph during the interruption and rebuilding it after resuming, it would be much easier to leave the graph in place and resume by calling startRendering().
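The pause-on-interrupt, resume-on-user-input pattern described above can be sketched as follows. Note that .renderState, the state-change event, and startRendering() are only *proposed* additions discussed in this thread, so the MockAudioContext below is a stand-in illustrating the intended lifecycle, not any shipping API; the state names and the interrupt()/endInterruption() helpers are likewise assumptions made for the sketch.

```javascript
// Mock of the *proposed* AudioContext additions from this thread:
// a .renderState attribute, a state-change event, and startRendering().
// This is a sketch of the proposal, not a real implementation.
class MockAudioContext {
  constructor() {
    this.renderState = "running";        // assumed initial state
    this.onrenderstatechange = null;     // assumed event hook
  }
  _setState(state) {
    this.renderState = state;
    if (this.onrenderstatechange) this.onrenderstatechange();
  }
  interrupt() { this._setState("interrupted"); }  // e.g. an incoming phone call
  endInterruption() { this._setState("idle"); }   // call over; not yet rendering
  startRendering() {                              // resume; node graph left intact
    if (this.renderState !== "running") this._setState("running");
  }
}

// Game-style usage: pause on interruption, wait for user input to resume.
const context = new MockAudioContext();
let gamePaused = false;

context.onrenderstatechange = () => {
  if (context.renderState === "interrupted") gamePaused = true; // pause the game
};

function onUserTapsResume() {
  // The node graph was never torn down; just restart rendering.
  context.startRendering();
  gamePaused = false;
}

context.interrupt();        // phone call begins: game pauses itself
context.endInterruption();  // call ends: game stays paused until the user acts
onUserTapsResume();         // user chooses to resume
```

The point of the sketch is the last three lines: the end of the interruption and the restart of rendering are decoupled, which is exactly what a query-only .renderState attribute without startRendering() could not express.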


Received on Tuesday, 8 May 2012 21:22:04 UTC