Exclusive access to audio hardware

Hi all,

I'd like to bring up a conceptual problem with the WebAudio API as it currently stands.  The WebAudio API generally assumes non-exclusive access to audio hardware.  Any number of AudioContexts can operate simultaneously, along with any number of non-browser audio hardware clients, with the only constraint being CPU speed.

However, there may be platforms which want to adopt the API but either cannot support non-exclusive access to audio hardware (a hardware limitation, say), do not allow non-exclusive access (a software limitation), or allow both exclusive and non-exclusive access.

This introduces at least three edge cases:

Failure to receive access:
A script creates a WebAudio graph, but another client on the system has non-interruptible, exclusive access to the audio hardware. (E.g. on a mobile phone during a call.)
Interruption of access:
A running WebAudio graph has its audio hardware access revoked when a higher priority, exclusive access client requests it. (E.g. a mobile phone receives a phone call while browsing.)
Resumption of access:
A previously exclusive client releases its lock on the audio hardware, and the WebAudio graph can now safely resume. (E.g. the phone call ends.)

As it stands, there is no mechanism to relay information about these states back to script authors.  So I propose the following additions to the spec:

Add the AudioContext.startRendering() method to the spec.
Currently, AudioContext.startRendering() exists in the WebKit implementation, but is intended for offline use.  However, due to the edge cases above, rendering may fail to start, may be interrupted, or may need to be restarted.
It would still be called automatically when the first AudioNode is added to an AudioContext, but that call may fail.
Add a new AudioContext.renderState property to the spec.
The property would have the following possible values:
RENDERING - The AudioContext successfully requested access to the audio hardware and is rendering.
INTERRUPTED - Another audio hardware client has been granted exclusive access to the audio hardware.
IDLE - The client which had been previously granted exclusive access has released the hardware.
The state would default to IDLE.  Calling AudioContext.startRendering() would move the renderState from IDLE to RENDERING, and if the audio hardware was unavailable for any reason, the state would asynchronously move from RENDERING to INTERRUPTED.  Once that exclusive access was finished, the state would asynchronously move from INTERRUPTED to IDLE.
Add a new simple event, onrenderstatechange, which is fired at the AudioContext when its renderState property changes.
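
For illustration, a script author on such a platform might write something like the following (a rough sketch only; I'm assuming string values for renderState here, though numeric constants would work just as well):

    var context = new AudioContext();
    var source = context.createBufferSource();
    source.connect(context.destination);
    // Adding the first node implicitly calls startRendering(), which may fail
    // if another client currently holds exclusive access to the hardware.

    context.onrenderstatechange = function () {
        switch (context.renderState) {
            case "RENDERING":
                // Access to the audio hardware was granted and rendering is underway.
                break;
            case "INTERRUPTED":
                // Another client was granted exclusive access (e.g. an incoming call);
                // pause any UI that implies audio is playing.
                break;
            case "IDLE":
                // The exclusive client released the hardware; try to resume.
                context.startRendering();
                break;
        }
    };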

The end result would be that, for platforms which have universal non-exclusive access to audio hardware, nothing would change.  Rendering would automatically start once nodes were added to an AudioContext.  However, script authors would be able to detect and handle interruptions on other more limited platforms.

Thoughts?

-Jer

Received on Tuesday, 8 May 2012 16:10:53 UTC