Discussion on Audio API in Audio WG (was: Re: Call for API contributions to WEBRTC WG)

On 06/15/2011 05:14 PM, Harald Alvestrand wrote:
> Team,
>
> as stated in our charter, our deliverables say:
>
> The working group will deliver specifications that cover at least the
> following functions, unless they are found to be fully specified
> within other working groups' finished results:
>
> Media Stream Functions - API functions to manipulate media streams for
> interactive real-time communications, connecting various processing
> functions to each other, and to media devices and network connections,
> including media manipulation functions for e.g. allowing to
> synchronize streams.
> Audio Stream Functions - An extension of the Media Stream Functions to
> process audio streams, to enable features such as automatic gain
> control, mute functions and echo cancellation.

There is an active discussion in the Audio WG right now [1] about which document to use as the group's initial input. Two proposals are currently on the table:

1/ The Stream Processing API
  proposed by Robert O'Callahan, Mozilla
  http://hg.mozilla.org/users/rocallahan_mozilla.com/specs/raw-file/tip/StreamProcessing/StreamProcessing.html
It is based on the Stream API as defined by the WHATWG, extended to cover additional requirements:
  http://www.whatwg.org/specs/web-apps/current-work/webrtc.html#stream-api

2/ The Web Audio API
  proposed by Chris Rogers, Google
  http://chromium.googlecode.com/svn/trunk/samples/audio/specification/specification.html#AudioDestinationNode-section
This proposal defines a modular audio processing graph that allows "connecting various processing functions to each other", to quote the charter extract Harald mentioned.
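
To illustrate the "modular graph" idea, here is a small toy model of the pattern the Web Audio API proposal builds on: processing nodes are wired together with connect(), and audio flows through the resulting chain. This is only an illustrative sketch, not the actual API surface from Chris Rogers' specification; all class and function names below are invented for the example.

```javascript
// Toy model of a modular audio processing graph (illustrative only,
// not the real Web Audio API): a source feeds samples into a gain
// stage, which feeds the destination; connect() wires nodes together.

class SourceNode {
  constructor(samples) { this.samples = samples; }
  pull() { return this.samples; }
}

class GainNode {
  constructor(gain) { this.gain = gain; this.input = null; }
  // Pull samples from the upstream node and scale them.
  pull() { return this.input.pull().map(s => s * this.gain); }
}

class DestinationNode {
  constructor() { this.input = null; }
  render() { return this.input.pull(); }
}

// Wire "from" into "to", mirroring the from.connect(to) idiom.
function connect(from, to) { to.input = from; return to; }

const src = new SourceNode([0.1, 0.2, 0.3]);
const gain = new GainNode(2);
const dest = new DestinationNode();
connect(src, gain);
connect(gain, dest);
console.log(dest.render()); // samples doubled: [0.2, 0.4, 0.6]
```

The point of the design is that gain control, mute, echo cancellation and the like each become one node type, and arbitrary chains (or fan-in/fan-out graphs) can be assembled by connecting them.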

Both proposals cover at least some of our needs.

Francois.

[1] see thread starting at: http://lists.w3.org/Archives/Public/public-audio/2011AprJun/0102.html

Received on Friday, 17 June 2011 10:19:49 UTC