public-audio@w3.org > July to September 2011

Re: webrtc telco

From: Chris Rogers <crogers@google.com>
Date: Wed, 28 Sep 2011 10:40:43 -0700
Message-ID: <CA+EzO0=BhBjhBiQe83MvJb9AN346RTpZXOvbNqcFsT4Rucw6sw@mail.gmail.com>
To: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
Cc: "public-audio@w3.org" <public-audio@w3.org>

Hi Stefan, good to hear from you.  I've also been hearing a lot of interest
from people about use cases integrating effects into WebRTC.  The Web Audio
API provides exactly these types of effects and can integrate cleanly with
the emerging WebRTC API.  It has rich support for arbitrary processing
graphs with effects such as panning/spatialization, mixing, dynamic range
compression, equalization, etc.  It is my understanding that a MediaStream
can represent both a source and a sink (destination) for audio.  So, it's
natural to consider connecting a source MediaStream into a Web Audio API
processing graph which feeds into a MediaStream "sink".  I believe it would
be as simple as a few lines of JavaScript to connect the source and sink
AudioNode objects from MediaStreams.
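
To make that concrete, here is a minimal sketch of what those "few lines of JavaScript" might look like. This assumes MediaStream bridging nodes along the lines of createMediaStreamSource/createMediaStreamDestination; at the time of writing the exact integration is still under discussion, so the names are illustrative, not final:

```javascript
// Hypothetical sketch: wrap an incoming WebRTC MediaStream as a Web
// Audio source, run it through an effect node, and expose the
// processed audio as a new MediaStream "sink".
function addEffects(context, inputStream) {
  // AudioNode wrapping the incoming MediaStream (assumed API name)
  const source = context.createMediaStreamSource(inputStream);

  // Example effect: a simple gain; any processing node (panner,
  // compressor, equalizer, ...) could go here instead
  const gain = context.createGain();
  gain.gain.value = 0.8;

  // Destination node whose .stream is a new MediaStream that could
  // be handed to the WebRTC side (assumed API name)
  const destination = context.createMediaStreamDestination();

  source.connect(gain);
  gain.connect(destination);
  return destination.stream;
}
```

The key point is that the MediaStream only appears at the edges of the graph; everything in between is ordinary Web Audio routing.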

It's exciting to think of the potential in using these kinds of processing
effects in WebRTC applications, multi-player games, etc.!

Best Regards,
Chris Rogers

On Wed, Sep 28, 2011 at 6:14 AM, Stefan Håkansson LK <
stefan.lk.hakansson@ericsson.com> wrote:

> Dear Audio WG,
> in an earlier chat between the webrtc and the audio chairs, it was decided
> that the audio wg should be invited to the next telco of the webrtc wg. That
> telco will take place next week (Wed). Details are available here: <
> http://lists.w3.org/Archives/Public/public-webrtc/2011Sep/0099.html
> >.
> To give some background:
> In webrtc/rtcweb we have the following use case and requirement document: <
> http://datatracker.ietf.org/doc/draft-ietf-rtcweb-use-cases-and-requirements/?include_text=1
> >.
> I think the use cases
>  4.2.7.  Multiparty video communication
>  4.2.8.  Multiparty on-line game with voice communication
> are most relevant to the Audio WG. These requirements are derived from
> them:
>   F13             The browser MUST be able to pan, mix and render
>                   several concurrent audio streams.
>   ----------------------------------------------------------------
>   F15             The browser MUST be able to process and mix
>                   sound objects (media that is retrieved from another
>                   source than the established media stream(s) with the
>                   peer(s) with audio streams).
> There are API requirements as well:
>   A14             The Web API MUST provide means for the web
>                   application to control panning, mixing and
>                   other processing for streams.
> We're also about to add a requirement on determining the level/activity in
> audio streams (useful for speaker indication, level corrections, detecting
> noise sources).
> To me this sounds a lot like Audio WG territory!
> In the current API proposal (<http://dev.w3.org/2011/webrtc/editor/webrtc.html>)
> there is something called "MediaStream", and if nothing changes this is the
> kind of object that we would like to be able to apply mixing, spatialization,
> level/activity measurement/setting to.
> Stefan (for Harald and Stefan, chairs of the webrtc WG)
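
Requirement F13 quoted above (pan, mix and render several concurrent audio streams) maps naturally onto a Web Audio graph. Below is a hypothetical sketch under the same assumption as before, i.e. that MediaStream source/destination bridging nodes exist; PannerNode's setPosition and the node names here are illustrative:

```javascript
// Hypothetical sketch of F13: give each incoming stream its own gain
// (level control) and panner (spatialization), then mix everything
// into a single output MediaStream.
function panAndMix(context, streams) {
  const destination = context.createMediaStreamDestination();
  streams.forEach((stream, i) => {
    const source = context.createMediaStreamSource(stream);
    const gain = context.createGain();      // per-stream level
    const panner = context.createPanner();  // per-stream spatialization
    // Spread the talkers along the x axis (illustrative positions only)
    panner.setPosition(i - (streams.length - 1) / 2, 0, 0);
    source.connect(gain);
    gain.connect(panner);
    panner.connect(destination);
  });
  return destination.stream;
}
```

All branches converge on one destination node, which performs the mix; the resulting stream could then be rendered locally or sent to a peer.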
Received on Wednesday, 28 September 2011 17:41:12 UTC
