webrtc telco

Dear Audio WG,

In an earlier chat between the webrtc and the Audio WG chairs, it was 
decided that the Audio WG should be invited to the next telco of the 
webrtc WG. That telco will take place next week (Wednesday). Details are 
available here: 
<http://lists.w3.org/Archives/Public/public-webrtc/2011Sep/0099.html>.

To give some background:

In webrtc/rtcweb we have the following use cases and requirements 
document: 
<http://datatracker.ietf.org/doc/draft-ietf-rtcweb-use-cases-and-requirements/?include_text=1>.

I think the use cases

  4.2.7.  Multiparty video communication
  4.2.8.  Multiparty on-line game with voice communication

are most relevant to the Audio WG. These requirements are derived from them:

    F13             The browser MUST be able to pan, mix and render
                    several concurrent audio streams.
    ----------------------------------------------------------------
    F15             The browser MUST be able to process and mix
                    sound objects (media that is retrieved from another
                    source than the established media stream(s) with the
                    peer(s)) with audio streams.

There are API requirements as well:
    A14             The Web API MUST provide means for the web
                    application to control panning, mixing and
                    other processing for streams.
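
To make A14 a bit more concrete, here is a rough sketch of what 
per-stream pan/mix control from a web application could look like, 
written in the style of the Web Audio API. Every name used below 
(AudioContext, createMediaStreamSource, createStereoPanner, createGain) 
is an illustrative assumption, not something the webrtc proposal defines:

    // Sketch only: per-stream pan and mix control from a web application.
    const ctx = new AudioContext();

    // One panner and one gain per incoming stream, all summed at the
    // context destination (the local speakers).
    function addStream(stream: MediaStream, pan: number, level: number) {
      const source = ctx.createMediaStreamSource(stream); // remote audio in
      const panner = ctx.createStereoPanner();
      panner.pan.value = pan;               // -1 = full left, +1 = full right
      const gain = ctx.createGain();
      gain.gain.value = level;              // per-stream volume
      source.connect(panner);
      panner.connect(gain);
      gain.connect(ctx.destination);
      return { panner, gain };              // handles for later adjustment
    }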

We're also about to add a requirement on determining the level/activity 
in audio streams (useful for active-speaker indication, level correction, 
and detecting noise sources).
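
Purely as an illustration, such a measurement could look roughly like 
the sketch below in web application code, based on an AnalyserNode and 
an RMS estimate; the node names and the activity threshold are 
assumptions for the example, not part of any draft:

    // Sketch only: crude level/activity measurement on an audio stream.
    const ctx = new AudioContext();

    function watchLevel(stream: MediaStream, onActive: (rms: number) => void) {
      const source = ctx.createMediaStreamSource(stream);
      const analyser = ctx.createAnalyser();
      analyser.fftSize = 2048;              // ~46 ms window at 44.1 kHz
      source.connect(analyser);             // tap only, no audible output

      const buf = new Float32Array(analyser.fftSize);
      setInterval(() => {
        analyser.getFloatTimeDomainData(buf);
        let sum = 0;
        for (const s of buf) sum += s * s;
        const rms = Math.sqrt(sum / buf.length);
        if (rms > 0.02) onActive(rms);      // "this stream is active"
      }, 100);
    }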

To me this sounds a lot like Audio WG territory!

In the current API proposal 
(<http://dev.w3.org/2011/webrtc/editor/webrtc.html>) there is something 
called "MediaStream", and if nothing changes this is the kind of object 
to which we would like to be able to apply mixing, spatialization, and 
level/activity measurement/setting.
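
To illustrate what applying processing to such an object could mean in 
practice, here is a rough sketch that spatializes a peer's audio, mixes 
in a sound object fetched from elsewhere (in the spirit of F15), and 
produces a processed MediaStream. All API names are assumptions in the 
style of the Web Audio work, not part of the webrtc proposal:

    // Sketch only: MediaStream in, processed MediaStream out.
    const ctx = new AudioContext();

    async function spatializeAndMix(peerStream: MediaStream, soundUrl: string) {
      // Peer audio, placed in the 3-D scene relative to the listener.
      const peer = ctx.createMediaStreamSource(peerStream);
      const panner = ctx.createPanner();
      panner.setPosition(2, 0, -1);         // hypothetical position

      // "Sound object": media retrieved from another source than the
      // established media stream(s) with the peer(s).
      const data = await (await fetch(soundUrl)).arrayBuffer();
      const effect = ctx.createBufferSource();
      effect.buffer = await ctx.decodeAudioData(data);

      // Sum both into one processed stream that could be rendered locally
      // or handed back to the peer connection.
      const out = ctx.createMediaStreamDestination();
      peer.connect(panner);
      panner.connect(out);
      effect.connect(out);
      effect.start();
      return out.stream;                    // a processed MediaStream
    }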



Stefan (for Harald and Stefan, chairs of the webrtc WG)
