Re: webrtc reqs related to the Audio WG

On Oct 6, 2011, at 1:25 , Stefan Håkansson wrote:

> Dear Audio WG (cc webrtc),
> 
> in the latest version of the use-cases - and - requirements document (<http://datatracker.ietf.org/doc/draft-ietf-rtcweb-use-cases-and-requirements/?include_text=1>) for webrtc the requirements on audio processing have been changed:
> 
>   ----------------------------------------------------------------
> 
>   A13             The Web API MUST provide means for the web
>                   application to apply spatialization effects to
>                   audio streams.

I hope this is a requirement on some API other than the one WebRTC is doing. 
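To make concrete the kind of processing A13 implies (and why it belongs to an audio-processing API rather than the WebRTC API), here is a plain-JS sketch of an equal-power stereo pan applied to a mono block of samples. All function names are hypothetical, not from any draft API:

```javascript
// Equal-power panning: pan in [-1 (left), +1 (right)] maps to left/right
// gains so that total signal power stays constant across the pan range.
// Illustrative sketch only; not from any spec.
function panGains(pan) {
  const angle = (pan + 1) * Math.PI / 4; // 0 .. pi/2
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

// Apply the pan to a mono block of float samples in [-1, 1].
function panMono(samples, pan) {
  const { left, right } = panGains(pan);
  return {
    left: samples.map(s => s * left),
    right: samples.map(s => s * right),
  };
}
```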


>   ----------------------------------------------------------------
>   A14             The Web API MUST provide means for the web
>                   application to detect the level in audio
>                   streams.
agree
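For reference, "detecting the level" of a stream typically means something like computing RMS over a block of samples and expressing it in dBFS, which is what a level meter would display. A plain-JS sketch with hypothetical names:

```javascript
// RMS level of a block of PCM float samples in [-1, 1].
// Illustrative sketch only; names are not from any draft API.
function rmsLevel(samples) {
  const sumSq = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSq / samples.length);
}

// Express the RMS level in dBFS; silence maps to -Infinity.
function levelInDb(samples) {
  const rms = rmsLevel(samples);
  return rms > 0 ? 20 * Math.log10(rms) : -Infinity;
}
```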

>   ----------------------------------------------------------------
>   A15             The Web API MUST provide means for the web
>                   application to adjust the level in audio
>                   streams.
I'd prefer the requirement to be that the application can ask for the level to be normalized. What I don't want to see is a situation where the only thing the JS gets is a gain control - that will be very hard to use. 
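To illustrate the difference: a "normalize to a target level" operation hides the measurement/gain loop from the application, whereas a raw gain knob forces the JS to do the measuring itself. A plain-JS sketch of the normalize operation, with hypothetical names:

```javascript
// Normalize a block of float samples so its RMS level matches targetRms.
// This is the kind of operation an application could request directly,
// instead of being handed a raw gain control.
// Illustrative sketch only; names are not from any draft API.
function normalizeTo(samples, targetRms) {
  const rms = Math.sqrt(
    samples.reduce((acc, s) => acc + s * s, 0) / samples.length
  );
  if (rms === 0) return samples.slice(); // silence: nothing to scale
  const gain = targetRms / rms;
  return samples.map(s => s * gain);
}
```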


>   ----------------------------------------------------------------
> 
>   A16             The Web API MUST provide means for the web
>                   application to mix audio streams.

Again, I hope that is some API other than the one WebRTC is doing. It would be nice to say more about the scope of this mixing. 
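For what it's worth, the simplest reading of "mix" is summing aligned samples across streams; the scope question is about everything beyond that (headroom, limiting, per-stream gains, clock alignment). A minimal plain-JS sketch with hard clipping, names hypothetical:

```javascript
// Mix N streams (arrays of float samples in [-1, 1]) by summing aligned
// samples, clamped to [-1, 1]. A real mixer would apply headroom or a
// limiter rather than hard clipping. Illustrative sketch only.
function mix(streams) {
  const len = Math.min(...streams.map(s => s.length));
  const out = new Array(len);
  for (let i = 0; i < len; i++) {
    let sum = 0;
    for (const s of streams) sum += s[i];
    out[i] = Math.max(-1, Math.min(1, sum));
  }
  return out;
}
```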

>   ----------------------------------------------------------------
> 
> The term "audio stream" was selected at an early stage; I would say it corresponds to a "Track" in the MediaStream object that is currently in the API draft (<http://dev.w3.org/2011/webrtc/editor/webrtc-20111004.html>).
> 
> Anyway, feedback on these requirements is welcome (I'm not sure I'm using good wording).
> 
> A14 and A15 are in the use-cases motivated by the need to equalize levels between audio streams (Tracks) coming from different participants in a multiparty session.
> But I can see other uses of A14: display the level in a meter locally to calibrate mic settings before a session, detect silence, detect noise generating party in a multiparty session etc.
> 
> As said,
> feedback would be appreciated.
> 
> Stefan

Received on Thursday, 20 October 2011 00:46:35 UTC