- From: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
- Date: Fri, 21 Oct 2011 13:51:51 +0200
- To: Cullen Jennings <fluffy@cisco.com>
- CC: "public-audio@w3.org" <public-audio@w3.org>, "public-webrtc@w3.org" <public-webrtc@w3.org>
On 10/20/2011 02:45 AM, Cullen Jennings wrote:
>
> On Oct 6, 2011, at 1:25 , Stefan Håkansson wrote:
>
>> Dear Audio WG (cc webrtc),
>>
>> in the latest version of the use-cases and requirements document
>> (<http://datatracker.ietf.org/doc/draft-ietf-rtcweb-use-cases-and-requirements/?include_text=1>)
>> for webrtc the requirements on audio processing have been changed:
>>
>> ----------------------------------------------------------------
>>
>> A13 The Web API MUST provide means for the web
>>     application to apply spatialization effects to
>>     audio streams.
>
> I hope this is a requirement on some API other than the one WebRTC
> is doing.
>
>> ----------------------------------------------------------------
>>
>> A14 The Web API MUST provide means for the web
>>     application to detect the level in audio
>>     streams.
>
> agree
>
>> ----------------------------------------------------------------
>>
>> A15 The Web API MUST provide means for the web
>>     application to adjust the level in audio
>>     streams.
>
> I'd prefer that the requirement was that it needed to be able to
> tell it to be normalized. What I don't want to see is that the only
> thing the JS gets is a gain control - that will be very hard to use.

I think Randell and Harald responded to your comments on A13 - A15 in
line with my view, so I will not reiterate.

>> ----------------------------------------------------------------
>>
>> A16 The Web API MUST provide means for the web
>>     application to mix audio streams.
>
> Again, hope that is some other API than the one WebRTC is doing. Be
> nice to say more about the scope of this mixing.

Actually, with the latest iteration of the API draft
(http://dev.w3.org/2011/webrtc/editor/webrtc-20111017.html) this is
already supported. The update introduced the possibility to create a
new MediaStream from the tracks of other MediaStreams, so you can
create a new MediaStream containing all the audio tracks that you'd
like to mix.
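As an illustration only (the constructor shape and the `audioTracks`
list follow the editor's draft and should be treated as assumptions,
not the final API), the track-composition approach could be sketched
in script like this:

```javascript
// Sketch: gather the enabled audio tracks of several MediaStreams
// into one array, so a new MediaStream can be built from them and a
// media element can mix them on playback. Indexed access is used so
// this works with a TrackList-style object as well as a plain array.
function collectEnabledAudioTracks(streams) {
  var tracks = [];
  for (var s = 0; s < streams.length; s++) {
    var list = streams[s].audioTracks;
    for (var t = 0; t < list.length; t++) {
      if (list[t].enabled) {
        tracks.push(list[t]);
      }
    }
  }
  return tracks;
}

// In a supporting browser (assumed constructor per the draft):
//   var mixed = new MediaStream(collectEnabledAudioTracks([streamA, streamB]));
//   audioElement.src = URL.createObjectURL(mixed);
```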
When you play a MediaStream using a media element, all the enabled
audio tracks should be mixed (if I read the spec correctly).

>> ----------------------------------------------------------------
>>
>> The term "audio stream" was selected at an early stage; I would say
>> it corresponds to a "Track" in the MediaStream object that is
>> currently in the API draft
>> (<http://dev.w3.org/2011/webrtc/editor/webrtc-20111004.html>).
>>
>> Anyway, feedback on these requirements is welcome (I'm not sure I'm
>> using good wording).
>>
>> A14 and A15 are in the use-cases motivated by the need to equalize
>> levels between audio streams (Tracks) coming from different
>> participants in a multiparty session. But I can see other uses of
>> A14: display the level in a meter locally to calibrate mic settings
>> before a session, detect silence, detect the noise-generating party
>> in a multiparty session, etc.
>>
>> As said, feedback would be appreciated.
>>
>> Stefan
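As a minimal, API-neutral sketch of the A14 level-meter use case
mentioned above: detecting the level of an audio stream typically
reduces to an RMS computation over short blocks of samples. This
assumes PCM samples normalized to [-1, 1] and involves no WebRTC API:

```javascript
// Compute the RMS level of one block of samples, in dBFS.
// 0 dB corresponds to a constant full-scale signal; an all-silent
// block returns -Infinity (guarding against log(0)).
function rmsLevelDb(samples) {
  var sumSquares = 0;
  for (var i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
  }
  var rms = Math.sqrt(sumSquares / samples.length);
  return rms > 0 ? 20 * Math.log10(rms) : -Infinity;
}
```

A meter UI would call this per block and feed the result to a display;
comparing the value against a threshold gives the silence-detection
use case.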
Received on Friday, 21 October 2011 11:52:17 UTC