- From: Stefan Håkansson <stefan.lk.hakansson@ericsson.com>
- Date: Thu, 6 Oct 2011 09:25:57 +0200
- To: public-audio@w3.org
- CC: "public-webrtc@w3.org" <public-webrtc@w3.org>
Dear Audio WG (cc webrtc),
In the latest version of the use cases and requirements document
(<http://datatracker.ietf.org/doc/draft-ietf-rtcweb-use-cases-and-requirements/?include_text=1>)
for WebRTC, the requirements on audio processing have been changed:
----------------------------------------------------------------
A13 The Web API MUST provide means for the web
application to apply spatialization effects to
audio streams.
----------------------------------------------------------------
A14 The Web API MUST provide means for the web
application to detect the level in audio
streams.
----------------------------------------------------------------
A15 The Web API MUST provide means for the web
application to adjust the level in audio
streams.
----------------------------------------------------------------
A16 The Web API MUST provide means for the web
application to mix audio streams.
----------------------------------------------------------------
The term "audio stream" was selected at an early stage; I would say it
corresponds a "Track" in the MediaStream object that is currently in the
API draft (<http://dev.w3.org/2011/webrtc/editor/webrtc-20111004.html>).
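To make the discussion concrete, here is a rough sketch (my own
illustration, not part of any draft, and assuming node and method names
roughly as in the current Web Audio API proposals) of how a per-Track
processing chain could cover A13-A16:

  // Purely illustrative: one possible mapping of A13-A16 onto a
  // Web Audio style graph. Assumes the context can take a MediaStream
  // as input via createMediaStreamSource().
  const ctx = new AudioContext();

  function addParticipant(stream: MediaStream, x: number) {
    const source = ctx.createMediaStreamSource(stream);
    const analyser = ctx.createAnalyser(); // A14: level detection tap
    const gain = ctx.createGain();         // A15: level adjustment
    const panner = ctx.createPanner();     // A13: spatialization

    source.connect(analyser);              // measure the incoming level
    source.connect(gain);
    gain.connect(panner);
    panner.setPosition(x, 0, 0);           // place the participant in space
    panner.connect(ctx.destination);       // A16: all chains mix at the destination
    return { analyser, gain };
  }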
Anyway, feedback on these requirements is welcome (I'm not sure the
wording is good).
A14 and A15 are motivated in the use cases by the need to equalize
levels between audio streams (Tracks) coming from different
participants in a multiparty session.
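As a sketch of how A14 and A15 could combine for that use case (again
my own illustration; the RMS target and thresholds are arbitrary picks):

  // Measure each incoming Track's RMS level and drive its gain toward
  // a common target so all participants come out roughly equally loud.
  function rmsLevel(analyser: AnalyserNode): number {
    const buf = new Float32Array(analyser.fftSize);
    analyser.getFloatTimeDomainData(buf);
    let sum = 0;
    for (let i = 0; i < buf.length; i++) sum += buf[i] * buf[i];
    return Math.sqrt(sum / buf.length);
  }

  function equalize(
    chains: { analyser: AnalyserNode; gain: GainNode }[],
    targetRms = 0.1,                 // arbitrary illustrative target
  ) {
    for (const { analyser, gain } of chains) {
      const level = rmsLevel(analyser);
      if (level > 0.001) {           // leave (near-)silent tracks alone
        gain.gain.value = targetRms / level;
      }
    }
  }

  // e.g. re-balance once per second:
  // setInterval(() => equalize(chains), 1000);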
But I can see other uses of A14: displaying the level in a local meter
to calibrate mic settings before a session, detecting silence, spotting
a noise-generating party in a multiparty session, and so on.
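A sketch of the meter/silence cases (illustration only; updateMeter and
flagSilence are hypothetical UI hooks, and the -50 dB threshold is an
arbitrary pick):

  declare function updateMeter(dB: number): void; // hypothetical UI hook
  declare function flagSilence(): void;           // hypothetical callback

  const micCtx = new AudioContext();
  navigator.mediaDevices.getUserMedia({ audio: true }).then((mic) => {
    const analyser = micCtx.createAnalyser();
    micCtx.createMediaStreamSource(mic).connect(analyser);

    setInterval(() => {
      const level = rmsLevel(analyser);  // helper from the sketch above
      const dB = 20 * Math.log10(Math.max(level, 1e-6));
      updateMeter(dB);                   // calibrate mic before a session
      if (dB < -50) flagSilence();       // naive silence detection
    }, 100);
  });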
As said, feedback would be appreciated.
Stefan