- From: <bugzilla@jessica.w3.org>
- Date: Mon, 23 Jul 2012 15:47:05 +0000
- To: public-audio@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17695

Olivier Thereaux <olivier.thereaux@bbc.co.uk> changed:

           What      |Removed                   |Added
----------------------------------------------------------------------------
           Status    |NEW                       |ASSIGNED
           AssignedTo|dave.null@w3.org          |olivier.thereaux@bbc.co.uk

--- Comment #1 from Olivier Thereaux <olivier.thereaux@bbc.co.uk> 2012-07-23 15:47:05 UTC ---

I think the underlying requirement is to be able to apply any kind of processing to several sources at the same time; designating sources, nodes, channels, or tracks as related is really just one way to fulfil such a requirement. The Web Audio API, by virtue of being graph-based, does this naturally: you can mix several sources into a group and apply any processing (including gain control) to just that group.

Mentioning this requirement with accessibility in mind would fit our Use Case 4 - Radio Broadcast very well. A typical complaint about radio (or indeed TV) sound mixes is that one source of sound (say, background music and sound effects) is too loud and drowns out another (say, dialogue). Giving listeners control over the mix (bombastic vs. subdued music and effects) would be an interesting application of a web-based audio processing API.
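A minimal sketch of the grouping described above, using current Web Audio API names and assuming `musicBuffer` and `dialogueBuffer` are AudioBuffers decoded elsewhere: each group of sources feeds its own GainNode, so a single gain parameter scales the whole group relative to the rest of the mix.

```ts
// Assumed to be decoded elsewhere (e.g. via ctx.decodeAudioData);
// declared here only so the sketch is self-contained.
declare const musicBuffer: AudioBuffer;
declare const dialogueBuffer: AudioBuffer;

const ctx = new AudioContext();

// One GainNode per group acts as a sub-mix bus.
const musicBus = ctx.createGain();
const dialogueBus = ctx.createGain();
musicBus.connect(ctx.destination);
dialogueBus.connect(ctx.destination);

// Route each source into its group's bus.
const music = ctx.createBufferSource();
music.buffer = musicBuffer;
music.connect(musicBus);

const dialogue = ctx.createBufferSource();
dialogue.buffer = dialogueBuffer;
dialogue.connect(dialogueBus);

// A single parameter now controls the whole "music and effects" group,
// e.g. letting a listener favour dialogue over background music.
musicBus.gain.value = 0.3;    // subdue music and effects
dialogueBus.gain.value = 1.0; // keep dialogue at full level

music.start();
dialogue.start();
```

The same pattern extends to any processing, not just gain: insert another node between a group's bus and the destination and it applies to every source in that group only.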
Received on Monday, 23 July 2012 15:47:16 UTC