- From: Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>
- Date: Wed, 13 Jul 2011 15:33:32 +0200
- To: "Timothy B. Terriberry" <tterriberry@mozilla.com>
- CC: "public-webrtc@w3.org" <public-webrtc@w3.org>
>> could help the codec perform at its optimum). And this set could be
>> irrelevant for a new generation of codecs. "audio" vs "voip" is just one
>> example, and it is specific for one codec. I think the general trend also is
>
> On the contrary, things like AGC and noise suppression are independent
> of _any_ codec (at least they are in the WebRTC stack Google
> open-sourced). Opus implements a few more things internally, but there's
> no reason in principle why those things couldn't be done outside the
> codec as well. The point is that this switch is the difference between,
> "Please actively distort the input I give you to make it sound
> 'better'," vs. "Please preserve the original input as closely as
> possible," and that semantic has little to do with the actual codec.

I still think we should not go in this direction - at least not initially.
Let's add it later if there is a clear need. More and more can be done by
analysing the input signal (e.g. determining whether it is speech or music),
so perhaps there will be no need for API support.

Stefan
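[Editorial note, not part of the original thread: the "audio" vs "voip" hint discussed above corresponds to a codec-specific knob in the Opus encoder API, where an application mode is chosen at encoder creation. A minimal C sketch of that knob, for reference only:]

```c
/* Minimal sketch: the "voip" vs "audio" distinction as exposed by the
 * Opus encoder API (codec-specific, chosen at encoder creation). */
#include <opus/opus.h>
#include <stdio.h>

int main(void) {
    int err;
    /* OPUS_APPLICATION_VOIP favours speech intelligibility;
     * OPUS_APPLICATION_AUDIO favours faithful reproduction of the input. */
    OpusEncoder *enc = opus_encoder_create(48000, 1, OPUS_APPLICATION_VOIP, &err);
    if (err != OPUS_OK) {
        fprintf(stderr, "opus_encoder_create failed: %s\n", opus_strerror(err));
        return 1;
    }
    /* The encoder can also be told, or left to detect on its own,
     * whether the input signal is speech or music. */
    opus_encoder_ctl(enc, OPUS_SET_SIGNAL(OPUS_SIGNAL_VOICE));
    opus_encoder_destroy(enc);
    return 0;
}
```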
Received on Wednesday, 13 July 2011 13:34:05 UTC