- From: Emil Ivov <emcho@jitsi.org>
- Date: Tue, 28 Jan 2014 18:37:40 +0100
- To: Gustavo Garcia <ggb@tokbox.com>
- Cc: Iñaki Baz Castillo <ibc@aliax.net>, public-orca <public-orca@w3.org>
On Tue, Jan 28, 2014 at 6:31 PM, Gustavo Garcia <ggb@tokbox.com> wrote:
> 1) The audio level is sent by Chrome in the corresponding RTP
> extension header, but AFAIK that information is not used by the
> browser receiving it.

Yes. Those come from RFC 6464 and they are primarily meant for audio
mixers ... mostly to spare them the need to decode silence.

> 2) You have access to the audio level of the tracks received with the
> getStats API (in Chrome)

Just to make it clear: my original question was about mixed streams.
That is, streams where all contributors would be rendered within the
same track.

Emil

> On Tue, Jan 28, 2014 at 5:51 AM, Iñaki Baz Castillo <ibc@aliax.net> wrote:
>> 2014-01-28 Emil Ivov <emcho@jitsi.org>:
>>> One requirement that we often bump against is the possibility to
>>> extract active speaker information from an incoming *mixed* audio
>>> stream. Acquiring the CSRC list from RTP would be a good start. Audio
>>> levels as per RFC 6465 would be even better.
>>
>> Question (related):
>>
>> In case of a bridge server that does not mix audio channels but just
>> relays them as separate tracks, how can the WebRTC client/browser know
>> about the activity of each received audio track?
>>
>>
>> --
>> Iñaki Baz Castillo
>> <ibc@aliax.net>

--
https://jitsi.org
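[Editor's note: the RFC 6465 extension discussed above carries one byte per
contributor, in CSRC-list order: the high bit is the voice-activity (V) flag
and the low 7 bits are the level in -dBov (0 = loudest, 127 = silence). A
minimal decoding sketch, not tied to any particular browser or stack:]

```python
def parse_audio_levels(ext_payload: bytes) -> list[tuple[bool, int]]:
    """Decode an RFC 6465 mixer-to-client audio-level extension payload.

    Each byte describes one contributor (matching the RTP CSRC list):
      - bit 7: voice-activity flag (V)
      - bits 0-6: audio level in -dBov, 0 (loudest) .. 127 (silence)
    Returns a (voice_active, level_dbov) tuple per contributor.
    """
    return [(bool(b & 0x80), b & 0x7F) for b in ext_payload]

# Two contributors: an active speaker at -20 dBov and a silent one.
levels = parse_audio_levels(bytes([0x80 | 20, 127]))
# → [(True, 20), (False, 127)]
```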
Received on Tuesday, 28 January 2014 17:38:27 UTC