
Re: active speaker information in mixed streams

From: Emil Ivov <emcho@jitsi.org>
Date: Wed, 12 Feb 2014 23:34:19 +0100
Message-ID: <CAPvvaaKL6WAp+BiOJd3+WQc9SFScZdvx9=F7NgRtGddVLrAzcQ@mail.gmail.com>
To: Bernard Aboba <Bernard.Aboba@microsoft.com>
Cc: Tim Panton new <thp@westhawk.co.uk>, Harald Alvestrand <hta@google.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>
On Wed, Feb 12, 2014 at 6:31 PM, Emil Ivov <emcho@jitsi.org> wrote:
> On Wed, Feb 12, 2014 at 6:24 PM, Bernard Aboba
> <Bernard.Aboba@microsoft.com> wrote:
>> [BA] That is my take, at least for "dominant speaker" identification.   To my mind, CSRCs and averaged levels are only useful for indicating which sources are providing sound (or noise, as the case may be).
>>
>> If the goal is to enable switching video to the dominant speaker, then you actually need to figure out who is speaking (as opposed to typing on their keyboard, having their dog bark, etc.).  The web audio API is much better suited for that.
>
> The web audio API would be great if you actually have access to the
> individual audio streams. This is not the case when the browser is
> only getting a single, mixed audio stream. CSRC audio levels are the
> only option one has there.
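To make the mixed-stream case concrete, here is a minimal sketch of decoding the mixer-to-client audio level header extension from RFC 6465. It assumes one byte per contributing source, in the same order as the CSRC list of the RTP header, with the low 7 bits carrying the level as -dBov (0 is loudest, 127 is the noise floor); the function name and shape are my own, not any proposed API.

```javascript
// Hedged sketch of RFC 6465 mixer-to-client audio level decoding.
// Assumption: extensionBytes holds one level byte per CSRC, in CSRC
// list order; the low 7 bits are the level in -dBov.
function parseMixedAudioLevels(csrcs, extensionBytes) {
  const levels = new Map();
  csrcs.forEach((csrc, i) => {
    if (i < extensionBytes.length) {
      levels.set(csrc, extensionBytes[i] & 0x7f); // level in -dBov
    }
  });
  return levels;
}

// Example: two contributing sources reported by the mixer.
const levels = parseMixedAudioLevels(
  [0x1234abcd, 0x5678ef01],
  Uint8Array.from([12, 90])
);
// levels.get(0x1234abcd) === 12  (closer to 0 dBov, i.e. louder)
```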

Another thought on this: if the consensus is that 1.0 is too far
along to add CSRC audio levels, then maybe we could at least add
support for CSRCs?

Adding access to those would at least allow mixers to detect dominant
speakers and indicate them to participants.
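As a hedged illustration of what an endpoint could do with those levels once exposed, the sketch below picks a dominant speaker from per-CSRC -dBov values, smoothing with an exponential moving average so a brief noise spike does not steal the floor. The class and its parameters are hypothetical, purely for illustration.

```javascript
// Hypothetical helper: track the dominant speaker from per-CSRC
// audio levels (RFC 6465 semantics: -dBov, 0 loudest, 127 quietest).
class DominantSpeakerTracker {
  constructor(alpha = 0.3) {
    this.alpha = alpha;        // EMA weight for the newest sample
    this.smoothed = new Map(); // csrc -> smoothed level (-dBov)
  }

  // levels: Map of csrc -> level for the latest packet.
  // Returns the CSRC with the smallest smoothed -dBov value.
  update(levels) {
    for (const [csrc, level] of levels) {
      const prev = this.smoothed.get(csrc) ?? level;
      this.smoothed.set(csrc, this.alpha * level + (1 - this.alpha) * prev);
    }
    let dominant = null;
    let best = Infinity;
    for (const [csrc, level] of this.smoothed) {
      if (level < best) {
        best = level;
        dominant = csrc;
      }
    }
    return dominant;
  }
}

const tracker = new DominantSpeakerTracker();
const dominant = tracker.update(new Map([[1, 20], [2, 90]]));
// CSRC 1 has the lower -dBov value (louder), so dominant === 1
```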

Besides, CSRCs are, after all, native to RFC 3550.

Emil
-- 
https://jitsi.org
Received on Wednesday, 12 February 2014 22:35:07 UTC
