Re: active speaker information in mixed streams

I guess it could continue on both lists. The ORTC CG might be quicker
to integrate something into the API than the WebRTC WG.

My question is the same: exactly what info do you want available in
the JS?  The CSRCs?
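
For concreteness, here is one very rough sketch of what that could
look like on the receiving side. To be clear, the
getContributingSources() name and the shape of the returned objects
below are made up purely for illustration, not a proposal for the
actual API surface:

    // Hypothetical receiver-side accessor -- the name
    // getContributingSources() and the returned object shape are
    // invented here, just to show what "CSRCs plus RFC 6465 levels
    // in JS" could look like.
    interface ContributingSourceInfo {
      csrc: number;         // 32-bit CSRC from the RTP header
      audioLevel?: number;  // 0..127 (-dBov), if the mixer sent RFC 6465
    }

    declare const receiver: {
      getContributingSources(): ContributingSourceInfo[];
    };

    // App code: poll for the loudest contributor in the mix.
    setInterval(() => {
      const sources = receiver.getContributingSources();
      const speaking = sources
        .filter(s => s.audioLevel !== undefined)
        // lower -dBov means louder, so sort ascending
        .sort((a, b) => a.audioLevel! - b.audioLevel!);
      if (speaking.length > 0) {
        console.log('active speaker CSRC:', speaking[0].csrc);
      }
    }, 500);

(Polling is only for illustration; an event would obviously be nicer.)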

On Tue, Jan 28, 2014 at 2:38 PM, Emil Ivov <emcho@jitsi.org> wrote:
> I am not sure whether this discussion should only continue on one of
> the lists, but until we figure that out I am going to answer here as
> well.
>
> Sync isn't really the issue here. It's mostly about the fact that the
> mixer is not a WebRTC entity: it most likely doesn't even know what
> SCTP is, it doesn't necessarily have access to signalling, and, above
> all, the mix is likely to also contain audio from non-WebRTC
> endpoints. Using DataChannels in such situations would likely turn
> out to be quite convoluted.
>
> Emil
>
> On Tue, Jan 28, 2014 at 10:18 PM, Peter Thatcher <pthatcher@google.com> wrote:
>> Over there, I suggested that you could simply send the audio levels
>> over an unordered data channel.  If you're using one
>> IceTransport/DtlsTransport pair for both your RTP and SCTP, it would
>> probably stay very closely in sync.
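
For what it's worth, the receiving end of that is tiny -- roughly the
sketch below (using the 1.0-style API just for brevity; the
"audio-levels" channel label and the {csrc, level} JSON message format
are made up, and the mixer would push one message per CSRC it is
currently mixing):

    // Rough sketch of the receiving client.  Negotiation of the
    // connection itself is omitted.
    const pc = new RTCPeerConnection();
    const levels = pc.createDataChannel('audio-levels', {
      ordered: false,
      maxRetransmits: 0,  // lossy is fine -- stale levels are useless anyway
    });
    levels.onmessage = (e) => {
      const { csrc, level } = JSON.parse(e.data);  // level is 0..127 (-dBov)
      console.log('contributor', csrc, 'level', level);
    };

But I take Emil's point below that the mixer side may not be a WebRTC
entity at all, which is where this gets convoluted.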
>>
>> On Tue, Jan 28, 2014 at 5:44 AM, Emil Ivov <emcho@jitsi.org> wrote:
>>> Hey all,
>>>
>>> I just posted this to the WebRTC list here:
>>>
>>> http://lists.w3.org/Archives/Public/public-webrtc/2014Jan/0256.html
>>>
>>> But I believe it's a question that is also very much worth resolving
>>> for ORTC, so I am also asking it here:
>>>
>>> One requirement that we often bump against is the ability to
>>> extract active speaker information from an incoming *mixed* audio
>>> stream. Acquiring the CSRC list from RTP would be a good start.
>>> Audio levels per RFC 6465 would be even better.
>>>
>>> Thoughts?
>>>
>>> Emil
>
> --
> https://jitsi.org
