Re: active speaker information in mixed streams

On Thu, Feb 13, 2014 at 3:59 PM, Tim Panton new <thp@westhawk.co.uk> wrote:
>
> On 13 Feb 2014, at 14:43, Iñaki Baz Castillo <ibc@aliax.net> wrote:
>
>> 2014-02-13 15:38 GMT+01:00 Emil Ivov <emcho@jitsi.org>:
>>>> Could someone explain how the WebAudio API could be aware of RTP fields?
>>>
>>> I don't believe anyone is suggesting that CSRCs be surfaced through
>>> the WebAudio API. My understanding is that the current discussion is
>>> about whether or not such details could be bubbled through the WebRTC
>>> API (1.0).
>>
>> Right, the subject of this thread is "active speaker information in
>> mixed streams", so clearly we need to be able to inspect CSRC values
>> on the client side. That can only be achieved through the WebRTC API
>> (the WebAudio API is totally useless for this purpose).
>>
>
> You SIP guys are so funny: you insist that the benefit of SIP is that it decouples
> signalling from media, then you go on adding more and more "media-meta-data" to the
> media channel until it looks like a signalling channel.
> Sigh.

You mean, kind of like the WebRTC guys who decided that WebRTC would
be a signalling-agnostic solution, but then went on to mandate the use
of SDP not only for codec and transport negotiation but even for
stream management. Yeah, these things happen :).

> The 'correct' solution to this is to re-engineer your mixer so it sends active speaker info
> over the data channel,

Ah! I have been looking for the spec that describes 'correct' solutions
for years now. Now that you've apparently found it, could you please
point me to it? :)

More seriously, there's already a significant amount of metadata
travelling in RTP and RTP payloads. I don't see us arguing about every
aspect of it, and that's simply because we've agreed that we are using
a standard protocol. CSRCs are a native part of that protocol.
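
To illustrate just how native they are: the CSRC list sits in the fixed
RTP header defined by RFC 3550, so a mixer that populates it is not
inventing a side channel, and a receiver can read it without SDP ever
getting involved. A minimal sketch, in TypeScript over raw packet bytes
(purely illustrative):

// Read the CSRC list straight out of an RTP packet (RFC 3550, section 5.1).
// The CC field (low 4 bits of the first byte) says how many 32-bit CSRC
// entries follow the 12-byte fixed header.
function parseCsrcs(packet: Uint8Array): number[] {
  if (packet.length < 12) {
    throw new Error("too short to be an RTP packet");
  }
  const csrcCount = packet[0] & 0x0f;
  const view = new DataView(packet.buffer, packet.byteOffset, packet.byteLength);
  const csrcs: number[] = [];
  for (let i = 0; i < csrcCount && 12 + 4 * (i + 1) <= packet.length; i++) {
    // CSRC entries are in network byte order, which DataView reads by default.
    csrcs.push(view.getUint32(12 + 4 * i));
  }
  return csrcs; // the SSRCs of the sources the mixer combined into this packet
}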

> not to further delay the standard by adding more VoIP-specific legacy features.

I really think this is a significant exaggeration. Adding CSRC support
is trivial implementation-wise: it's simply there, and you don't even
need to signal it in SDP. I don't see any problems specification-wise
either. This is a fairly standalone piece that would neither be a
dependency for anything else nor bring any dependencies of its own.
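
Just to make that concrete, here is a minimal sketch of what
client-side access could look like, assuming the receiver exposed
something like a getContributingSources() accessor; the names and the
polling interval are illustrative, not a shipping API:

// Poll the audio receivers for the CSRCs the mixer stamped on recent packets.
// audioLevel would only be available if the mixer also adds the RFC 6465
// mixer-to-client audio level header extension.
function watchActiveSpeakers(pc: RTCPeerConnection): void {
  setInterval(() => {
    for (const receiver of pc.getReceivers()) {
      if (receiver.track.kind !== "audio") continue;
      for (const src of receiver.getContributingSources()) {
        console.log("CSRC " + src.source + " level " + (src.audioLevel ?? "n/a"));
      }
    }
  }, 500);
}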

Emil

> WebAudio cropped up because I originally assumed we were discussing the un-mixed peer case (i.e. the prime p2p use case of WebRTC),
> where the peer browser could use WebAudio to generate the relevant info and send it down the data channel (see the sketch below).
>
> T.
>
>>
>>
>> --
>> Iñaki Baz Castillo
>> <ibc@aliax.net>
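
For the un-mixed p2p case Tim describes above, a rough sketch of that
alternative: measure the local audio level with WebAudio and ship it
over a data channel, leaving RTP alone. The channel label, the 200 ms
interval and the RMS metric are arbitrary choices here, and the
receiving side plus any "active speaker" heuristic are left out:

// Compute a rough RMS level from the local audio and send it to the remote
// peer over a data channel every 200 ms.
function sendLocalLevels(pc: RTCPeerConnection, stream: MediaStream): void {
  const channel = pc.createDataChannel("levels");
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Uint8Array(analyser.fftSize);
  setInterval(() => {
    if (channel.readyState !== "open") return;
    analyser.getByteTimeDomainData(samples);
    // Byte samples are centred on 128; normalise the RMS to roughly 0..1.
    let sum = 0;
    for (const s of samples) sum += (s - 128) * (s - 128);
    const level = Math.sqrt(sum / samples.length) / 128;
    channel.send(JSON.stringify({ level })); // the far end decides who is "active"
  }, 200);
}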

-- 
https://jitsi.org

Received on Thursday, 13 February 2014 15:46:13 UTC