
Re: Use cases / requirements for raw data access functions

From: Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
Date: Tue, 22 May 2018 00:42:53 +0200
To: Peter Thatcher <pthatcher@google.com>
Cc: public-webrtc@w3.org
Message-ID: <b1c33b0f-fa87-6d92-da33-0a800a08632b@gmail.com>
On 22/05/2018 0:26, Peter Thatcher wrote:
> On Mon, May 21, 2018 at 2:42 PM Sergio Garcia Murillo 
> <sergio.garcia.murillo@gmail.com 
> <mailto:sergio.garcia.murillo@gmail.com>> wrote:
>     On 21/05/2018 23:10, Peter Thatcher wrote:
>>     On Mon, May 21, 2018 at 1:54 PM Harald Alvestrand
>>     <harald@alvestrand.no <mailto:harald@alvestrand.no>> wrote:
>>         On 05/21/2018 08:35 PM, Peter Thatcher wrote:
>     In IOT is quite common to have telemetry that needs to be in sync
>     with the media. Note that in this use case the metadata should be
>     attached to the media frame before being encoded not after.
> Why does it have to be attached before encoding?
If you want metadata-to-frame accuracy, you have to attach the metadata 
to the frame that is currently being captured; if you attach it to the 
encoded frame, it may end up on the previous one. In any case, it 
doesn't add any complexity: we would just need to push the metadata to 
the media track, which would pass it to the encoder, which in turn would 
only have to copy it onto the encoded media frame.
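To make the flow concrete, here is a minimal sketch of the pipeline described above. The function and field names are invented for illustration; no such browser API exists today:

```javascript
// Hypothetical illustration of the capture -> encode flow described above:
// the app attaches metadata to the raw frame at capture time, and the
// encoder's only extra work is copying it onto the encoded frame.
// None of these names correspond to a real browser API.

function captureFrame(pixels, timestamp, metadata) {
  // Raw frame with app-supplied telemetry attached at capture time,
  // so the metadata is bound to exactly this frame.
  return { pixels, timestamp, metadata };
}

function encodeFrame(rawFrame) {
  // A real encoder would compress rawFrame.pixels; here we just wrap it.
  return {
    payload: `encoded(${rawFrame.pixels})`,
    timestamp: rawFrame.timestamp,
    metadata: rawFrame.metadata, // copied through: matches this exact frame
  };
}

const raw = captureFrame('frame-0', 0, { temperature: 21.5 });
const encoded = encodeFrame(raw);
// encoded.metadata.temperature === 21.5, and it refers to frame-0,
// not to whichever frame happened to leave the encoder previously.
```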

>>         - Taking more directly control over stream control signaling
>>         (mute state, key frame requests, end-of-stream), etc is much
>>         easier if the control plane is controlled by the application
>>         and can be integrated with the flow of media, unlike today
>>         with RTCP (not under control of app) or data channels (not
>>         integrated with flow of media).
>>         What's the application that would need more direct control
>>         over the stream's state?
>>     An SFU might want to say to a receiver "there is no audio here
>>     right now; no comfort noise, no nothing; don't mix it until you
>>     hear differently from me" instead of having a jitter buffer
>>     actively mixing silence for all silent streams.
>     RTCP PAUSED indication. I have tried a couple of times to get it
>     supported already.
> Which I have to wait until all browsers support.  And then when I have 
> a use case that's not supported quite by RTCP PAUSED, I have to go get 
> an extension standardized in the IETF, and then get it supported in 
> all the browsers.
> With a low-level control API, I wouldn't have this problem.

Again, the value of RTCP is exactly that: knowing that everyone will 
support it, and in the same way. If I want an app-level event/command, I 
should use metadata or DC/QUIC instead.
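For example, the SFU's "there is no audio here" indication from earlier in the thread could be carried as an app-defined message over a data channel rather than a custom RTCP extension. A minimal sketch (the message schema is invented for illustration):

```javascript
// Sketch of an app-defined control message carried over a data channel,
// as an alternative to standardizing a new RTCP message type.
// The {type, mid} schema here is made up for illustration.

function makePausedMessage(mid) {
  return JSON.stringify({ type: 'stream-paused', mid });
}

function handleControlMessage(text, onPaused) {
  const msg = JSON.parse(text);
  if (msg.type === 'stream-paused') onPaused(msg.mid);
}

// In a browser this would be wired to a real RTCDataChannel, e.g.:
//   const dc = pc.createDataChannel('control');
//   dc.onmessage = (e) => handleControlMessage(e.data, stopMixingTrack);
```

Because both ends are the application's own code, a new command is a schema change, not a standards process.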

>>     Is this a requirement that would equally well be satisfied by a
>>     direct API to the RTCP?
>> Yes, especially if new RTCP messages types can be added.
>     The only benefit of using RTCP is interoperability.  I don't see
>     any benefit of adding custom app RTCP messages compared to
>     sending an app message via DC or adding metadata to the media frames.
> I agree that I'd rather not use RTCP and make my own protocol.  But if 
> someone else wants to use RTCP, I'm fine with them doing so.

I don't think the cost of supporting it compensates for the benefit, as 
the same use cases can be implemented in alternative (and better) ways.

Best regards
Received on Monday, 21 May 2018 22:42:41 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:18:41 UTC