
Re: [webrtc-pc] Add adaptivePtime to RTCConfiguration (#2309)

From: Roman Shpount via GitHub <sysbot+gh@w3.org>
Date: Sat, 28 Sep 2019 00:16:28 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-536132845-1569629787-sysbot+gh@w3.org>
> We have tested and seen that existing versions of Chrome, Firefox and Safari already de-facto support receiving adaptive frame lengths. 

What exactly did you test?
That these browsers will receive variable frame lengths and not blow up?
That audio quality scores do not degrade with variable frame lengths under normal network conditions?
That audio quality would not degrade with variable frame lengths under adverse network conditions (delay spikes, packets going over two different routes with different delays, clock skew, high random jitter)?

You probably need a fairly extensive test using fuzzing plus recorded network delay profiles and an objective audio quality evaluation algorithm to verify this. I do not think you have done this even for your regular jitter buffer implementation with fixed frames.
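A minimal sketch of the kind of objective check this implies: replay a recorded per-packet delay trace through a fixed-depth jitter buffer and count packets that arrive too late to play out. The trace values and buffer depth below are illustrative assumptions, not real captures.

```python
# Sketch: feed a one-way delay trace (ms per packet) through a jitter
# buffer with a fixed playout delay; late packets are counted as lost.
def late_packet_rate(delay_trace_ms, frame_ms=20, buffer_ms=60):
    """Fraction of packets arriving after their scheduled playout time."""
    late = 0
    for i, delay in enumerate(delay_trace_ms):
        send_time = i * frame_ms
        arrival = send_time + delay
        playout = send_time + buffer_ms  # fixed playout delay
        if arrival > playout:
            late += 1
    return late / len(delay_trace_ms)

# A delay spike, as in the adverse conditions listed above:
trace = [30] * 20 + [150] * 3 + [30] * 20
print(late_packet_rate(trace))  # the three spike packets miss playout
```

A real test harness would additionally score the resulting audio with an objective metric rather than just counting losses, and would sweep this over many recorded profiles.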

> By default, an Opus frame is produced every 20ms. That's 50 times per second. We want to be able to respond to network changes immediately when they are perceived. We cannot call getStats 50 times per second and run client code that frequently (not to mention bake it into all JS apps). We can do that in the UA quite easily, though. And we can optimize the code for that, and get all of the Web to enjoy it, rather than force everyone to reinvent this wheel in JS.

Are you actually planning to update the frame length 50 times per second? The bandwidth estimate does not change 50 times per second. Furthermore, the bandwidth estimate is actually obtained by the receiver, which needs to send it to the sender, which in turn will adapt the frame length. There is a natural delay to this.
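A back-of-the-envelope sketch of that natural delay; all the component values here are illustrative assumptions, not measurements:

```python
# Rough lower bound on how fast a sender can react to receiver-side BWE.
rtcp_feedback_interval_ms = 100   # assumed: how often the receiver reports
one_way_network_ms = 50           # assumed: report transit time to the sender
encoder_reconfig_ms = 20          # at least one frame before new ptime applies

min_reaction_ms = (rtcp_feedback_interval_ms
                   + one_way_network_ms
                   + encoder_reconfig_ms)
print(min_reaction_ms)  # 170 -- nowhere near a per-frame (20 ms) cadence
```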

>Imagine then that a change of frame-length from 20ms to 120ms is desired as soon as the BWE 
>drops below 60kbps.

First of all, most web applications are quite useless at 60kbps.
Second, using 120 ms frames results in very high delay, making communication feel more like a broadcast than an actual interactive conversation.
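To put numbers on the delay point: a sketch of the one-way delay budget, using assumed (illustrative) network and jitter-buffer values. ITU-T G.114 recommends keeping one-way mouth-to-ear delay under roughly 150 ms for interactive speech.

```python
# Illustrative one-way delay for 20 ms vs 120 ms Opus frames.
# network_ms and jitter_buffer_frames are assumptions for the example.
def one_way_delay_ms(frame_ms, network_ms=40, jitter_buffer_frames=2):
    packetization = frame_ms                    # must fill a frame to send it
    jitter_buffer = jitter_buffer_frames * frame_ms
    return packetization + network_ms + jitter_buffer

print(one_way_delay_ms(20))   # 100 -- within the interactive budget
print(one_way_delay_ms(120))  # 400 -- far past it; feels like broadcast
```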

> (1) the time required to respond to changes,

Time is naturally required to compute the bandwidth estimate and transmit it to the other party. Making frame size changes too quickly results in worse quality degradation than waiting a few hundred ms before doing so, which allows the connection to recover.
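A sketch of the hold-off rule this suggests: only switch to longer frames after the estimate has stayed below the threshold for a sustained period. The threshold, hold-off duration, and frame sizes are illustrative assumptions.

```python
# Sketch: debounce ptime changes so a transient BWE dip does not
# immediately trigger a frame-length switch.
class PtimeController:
    def __init__(self, threshold_kbps=60, hold_off_ms=300):
        self.threshold = threshold_kbps
        self.hold_off = hold_off_ms
        self.below_since = None
        self.ptime_ms = 20

    def on_bwe(self, now_ms, estimate_kbps):
        if estimate_kbps >= self.threshold:
            self.below_since = None        # estimate recovered; reset
            self.ptime_ms = 20
        elif self.below_since is None:
            self.below_since = now_ms      # start the hold-off timer
        elif now_ms - self.below_since >= self.hold_off:
            self.ptime_ms = 60             # sustained drop: longer frames
        return self.ptime_ms

c = PtimeController()
c.on_bwe(0, 50)            # drop detected, timer starts
print(c.on_bwe(100, 50))   # 20 -- still within the hold-off window
print(c.on_bwe(400, 50))   # 60 -- drop sustained for >= 300 ms
```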

> (2) the need to bake this important functionality into all JS applications. BWE and target-bitrate adaptation, for example, are done by the UA. It would be bad for the Web to require JS apps to bake that into their JS code.

I am of the opposite opinion. The browser should do less. It is better for the web that if something can be implemented in JS, it is implemented there, versus dealing with 20 different versions of 4 different browser implementations, each doing something incomprehensible.

P.S. I have some experience building and operating a VoIP application that was designed to work at low bitrates under adverse network conditions. All we needed to do, in addition to codec bitrate adaptation, was to change ptime using regular offer/answer between 20/30/40/60 ms, roughly 1-2 seconds after a major network change was detected. This network change often corresponded to an IP address change for the client, so the offer/answer came naturally.
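For reference, a ptime change via offer/answer is just an SDP attribute update (RFC 8866 defines `a=ptime` and `a=maxptime`); a minimal illustrative audio m-section (port and payload type are made-up values):

```
m=audio 49170 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
a=ptime:40
a=maxptime:120
```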

-- 
GitHub Notification of comment by rshpount
Please view or discuss this issue at https://github.com/w3c/webrtc-pc/pull/2309#issuecomment-536132845 using your GitHub account
Received on Saturday, 28 September 2019 00:16:29 UTC
