
Re: [webrtc-pc] Add adaptivePtime to RTCConfiguration (#2309)

From: Roman Shpount via GitHub <sysbot+gh@w3.org>
Date: Wed, 02 Oct 2019 21:01:43 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-537679065-1570050102-sysbot+gh@w3.org>
> I think your suggestion of any-value-divisible-by-20-up-to-ptime is good. Modified slightly, it would be something along the lines of "the UA may use any value divisible by 20, but no more than maxptime (if specified)." If Henrik is also happy with this, I think we should go with it.

I would be happy with the current proposal with this limitation. I just don't want to figure out the implications of supporting any possible Opus or, much worse, G.711 audio frame length. Something sending 17.125 ms frames makes me shiver.

> There are other cases out there, though I have to confess ignorance of their prevalence.

So far, TCP in combination with misconfigured routers has been the biggest offender. The things you commonly see disrupting a connection are events like loading a modern web page that pulls in ten 2 MB background images and five 1 MB font files. You also see somebody starting a torrent transfer and the provider punishing the customer for this with high packet loss. Of course, once HTTP/3 is more widely deployed, non-TCP-related traffic patterns will likely emerge, but better queuing in HTTP/3 might help deal with low-bandwidth links more gracefully.

I did mention both short and long bursts of side TCP traffic. In the case of large router buffers, a short TCP burst produces a short delay spike, which the jitter buffer should be able to handle using "spike" mode. A long TCP burst produces a longer delay spike, which disrupts the connection and requires some sort of recovery mechanism with some sort of UI. In the case of smaller or smarter router buffers, short and long side TCP traffic instead produces short or long periods of RTP packet loss.

Some of your other scenarios do not result in bandwidth changes during the call. They simply result in calls running for their entire duration on a congested network.

> I also wonder (perhaps you know?) what the network use patterns are for streaming services like Netflix.

Most video providers, like Netflix, use TCP-based protocols such as DASH. This is also changing, with companies like Peer5 building delivery networks on top of DataChannel. The biggest issue with video providers is that they can affect not only home Internet connections but entire service providers or backbone links. For instance, we saw the quality of real-time media served from the Amazon cloud degrade on snow days, due to more people staying home and streaming Netflix.

GitHub Notification of comment by rshpount
Please view or discuss this issue at https://github.com/w3c/webrtc-pc/pull/2309#issuecomment-537679065 using your GitHub account
Received on Wednesday, 2 October 2019 21:01:45 UTC
