- From: Eldar Rello via GitHub <sysbot+gh@w3.org>
- Date: Tue, 19 Mar 2024 14:33:13 +0000
- To: public-webrtc-logs@w3.org
> Could you explain in more detail why removing a delay constraint is not possible? In the example above the streams are already out of sync?

I would need to experiment with it again to be more specific, but this is not a new idea, and it was ruled out in the early stages of feature development. Basically, if you let one or more jitter buffers change their delay freely, it first takes time to measure the playout delay, then to communicate that to the other jitter buffers, and then more time until the target is reached. During that time the playout is out of sync and AEC fails to cancel out the echo.

The feature with synchronised playback is publicly available in a product, and it is possible to play with it if there is interest. I am pretty sure that the results achievable using only jitterBufferTarget with current NetEq behaviour have hit their limits, and that is why there is an effort to find ways to improve it.

One more example: let's say only one receiver gets a network glitch of 1 s, which can easily happen on Wi-Fi networks, while the other receivers keep operating normally. For the one having the glitch, the buffering delay jumps to 1 s. It wouldn't make sense to try to adapt the rest of the receivers to the same 1 s delay. This is where the jitterBufferMaximumDelay attribute comes to the rescue.

> Network latency / jitter and audio device latency will be different for different devices and can change over time, so the delay needs to be adjusted continuously and independently (as in different streams will have different delay settings to achieve sync) anyway?

Yes. Correct.

--
GitHub Notification of comment by eldarrello
Please view or discuss this issue at https://github.com/w3c/webrtc-extensions/issues/199#issuecomment-2007343547 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
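[Editor's sketch] The capping idea in the glitch example above can be illustrated with a small helper: pick a common playout target across receivers (synchronisation means everyone waits for the slowest one), but bound it so a single glitched receiver cannot drag the whole call's delay up. The function name, the 400 ms bound, and the treatment of the proposed jitterBufferMaximumDelay are assumptions for illustration, not the actual NetEq logic:

```javascript
// Compute a common playout target (ms) from per-receiver measured delays.
// The maximum-delay bound plays the role of the proposed
// jitterBufferMaximumDelay attribute: an outlier (e.g. a receiver that just
// saw a 1 s Wi-Fi glitch) is left out of sync rather than delaying everyone.
function commonPlayoutTarget(measuredDelaysMs, maxDelayMs) {
  // Syncing means matching the slowest receiver...
  const slowest = Math.max(...measuredDelaysMs);
  // ...unless that would exceed the configured maximum delay.
  return Math.min(slowest, maxDelayMs);
}

// Browser-only: apply the chosen target via the existing jitterBufferTarget
// attribute (milliseconds, per the WebRTC Extensions draft).
function applyTarget(receivers, targetMs) {
  for (const receiver of receivers) {
    receiver.jitterBufferTarget = targetMs;
  }
}
```

For example, with measured delays of 120, 150, and 1000 ms and a 400 ms cap, the common target is 400 ms; without the glitched receiver it would simply track the slowest stream.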
Received on Tuesday, 19 March 2024 14:33:14 UTC