Re: [webrtc-extensions] How does a developer decide on a value for `playoutDelay` ? (#46)

> My two cents.
> 
> So, having a numeric value makes sense, at least for us.

Yes, but in your case you've answered the question: you're using the RTT to change the depth of the jitter buffer. For your use case that makes sense, and with your changes that make it converge faster, it's probably OK for A/V sync as well (if needed). What is missing is a way for regular apps to determine the best value for the jitter buffer depth based on something (network conditions, machine load, etc.), and a way to know the current duration of the jitter buffer, for A/V sync. This is what this issue is about. A sketch of what that looks like for an app today follows.
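
For concreteness, here is a minimal sketch of what a "regular app" has to do: pull the RTT and the spec'd `jitterBufferDelay` / `jitterBufferEmittedCount` stats out of `getStats()`, invent its own heuristic mapping from RTT to a target, and use the measured buffer duration for A/V sync. The half-RTT heuristic and the attribute name are assumptions for illustration, not something this thread settles (the draft calls it `playoutDelay`, in seconds; Chrome ships a similar knob as `playoutDelayHint`):

```ts
// Sketch only. Assumes a numeric `playoutDelay` (seconds) on RTCRtpReceiver,
// per the webrtc-extensions draft under discussion.
async function tunePlayoutDelay(pc: RTCPeerConnection): Promise<number> {
  const stats = await pc.getStats();
  let rttSeconds = 0;
  let bufferedSeconds = 0;

  stats.forEach((report) => {
    if (report.type === "candidate-pair" && report.nominated) {
      rttSeconds = report.currentRoundTripTime ?? 0;
    }
    if (report.type === "inbound-rtp" && report.kind === "audio" &&
        report.jitterBufferEmittedCount > 0) {
      // Average time a sample spends in the jitter buffer (standard stats).
      bufferedSeconds =
        report.jitterBufferDelay / report.jitterBufferEmittedCount;
    }
  });

  for (const receiver of pc.getReceivers()) {
    // Made-up heuristic: keep at least 50 ms, or half an RTT, of buffer.
    (receiver as any).playoutDelay = Math.max(0.05, rttSeconds / 2);
  }

  // What the other track would need to be delayed by for A/V sync.
  return bufferedSeconds;
}
```

The point of the sketch is that both the input ("based on something") and the output ("how deep is the buffer right now, really") are left entirely to the app.
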

If the value is fixed for an app (as it seems to be for, e.g., Meet), then a numerical value is a bad fit and an enum is superior.
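
Purely to illustrate that option: a hypothetical enum surface (none of these names exist in any spec) where the app states intent and the UA owns the numeric mapping. The mapping table here only stands in for what the UA would do internally:

```ts
// Hypothetical API shape, for illustration only.
type PlayoutDelayPreference = "ultra-low-latency" | "balanced" | "smooth";

function applyPreference(receiver: RTCRtpReceiver,
                         pref: PlayoutDelayPreference): void {
  // In a real enum-based API this mapping would live inside the UA,
  // which knows the platform and the jitter buffer implementation.
  const targets: Record<PlayoutDelayPreference, number> = {
    "ultra-low-latency": 0,   // run the jitter buffer as shallow as it dares
    "balanced": 0.2,          // a default-ish trade-off
    "smooth": 1.0,            // favor glitch-free playback over latency
  };
  (receiver as any).playoutDelay = targets[pref];
}
```
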

> On a side note, it would be awesome if we could add more parameters to control the jitter buffer behavior (or even completely replace it), as NetEq, at least, is not tuned correctly for several use cases.

This will have to happen as a natural consequence of the lowering of the abstraction level that seems to be happening in 2.0.



-- 
GitHub Notification of comment by padenot
Please view or discuss this issue at https://github.com/w3c/webrtc-extensions/issues/46#issuecomment-661779681 using your GitHub account

Received on Tuesday, 21 July 2020 10:40:09 UTC