RE: ICE freezing and Bandwidth Estimation


Hi,

If you are going to remove freezing, I think there are some things to consider:

Q1:

Even if each media stream has its own ICE agent, the standard still defines usage of freezing when a foundation contains multiple components (one component is set to Waiting while the rest are Frozen). Is the assumption that there would be no freezing in that case either?
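To illustrate what I mean, here is a minimal sketch (just my reading of RFC 8445 section 6.1.2.6, not an implementation) of how initial pair states are assigned within a single checklist:

    type PairState = 'Frozen' | 'Waiting' | 'InProgress' | 'Succeeded' | 'Failed';

    interface CandidatePair {
      foundation: string;    // combined local/remote foundation
      componentId: number;   // e.g. 1 = RTP, 2 = RTCP
      priority: number;
      state: PairState;
    }

    // For each foundation, the pair with the lowest component ID (highest
    // priority as tie-breaker) becomes Waiting; the rest stay Frozen.
    function setInitialStates(checklist: CandidatePair[]): void {
      const byFoundation = new Map<string, CandidatePair[]>();
      for (const pair of checklist) {
        pair.state = 'Frozen';
        const group = byFoundation.get(pair.foundation) ?? [];
        group.push(pair);
        byFoundation.set(pair.foundation, group);
      }
      for (const group of byFoundation.values()) {
        group.sort((a, b) =>
          a.componentId - b.componentId || b.priority - a.priority);
        group[0].state = 'Waiting';
      }
    }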

Q2:

ICE defines procedures where a successful connectivity check may impact the state of a pair associated with another stream. How would those procedures be affected if each stream has its own ICE agent?
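To make the coupling concrete, here is a rough sketch of the cross-checklist unfreezing rule as I understand it (reusing the CandidatePair type from the Q1 sketch above):

    // When a check succeeds, Frozen pairs with the same foundation in the
    // *other* checklists of the same agent are set to Waiting.  With one
    // agent per stream, there are no other checklists to unfreeze.
    function onCheckSucceeded(succeeded: CandidatePair,
                              otherChecklists: CandidatePair[][]): void {
      for (const checklist of otherChecklists) {
        for (const pair of checklist) {
          if (pair.state === 'Frozen' &&
              pair.foundation === succeeded.foundation) {
            pair.state = 'Waiting';
          }
        }
      }
    }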

Q3:

Even if different ICE agents are used, there is still effectively a single global Ta value shared across them. Assuming each ICE agent orders its foundations in the same way, this would affect when pairs for lower-priority foundations are tested.
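As an illustration, something like the following shared pacer would still be needed so that checks from several per-stream agents respect one Ta (the 50 ms default and the class/helper names are just assumptions for the sketch):

    // One pacer shared by all per-stream ICE agents, so that connectivity
    // checks across agents are still sent at most once every Ta ms.
    class GlobalCheckPacer {
      private queue: Array<() => void> = [];
      private timer?: ReturnType<typeof setInterval>;

      constructor(private taMs: number = 50) {}

      schedule(sendCheck: () => void): void {
        this.queue.push(sendCheck);
        if (this.timer === undefined) {
          this.timer = setInterval(() => {
            const next = this.queue.shift();
            if (next !== undefined) {
              next();
            } else {
              clearInterval(this.timer);
              this.timer = undefined;
            }
          }, this.taMs);
        }
      }
    }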


Q4:

While the WebRTC device may use a different ICE agent for each stream, the remote peer may not. I am not sure exactly how, or whether, that would impact things, but it needs to be considered.



Regards,

Christer






-----Original Message-----
From: Bernard Aboba [mailto:Bernard.Aboba@microsoft.com] 
Sent: 19 May 2018 02:25
To: Peter Thatcher <pthatcher@google.com>
Cc: public-webrtc@w3.org
Subject: Re: ICE freezing and Bandwidth Estimation

On May 18, 2018, at 16:24, Peter Thatcher <pthatcher@google.com> wrote:
> 
> I think for WebRTC NV it would be better to simplify and say that all IceTransports are separate ICE agents and no freezing happens.  I'm pretty sure most browsers don't, and never will, implement freezing behavior anyway.  And I don't think any applications really care about the theoretical benefit that would come from freezing.  It's more likely they care about things like controlling when a relay server is used, or whether backup candidate pairs are used, and that sort of thing.

[BA] +1. 

In Edge ORTC we largely ignore freezing, which was one of the reasons we did not initially see value in IceTransportController.  So far that does not seem to have resulted in any ICE interoperability issues, so I agree we don't need to worry about freezing.

Peter also said: 

“ICE-based BWE makes no sense to me.  It's the wrong layer for that.  I suppose if the app could control the timing of checks and know when responses come back (as with SLICE), then an app could do some form of BWE, but limited by the global ICE pacer/limiter.”

[BA] I understand the concept of estimating throughput or latency for a candidate pair and can think of some uses for that (like choosing the selected pair), though I’m not clear how much of this needs to be exposed to the application. 

There *might* be a use case for estimating or restricting bandwidth in a mesh conference (e.g. imposing a collective limit on multiple RtpSenders).  But as you say, that would seem to be motivation for an RtpController, not an IceTransportController object. 
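To be concrete, the kind of collective limit I have in mind could probably already be approximated at the RTP layer with setParameters(); a rough sketch (the even split is just an example policy, and the function name is made up):

    // Split a total send budget evenly across the senders of a mesh call,
    // purely at the RTP layer (RTCRtpSender.setParameters / maxBitrate).
    async function applyCollectiveLimit(senders: RTCRtpSender[],
                                        totalBps: number): Promise<void> {
      const perSenderBps = Math.floor(totalBps / senders.length);
      for (const sender of senders) {
        const params = sender.getParameters();
        for (const encoding of params.encodings) {
          encoding.maxBitrate = perSenderBps;  // bits per second
        }
        await sender.setParameters(params);
      }
    }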

So overall, I’m left scratching my head.  What were we thinking??

> If we want BWE exposed to the app, expose a way to get a BWE value from the SctpTransport, QuicTransport, and/or RtpTransport.  That would be useful.

[BA] I can certainly see value in that from a statistical point of view.  Is there reason to consider this outside the Stats API?
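For example, something along these lines already gets at a send-side estimate via getStats(), assuming the browser populates availableOutgoingBitrate on the nominated candidate pair:

    // Read the browser's send-side bandwidth estimate from the stats API.
    async function getOutgoingBwe(pc: RTCPeerConnection): Promise<number | undefined> {
      const report = await pc.getStats();
      let bwe: number | undefined;
      report.forEach((stats: any) => {
        if (stats.type === 'candidate-pair' && stats.nominated &&
            typeof stats.availableOutgoingBitrate === 'number') {
          bwe = stats.availableOutgoingBitrate;  // bits per second
        }
      });
      return bwe;
    }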
