RE: ICE freezing and Bandwidth Estimation

Hi,

> That depends on what you mean by the "ICE stack" in relation to a web API. With PeerConnection, the entire "ICE stack" is buried within the
> PeerConnection and there's a clear 1:1 relationship between PeerConnection and ICE agent. With lower-level APIs like ORTC or the webrtc-ice
> extension spec, the "ICE stack" has a division of labor between the browser and the web app. This gives more control to the app, but it also
> means that it's more complicated to coordinate freezing, because basically the web app has to tell the browser which things should be in the
> same ICE agent (and have freezing) and which should not. There's no implicit 1:1 mapping like there is with PeerConnection. The easiest thing
> to do is to say that each ICE thing in ORTC or WebRTC 2.0 is a separate ICE agent with one data stream, like a PeerConnection when bundling.
> Which means there is no freezing because there is only one data stream in the ICE agent.

But, are we sure that it’s really what we want?

With freezing, you ensure that at least one pair for each foundation gets tested as early as possible, since one pair per foundation is initially set to the Waiting state.

With a few media streams it may not matter much, but since day 1 of WebRTC people have been talking about use cases with tens/hundreds of streams…
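To make the initial-state rule above concrete, here is a minimal sketch in plain JavaScript of how an ICE stack assigns initial pair states per RFC 5245 and unfreezes on a successful check. The data shapes (`checkLists` as arrays of pair objects with a `foundation` field) and function names are illustrative assumptions, not a real ICE implementation:

```javascript
// Sketch of the initial pair-state rule (hypothetical data shapes, not a
// real ICE stack): across the check lists, the first pair seen for each
// foundation starts in Waiting; every later pair with the same foundation
// starts Frozen until a check for that foundation succeeds.
function assignInitialStates(checkLists) {
  const seenFoundations = new Set();
  for (const list of checkLists) {
    for (const pair of list) {          // pairs assumed sorted by priority
      if (seenFoundations.has(pair.foundation)) {
        pair.state = "Frozen";
      } else {
        seenFoundations.add(pair.foundation);
        pair.state = "Waiting";         // tested as early as possible
      }
    }
  }
  return checkLists;
}

// When a connectivity check for some pair succeeds, frozen pairs that
// share its foundation are moved to Waiting in all check lists.
function unfreeze(checkLists, foundation) {
  for (const list of checkLists) {
    for (const pair of list) {
      if (pair.state === "Frozen" && pair.foundation === foundation) {
        pair.state = "Waiting";
      }
    }
  }
}
```

With a single check list per agent (one data stream), every foundation's first pair is already Waiting, which is why freezing becomes a no-op in the one-stream-per-agent model described above.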

> There are ways to "fix" this, but they are painful and of low value. And no one would bother to implement it, just like almost no one bothered to
> implement freezing.

With “almost no one”, are you referring to browser vendors, or ICE vendors in general?

We also need to keep in mind that the check list procedures were simplified/modified in 5245bis.

Regards,

Christer



On Thu, May 31, 2018 at 12:10 AM Christer Holmberg <christer.holmberg@ericsson.com> wrote:
Hi,

> I'm not saying we "remove" freezing. I'm saying we don't do anything in the WebRTC 2.0/NV API to support it because it's not worth the complexity.

Why would you need to do something in the API? Freezing is taken care of by the ICE stack, isn’t it?

Or, is the idea to allow the user to override what is going on in the check lists?

Regards,

Christer




On Mon, May 28, 2018 at 4:33 AM Christer Holmberg <christer.holmberg@ericsson.com> wrote:
Hi,

I still fail to see how this is related to removing freezing. Sure, you can avoid freezing by using a separate PC for each stream, or by mandating BUNDLE, but why would it have to be removed?

Regards,

Christer

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Thursday 24 May 2018 at 22:59
To: Bernard Aboba <Bernard.Aboba@microsoft.com>
Cc: "pthatcher@google.com" <pthatcher@google.com>, Harald Alvestrand <harald@alvestrand.no>, "public-webrtc@w3.org" <public-webrtc@w3.org>
Subject: Re: ICE freezing and Bandwidth Estimation
Resent-From: "public-webrtc@w3.org" <public-webrtc@w3.org>
Resent-Date: Thursday 24 May 2018 at 23:00


On Thu., 24 May 2018, 11:14 am Bernard Aboba <Bernard.Aboba@microsoft.com> wrote:
On May 23, 2018, at 16:49, Peter Thatcher <pthatcher@google.com> wrote:
>
> I'm not saying anyone would do it, because it's kind of madness.  But it's a theoretically-possible madness.
>
> Here's a simple proof:  we know that unbundled m-lines can have transports connected to different hosts (with different DTLS certs, etc).  And those hosts can be browsers, and those browsers would be different PeerConnections.
>
> The congestion control might not work well, but ICE will.

[BA] In a small conference use case, it is common for a browser to have multiple PeerConnections, each to a different mesh participant. Since many conferences are in practice small (fewer than 4 participants), this is quite a practical scenario.

An expeditious way to establish such a mesh conference is to send an Offer that can be responded to by multiple mesh participants (e.g. broadcast the Offer to a Room, and have other participants respond to the Offerer) so that the conference can get started in a single RTT.  Requirements:

a. Support for parallel forking. This is what led to the IceGatherer/IceTransport separation in ORTC.
b. Need to be able to construct multiple DtlsTransports from the same local certificate.
c. Need to restrict aggregate outgoing bandwidth for all mesh connections.
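A rough model of the bookkeeping behind requirements (a) and (b), in plain JavaScript: one set of locally gathered candidates and one certificate fan out into a separate per-peer transport as each mesh participant answers the broadcast Offer. The function and field names here are illustrative assumptions, not the ORTC API:

```javascript
// Hypothetical sketch of parallel forking: the single broadcast Offer's
// local candidates and DTLS certificate are shared by a per-peer
// transport record created for each answering participant. Names are
// illustrative, not real ORTC objects.
function forkTransports(localCandidates, localCert, answers) {
  return answers.map(answer => ({
    peer: answer.peer,
    localCandidates,                 // same gathered candidates reused
    certificate: localCert,          // same local cert for every fork
    remoteParams: answer.iceParams,  // per-peer ICE ufrag/password
  }));
}
```

Requirement (c), the aggregate outgoing bandwidth cap, would then sit above all of these forks rather than inside any single one.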

What's the advantage of this over sending multiple offers in parallel in JavaScript (async)? In my opinion this is needlessly complicating things - you need to manage the connection with each endpoint separately anyway.
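The parallel-offer alternative could look roughly like the following sketch, where `sendOffer` is an assumed application-level signaling helper (not a standard API) that resolves with a peer's answer:

```javascript
// Hypothetical sketch of sending one offer per peer concurrently:
// fire all offers, then await the answers together. sendOffer is an
// assumed app-level signaling helper, not a browser API.
async function offerAll(peers, sendOffer) {
  const answers = await Promise.all(
    peers.map(peer => sendOffer(peer))   // offers go out in parallel
  );
  return new Map(peers.map((peer, i) => [peer, answers[i]]));
}
```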

Just my 2c worth...

Cheers,
Silvia.

Received on Thursday, 31 May 2018 18:27:44 UTC