- From: Robin Raymond <robin@hookflash.com>
- Date: Sat, 7 Feb 2015 10:39:11 -0500
- To: Bernard Aboba <bernard.aboba@microsoft.com>
- Cc: "public-ortc@w3.org" <public-ortc@w3.org>
- Message-ID: <etPan.54d6319f.333ab105.191@Robins-iMac.local>
Ok. Lots of questions and I'm game! Let me try to answer these questions inline as to what my preferences would be...

On February 7, 2015 at 8:34:12 AM, Bernard Aboba (bernard.aboba@microsoft.com) wrote:

> Some questions about the prune timeout:
>
> Is it the incremental time (in seconds?) after which all candidates (host, server reflexive, relay) are removed from the IceGatherer?

I would prefer that the engine has the option to prune after that time but is not mandated to prune at the timeout. That would allow host candidates to remain viable in the event of failure on mobility (I will post an issue on this one shortly). Likewise, the engine knows best the cost of keeping some interfaces alive versus others. It might even be nice to have options to control the warmth of interfaces / reflexive / relay candidates, but I understand that might be too much to ask.

> Or does it only apply to some portion of the candidates (e.g. host candidates from live interfaces do not timeout)?

I do not see a reason to shut down host candidates. For example, on 3G/4G interfaces the IP can remain active but in a lower power state if no packets are transmitting, and thus becomes a viable backup (especially in the mobile IPv6 case). This is related to issue #176.

> Is there a minimum or maximum value?

I don't see a reason for a minimum. I guess one could create a Gatherer that would immediately time out (and prune); I don't see a reason to disallow that. It might be that the application wants to have the object ready but is not ready to use it at that time.

> Is there an event on expiration of the prune timeout?

I would hope so! I'm not really sure a "prune" event is needed, though. I think firing candidate removal events (or even a candidates "changed" event) would be advantageous and sufficient.

> How is it expected to be used in communication between peers?

This is all about the offer. It would allow developers to get replies in from all forks on their own timing schedule, and likewise bring interfaces back up when new forks might be introduced later. It could also be used in the case of an ICE restart, to ensure all interfaces are gathered again and active during the restart scenario.

> For example, would a local peer communicate the prune timeout to the remote peer?

If we do allow candidates to remain active beyond the pruning state [my preference], then I would like the option to transmit the pruning of candidates as they happen (or, at the very least, to tell the remote party how long I expect a candidate to be viable so it can remove it at the appropriate time). I think this allows for some nice application-level decisions if we can have a candidate removal event while also knowing / controlling the expected lifetime of candidates. (A rough sketch of this application-level wiring follows below the quoted message.)

> How is the time between IceGatherer construction and receipt of the Answer by the remote peer adjusted for? Via a timer set on the local peer?

I'm not sure I understand this particular question.

________________________________________
From: Robin Raymond [robin@hookflash.com]
Sent: Friday, February 06, 2015 7:06 PM
To: Bernard Aboba; Roman Shpount
Cc: public-ortc@w3.org
Subject: Re: Issue 174: When is an IceGatherer allowed to prune host, reflexive and relay candidates?

+1. I'm for (b) with BA's actor suggestion and agree w/ Roman's reasoning.
-Robin
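For concreteness, here is a minimal TypeScript sketch of the application-level wiring described above: forwarding a candidate-removal notification to the remote party over the app's signaling channel. Everything in it is an assumption for illustration only; the `oncandidateremoved` event, the `candidate-removed` message shape, and the `SignalingChannel` interface are not part of the ORTC draft, they merely sketch the behavior being proposed.

```typescript
// Hypothetical sketch: forward an IceGatherer candidate-removal event to the
// remote peer over the application's signaling channel so it can drop the
// pruned candidate at the right time. The `oncandidateremoved` event and the
// message shape are assumptions, not part of the ORTC spec.

interface IceCandidateInfo {
  foundation: string;
  ip: string;
  port: number;
  // ...remaining RTCIceCandidate dictionary members elided
}

interface GathererLike {
  // Hypothetical handler fired when the gatherer prunes a local candidate.
  oncandidateremoved: ((evt: { candidate: IceCandidateInfo }) => void) | null;
}

interface SignalingChannel {
  send(message: unknown): void;
}

function wireCandidateRemoval(gatherer: GathererLike, signaling: SignalingChannel): void {
  gatherer.oncandidateremoved = (evt) => {
    // Tell the remote peer which local candidate was pruned so it can remove
    // it from its remote candidate list (or stop checks against it).
    signaling.send({ type: "candidate-removed", candidate: evt.candidate });
  };
}
```

A variant of the same idea would include an expected lifetime in the original candidate message instead, letting the remote side expire the candidate on its own timer; either way the decision stays at the application level.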
Received on Saturday, 7 February 2015 15:39:40 UTC