- From: Adam Bergkvist <adam.bergkvist@ericsson.com>
- Date: Mon, 27 May 2013 11:00:28 +0200
- To: Jim Barnett <Jim.Barnett@genesyslab.com>
- CC: Eric Rescorla <ekr@rtfm.com>, Adam Roach <adam@nostrum.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>
On 2013-05-23 14:23, Jim Barnett wrote:
> If you split the queueable operations into two, wouldn't the second,
> queued, operation also have to check the state before it executed?
> (And raise an asynch error if the state is wrong.) Another operation
> could have changed the state between the time the operation is
> queued and the time it executes. If that's the case, there's nothing
> wrong with checking state before we queue the operation, but it isn't
> really necessary.

Yes, depending on how we describe this we might have to check the state again, abort, and possibly report an error.

We have some algorithms that describe how the UA reacts to external events and runs some operations (incoming stream, new ICE candidate, ...), and in those cases the algorithm simply aborts if the PeerConnection is closed. Picking up a previously queued task from the operations queue is quite similar, I think.

I believe the reason not to report an error in those cases is that the script should already have been notified that the PeerConnection has been closed down, and we don't want every single error handler we have to start firing after that point; the PeerConnection is closed and done.

So my vote is to check for the closed state before queuing the task, and then to stop processing tasks once the PeerConnection is closed. That would be similar to aborting queued tasks when the PeerConnection is closed. Note that this doesn't rule out checking for other states, besides closed, when a task is picked up and run.

/Adam
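A minimal sketch of the proposal above, assuming a simplified stand-in queue (the class name, `processNext` method, and single `state` field are hypothetical, not the spec's actual algorithm): enqueuing throws synchronously if the connection is already closed, and picking up a queued task silently aborts once the state has flipped to closed, with no per-task error reporting.

```javascript
// Hypothetical operations queue illustrating the proposed behavior.
// Names (OperationsQueue, processNext) are illustrative, not from the spec.
class OperationsQueue {
  constructor() {
    this.state = "stable"; // simplified PeerConnection state
    this.queue = [];
  }

  enqueue(op) {
    // Check for the closed state before queuing the task.
    if (this.state === "closed") {
      throw new Error("InvalidStateError: PeerConnection is closed");
    }
    this.queue.push(op);
  }

  processNext() {
    // Picking up a queued task: abort silently if closed, mirroring
    // how the UA algorithms for external events (incoming stream,
    // new ICE candidate, ...) abort without firing error handlers.
    if (this.state === "closed") {
      this.queue.length = 0; // drop remaining tasks, report nothing
      return false;
    }
    const op = this.queue.shift();
    if (op) op();
    return true;
  }

  close() {
    this.state = "closed";
  }
}
```

Nothing here prevents an individual task from re-checking other states (signaling state, etc.) inside `op()` when it runs; the queue only short-circuits on closed.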
Received on Monday, 27 May 2013 09:01:09 UTC