Re: Operations in invalid states: Exceptions don't make sense.

From: Harald Alvestrand <harald@alvestrand.no>
Date: Tue, 28 May 2013 11:23:56 +0200
Message-ID: <51A477AC.6030901@alvestrand.no>
To: public-webrtc@w3.org
On 05/28/2013 11:09 AM, Adam Bergkvist wrote:
> On 2013-05-27 14:31, Harald Alvestrand wrote:
>> On 05/27/2013 11:00 AM, Adam Bergkvist wrote:
>>> On 2013-05-23 14:23, Jim Barnett wrote:
>>>> If you split the queueable operations into two, wouldn't the second,
>>>> queued, operation also have to check the state before it executed?
>>>> (And raise an async error if the state is wrong.) Another operation
>>>> could have changed the state between the time the operation is
>>>> queued and the time it executes.  If that's the case, there's nothing
>>>> wrong with checking state before we queue the operation, but it isn't
>>>> really necessary.
>>>
>>> Yes, depending on how we describe this we might have to check the
>>> state again, abort and possibly report an error. We have some
>>> algorithms that describe how the UA reacts to external events and runs
>>> some operations (incoming stream, new ice candidate, ...), and in
>>> those cases it simply aborts if the PeerConnection is closed. Picking
>>> up a previously queued task from the operations queue is quite similar
>>> I think. I believe the reason to not report an error in those cases is
>>> that the script should already have been notified that the
>>> PeerConnection has been closed down and we don't want every single
>>> error handler we have to start firing after that point; the
>>> PeerConnection is closed and done.
>>>
>>> So my vote is to check for the closed state before queuing the task
>>> and then stop processing tasks when the PeerConnection is closed. That
>>> would be similar to aborting queued tasks if the PeerConnection is
>>> closed. Note that this doesn't rule out checking for other states,
>>> beside closed, when a task is picked up and run.
>>
>> I just want to make sure we guarantee the success/failure property:
>>
>> When you call an <action>, one of three things happens:
>>
>> - You get an exception thrown
>> - The call returns normally, and later an error callback is called
>> - The call returns normally, and later a success callback is called
>>
>> The language you're proposing sounds as if we'll violate that property
>> if the connection is closed after the call returns normally, but before
>> the task is dequeued.
>>
>> I don't think we should violate that property.
>
> Yes, in some sense it's a violation of these properties, but under 
> special circumstances (the connection is closed and thereafter 
> unusable). In its defense, it's more aligned with how we and other 
> specs handle queued tasks when the object is closed before the task is 
> executed. For example, the tasks queued when WebSocket and EventSource 
> are ready to dispatch the message event in response to incoming data.
>
> On the other hand, our case could be seen as a bit different since the 
> queued task originates from a JavaScript call done locally.
>
> The question boils down to: should we fire error callbacks from queued 
> tasks after the PeerConnection has been closed?

Counter-question: Does it hurt anything to do so? It could be seen as 
part of "cleaning house".
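
[A minimal sketch of the "cleaning house" option being debated above. All 
names here are illustrative, not from the spec: a toy operations queue that, 
on close(), fails every still-pending operation exactly once, so the 
"exception, error callback, or success callback" contract is preserved.]

```javascript
// Hypothetical sketch of an operations queue that "cleans house" on close.
// Not the spec's algorithm -- just an illustration of the contract.
class OperationsQueue {
  constructor() {
    this.closed = false;
    this.pending = [];
  }
  enqueue(run, onSuccess, onError) {
    if (this.closed) {
      // Outcome 1: a synchronous exception when already closed.
      throw new Error('InvalidStateError: connection is closed');
    }
    this.pending.push({ run, onSuccess, onError });
  }
  processNext() {
    const op = this.pending.shift();
    if (!op) return;
    try {
      // Outcome 3: the operation runs and the success callback fires.
      op.onSuccess(op.run());
    } catch (e) {
      // Outcome 2: the operation fails and the error callback fires.
      op.onError(e);
    }
  }
  close() {
    this.closed = true;
    // "Cleaning house": every queued-but-unexecuted operation gets its
    // error callback, so no call silently vanishes.
    for (const op of this.pending.splice(0)) {
      op.onError(new Error('InvalidStateError: connection is closed'));
    }
  }
}
```

[Under this sketch, an operation queued before close() still reports failure 
asynchronously, rather than never calling back at all -- which is the 
property Harald wants guaranteed.]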

>
> The motivation for my view is that I think the PeerConnection is 
> closed and done and should stop all its operations after the script 
> has been notified thereof.
>
> This might be a case where we should ask for advice from some external 
> group (e.g. public-script-coord) so we don't violate some Web Platform 
> level properties.

Sounds like something to ask Anne on Wednesday.
I *think* futures are an example of the "must succeed or fail" contract, 
but I don't know what the state of a future is if neither happens before 
the context is destroyed.
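
[For what it's worth, with Promises (the eventual successor of the futures 
discussed here) there is a third state: a promise whose executor never calls 
resolve() or reject() simply stays pending forever, and no handler ever runs. 
The helper names below are illustrative only.]

```javascript
// A promise that is never resolved nor rejected: it remains pending
// for the lifetime of the realm, and neither callback ever fires.
function neverSettles() {
  return new Promise(() => {
    // Executor returns without calling resolve() or reject().
  });
}

// Observe the pending state indirectly by racing against a timeout.
function stateAfter(ms, promise) {
  const timeout = new Promise(resolve =>
    setTimeout(() => resolve('pending'), ms));
  return Promise.race([
    promise.then(() => 'fulfilled', () => 'rejected'),
    timeout,
  ]);
}
```

[This is exactly the "neither success nor failure" hole: if the context is 
torn down while such a promise is pending, the caller is never notified 
either way.]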

>
> /Adam
>
Received on Tuesday, 28 May 2013 09:24:38 UTC