RE: Operations in invalid states: Exceptions don't make sense.

In the standard state machine model, firing the event handler is part of handling the event, so the next operation isn't pulled off the queue until the event handler completes. The state therefore can't change while the event handler is running, and the 'wrong state' problem you're worried about can't occur. For this model to work, though, there have to be limitations on what can go in an event handler: it can queue other async operations, but it can't grab the thread and hold on to it.

- Jim
P.S.  Of course, it can still be the case that the state is correct when you queue the operation but has changed by the time the operation is pulled off the queue, which is why you have to check the state when you execute the operation, not when you queue it.
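
Something like this hypothetical sketch is what I have in mind (the names are made up for illustration, not anything from the spec): the state check happens when an operation is pulled off the queue, and the next operation isn't pulled until the current one, including its event handling, has completed.

    // Hypothetical operation queue, for illustration only.
    function OperationQueue(getState) {
      var pending = [];
      var busy = false;

      function pump() {
        if (busy || pending.length === 0) return;
        busy = true;
        var op = pending.shift();
        // State is checked here, at execution time, not at enqueue time.
        if (getState() !== op.requiredState) {
          op.onError(new Error('InvalidStateError'));
          busy = false;
          return pump();
        }
        op.run(function done() {   // called once the operation and its
          busy = false;            // event/callback handling have completed
          pump();                  // only now is the next operation pulled off
        });
      }

      this.enqueue = function (requiredState, run, onError) {
        pending.push({ requiredState: requiredState, run: run, onError: onError });
        setTimeout(pump, 0);       // process asynchronously, like a queued task
      };
    }
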
-----Original Message-----
From: Adam Bergkvist [mailto:adam.bergkvist@ericsson.com] 
Sent: Wednesday, June 26, 2013 2:52 AM
To: Gili
Cc: public-webrtc@w3.org
Subject: Re: Operations in invalid states: Exceptions don't make sense.

On 2013-06-26 03:55, Gili wrote:
> On 6/25/2013 11:11 AM, Adam Bergkvist wrote:
>> On 2013-06-24 18:04, cowwoc wrote:
>>> On 24/06/2013 8:47 AM, Adam Bergkvist wrote:
>>>> On 2013-06-24 13:44, Jim Barnett wrote:
>>>>> Yes, but are you going to signal an error if the developer makes a 
>>>>> call when you're in a processing state?  In that case, you'll end 
>>>>> up with a lot of polling code, sitting  around waiting for the 
>>>>> state to change.  That's an ugly programming model.  Now if it's 
>>>>> the case that some operations can succeed when you're in the 
>>>>> processing state, then that's a good argument for having a 
>>>>> processing state, since it now behaves like a first-class state, 
>>>>> with a differentiated response to different events.  But if all 
>>>>> operations are going to fail until the processing is done, the queuing model is cleaner.
>>>>>
>>>>
>>>> Yes, an API call in the wrong state should result in a state error.
>>>> Regarding polling the state, we already have state verification with 
>>>> the queuing model; the difference is that it's done async (for some 
>>>> operations). It's usually not a problem since this kind of state is 
>>>> mostly based on application context. For example, the 
>>>> PeerConnection will be in a processing state after a call to 
>>>> setLocalDescription() and until the success or error callback fires.
>>>>
>>>> Code that uses the success and error callbacks will look the same.
>>>> It's only questionable code like (Jan-Ivar's example):
>>>>
>>>>     // Bad code. state=have_local_offer
>>>>     pc.setRemoteDescription(answer1, success, mayfail);
>>>>     pc.setRemoteDescription(answer2, success, mayfail);
>>>>
>>>> that will behave differently. The second call will *always* throw 
>>>> an exception because the PeerConnection is in a processing state as 
>>>> a result of the first call. With a queue, the behavior is derived 
>>>> from rules that depend on what happens to the first call.
>>>>
>>>> The processing states are real states. You can do anything besides 
>>>> calling some of the sensitive operations we currently queue.
>>>>
>>>> /Adam
>>>>
>>>
>>>      Adam, you're wrong to assume that users won't receive multiple 
>>> events in parallel. Why? Because events can come from two sources:
>>>
>>>   * The PeerConnection
>>>   * The server (used during the bootstrap process)
>>>
>>> For example:
>>>
>>>   * PeerConnection is processing a command, createAnswer(), updateIce(),
>>>     etc.
>>>   * The remote peer disconnects or sends an ICE candidate, requiring me
>>>     to invoke PeerConnection.close() or addIceCandidate()
>>>
>>> I've already been forced to implement an application-level queue in 
>>> the opposite direction because the server may return HTTP 503 at any 
>>> time, requiring me to sleep and repeat the operation. This means 
>>> that when PeerConnection fires an event I cannot simply send it to 
>>> the server: I have to queue it and send it at a later time.
>>
>> Are there any reasons to fail these calls because of the signaling 
>> state? It's not in the spec but a while ago we came to the conclusion 
>> that close() should always work (regardless of state; no-op when it 
>> has nothing to close).
>
>  1. Even if close() always works, we need to define what happens to the
>     operation that is in progress when close() is invoked in parallel.
>  2. What about the other functions? What happens when the server sends
>     an SDP offer followed by multiple ICE candidates? Is WebRTC expected
>     to process these in parallel or should users queue incoming server
>     messages and only process them one at a time?
>
>      The more you dig into this, the more you'll realize that you must 
> have a queue between the server and WebRTC and vice versa.
>

I'm not arguing against basic queuing of operations in general. We have a lot of API calls that spawn async tasks before they return, and there's no sane reason why an async task created by a later call should be processed before the async task of a previous call (if they belong to the same category of operations or "queue").
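
For example (illustrative only; pc, offer and the callback names are placeholders), the async task spawned by the second call here should never be processed before the task spawned by the first:

    // Two calls in the same category/"queue"; basic queuing just
    // guarantees the second task isn't processed before the first.
    pc.setRemoteDescription(offer, onRemoteSet, onError);
    pc.createAnswer(onAnswerCreated, onError);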

What I do believe is that relying too heavily on the queue to solve everything has consequences that are unusual for a web API. For example, when the app gets a signalingstatechange event saying that the current state is "x", it's not certain that a function valid in state "x" will even pass the state check when called from the event handler, since an operation already in the queue, or one that has already run, may change the state before my operation is picked up.
InvalidStateError is usually thrown as an exception, and exceptions in JavaScript usually mean programming errors. I'm not familiar with any other web API that uses an async InvalidStateError the way we would have to.
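
Concretely (a rough sketch; pc and the handler names are placeholders), the app ends up receiving the state error as just another async failure, even though it checked the state right before the call:

    pc.onsignalingstatechange = function () {
      if (pc.signalingState !== 'have-remote-offer') return;
      // The state is "have-remote-offer" right now, but a queued
      // operation may still change it before createAnswer() is picked
      // up, so InvalidStateError would arrive in the error callback
      // instead of being thrown at call time.
      pc.createAnswer(function (answer) {
        pc.setLocalDescription(answer, onLocalSet, onError);
      }, onError);
    };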

I agree with you that having PeerConnection handle as much as possible in parallel (or queued), like incoming ICE candidates, is desirable.

/Adam

Received on Wednesday, 26 June 2013 12:49:25 UTC