Re: Operations in invalid states: Exceptions don't make sense.

On 6/25/2013 11:11 AM, Adam Bergkvist wrote:
> On 2013-06-24 18:04, cowwoc wrote:
>> On 24/06/2013 8:47 AM, Adam Bergkvist wrote:
>>> On 2013-06-24 13:44, Jim Barnett wrote:
>>>> Yes, but are you going to signal an error if the developer makes a
>>>> call when you're in a processing state?  In that case, you'll end up
>>>> with a lot of polling code, sitting around waiting for the state to
>>>> change.  That's an ugly programming model.  Now if it's the case that
>>>> some operations can succeed when you're in the processing state, then
>>>> that's a good argument for having a processing state, since it now
>>>> behaves like a first-class state, with a differentiated response to
>>>> different events.  But if all operations are going to fail until the
>>>> processing is done, the queuing model is cleaner.
>>>>
>>>
>>> Yes, an API call in the wrong state should result in a state error.
>>> Regarding polling the state, we already have state verification with
>>> the queuing model; the difference is that it's done async (for some
>>> operations). It's usually not a problem since this kind of state is
>>> mostly based on application context. For example, the PeerConnection
>>> will be in a processing state after a call to setLocalDescription()
>>> and until the success or error callback fires.
>>>
>>> Code that uses the success and error callbacks will look the same.
>>> It's only questionable code like (Jan-Ivar's example):
>>>
>>>     // Bad code. state=have_local_offer
>>>     pc.setRemoteDescription(answer1, success, mayfail);
>>>     pc.setRemoteDescription(answer2, success, mayfail);
>>>
>>> that will behave differently. The second call will *always* throw an
>>> exception because the PeerConnection is in a processing state as a
>>> result of the first call. With a queue, the behavior is derived from
>>> rules that depend on what happens to the first call.
>>>
>>> The processing states are real states. You can do anything besides
>>> calling some of the sensitive operations we currently queue.
>>>
>>> /Adam
>>>
>>
>>      Adam, you're wrong to assume that users won't receive multiple
>> events in parallel. Why? Because events can come from two sources:
>>
>>   * The PeerConnection
>>   * The server (used during the bootstrap process)
>>
>> For example:
>>
>>   * PeerConnection is processing a command such as createAnswer(),
>>     updateIce(), etc.
>>   * The remote peer disconnects or sends an ICE candidate, requiring me
>>     to invoke PeerConnection.close() or addIceCandidate()
>>
>> I've already been forced to implement an application-level queue in the
>> opposite direction because the server may return HTTP 503 at any time,
>> requiring me to sleep and retry the operation. This means that when the
>> PeerConnection fires an event I cannot simply send it to the server: I
>> have to queue it and send it at a later time.
>
> Are there any reasons to fail these calls because of the signaling 
> state? It's not in the spec but a while ago we came to the conclusion 
> that close() should always work (regardless of state; no-op when it 
> has nothing to close).

 1. Even if close() always works, we need to define what happens to the
    operation that is in progress when close() is invoked in parallel.
 2. What about the other functions? What happens when the server sends
    an SDP offer followed by multiple ICE candidates? Is WebRTC expected
    to process these in parallel or should users queue incoming server
    messages and only process them one at a time? (See the sketch after
    this list.)
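
     To illustrate item 2, here is a rough sketch of the kind of
application-level queue I have in mind for the server-to-WebRTC
direction. The "pc" variable and the message format ({type, sdp,
candidate}) are assumptions; the point is only that each message is
handed to the PeerConnection after the previous operation's success or
error callback has fired:

    // Sketch only; serializes incoming signaling messages so that the
    // PeerConnection never sees two operations in parallel.
    var incoming = [];   // messages received from the server
    var busy = false;    // true while the PeerConnection is processing

    function onServerMessage(message) {
        incoming.push(message);
        processNext();
    }

    function processNext() {
        if (busy || incoming.length === 0)
            return;
        busy = true;
        var message = incoming.shift();
        var done = function() { busy = false; processNext(); };

        if (message.type === "offer" || message.type === "answer") {
            pc.setRemoteDescription(new RTCSessionDescription(message),
                done, done);
        } else if (message.type === "candidate") {
            // Simplified; error handling for addIceCandidate() omitted.
            pc.addIceCandidate(new RTCIceCandidate(message.candidate));
            done();
        }
    }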

     The more you dig into this, the more you'll realize that you need a
queue in each direction: from the server to WebRTC and from WebRTC back
to the server.
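
     And in the opposite direction, a rough sketch of the outgoing queue
I described above, where the server may answer HTTP 503 and force a
delayed retry. sendToServer() and the five-second delay are placeholders
for whatever transport and back-off policy the application uses:

    // Sketch only; queues outgoing signaling messages (e.g. ICE
    // candidates fired by the PeerConnection) and retries on HTTP 503.
    // sendToServer(message, onSuccess, onError) is a placeholder.
    var outgoing = [];
    var sending = false;

    function queueForServer(message) {
        outgoing.push(message);
        sendNext();
    }

    function sendNext() {
        if (sending || outgoing.length === 0)
            return;
        sending = true;
        sendToServer(outgoing[0], function() {
            outgoing.shift();               // delivered; move on
            sending = false;
            sendNext();
        }, function(status) {
            sending = false;
            if (status === 503) {
                // Server is busy; retry the same message later.
                setTimeout(sendNext, 5000);
            } else {
                outgoing.shift();           // unrecoverable; drop it
                sendNext();
            }
        });
    }

    pc.onicecandidate = function(event) {
        if (event.candidate)
            queueForServer({ type: "candidate",
                             candidate: event.candidate });
    };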

Gili

Received on Wednesday, 26 June 2013 01:56:24 UTC