[whatwg] Combining the DedicatedWorker and SharedWorker interfaces

On Nov 14, 2008, at 3:59 AM, Ian Hickson wrote:

>> For the sake of completeness, a connect/startConversation method on a
>> worker really should automatically open the receiving port - this is
>> what examples posted so far implied, and it would cause a lot of
>> aggravation if it didn't. I know I'm often forgetting to open the port
>> when writing my tests, and it's not a very easy mistake to spot.
>
> What do you mean by "open the port"? Do you mean calling start()? If so,
> that should happen automatically when you set onmessage the first time,
> per spec.


Oh, that's my mistake - I totally didn't expect that it could have such
a side effect. It seems weird that addEventListener("message", ...) does
not have the same effect, doesn't it?
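
As an aside, here is a minimal sketch of the difference, as I read the
spec:

    var channel = new MessageChannel();

    // Assigning to onmessage is specified to implicitly enable the port,
    // as if start() had been called, so queued messages get dispatched:
    channel.port1.onmessage = function (event) { /* ... */ };

    // Registering the same listener with addEventListener() has no such
    // side effect; without an explicit start() the messages just sit in
    // the port's queue:
    channel.port2.addEventListener("message", function (event) { /* ... */ }, false);
    channel.port2.start();   // easy to forget, and hard to spot when forgotten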

>> In an async processing model, there is simply no way for the receiver
>> to have a list of all objects that were posted to it - it's exactly
>> the reason for the existence of the queue that events are delivered
>> asynchronously and cannot be peeked before being delivered. For
>> example, in a multi-process implementation, these events may still be
>> across a process boundary.
>
> It actually doesn't really matter if there is something that has been
> posted but not yet received, because that is indistinguishable (as far
> as I can tell) from the case of the worker having shut down a split
> second before that object was posted.

I'm not sure what state you mean by "shut down" here - the spec does not
define this, and shutting down one side of an async communication
channel is complicated (see e.g. a TCP/IP state diagram). Anyway, the
contents of "the worker's ports" are used further on to define "active
needed worker" and "suspendable worker", concepts that are central to
defining worker lifetime. If ports that are still in the event queue are
not important, then the spec should not say that they are included in
"the worker's ports". That would resolve the concurrency problem, but I
don't think the resulting behavior would be desirable.
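
To make the race concrete, here is roughly the situation I have in mind
(the script name and the timing are hypothetical), using the
postMessage(message, ports) form from the drafts:

    var worker = new Worker("maybe-finished.js");   // hypothetical script
    var channel = new MessageChannel();

    // The worker may be just about to reach the end of its lifetime. The
    // port below can then sit in its event queue, never delivered - and
    // whether it still counts among "the worker's ports" is exactly what
    // decides whether the worker is an "active needed worker".
    worker.postMessage("here is a port for you", [channel.port2]);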

>> It is not possible to have a symmetric relationship in an asynchronous
>> messaging model - we need a multi-step entangling/unentangling
>> protocol, so the relationship is necessarily asymmetric. One can't
>> freeze another process (or really, even another thread) to change
>> something in it synchronously.
>
> The above is not a requirement, it's just a description of the concept.
> I don't think anything actually depends on it being symmetric; all the
> parts that actually entangle ports have (or, are intended to have,
> maybe I missed some) pretty well-defined synchronisation points.

OK, say there is a pair of entangled ports in different
threads/processes, portA and portB. We concurrently post both with
postMessage, which causes the ports to be cloned. From the point of view
of the first thread, portA is now unentangled, and portA' is entangled
with portB. From the point of view of the second thread, portB is
unentangled, and portB' is entangled with portA.

Next, the threads send asynchronous notifications to each other, asking
to update entanglement information. The first thread's notification asks
portB to become entangled with portA'. But portB has itself just been
cloned, so it will need to forward this notification to portB' (and
possibly further, because portB' may have been posted and cloned again).
This is already unduly complicated.
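
In code, the setup looks roughly like this (workerOne, workerTwo and
somePort are placeholders; the "concurrently" part is, of course, the
whole problem):

    // Thread 1 (the page): create a channel and hand one end to workerOne.
    var channel = new MessageChannel();
    var portA = channel.port1;                        // stays on the page
    workerOne.postMessage("your end", [channel.port2]);
    // Inside workerOne, the received end (event.ports[0]) plays the role
    // of portB, so portA and portB are entangled across two threads.

    // Later, concurrently:
    //  - the page posts portA into another worker, cloning it into portA':
    workerTwo.postMessage("take portA", [portA]);
    //  - at the same moment, workerOne forwards portB elsewhere, cloning
    //    it into portB', e.g. inside workerOne:
    //        somePort.postMessage("take portB", [portB]);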

Now consider that all these ports need to be destroyed sooner or later,
but not too soon. This basically means that we now have a many-to-many
distributed GC system. It was bad enough when we had to garbage-protect
ports between threads, because that already required modifying the
JavaScript interpreter to support a certain case of distributed GC. But
this example basically shows that we need a full-blown distributed GC
system just to implement port cloning.

> For example, any method that entangles two ports blocks until both
> threads are synchronised and entangled.

This will cause deadlocks - if, in the above scheme, portB' is then sent
back to the first thread as portB'', each thread ends up waiting for the
other's entangle operation, and synchronization can never finish.
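
The problematic round trip is easy to produce - continuing the sketch
above, something like this inside workerOne would do it:

    // Inside workerOne: forward the received port straight back to the
    // page, so a clone of it (portB'' in the naming above) lands back on
    // thread 1. If every entangling step blocks until both sides are
    // synchronised, the page can be blocked entangling portA' while
    // workerOne is blocked entangling portB'' - each waiting for the
    // other, so neither entangle ever finishes.
    onmessage = function (event) {
        postMessage("sending it back", [event.ports[0]]);
    };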

> (The spec is somewhat implicit about this, but the intent is that
> workers really be implemented either as two system threads, one doing
> communication and one running the JS, or by one system thread that runs
> the JS in an interruptible fashion. In particular, doing something that
> synchronises with a worker isn't expected to have to wait for that
> worker to finish running its current JS.)

The JS thread will need to be interrupted in any case - we certainly
don't want it to read a half-written pointer from memory or something.
Adding memory barriers around access to data that can be modified
externally is not sufficient, because the MessagePort algorithms are not
designed in a lock-free fashion (lock-free algorithms that rely only on
read/write atomicity do exist, but these algorithms are not of that
kind). Locking around all MessagePort functions will cause deadlocks, as
demonstrated above, and is generally against best practices. A middle
ground may or may not exist, and it is definitely hard to find.

I don't think that pursuing a design that relies on locking is
particularly promising - for the same reason that workers do not expose
shared data to JS programmers, it is highly desirable for
implementations not to rely on shared data either (except for a few
well-understood constructs, such as an event queue). So, I think that
the specs (Web Workers and HTML5 channel messaging) should be purged of
anything that mentions synchronous access to an entangled port's data
structures before they can really be verified for correctness. This is
not straightforward, and may seriously affect the API - e.g., I doubt
that passing MessagePorts around is implementable with reasonable
complexity, and there is not a lot of use in MessagePorts if they cannot
be passed around.
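
For reference, by "passing MessagePorts around" I mean the basic pattern
below, which is also where most of their value lies (the worker names
are placeholders):

    // The page creates a private channel and gives one end to each of two
    // workers, so that they can then talk to each other directly:
    var channel = new MessageChannel();
    producerWorker.postMessage("your peer", [channel.port1]);
    consumerWorker.postMessage("your peer", [channel.port2]);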

- WBR, Alexey Proskuryakov

Received on Friday, 14 November 2008 01:43:10 UTC