
Re: Sync API for workers

From: Jonas Sicking <jonas@sicking.cc>
Date: Mon, 3 Sep 2012 19:30:52 -0700
Message-ID: <CA+c2ei_FLZhibejpcSGn7OEzKb5QWu-fzPOf-awVaGNmOHaEWA@mail.gmail.com>
To: Glenn Maynard <glenn@zewt.org>, Andrea Marchesini <amarchesini@mozilla.com>
Cc: David Bruant <bruant.d@gmail.com>, "public-webapps@w3.org" <public-webapps@w3.org>
On Mon, Sep 3, 2012 at 4:47 PM, Glenn Maynard <glenn@zewt.org> wrote:
> On Mon, Sep 3, 2012 at 4:32 PM, Jonas Sicking <jonas@sicking.cc> wrote:
>> It seems hard to ensure that deadlocks can't happen if we try to allow
>> blocking calls on generic MessagePorts, this is why we haven't been
>> interested in doing that. I'm not saying it's impossible, but if
>> someone wants to propose this, please keep in mind that we're not
>> interested in proposals which allow deadlocks, so you'll need to prove
>> that your proposal can't cause deadlocks.
> (See below.)
>> Another problem you have is that the A, B and C events aren't run from
>> the event loop like normal events. They are instead run from whatever
>> callstack existed when someone decided to make synchronous call to the
>> parent. This will give web developers exactly the same problem as
>> we've had with Gecko code spinning the event loop. When doing
>> something like that, you have to be absolutely sure that all code
>> which exists up your call stack can deal with all of these messages
>> getting dispatched. And all of those messages have to be able to deal
>> with being dispatched under the existing callstack.
> I think all of the problems you're describing only happen if there's just
> one channel that you can post messages to, eg. if you can't block on
> MessagePort but only the global port.  I think we can find a solution for
> the MessagePort problem.  Once you can block on specific MessagePorts, you
> no longer have the confusion of getMessage() returning messages meant for
> other APIs.  (After all, isn't that what MessageChannels are for?)
> Conceptually, I think this is possible.  You should only be able to perform
> a blocking getMessage if the other side of the port is in a dedicated worker
> who is a descendant of the current thread.  Here's an attempt:
> - Add an internal flag to MessagePort, "blocking permitted", which is
> initially set.
> - When a MessagePort "port" is transferred from source to dest,
>     - If source is an ancestor of dest, the "blocking permitted" flag of
> "port" is cleared.  (This is a "down" transfer.)
>     - Otherwise, if source is a descendant of dest, the "blocking permitted"
> flag of "port"'s entangled port is cleared.  (This is an "up" transfer.)
>     - Otherwise, if source == dest, do nothing.
>     - Otherwise, the "blocking permitted" flag of both "port" and its
> entangled port are cleared.  (For example, a port was transferred to a
> shared worker.)
> - When the "blocking permitted" flag of any MessagePort is cleared, any
> getMessage calls blocking on that port throw an exception.
> - Calling getMessage on a port (with a nonzero timeout) whose "blocking
> permitted" flag is cleared throws the same exception.
> - Additionally, calling getMessage on a port (with a nonzero timeout) when
> neither it nor its entangled port has ever been transferred to another
> thread throws an exception.  (Blocking for data when the current thread
> holds both sides of the port guarantees a deadlock.)
> In other words, if a port is transferred "up" the thread tree, then it's
> allowed to block downwards, but any port that's ever been transferred "down"
> can not.  If you transfer a port down and then back up, then neither side
> can ever block on the port (the flag has been cleared on both sides).  (The
> "clear the entangled port's flag" would presumably actually mean sending a
> control message over the pipe, telling the other side to clear the flag.)
> This works for dedicated workers, where the ancestor/descendant concepts
> make sense.  This wouldn't work for shared workers, which would never be
> able to block.  (That's hard, since shared workers create cycles.  I don't
> think any current proposal can support shared workers while also disallowing
> deadlocks.)
> Now, this approach can go one of two ways: we can either allow blocking up
> the tree or down the tree, but we'd have to pick one or the other.  I'm
> inclined to recommend blocking *down* the tree, since that allows use cases
> like the ones you mentioned, eg. starting a thread to do IndexedDB calls,
> which you (the parent) can then block on.

We can't generically block on children since we can't let the main
window block on a child. That would effectively permit synchronous IO
from the main thread, which is not something we want to allow.

So if we're only choosing one direction (which is definitely the
simpler thing to do), then it has to be that you can only block "up"
the tree.

Also, the last rule ("Otherwise, the "blocking permitted" flag of both
"port" and its entangled port are cleared") would have to apply any
time a port is sent through a generic port, rather than through a
dedicated worker's parent/child channel. When communicating over a
generic port we never have any idea what is on the receiving end. And
what is on the receiving end can change between the time a message is
sent and the time it is received.

>> 1.1 is nifty in that it allows us to use events while dealing with
>> replies from multiple handlers. But it seems like it adds a feature
>> that doesn't have any good use cases (at least I haven't heard any),
>> solely for the purpose of giving us a good reason for using events.
>> The result is both more code for us, and more API surface and syntax
>> for developers.
>> So I strongly prefer doing proposal 1 or 2 instead.
> I believe what those proposals are effectively doing is creating a single
> separate messaging channel for sync messages.  That's basically the same
> effect as above, except in a way that introduces more APIs to the platform,
> separates synchronous messaging from async messaging more than necessary,
> and loses a lot of the flexibility of MessagePorts.

Yes, you are correct that 1 and 2 create a separate channel for
synchronous messages.

Your proposal makes it possible for pages to avoid the problems
described in my email by setting up a separate channel used for
synchronous messages. But some of the problems still remain. As soon
as a message channel is used for both synchronous and asynchronous
messages you can easily get into trouble. If someone calls the
blocking waitForMessage() function and receives a message which was
intended to be delivered asynchronously, there is no good recourse.
Basically any time that happens there are only bad options available,
many of which have subtle problems that only happen intermittently,
like the ones I described in my initial email.

Since that is the case, I think the best solution is to always force
separate channels to be used for synchronous and asynchronous
messages.

However, we could technically still allow synchronous message channels
other than the ones in proposals 1 and 2. Something like the
following would work, though it is fairly complex:

Introduce a new SyncMessageChannel object. When created it has two
properties, syncPort and asyncPort. The syncPort object is like a
normal MessagePort object, but has a blocking waitForMessage function
*instead of* the onmessage attribute. asyncPort looks like a normal
MessagePort (possibly with postMessage replaced by postSyncMessage
for clarity).

The syncPort object can only be sent through an asyncPort object, a
clone thereof, or through the implicit port of a dedicated Worker.

The asyncPort object can only be sent through a syncPort object, a
clone thereof, or through the implicit port of a dedicated Worker.

All in all this is a much more complicated setup though. I think it'd
be worth keeping the simpler API from proposals 1 or 2 even if we do
introduce SyncMessageChannel, since that likely covers the majority
of use cases.

/ Jonas
Received on Tuesday, 4 September 2012 02:31:51 UTC
