
Re: [whatwg] onclose events for MessagePort

From: Jonas Sicking <jonas@sicking.cc>
Date: Mon, 3 Feb 2014 11:53:14 -0800
Message-ID: <CA+c2ei_Yuywghk12GO9Rdhe=_TGrtX2d_f1_-sfuWskifoqiNQ@mail.gmail.com>
To: Ian Hickson <ian@hixie.ch>
Cc: whatwg <whatwg@lists.whatwg.org>, Ehsan Akhgari <ehsan.akhgari@gmail.com>

On Thu, Jan 30, 2014 at 11:41 AM, Ian Hickson <ian@hixie.ch> wrote:
> On Fri, 13 Dec 2013, Jonas Sicking wrote:
>> On Fri, Dec 13, 2013 at 3:29 PM, Ian Hickson <ian@hixie.ch> wrote:
>> > On Wed, 11 Dec 2013, Jonas Sicking wrote:
>> >> No sync IPC needed. When a port is pinned, you send an async message
>> >> to the process which contains the page for the "other side". When
>> >> that process receives the message you check if the page is currently
>> >> being displayed.
>> >>
>> >> If the page has been completely torn down then you send a message
>> >> back saying that the page is dead and that the promise created during
>> >> pinning should be rejected.
>> >>
>> >> If the page is sitting in the bfcache, you remove it from the bfcache
>> >> and send a message back saying that the page is dead and that the
>> >> promise created during pinning should be rejected.
>> >>
>> >> If the page is displayed, then you add a flag indicating that if the
>> >> page is navigated away from, it should not go into the bfcache and
>> >> that we should send a signal to reject the promise.
>> >>
>> >> Obviously if the process had crashed before we were able to process
>> >> the event, you send a message back to reject the promise.
>> >>
>> >> The same thing is done when unpinning. You send a message to the
>> >> other side saying that it's getting unpinned.
>> >
>> > This means that it's possible to get a lock, have the other side
>> > navigate then go back, then have the other side receive the
>> > notification for the lock. It's this that you need blocking IPC to
>> > prevent. But I guess we could live with that just being possible.
>> Indeed. The idea with bfcache is that going back/forward should be
>> largely transparent to the page itself. So I think it's fine that it's
>> transparent also to the page that's talking to it in this instance.
>> Kicking the page out of bfcache isn't a goal in and of itself. The goal
>> is to prevent other pages from waiting unduly long for a message.
> This basically boils down to being able to flip a switch on a MessagePort
> saying that the port should not be GC'ed and should prevent its owner's
> pages from being bfcached, right? I'm still very uncomfortable with this
> idea of preventing either of these. Having a way to prevent GC on objects
> that would otherwise get GC'ed seems like it would result in leaks, and
> preventing bfcache seems like it would defeat the entire bfcache idea
> (consider what happens if some ad networks start using shared workers to
> make obtaining and showing ads more efficient, and they communicate with
> the host pages with ports that they block bfcache on -- suddenly large
> parts of the Web would have bfcache defeated).
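
In pseudo-JavaScript, the pinning handshake described above amounts to
something like this. All names here are invented for illustration;
nothing in this sketch is proposed API:

```javascript
// Runs in the process hosting the "other side" page when the async
// pin message arrives. pageState is a stand-in for UA-internal state.
function handlePinRequest(pageState) {
  switch (pageState.status) {
    case "dead":
      // Page fully torn down: tell the pinning side to reject the promise.
      return { verdict: "reject" };
    case "bfcached":
      // Evict the page from bfcache, then reject as if it were dead.
      pageState.status = "dead";
      return { verdict: "reject" };
    case "displayed":
      // Flag the page: if it is navigated away from, skip bfcache
      // and send the rejection signal at that point.
      pageState.blockBFCache = true;
      return { verdict: "pinned" };
  }
}
```

A crashed process is handled the same way as "dead": the reply simply
rejects the promise.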

I agree that being able to prevent an object from getting GCed isn't
great; however, any solution in this space is going to require the UA
to retain a bit more memory. The reason we need to retain the
MessagePort object in the solutions discussed so far is that we've
tried to fire the event on the MessagePort object itself. If we
instead fired an event on the global indicating "the other side has
gone away", then the MessagePort wouldn't need to be retained.
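
To make that concrete, here is a rough sketch. The event name
"portclose" and its shape are invented; the point is only that the UA
retains an identifier, not the port object:

```javascript
// Sketch only: "portclose" is an invented event name. Firing on the
// global scope means no MessagePort object has to be kept alive just
// to serve as an event target.
class PortCloseEvent extends Event {
  constructor(portId) {
    super("portclose");
    this.portId = portId; // an identifier, not a reference to the port
  }
}

const globalScope = new EventTarget(); // stand-in for window/worker global

function simulateOtherSideGone(portId) {
  globalScope.dispatchEvent(new PortCloseEvent(portId));
}
```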

However, such a solution requires a different way to identify which
communication channel was severed. We could allow naming ports, and
the port's name would then be included in the "lost connection to the
other side" event. Of course, that requires keeping the name in
memory, which is arguably just as leaky. Or we could come up with
numeric port identifiers, in which case less memory is "leaked".
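
The trade-off between names and numeric ids could look roughly like
this. The registry below is a stand-in for UA-internal bookkeeping;
none of the names are proposed API:

```javascript
// Per live channel, either a developer-chosen name or a cheap numeric
// id is retained -- not the MessagePort object itself.
let nextId = 0;
const liveChannels = new Map(); // id -> optional name

function registerChannel(name) {
  const id = ++nextId;
  liveChannels.set(id, name); // name may be undefined: only the id "leaks"
  return id;
}

function severChannel(id) {
  const name = liveChannels.get(id);
  liveChannels.delete(id);
  // This is the payload a global "lost connection" event would carry.
  return { id, name };
}
```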

Another way to reduce leaks would be to not have an explicit API for
locking and releasing a port. Instead you could indicate with a sent
message that a response is expected, and then allow the other side to
explicitly respond to the message, at which point the port would be
automatically released. This is in theory just as leaky, but the
syntax might encourage fewer leaks in practice.
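
This pattern can already be approximated on top of today's API by
transferring a per-request reply port. The helper names below are
mine; only MessageChannel, postMessage, and start/close are real API:

```javascript
// Per request, a fresh MessageChannel is created and one end is
// transferred along with the message. When the reply arrives, the
// temporary port is closed, so nothing stays implicitly "pinned".
function sendWithReply(port, data) {
  return new Promise((resolve) => {
    const { port1, port2 } = new MessageChannel();
    port1.addEventListener("message", (e) => {
      resolve(e.data);
      port1.close(); // automatic release: no explicit unlock step
    });
    port1.start();
    port.postMessage({ data, replyPort: port2 }, [port2]);
  });
}

// The receiving side answers through the transferred reply port.
function serve(port, handler) {
  port.addEventListener("message", (e) => {
    e.data.replyPort.postMessage(handler(e.data.data));
    e.data.replyPort.close();
  });
  port.start();
}
```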

Another thing that could help is to expose a property somewhere which
contains a list of the currently un-GCable ports. That way a page
could periodically check whether it has leaked any ports and clean
them up.
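
A sketch of what that inspection might look like; the pinnedPorts list
below is a stand-in for the proposed property, which does not exist:

```javascript
// Stand-in for a UA-maintained "currently un-GCable ports" list; the
// name pinnedPorts is invented.
const pinnedPorts = new Set();

function pin(port) { pinnedPorts.add(port); }
function release(port) { pinnedPorts.delete(port); }

// A page could run this periodically to find and clean up leaked ports.
function sweepLeaks(isStillNeeded) {
  for (const port of [...pinnedPorts]) {
    if (!isStillNeeded(port)) release(port);
  }
  return pinnedPorts.size; // ports still legitimately pinned
}
```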

I'm happy to explore any of these options. Ultimately, though, aiming
for "no leaks" isn't really a well-defined goal.

The bfcache issue is solvable as discussed.

>> We could allow bfcaching here by introducing the discussed separate
>> features which allow the page to signal that it's fine with being
>> bfcached even though the error was signaled. That way if the page is
>> revived from bfcache it can send a message on the channel saying "I'm
>> back now" at which point they can resume communicating.
> How would this work when a page has several libraries (e.g. ads, Facebook
> like button, Twitter tweet button), each talking to their shared worker,
> and each with different opinions about whether or not they can handle a
> bfcache situation?

As soon as any of them can't deal with bfcache, the page won't be
bfcached. I.e. you could add an API which indicates "I'm currently
talking through this MessagePort object, but still let me go into
bfcache." If all MessagePorts are accounted for, the page can go into
bfcache. If a single port is not accounted for, the page does not get
bfcached.
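
The accounting rule is simple; roughly (the opt-in flag and the
registry are hypothetical, not proposed API):

```javascript
// Hypothetical accounting: the page may enter bfcache only if every
// MessagePort in active use has opted in via an (invented) flag.
const activePorts = [];

function trackPort(port, { allowBFCache = false } = {}) {
  activePorts.push({ port, allowBFCache });
}

function pageMayEnterBFCache() {
  // A single unaccounted-for port keeps the whole page out of bfcache.
  return activePorts.every((entry) => entry.allowBFCache);
}
```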

> What the spec has now (MessagePort.onerror) only helps for the limited
> case of a process getting killed by the system; it doesn't help for any of
> the features of the Web platform like navigation, Worker.terminate(), etc.
> Maybe the right thing here is for you (browser vendors) to experiment with
> different approaches to this, and for me to just spec whatever comes out
> of that.

I'm happy to experiment. In the meantime I would ask that
MessagePort.onerror be removed, as I don't think any browser vendor
has expressed a desire to implement it.

/ Jonas
Received on Monday, 3 February 2014 19:54:09 UTC
