
Re: The key custom elements question: custom constructors?

From: Jonas Sicking <jonas@sicking.cc>
Date: Thu, 16 Jul 2015 12:07:50 -0700
Message-ID: <CA+c2ei_NXMW6htHRc8QdN4s0tD5-ZJ2vc4jT=-qqmwTJs=VkoA@mail.gmail.com>
To: Domenic Denicola <d@domenic.me>
Cc: Anne van Kesteren <annevk@annevk.nl>, Olli Pettay <olli@pettay.fi>, Travis Leithead <travis.leithead@microsoft.com>, public-webapps <public-webapps@w3.org>
On Thu, Jul 16, 2015 at 9:49 AM, Domenic Denicola <d@domenic.me> wrote:
> From: Anne van Kesteren [mailto:annevk@annevk.nl]
>> I think the problem is that nobody has yet tried to figure out what invariants
>> that would break and how we could solve them. I'm not too worried about
>> the parser as it already has script synchronization, but cloneNode(), ranges,
>> and editing, do seem problematic. If there is a clear processing model,
>> Mozilla might be fine with running JavaScript during those operations.
> Even if it can be specced/implemented, should it? I.e., why would this be OK where MutationEvents are not?

I think there were two big problems with MutationEvents.

From an implementation point of view, the big problem was that we
couldn't use an implementation strategy like:

1. Perform requested task
2. Get all internal data structures and invariants updated.
3. Fire MutationEvents callback.
4. Return to JS.

Since step 4 can run arbitrary webpage logic, it's fine for step 3,
which runs right before it, to do so as well. I.e. we could
essentially treat steps 3 and 4 as the same.

This was particularly a problem for DOMNodeRemoved, since it was
required to run *before* the requested task was performed. But it was
also somewhat a problem for DOMNodeInserted, since it could be
interpreted as something that should be interleaved with other
operations, for example when a single DOM API call caused multiple
nodes to be inserted.
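A toy model can make the difference concrete. This is plain JavaScript with no real DOM; all names (makeStore, fireBefore, etc.) are invented for illustration, not actual engine or platform APIs:

```javascript
// Toy "node store" with a remove operation and a synchronous removal
// callback. fireBefore=false models the easy strategy (1. perform task,
// 2. update invariants, 3. fire callback, 4. return to JS); fireBefore=true
// models DOMNodeRemoved, which must fire before the mutation.
function makeStore(onRemoved, fireBefore) {
  const nodes = new Set(["a", "b", "c"]);
  return {
    nodes,
    remove(node) {
      if (fireBefore) {
        // Callback runs mid-operation, before internal state is updated --
        // the hard case for implementations.
        onRemoved(node, nodes);
        nodes.delete(node);
      } else {
        // Callback runs after all state is consistent, just before
        // returning to JS.
        nodes.delete(node);
        onRemoved(node, nodes);
      }
    },
  };
}

// Fire-after: the callback always observes fully updated state.
const after = makeStore((node, nodes) => {
  console.log(`after: ${node} still present? ${nodes.has(node)}`);
}, false);
after.remove("a"); // logs "after: a still present? false"

// Fire-before: the callback sees the node still in place, and any
// mutation it performs interleaves with the engine's own operation.
const before = makeStore((node, nodes) => {
  console.log(`before: ${node} still present? ${nodes.has(node)}`);
  nodes.delete("b"); // reentrant mutation mid-operation
}, true);
before.remove("a"); // logs "before: a still present? true"
console.log([...before.nodes]); // ["c"]
```

In the fire-after case the engine never has to worry about its data structures being touched mid-operation; in the fire-before case it does.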

Like Anne says, if it had been better defined when the callbacks
should happen, and if they had all been defined to happen after all
internal data structures had been updated, but before the API call
returns, then that would have been much easier to implement.

The second problem is that it forces webpages to deal with reentrancy
issues. Synchronous callbacks are arguably just as big a problem for
webpages as they are for browser engines. It meant that the callback
which is synchronously called when a node is inserted might remove
that node. Or it might remove some other node, or make a ton of other
changes.
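A sketch of that hazard, again as a toy with invented names (insert, remove, handlers) rather than real DOM code: each callback does one thing that looks reasonable in isolation, but together they reenter the insertion logic.

```javascript
// Toy synchronous-notification system, modelled loosely on mutation
// events. All names are made up for illustration.
const handlers = { inserted: [], removed: [] };
const tree = [];
let depth = 0;

function fire(event, node) {
  for (const h of handlers[event]) h(node); // synchronous: arbitrary code
}

function insert(node) {
  depth++;
  if (depth > 10) throw new Error("runaway reentrancy");
  tree.push(node);
  fire("inserted", node);
  depth--;
}

function remove(node) {
  tree.splice(tree.indexOf(node), 1);
  fire("removed", node);
}

// Each callback seems safe in and of itself...
handlers.inserted.push((node) => {
  if (node === "widget") remove("placeholder"); // swap out a placeholder
});
handlers.removed.push((node) => {
  if (node === "placeholder") insert("fallback"); // ...but this reenters insert()
});

insert("placeholder");
insert("widget"); // one insertion cascades into a removal plus another insertion
console.log(tree); // ["widget", "fallback"]
```

A single call to insert() ends up running a removal and a second, nested insertion before it returns, which is exactly the kind of cascade the paragraph above describes.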

Callbacks which are called synchronously have a huge responsibility
not to do "crazy things". This gets quite complex as code bases grow.
A synchronous callback might do something that seems safe in and of
itself, but that in turn triggers a couple of other synchronous
callbacks, which trigger yet more callbacks, which end up reentering
something that was never meant to be reentered.
The only way to deal with this is for webpages to do the absolute
minimum they can in the synchronous callback, and to defer everything
else to run asynchronously. That is what implementations try to do:
the code that runs during element construction touches as little of
the outside world as possible, ideally nothing at all.
This is a problem inherent with synchronous callbacks and I can't
think of a way to improve specifications or implementations to help
here. It's entirely the responsibility of web authors to deal with
this complexity.

/ Jonas
Received on Thursday, 16 July 2015 19:08:47 UTC
