Re: [D3E] Possible Changes to Mutation Events

On Wed, 16 Jul 2008 16:18:39 -0500, Jonas Sicking <jonas@sicking.cc> wrote:
> 
> Laurens Holst wrote:
> >
> > I see, so the motivation for the change request to DOMNodeRemoved is 
> > that the second change request (throwing events at the end, after all 
> > operations) would be impossible to do if events are not always thrown 
> > at the end. And the motivation for throwing events at the end seems to 
> > be for a specific kind of optimisation called ‘queuing of events’. I 
> > would appreciate it if someone could describe this optimisation.
> 
> The problem we are struggling with is that the current design is
> very complex to implement. Any code we have that somewhere inside
> requires the DOM to be mutated means that we have to recheck all
> invariants after each mutation. This is because during the mutation a
> mutation event could have fired which has completely changed the
> world under us. These changes can be as severe as totally changing
> the structure of the whole DOM, navigating away to another webpage
> altogether, and/or closing the current window.
> 

I understand your concerns, and while your proposed solution would solve your problem, it pushes this exact same burden onto web authors. Say we go ahead and change the spec so that all the events are queued up and fired at the end of a compound operation. Now listeners that receive these events cannot be sure the DOM hasn't changed out from under *them* as part of a compound operation. Consider the following example:

<html><body>
 <style>.lastLink { color: red }</style>
 <a href="http://example.org">i want to be the last link</a>
 <div id="emptyMe">
  <a href="http://example.org">example two</a>
  <a class="lastLink" href="http://example.org">example three</a>
 </div>
 <script type="text/javascript">
    var numLinks = document.links.length;
    document.addEventListener( "DOMNodeRemovedFromDocument", function(e) {
        if (e.target.nodeName == 'A') { // or e.relatedNode.nodeName as the case may be
            if (--numLinks > 0) {
                document.links[ numLinks - 1 ].className = 'lastLink';
            }
        }
    }, true );
 </script>
</body></html>

If you did something like document.getElementById('emptyMe').innerHTML = '' and considered it a compound operation, the code above, which works with current implementations, would break because numLinks would be out of sync with document.links.length and the array indexing would fail. To avoid this scenario, the code has to be rewritten to re-query document.links.length instead of assuming numLinks will always be valid. This is exactly the same problem you're currently having - the DOM is changing under the code unexpectedly, forcing it to recheck its assumptions.
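To make that concrete, here is a rough sketch (not a recommendation) of the defensive rewrite the listener above would need under your queued model - re-reading the live document.links collection each time instead of trusting its own counter:

document.addEventListener( "DOMNodeRemovedFromDocument", function(e) {
    if (e.target.nodeName == 'A') {
        // Don't trust any cached count: other removals in the same compound
        // operation may already have taken effect by the time this listener
        // runs, so re-query the live collection every time.
        var liveCount = document.links.length;
        if (liveCount > 0) {
            document.links[ liveCount - 1 ].className = 'lastLink';
        }
    }
}, true );

Every listener that caches anything about the tree would need the same sort of treatment.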

You could argue that this example is contrived (and it is), but I think it still illustrates the point. The current interleaving of mutations and events is bad for (some) implementations and good for web authors. Your proposed interleaving is good for (some) implementations and bad for web authors. In both cases it's for the same reason - being able to make assumptions simplifies code, so the side that gets to make those assumptions is better off, and the other side has to revalidate its assumptions.

I also consider this entire problem to be more of an implementation detail than anything else. The current spec can pose a security risk if not properly implemented, but that's true of any spec, and the security risk identified here is really only a problem for C/C++ implementations. Speaking as a Java implementor, I prefer the spec as it stands now. It is far simpler for me to assume listeners don't mutate the DOM, and then catch any exceptions that get thrown when that assumption is violated. With the proposed changes, I would have to implement some complicated queuing solution that would dramatically increase memory requirements.

Cheers,
kats

Received on Thursday, 17 July 2008 05:33:54 UTC