- From: Jonas Sicking <jonas@sicking.cc>
- Date: Thu, 17 Jul 2008 16:11:33 -0700
- To: Kartikaya Gupta <lists.webapps@stakface.com>
- CC: Doug Schepers <schepers@w3.org>, public-webapps@w3.org
Kartikaya Gupta wrote:
> On Wed, 16 Jul 2008 16:18:39 -0500, Jonas Sicking <jonas@sicking.cc> wrote:
>> Laurens Holst wrote:
>>> I see, so the motivation for the change request to DOMNodeRemoved is
>>> that the second change request (throwing events at the end, after all
>>> operations) would be impossible if events are not always thrown at
>>> the end. And the motivation for throwing events at the end seems to be
>>> for a specific kind of optimisation called "queuing of events". I would
>>> appreciate it if someone could describe this optimisation.
>> The problem we are struggling with is that the current design is very
>> complex to implement. Any code we have that somewhere inside it requires
>> the DOM to be mutated means that we have to recheck all invariants after
>> each mutation. This is because during the mutation a mutation event could
>> have fired which has completely changed the world under us. These changes
>> can be as severe as totally changing the structure of the whole DOM,
>> navigating away to another webpage altogether, and/or closing the
>> current window.
>>
>
> I understand your concerns, and while your proposed solution would
> solve your problem, it pushes this exact same burden onto web authors.
> Say we go ahead and change the spec so that all the events are queued up
> and fired at the end of a compound operation. Now listeners that
> receive these events cannot be sure the DOM hasn't changed out from
> under *them* as part of a compound operation.
In the case where there are multiple people listening to mutation events
for a DOM, and occasionally mutating the DOM from those listeners,
mutation events are already useless.
There is no way you can know that, by the time you get the event, it
represents reality at all. The node you just got a remove-event for
might already be inserted in exactly the same place again.
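To make that concrete, here is a contrived sketch (the listeners and what
they do are made up purely for illustration) of two unrelated scripts
listening on the same document:
// Script A re-inserts any removed node to enforce some invariant of its own.
document.addEventListener("DOMNodeRemoved", function (e) {
  var node = e.target;
  var parent = node.parentNode;   // DOMNodeRemoved fires before the removal
  var next = node.nextSibling;
  setTimeout(function () {
    parent.insertBefore(node, next);   // node is back in exactly the same place
  }, 0);
}, true);
// Script B reacts to the same removal. Any conclusion it draws about the
// node being gone can already be wrong by the time it acts on it.
document.addEventListener("DOMNodeRemoved", function (e) {
  var node = e.target;
  setTimeout(function () {
    console.log(document.contains(node));   // can log "true"
  }, 0);
}, true);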
If this isn't the case, i.e. when one person writes all the listeners, or
the listeners don't mutate the DOM, then I don't see that we are pushing
the problem onto authors. Yes, the DOM will look different by the time the
handler fires, but I don't see that it should be significantly harder to
deal with.
> Consider the following
> example:
>
> <html><body>
> <style>.lastLink { color: red }</style>
> <a href="http://example.org">i want to be the last link</a>
> <div id="emptyMe">
> <a href="http://example.org">example two</a>
> <a class="lastLink" href="http://example.org">example three</a>
> </div>
> <script type="text/javascript">
> var numLinks = document.links.length;
> document.addEventListener( "DOMNodeRemovedFromDocument", function(e) {
> if (e.target.nodeName == 'A') { // or e.relatedNode.nodeName as the case may be
> if (--numLinks > 0) {
> document.links[ numLinks - 1 ].className = 'lastLink';
> }
> }
> }, true );
> </script>
> </body></html>
The above would be trivial to rewrite to use
document.links[document.links.length - 1].className = 'lastLink';
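For example, with events delivered after the compound operation, the live
document.links collection already reflects the removals, so the listener
could look something like this (my rewrite of your example, just to
illustrate):
document.addEventListener( "DOMNodeRemovedFromDocument", function(e) {
  if (e.target.nodeName == 'A') {
    // Ask the live collection instead of keeping a separate counter.
    if (document.links.length > 0) {
      document.links[ document.links.length - 1 ].className = 'lastLink';
    }
  }
}, true );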
> If you did something like document.getElementById('emptyMe').innerHTML
> = '' and considered it a compound operation, the code above, which
> works with current implementations, would die because numLinks would be
> out of sync with document.links.length, and the array indexing would
> fail. To avoid this scenario, the code has to be rewritten to re-query
> document.links.length instead of assuming numLinks will always be
> valid. This is exactly the same problem you're currently having - the
> DOM is changing under the code unexpectedly, forcing it to recheck
> assumptions.
Note that there is nothing in the spec that says this isn't already the
case. For example, in the case of
document.getElementById('emptyMe').innerHTML = '';
an implementation would be totally allowed to fire all
DOMNodeRemovedFromDocument events before doing any mutations to the DOM.
It could then do all the removals while firing no events. This seems like
it would break your code.
The fact is, it would be extremely hard to define, across all the various
specs that cause mutations to occur, how implementations should behave if
you also have to define exactly what happens when mutation listeners
mutate the DOM.
If we instead allowed those specs to state what constitutes a compound
operation, the spec'ed behavior could happen inside the compound
operation, and mutation listeners would be dealt with afterwards.
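Very roughly, and purely as an illustration of the model rather than
proposed spec text, the queueing could look something like this on the
implementation side:
var pendingMutationEvents = [];
var compoundDepth = 0;
// Called wherever the implementation would otherwise dispatch a mutation event.
function queueOrFire(target, event) {
  if (compoundDepth > 0) {
    pendingMutationEvents.push({ target: target, event: event });
  } else {
    target.dispatchEvent(event);
  }
}
// A spec that defines a mutating operation (innerHTML, editing commands, etc.)
// would wrap its work in a compound operation.
function runCompoundOperation(mutateDom) {
  compoundDepth++;
  try {
    mutateDom();   // do all the removals/insertions; no listeners run here
  } finally {
    compoundDepth--;
    if (compoundDepth == 0) {
      var queued = pendingMutationEvents;
      pendingMutationEvents = [];
      for (var i = 0; i < queued.length; i++) {
        queued[i].target.dispatchEvent(queued[i].event);   // listeners run afterwards
      }
    }
  }
}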
/ Jonas