Re: [D3E] Possible Changes to Mutation Events

On Jul 16, 2008, at 6:36 AM, Laurens Holst wrote:

> Hi Doug,
> Doug Schepers schreef:
>> Sergey Ilinsky wrote (on 7/15/08 6:39 AM):
>>> Doug Schepers wrote:
>>>> 1. DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired  
>>>> after the mutation rather than before
>>>> 2. DOM operations that perform multiple sub-operations (such as  
>>>> moving an element) would be dispatched (in order of operation)  
>>>> after all the sub-operations are complete.
>>> General concerns:
>>> 1) Clearly defined use cases seem to be missing from the proposal,  
>>> would it be possible to bring them all to the table?
>> That's a reasonable request.
>> I think that Jonas and Maciej described some of the use cases (from  
>> an implementor's point of view) in their discussion:
>> 1. optimizing based on queuing of events
>> 2. reduction of code
>> 3. consistency and predictability of behavior
>> 4. interoperability on the issue of when the events fire (currently  
>> the spec says, "Many single modifications of the tree can cause  
>> multiple mutation events to be dispatched. Rather than attempt to  
>> specify the ordering of mutation events due to every possible  
>> modification of the tree, the ordering of these events is left to  
>> the implementation." [1])
> I see, so the motivation for the change request to DOMNodeRemoved is  
> that the second change request (throwing events at the end, after  
> all operations) would be impossible if events are not always  
> thrown at the end. And the motivation for throwing events at the end  
> seems to be for a specific kind of optimisation called ‘queuing of  
> events’. I would appreciate if someone could describe this  
> optimisation.

The purpose is not optimization, but rather reducing code complexity  
and risk. DOM mutation events can make arbitrary changes to the DOM,  
including ones that may invalidate the rest of the operation. Let's  
say you call parent.replaceChild(new, old). If the DOMNodeRemoved  
notification is fired before the removal of old, or even between the  
removal and the insertion, it might remove old from parent and move it  
elsewhere in the document. The removal notification for new (if it  
already had a parent) could also move old, or new, or parent. There's  
no particularly valid reason to do this, but Web-facing  
implementations must be robust in the face of broken or malicious  
code. This means that at every stage of a multistep operation, the  
implementation has to recheck its assumptions. In WebKit and Gecko,  
the code for many of the basic DOM operations is often more than 50%  
code to dispatch mutation events, re-check assumptions, and abort if  
needed. Dispatching mutation events at the end of a compound operation  
doesn't have this problem - there is no need to re-check assumptions  
because the operation is complete.
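
The hazard can be modeled with a small, self-contained sketch (plain  
JavaScript, not any engine's real code; TreeNode, detach, and the  
listener list are all illustrative names): a notification fired before  
the removal lets a listener re-parent the node mid-operation, so  
replaceChild must re-check its assumptions before it can finish.

```javascript
// Minimal model of the hazard (illustrative, not real DOM internals):
// a DOMNodeRemoved-style callback fired *before* the removal can move
// the node, so replaceChild must re-verify its state mid-operation.
function TreeNode(name) {
  this.name = name;
  this.parent = null;
  this.children = [];
}

var removedListeners = [];

function appendChild(parent, child) {
  if (child.parent) detach(child);
  child.parent = parent;
  parent.children.push(child);
}

function detach(child) {
  var siblings = child.parent.children;
  siblings.splice(siblings.indexOf(child), 1);
  child.parent = null;
}

function replaceChild(parent, newChild, oldChild) {
  // DOM Level 2 timing: notify before the mutation happens.
  removedListeners.forEach(function (fn) { fn(oldChild); });
  // A listener may have moved oldChild (or parent) elsewhere, so the
  // implementation must re-check before completing the operation.
  if (oldChild.parent !== parent) {
    throw new Error("NotFoundError: oldChild moved by a listener");
  }
  detach(oldChild);
  appendChild(parent, newChild);
}
```

A hostile listener that re-parents the node during the notification  
trips the re-check; if the event instead fires after the operation  
completes, the check (and the abort path) disappears entirely.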

> Even ignoring the serious backwards compatibility issues that Sergey  
> described, I do not think this is a good idea. By defining that all  
> events have to be fired at the end of the operation, e.g.  
> Document.renameNode can never be implemented by just calling  
> existing DOM operations; the implementation would need to call some  
> internal event-less version of the methods (e.g. removeNodeNoEvent()).

First of all, this is not a big deal for implementations. Second, it  
seems to me this is true whether removeNode fires the event first or  
last.

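To make the renameNode point concrete, here is a sketch (again  
illustrative names only, not any engine's real internals) of the  
proposed timing: sub-operations queue their notifications through  
event-less primitives, and the queue is dispatched in order only after  
the whole compound operation completes.

```javascript
// Sketch of the proposed timing (hypothetical helper names): each
// sub-operation queues a notification, and the queue is dispatched in
// order of operation after the compound operation is finished.
var queued = [];
var listeners = [];

function removeChildNoEvent(parent, child) {
  parent.children.splice(parent.children.indexOf(child), 1);
  queued.push({ type: "DOMNodeRemoved", target: child });
}

function appendChildNoEvent(parent, child) {
  parent.children.push(child);
  queued.push({ type: "DOMNodeInserted", target: child });
}

function renameNode(parent, node, newName) {
  // Hypothetical implementation of renaming as remove + re-insert.
  var renamed = { name: newName, children: node.children };
  removeChildNoEvent(parent, node);
  appendChildNoEvent(parent, renamed);
  // Dispatch only now, after all sub-operations are done; listeners
  // can no longer invalidate the operation because it is complete.
  queued.splice(0).forEach(function (ev) {
    listeners.forEach(function (fn) { fn(ev); });
  });
  return renamed;
}
```
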
> It seems to me that such a definition would possibly make  
> implementations more complex if not impossible (in case the  
> implementation provides no access to events-less methods), and put  
> more constraints on the underlying implementation, as the  
> implementation would now be required to throw the events separately  
> from the actual operations (which I do not think would be good  
> design).

No, it would make implementations much simpler by removing all the  
code that handles the very unlikely case of the mutation event  
listener modifying the DOM in a way that invalidates the operation. I  
know for sure this is the case for WebKit's DOM implementation, and  
Mozilla folks have told me the same is true for Gecko.


> I do not care so much about backwards compatibility with earlier  
> revisions of the DOM level 3 spec (although I hope there won’t be  
> really big changes :)), however this concerns compatibility with DOM  
> level 2 which has been a REC since 2000. As far as I know (and if  
> Appendix B: Changes is not omitting anything), DOM level 3 has so  
> far not introduced any backwards incompatibilities with DOM level 2.  
> Doing this would set a very bad precedent.
> If you introduce incompatible behaviour with regard to DOM level 2,  
> there is no way to prevent existing applications from breaking  
> either, because DOM does not provide a version mechanism that knows  
> an older version is expected and could provide backwards compatible  
> behaviour. And either way, I think having to branch code (or worse,  
> providing different implementations) based on version is undesirable.
> I think you should be very, very reluctant to break backwards  
> compatibility with an 8-year old REC. The DOM specifications are a  
> core part of XML technologies, and if those standardised core  
> technologies can not be trusted to be compatible with previous  
> Recommendations, requiring application-level changes, maybe XML  
> technologies aren’t as reliable as we all thought.

Unfortunately back in 2000 we did not have the kind of robust feedback  
cycle between standards orgs and implementors that we do today, so it  
was not realized at the time that the design of DOM mutation events  
required so much implementation complexity.


Received on Wednesday, 16 July 2008 17:14:22 UTC