Re: [D3E] Possible Changes to Mutation Events

Laurens Holst wrote:
> Hi Doug,
> 
> Doug Schepers wrote:
>> Sergey Ilinsky wrote (on 7/15/08 6:39 AM):
>>> Doug Schepers wrote:
>>>> 1. DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired 
>>>> after the mutation rather than before
>>>> 2. DOM operations that perform multiple sub-operations (such as 
>>>> moving an element) would be dispatched (in order of operation) after 
>>>> all the sub-operations are complete.

Note that the second request here is not a request for a change, but
rather for a clarification. Nothing in the spec today states *when* the
events that fire after the mutation are supposed to fire. In theory you
could queue up all such events, check once an hour whether anything
needs firing, and fire them at that point.

The first request is a request for an incompatible change, though, as
others have already noted.
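
To illustrate the timing question with a minimal, hypothetical page
script: nothing in the current text says whether the listener below
must have run by the time appendChild() returns, or whether it may run
at some later point.

  document.addEventListener("DOMNodeInserted", function (e) {
    // When does this run relative to the appendChild() call below?
    // The current spec text doesn't say.
    console.log("inserted: " + e.target.nodeName);
  }, true);

  var p = document.createElement("p");
  document.body.appendChild(p); // listener may or may not have run yet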

>>> General concerns:
>>> 1) Clearly defined use cases seem to be missing from the proposal, 
>>> would it be possible to bring them all to the table?
>>
>> That's a reasonable request.
>>
>> I think that Jonas and Maciej described some of the use cases (from an 
>> implementor's point of view) in their discussion:
>> 1. optimizing based on queuing of events
>> 2. reduction of code
>> 3. consistency and predictability of behavior
>> 4. interoperability on the issue of when the events fire (currently 
>> the spec says, "Many single modifications of the tree can cause 
>> multiple mutation events to be dispatched. Rather than attempt to 
>> specify the ordering of mutation events due to every possible 
>> modification of the tree, the ordering of these events is left to the 
>> implementation." [1])
> 
> I see, so the motivation for the change request to DOMNodeRemoved is 
> that the second change request (throwing events at the end, after all 
> operations) would be impossible if events are not always thrown at the 
> end. And the motivation for throwing events at the end seems to be to 
> enable a specific kind of optimisation called ‘queuing of events’. I 
> would appreciate it if someone could describe this optimisation.

The problem we are struggling with is that the current design is very
complex to implement. Any code of ours that somewhere inside requires
the DOM to be mutated has to recheck all of its invariants after each
mutation. This is because during the mutation a mutation event could
have fired which completely changed the world under us. These changes
can be as severe as totally restructuring the whole DOM, navigating
away to another webpage altogether, and/or closing the current window.
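
For a concrete, made-up example of what "changed the world under us"
means, consider a page that registers a listener like this:

  document.addEventListener("DOMNodeInserted", function (e) {
    // This runs in the middle of whatever DOM operation fired it.
    document.body.textContent = "";        // tear the whole body down
    // window.location = "elsewhere.html"; // or navigate away entirely
  }, true);

Every compound operation in the implementation has to assume that
something like this may have run between any two of its steps.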

To take an example: implementing XBL requires that when an attribute is
changed on one element, a set of other attribute mutations might need
to happen. Today, after each such mutation we need to check that no
other mutation has happened that we need to react to. Before setting
the next attribute we need to check whether the element is still in the
tree, whether the XBL binding is still attached, whether the original
attribute that was set is still set (and to the same value), whether
something else about the binding has changed that makes the
attribute-setting no longer needed, whether the user is still on the
same page or we've started tearing down the DOM, and so on.
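
A rough sketch of the shape this rechecking takes. This is illustrative
JavaScript rather than our actual C++, and bindingStillAttached() is a
made-up stand-in for whatever internal check the implementation has:

  function applyBindingAttributes(elem, trigger, attrsToSet) {
    for (var i = 0; i < attrsToSet.length; i++) {
      elem.setAttribute(attrsToSet[i].name, attrsToSet[i].value);
      // setAttribute() may have fired DOMAttrModified, whose listener
      // may have done anything at all, so recheck every invariant:
      if (!document.documentElement.contains(elem))
        return; // element was removed from the tree
      if (!bindingStillAttached(elem))
        return; // the XBL binding was detached
      if (elem.getAttribute(trigger.name) !== trigger.value)
        return; // the triggering attribute changed under us
    }
  }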

In languages like C++ this is even worse, since it's possible that all
the pointers we use in the algorithm now point to deleted objects.
Dereferencing such pointers can lead to crashes, which can be
exploitable.

That last part is an especially big concern. It is sort of ok if doing
really weird things during a mutation event results in weird results.
It's worse, but still mostly ok, if it leads to crashes. Authors are
unlikely to be bothered much if totally revamping the DOM during a
mutation event leads to any of this; they can just do whatever they
need to do later, if they need to do it at all. But if it leads to
calling functions on deleted objects, then the result is exploitable
crashes, with all the stolen data and botnets that come with that.

From a security point of view, implementing mutation events just does
not make sense to us. It has led to more security problems than is
justifiable given the value of the feature.

For us the question at this point is just as much "should we keep 
mutation events at all" as it is "are we ok with changing the 
implementation to be incompatible with the spec".

> Even ignoring the serious backwards compatibility issues that Sergey 
> described, I do not think this is a good idea. By defining that all 
> events have to be fired at the end of the operation, a method like 
> Document.renameNode can never be implemented by just calling existing 
> DOM operations; the implementation would need to call some internal 
> event-less version of the methods (e.g. removeNodeNoEvent()).

Actually, it's the opposite. Implementing document.renameNode right now
is a royal pain, since it involves multiple mutations to DOM nodes.
After each mutation we need to check that everything is in a state
where we can continue, or update all our local variables because the
world changed around us. It is currently easier to implement renameNode
using other methods that don't fire mutation events, and then fire any
needed events manually afterwards.
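
A sketch of that "eventless internals, manual events afterwards" shape.
Every *NoEvents helper and fireRenameEvents() here is hypothetical,
standing in for internal methods that dispatch nothing:

  function renameNode(node, namespaceURI, qualifiedName) {
    var doc = node.ownerDocument;
    var renamed = doc.createElementNS(namespaceURI, qualifiedName);
    moveChildrenNoEvents(node, renamed);   // internal, fires nothing
    copyAttributesNoEvents(node, renamed); // internal, fires nothing
    replaceChildNoEvents(node.parentNode, renamed, node); // ditto
    fireRenameEvents(node, renamed); // fire any needed events manually
    return renamed;
  }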

If we changed the spec as described, the implementation could just set
a flag saying "I'm inside a DOM operation, queue events instead of
firing them", and then call a generic "clear the flag and fire any
queued events" function at the end.

> It seems to me that such a definition would possibly make 
> implementations more complex if not impossible (in case the 
> implementation provides no access to events-less methods), and put more 
> constraints on the underlying implementation, as the implementation 
> would now be required to throw the events separately from the actual 
> operations (which I do not think would be good design).

As someone who works daily on a DOM implementation that is used for the
general web, my findings are the opposite.

> To provide a more concrete example, a DOM (e.g. the Mozilla DOM) could 
> never be extended with a custom document.renameNode method that performs 
> the operations as described in the DOM level 3 specification, because 
> the events would fire too soon. Not that Backbase implements events like 
> this, by the way.

I do agree that it would be impossible to implement certain DOM features
in JavaScript. However, I've always found this a pretty small price to
pay. The DOM should mostly be implemented by DOM implementations, not by
JavaScript libraries on top of the DOM implementation. JavaScript
libraries can implement whatever APIs they want; the need for
interoperability between them isn't as great.

I do agree that it's neat when JavaScript libraries can fix deficiencies
in lacking implementations, but I consider interoperability between
implementations more important.

>> It would be nice to have more use cases outlined.
>>
>> This knife cuts both ways, of course.  Can you cite some cases 
>> (preferably in use today) that rely on keeping things the way they are?
> 
> In our product we have several controls that either access the 
> parentNode property (which would no longer be accessible) or have an 
> event listener for DOMNodeRemoved on their parent. Also, it is fair to 
> assume that there will be several customer projects expecting the 
> currently defined functionality.

I think we can provide the same set of data through new properties.

For example, we could fire a new event on the old parent of the removed
node. This event could have .relatedNode be the removed child, and a
new .relatedIndex property holding the index the node was removed from.
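
What listening for such an event might look like. The event name
"DOMChildRemoved" is made up here; .relatedIndex is the new property
proposed above:

  oldParent.addEventListener("DOMChildRemoved", function (e) {
    var removedChild = e.relatedNode; // the node that was removed
    var oldIndex = e.relatedIndex;    // index it occupied in e.target
    // e.target is the old parent, which is still in the tree, so
    // parent-relative cleanup can still happen here.
  }, false);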

> It is not so much that with the requested changes the desired 
> functionality could not be achieved (although DOMNodeRemovedFromDocument 
> would need to provide the relatedNode property, and we would have to 
> attach DOMNodeRemoved event handlers to all child nodes instead of their 
> parent). The objection is rather that backwards compatibility with a REC 
> from 2000 is broken, for reasons that remain largely unclear. The 
> defined behaviour of DOMNodeRemoved makes sense, and is useful.

I hope I made the reasons more clear.

/ Jonas

Received on Wednesday, 16 July 2008 22:30:58 UTC