Re: [D3E] Possible Changes to Mutation Events

From: Boris Zbarsky <bzbarsky@MIT.EDU>
Date: Thu, 17 Jul 2008 01:46:55 -0400
Message-ID: <487EDCCF.3080908@mit.edu>
To: Kartikaya Gupta <lists.webapps@stakface.com>
CC: public-webapps@w3.org

Kartikaya Gupta wrote:
> I understand your concerns, and while your proposed solution would
> solve your problem, it pushes this exact same burden onto web
> authors. Say we go ahead and change the spec so that all the events are
> queued up and fired at the end of a compound operation. Now listeners
> that receive these events cannot be sure the DOM hasn't changed out
> from under *them* as part of a compound operation.

Note that this problem is already present for web authors with the 
existing setup.  Any time you're not the only one writing mutation event 
handlers, you can't rely on what your own handlers report, much less on 
any cached information about the DOM.
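
To illustrate (a hypothetical sketch; the handler contents and the 
"extra" id are made up):

   // Two independently-authored listeners for the same mutation event.
   // The first one to run mutates the DOM; the second then sees a tree
   // that no longer matches what the event ostensibly describes.
   document.addEventListener("DOMNodeInserted", function (e) {
     var extra = document.getElementById("extra");
     if (extra)
       extra.parentNode.removeChild(extra);  // mutates mid-dispatch
   }, false);

   document.addEventListener("DOMNodeInserted", function (e) {
     // By the time this runs, the listener above may already have
     // changed the tree, so nothing you cached about e.target's
     // surroundings can be trusted.
     var next = e.target.nextSibling;
   }, false);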

> If you did something like
> document.getElementById('emptyMe').innerHTML = '' and considered it a
> compound operation, the code above, which works with current
> implementations, would die because numLinks would be out of sync with
> document.links.length, and the array indexing would fail. To avoid
> this scenario, the code has to be rewritten to re-query
> document.links.length instead of assuming numLinks will always be
> valid. This is exactly the same problem you're currently having.

With the difference that it's a lot easier to re-query possibly-stale 
cached information than it is to validate DOM state.
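
Something like this is all it takes (a minimal sketch of the pattern; 
the original code isn't quoted in this reply, and process() is a 
stand-in):

   // Fragile: caches the length once and assumes it stays valid.
   var numLinks = document.links.length;
   for (var i = 0; i < numLinks; i++)
     process(document.links[i]);  // indexes past the end if something
                                  // shrinks document.links meanwhile

   // Robust: re-query the live collection on every iteration.
   for (var i = 0; i < document.links.length; i++)
     process(document.links[i]);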

I should also note that your code, as written, is wrong (it gets the 
indexing wrong in various cases, starting with <a name="">), and that 
writing it correctly is more work than simply not caching the length in 
the first place (and thereby being correct by default).
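
For the record, the usual trap there (assuming the code walked <a> tags 
by index, which is my guess at its shape, since it's not quoted here):

   // Given markup like:  <a name="top"></a> <a href="/one">one</a>
   // document.links only contains the <a>/<area> elements that have
   // an href, so the two collections disagree:
   document.getElementsByTagName("a").length;  // 2
   document.links.length;                      // 1
   document.getElementsByTagName("a")[1] === document.links[0];  // true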

> The current interleaving of
> mutations and events is bad for (some) implementations and good for
> web authors.

The thing is, it's not very good for web authors either....

> Your proposed interleaving is good for (some)
> implementations and bad for web authors.

Honestly, I don't think it's any worse for web authors than the status quo.

You might also want to ask web authors whether they prefer being able to 
write fragile code like in your example or have all their code run 
faster, of course.  ;)

> In both cases it's for the
> same reason - being able to make assumptions simplifies code, so the
> side that gets to make those assumptions is better off, and the other
> side has to revalidate their assumptions.

Being able to make assumptions _can_ simplify code.  It can also make it 
more fragile if those assumptions are wrong.  There is a tradeoff 
between simplicity and robustness here, of course.

> I also consider this entire problem to be more of an implementation
> detail than anything else.

I'm not sure what you mean by that.  Difficulty of implementation is an 
important consideration in spec-writing, generally.

> The current spec can pose a security risk
> if not properly implemented, but that's true of any spec. The
> security risk identified is only a problem on C/C++ implementations.

The thing is, those are the implementations that users actually end up 
using to browse the web.  Of course that brings up constituency 
issues, etc....

-Boris
Received on Thursday, 17 July 2008 05:47:53 GMT
