Re: indexed properties on NodeLists and HTMLCollections

On May 5, 2011, at 12:24 PM, Boris Zbarsky wrote:

> On 5/5/11 3:00 PM, Allen Wirfs-Brock wrote:
>> Ideally (from a JavaScript perspective), we should be designing web APIs
>> which only depend upon the standard native JavaScript object semantics
>> and which don't require behavioral intercession.
> The problem is that it's common for at least past web APIs for an object to reflect some part of the DOM and automagically mutate when that part of the DOM changes.
> This is the case that's a pain to represent with native object semantics.

But native JS objects can also dynamically mutate in apparently arbitrary ways. For example, a function invoked between property accesses can add, and later delete, an own property that shadows a built-in:

var myGlobal = { };
function mutate()  { myGlobal.toString = function () { return "something else"; }; }
function restore() { delete myGlobal.toString; }

alert(myGlobal.toString);  // built-in toString function
mutate();
alert(myGlobal.toString);  // displays some other value because mutate added an own toString property to myGlobal
restore();
alert(myGlobal.toString);  // built-in toString function again, because restore deleted the shadowing property

The only way for a JavaScript programmer to ensure this doesn't happen is via Object.seal and the related ES5 integrity operations. However, I don't believe that JS programmers actually see this sort of potential mutation as a real problem, and few will actually use the new ES5 features that allow them to prevent it.

> At the same time, this seems like something web authors want in many cases...

If this is true (they want something that is not expressible using native JS objects), then it should apply to much more than just the libraries defined by the W3C WebApps WG. It is essentially a statement that ES is deficient, so we should expect to see web authors clamoring for ES features of this sort. There are plenty of ways that ES can be enhanced, and we do get some clamoring, but the sort of things we are talking about in this thread don't seem to be the target of most of it.

From a JS programmer's perspective, the DOM is just a library/framework that provides a (dynamic) object model of the rendered display output currently presented by the application. Increasingly, web applications make use of libraries/frameworks built using native JS objects that provide object models of other complex aspects of the application domain. In many cases now, and even more so in the future, the JS programmer doesn't and shouldn't care which object model is provided by the User Agent implementor and which was provided by a third party. Yet we seem to have two sets of rules for designing libraries/frameworks: one set of rules (WebIDL) for libraries/frameworks designed by the WebApps WG and implemented by UAs, and another set of rules (the ES spec) for libraries/frameworks designed and implemented by everybody else. Why? How can this possibly be good for web authors?

>> In terms of prioritization of specification techniques within the
>> ES/WebIDL binding I suggest:
>> 1) normal ES data property and method invocation semantics
>> 2) use of ES accessor properties in possibly creative ways
>> ------------------- stop here for all new APIs
> I think the only way to do that is to disallow [OverrideBuiltins] in new APIs.
> Of course the non-[OverrideBuiltins] behavior is even weirder...
> Perhaps the only way to do that is to disallow name/index getters/setters altogether.  But web authors seem to want them.

Well, the [OverrideBuiltins] behavior sounds like the normal native JS behavior.
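A plain-JS analogue of that behavior (the Collection constructor below is hypothetical, not a real DOM interface): an own data property shadows a same-named method inherited from the prototype, which is just what [OverrideBuiltins] describes for named getters:

```javascript
// Hypothetical collection type with an item() method on its prototype.
function Collection() {}
Collection.prototype.item = function (i) { return "element #" + i; };

var c = new Collection();
c.item(0);                 // "element #0" -- the prototype method

// Adding a data property whose name collides with the method name:
c.item = "<div id='item'>";
typeof c.item;             // "string" -- the own property shadows the method

delete c.item;             // removing the shadowing property...
typeof c.item;             // "function" -- ...restores the inherited method
```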

The fundamental problem is that this is a design that intermingles data (node names) with program structure (method names) in the same namespace. Collisions are inevitable. The fix for new APIs should be simply not to do this. ES5 allows creation of objects with a null prototype value. You can use such objects as name/value maps with no worry about inheritance conflicts with methods defined by a prototype (because there isn't one). That's essentially what you do in other languages, and it is essentially what you have to do in native JS libraries, so why shouldn't new web APIs follow the same rules?
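A sketch of that pattern (the map contents are made up for illustration):

```javascript
// A name/value map with a null prototype: there are no inherited methods,
// so data keys can never collide with them.
var nodesByName = Object.create(null);
nodesByName["toString"] = "just data";   // safe: nothing inherited to shadow
nodesByName["item"] = "also just data";  // safe: no item() method to collide with

// Nothing is inherited at all:
"hasOwnProperty" in nodesByName;         // false
```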

I realize that this isn't practical for certain legacy DOM APIs. In those situations we need to do whatever is necessary to maintain compatibility with the legacy web. But anywhere there is disagreement (and hence a lack of interoperability among major browsers), we should try to pick a path that is as close to native JS object semantics as possible.

> ...
>> The situation would be different if list was a NamedNodeMap. In that
>> case "a" (or any valid nodeName value) could be the name of a "live"
>> property that could be dynamically added to the list. You need to
>> specify the desired semantics for such a live property when its property
>> name already exists
> The desired semantics from my point of view as a UA implementor is to not have to worry about whether it exists, because checking that is slow....

Language implementors generally don't get to change the language specification as a way to improve their benchmark scores. They are expected to implement the language in accordance with its specification and, if necessary, invent clever ways to make it fast while continuing to conform to the spec. I don't see why UA implementors shouldn't be held to the same expectations.


Received on Thursday, 5 May 2011 23:18:22 UTC