
Re: Interaction between WebIDL operations and ES6 proxies

From: David Bruant <bruant.d@gmail.com>
Date: Fri, 01 Feb 2013 10:36:48 +0100
Message-ID: <510B8CB0.409@gmail.com>
To: Anne van Kesteren <annevk@annevk.nl>
CC: "public-script-coord@w3.org" <public-script-coord@w3.org>
On 01/02/2013 10:09, Anne van Kesteren wrote:
> On Thu, Jan 31, 2013 at 5:54 PM, David Bruant <bruant.d@gmail.com> wrote:
>> You can observe that Firefox first pulls all array values (then runs
>> an algorithm of its own), then puts them back sorted in one batch.
>> Before proxies, this kind of thing couldn't be observed.
> Is the expectation that these scripts yield the same result across
> implementations? E.g. what if an implementation detects the array is
> already sorted and does not need modification, should it still set?
Hmm... I picked the wrong example: .sort is implementation-specific, and 
you can already largely observe its behavior via the compareFn argument. 
That is widely accepted, and people don't write code that depends on the 
order of comparisons.
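For reference, the existing observability of sort needs no proxy at all; 
a minimal sketch, recording which pairs the engine chooses to compare 
(the exact sequence is engine-specific, which is the point):

```javascript
// Every call the engine makes to compareFn is visible to the script.
const pairs = [];
const sorted = [3, 1, 2].sort((a, b) => {
  pairs.push([a, b]);   // records the engine's comparison order
  return a - b;
});
console.log(sorted);    // [1, 2, 3]
console.log(pairs);     // engine-specific sequence of comparisons
```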
However, take the Array.prototype.splice algorithm. I'm afraid that if 
implementations don't perform the exact same sequence of 
[[Get]]/[[HasProperty]]/[[Delete]]/[[DefineOwnProperty]] operations, it 
will lead to interoperability issues. So I think this kind of script 
would have to yield the same result across implementations.
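To make the concern concrete, here is a minimal sketch (plain 
JavaScript, no DOM) of a proxy that records every trap splice hits; the 
exact contents and order of the log are whatever sequence of internal 
methods the engine's splice implementation performs, and that sequence 
is now observable:

```javascript
// A proxy that logs the internal-method traps splice triggers.
const log = [];
const target = ['a', 'b', 'c', 'd'];
const observed = new Proxy(target, {
  get(t, key, receiver) {
    log.push(`get ${String(key)}`);
    return Reflect.get(t, key, receiver);
  },
  has(t, key) {
    log.push(`has ${String(key)}`);
    return Reflect.has(t, key);
  },
  set(t, key, value, receiver) {
    log.push(`set ${String(key)}`);
    return Reflect.set(t, key, value, receiver);
  },
  deleteProperty(t, key) {
    log.push(`delete ${String(key)}`);
    return Reflect.deleteProperty(t, key);
  }
});

// Remove one element; the engine's [[Get]]/[[HasProperty]]/[[Set]]/
// [[Delete]] sequence leaks into `log`.
Array.prototype.splice.call(observed, 1, 1);
console.log(log);
```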

>> I would like to believe this is an exception, but probably not. I think
>> every spec will have to specify how the algorithms interact when being
>> passed proxies to the object they're supposed to interact with.
> I thought a large part of the motivation for proxies was about being
> able to do the kind of things that happen in API land.
One motivation for proxies is self-hosting, that is, enabling the 
implementation of "DOM"/"web platform" algorithms and objects using only 
constructs defined in the ECMAScript spec. Among the benefits: if a 
browser screws up, authors can compensate by polyfilling. For instance, 
if there were a bug in NodeList, I don't think authors could faithfully 
fix it without proxies.
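To illustrate the idea (a hedged sketch using a made-up plain object, 
not a real NodeList), a proxy can wrap a buggy object and substitute a 
corrected method while faithfully forwarding everything else:

```javascript
// Hypothetical host object with a broken method (off-by-one bug).
const buggyList = {
  _items: ['a', 'b', 'c'],
  length: 3,
  item(i) { return this._items[i + 1]; }   // the bug to repair
};

// The proxy intercepts access to `item` and serves a fixed version;
// all other properties pass through untouched.
const fixedList = new Proxy(buggyList, {
  get(target, key, receiver) {
    if (key === 'item') {
      return i => target._items[i];        // corrected behavior
    }
    return Reflect.get(target, key, receiver);
  }
});

console.log(buggyList.item(0));  // 'b', wrong
console.log(fixedList.item(0));  // 'a', fixed
console.log(fixedList.length);   // 3, forwarded unchanged
```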
However, that does not necessarily mean that proxies have to be accepted 
as replacements for native objects whenever they wrap them.

For instance, imagine:
     var e = new Proxy(document.createElement('div'), handler)
     document.body.appendChild(e);

Suddenly, depending on how the different algorithms acting on the DOM 
tree are defined, those algorithms become observable. For selector 
matching, that could become a problem: first because of performance, and 
second because a buggy proxy may insert nodes during the selector 
matching process.
Built-in algorithms could be made to unwrap the proxy systematically, 
but in that case, what's the point of putting a proxy in the tree?
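To see why systematic unwrapping defeats the purpose, here is a sketch 
(the WeakMap registry is my own assumption about how unwrapping might be 
implemented) where a "built-in" that unwraps never fires the handler's 
traps:

```javascript
// Hypothetical registry mapping each proxy back to its target.
const proxyToTarget = new WeakMap();
const trapLog = [];

function registerProxy(target, handler) {
  const p = new Proxy(target, handler);
  proxyToTarget.set(p, target);
  return p;
}

// What a systematically-unwrapping built-in would do before operating.
function unwrap(value) {
  return proxyToTarget.has(value) ? proxyToTarget.get(value) : value;
}

const node = { tag: 'div' };
const wrapped = registerProxy(node, {
  get(t, key) {
    trapLog.push(String(key));
    return Reflect.get(t, key);
  }
});

wrapped.tag;          // goes through the handler: the trap fires
unwrap(wrapped).tag;  // the "built-in" path: no trap fires at all
console.log(trapLog); // only the first access was observed
```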

> If that is not
> feasible, or at least not in all cases, what is left?
The caretaker pattern, confinement with membranes, polyfilling complex 
objects (apparently element.style may acquire a getter to accept custom 
properties, or something along those lines), and fixing browser bugs.
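The caretaker pattern mentioned above has a standard building block in 
Proxy.revocable, which hands out a capability that can later be severed:

```javascript
// A revocable proxy: access works until the caretaker revokes it.
const { proxy, revoke } = Proxy.revocable({ secret: 42 }, {});
console.log(proxy.secret);  // 42 while the capability is live

revoke();
try {
  proxy.secret;
} catch (e) {
  console.log(e instanceof TypeError);  // revoked: any trap now throws
}
```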

As I said, not accepting a proxy may be the exception. Each spec will 
have to make its own call.

David
Received on Friday, 1 February 2013 09:37:20 UTC
