- From: Boris Zbarsky <bzbarsky@MIT.EDU>
- Date: Fri, 13 Sep 2013 12:38:06 -0400
- To: Domenic Denicola <domenic@domenicdenicola.com>
- CC: Anne van Kesteren <annevk@annevk.nl>, WebApps WG <public-webapps@w3.org>
On 9/13/13 12:01 PM, Domenic Denicola wrote:
> Really? Argument defaulting and destructuring, at the very least

Those were typically not used in web specs until very recently.

> As has been defining classes with constructors and classes that can be inherited from.

Classes with constructors, agreed. Most web specs so far have not defined classes that can be inherited from, so it hasn't been a pitfall, exactly, more of an annoyance.

> What common pitfalls would you be thinking of?

Not defining the order of argument coercions/checks. Not defining interoperable handling of dictionary and arraylike inputs. Not defining the behavior of various arraylikes and hashlikes.

> Certainly. There's no desire to create something that's not useful to implementers. But, how would you declaratively specify something like "iterate over `iterable`, performing the following steps"?

That depends. WebIDL has http://dev.w3.org/2006/webapi/WebIDL/#es-sequence for the special case of "iterate over an arraylike, performing type coercions". What you might want in practice is prose describing the iteration (as here) and then declarative syntax for invoking that prose (e.g. sequence<Foo> means you do the sequence steps, with the Foo steps plugged in at the point where the sequence steps are parametrized over Foo). That's obviously more limited than being able to invoke an arbitrary algorithm, but it also addresses a slightly different use case. Which one is more common, we'll see.

> Or "let `returnValue` be `new this.constructor(1, 2, 3)`"?

I'd need to see this in the context of an actual algorithm to answer this question. It depends.

> As their side effects are observable, they often need to occur in the middle of algorithms, not up front.

Why not up front? This is a serious question. The way type coercions in WebIDL work right now is that they're done up front, and then operations mostly occur on objects without observable side effects of any sort. This is also how jQuery handles things in many cases; see the extend({}, argument) stuff sprinkled all over it at the starts of methods. This obviously doesn't work in all cases, but it works in quite a number of them. And it reduces the chance of algorithms or data structures ending up in inconsistent states: any operation that has observable side effects can typically throw, and if your algorithm can throw at random points in the middle you have to be a _lot_ more careful about how you write it.
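To make that concrete, here is a minimal sketch of the up-front pattern (a hypothetical method, not from any spec; the names are made up):

  // Hypothetical DOM-ish method.  Every observable read of the
  // page-supplied `options` object happens in the coercion step.
  function setDimensions(elem, options) {
    // Coercion step: each getter on `options` runs exactly once, here.
    // If a getter throws, nothing has been mutated yet.
    var width = Number(options.width);
    var height = Number(options.height);
    if (isNaN(width) || isNaN(height)) {
      throw new TypeError("width and height must be numbers");
    }
    // From here on the algorithm touches only clean local values, so
    // it can't be interrupted mid-mutation and leave `elem` in an
    // inconsistent state.
    elem.style.width = width + "px";
    elem.style.height = height + "px";
  }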
> - What common pitfalls does WebIDL help you avoid, that fall outside the realm of class definitions/argument defaulting/argument destructuring/type coercions?

1) Return value creation. For example, the naive code people write to return dictionary-like things or arrays sets properties on the return value via [[Set]] instead of defineProperty, so a setter on the relevant prototype can observe (or swallow) the values.

2) Clear definitions of the behavior of things with named/indexed getters/setters. Maybe you include that under "class definitions"? But the key here is that it's just one keyword to opt into a pretty complicated class of behavior, whereas doing this with an ES6 class, even if possible, would require per-class boilerplate.

But the meat of the benefits is in what you call "type coercions". There's more to it than just "end up with the right type". E.g. if I have a method that takes a Node, then after I'm done with the type coercion I have an object that I can operate on without triggering random side effects on property access or whatnot, so my .textContent getter doesn't have to worry that getting the firstChild will do something wacky.

This is especially important if you have code that has elevated privileges (whether C++ or Rust or privileged JS) operating on these objects: in that situation it's critical to present a sanitized view of the object, one that can only behave in certain very limited ways, to the code in question. Otherwise you end up with confused-deputy security bugs all over the place.

Basically, my goal as an implementor, based on years of experience observing other implementors, is that someone implementing a DOM method should not have to know either the ins and outs of the ES spec or the details of the browser's JS engine APIs. Especially because both are in fairly constant flux. Any time an implementor has to deal with those two (fairly large) cognitive burdens in addition to the (already large) cognitive burden of whatever the browser's internal representation of DOM objects is, they end up getting it wrong. That includes fairly experienced implementors, not just new folks. A good rule of thumb is that if a method implementor has to manually invoke anything in [[]], they will probably get it wrong.

Furthermore, privileged code should never work with raw page-provided ES objects, because doing that makes confused-deputy scenarios impossible to avoid in practice. For example, dictionaries that will be operated on by privileged script first need to be coerced to a new clean object with a sane proto chain, only value properties, and the values themselves coerced to be safe to work with. To the extent that we do not have a way to specify or perform such a coercion, we have a problem.

Does that help?

> - How would you propose creating declarative forms for observable algorithmic steps?

It really depends on the steps. Sometimes it may be impossible. I would need to see specific examples, in actual algorithms, to comment on this intelligently. In practice, what you can do is take _common_ algorithmic steps and provide declarative ways of triggering them.
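As a concrete (non-normative) illustration, the sequence<Foo> case above could be triggered by a helper along these lines -- a rough, simplified sketch of http://dev.w3.org/2006/webapi/WebIDL/#es-sequence, parametrized over the per-element coercion, with made-up names:

  // "sequence<T>" as a reusable step: iterate an arraylike, coercing
  // each element, all up front.
  function toSequence(value, coerceElement) {
    if (value === null || typeof value !== "object") {
      throw new TypeError("can't convert value to a sequence");
    }
    var length = value.length >>> 0;  // roughly ToUint32, as for arraylikes
    var result = [];
    for (var i = 0; i < length; i++) {
      // All observable gets and element coercions happen here; the rest
      // of the algorithm then operates on the clean result.  (A real
      // implementation would create these entries more carefully, per
      // the [[Set]]-vs-defineProperty point above.)
      result[i] = coerceElement(value[i]);
    }
    return result;
  }

  // A spec can then just declare its argument as sequence<DOMString>:
  var strings = toSequence(["a", 1, {}], String);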
-Boris

Received on Friday, 13 September 2013 16:38:40 UTC