Re: [Bug 23933] Proposal: Change constraints to use WebIDL dictionaries

On 06/12/2013 2:59 AM, Jan-Ivar Bruaroey wrote:
>> I don't think you understood my point. There are two ways to support 
>> "future proofing": One is to ignore unknown keys, and the other is to 
>> support a mechanism for looking up supported keys. If you've got one, 
>> you don't need the other. Anyone who wants to "future proof" the 
>> dictionary just needs to use the function that returns all known 
>> constraints as a filter (meaning, remove any key in the dictionary 
>> that is not returned by the function). That's it, that's all. As 
>> such, there is no reason to ignore unknown keys.
>
> I'm glad we agree on having a mechanism for looking up supported keys, 
> and given such a mechanism, you're right, we could go either way. 
> However, picking your way:
>
>   * does not fix the footgun,
>   * is a missed opportunity to fix our bastardized webidl,
>

I replied to this below.

>   * is more work (filtering),
>

I'm fine with this because I believe the business logic belongs in the 
application, not the browser, and this is part of the business logic. 
Even if that weren't true, I think the benefit of input validation 
justifies this extra cost.
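
To make the filtering approach concrete, here is a minimal sketch. It assumes some way for the application to obtain the list of constraint names the browser knows about (the `supported` array below stands in for whatever lookup mechanism we end up specifying; the function and variable names are illustrative, not from any spec):

```javascript
// Keep only the constraint keys the browser reports as supported,
// dropping everything else before the dictionary is passed to gUM.
function filterConstraints(constraints, supported) {
  const filtered = {};
  for (const key of Object.keys(constraints)) {
    if (supported.includes(key)) {
      filtered[key] = constraints[key]; // copy known keys only
    }
  }
  return filtered;
}

// Hypothetical list of constraint names this browser understands.
const supported = ["maxWidth", "maxHeight", "minFrameRate"];

// An app written against a future browser asks for a constraint
// this browser has never heard of; the filter simply drops it.
const requested = { maxWidth: 320, maxHeight: 240, futureKnob: true };
const safe = filterConstraints(requested, supported);
// safe is { maxWidth: 320, maxHeight: 240 }
```

That is the entirety of the "extra work": one loop in application code, run only by developers who actually want the ignore-unknown behavior.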

>   * is redundant (two ways to detect browser support, directly vs.
>     indirectly with gUM), and
>

Throwing exceptions on bad input is not meant as a mechanism for 
detecting browser support. There is no duplication here. Typos are not 
an indication that the browser doesn't support the input *yet*. They are 
an indication that the browser may *never* support this kind of input.
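
In code, the distinction looks like this. A validating API rejects keys it has never heard of at the call site, which is exactly where a typo surfaces (the constraint names and the validator itself are illustrative):

```javascript
// Hypothetical set of constraint names the browser recognizes.
const KNOWN_CONSTRAINTS = new Set(["maxWidth", "maxHeight", "minFrameRate"]);

// Reject unknown keys immediately: a typo is a programmer error,
// not a forward-compatibility request.
function validateConstraints(constraints) {
  for (const key of Object.keys(constraints)) {
    if (!KNOWN_CONSTRAINTS.has(key)) {
      throw new TypeError("Unknown constraint: " + key);
    }
  }
  return constraints;
}

try {
  // The "maxW1dth" typo fails loudly instead of being silently dropped.
  validateConstraints({ maxW1dth: 320, maxHeight: 240 });
} catch (e) {
  console.log(e.message); // "Unknown constraint: maxW1dth"
}
```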

>   * is more complicated (bailing early is simple, filtering is not.
>     Dictionaries are well-defined, vs. ...)
>
>
> The only reason I can see to pick it would be to keep what I call the 
> footgun, the default behavior where webdevs who don't think about the 
> complicated unknown case, make apps that (perhaps inadvertently) block 
> everyone indiscriminately, including both legitimate and illegitimate 
> users, until a browser is upgraded with a new constraint. Since the 
> webdev can flip the default and the user cannot, I think we should 
> default to the way that doesn't block the user. I already have 
> evidence that webdevs aren't thinking ahead when they use mandatory.
>

I almost understand this point, but not quite yet. Can you please give a 
concrete example/scenario of this taking place?

>>> Our spec is not implementable as it stands, because it speaks about 
>>> mandatory constraints being dictionaries.
>>> Our spec sorely needs review by the webidl experts group.
>>
>> I get your point about Dictionary as defined by WebIDL but I think 
>> you're splitting hairs. If the only reason you want to ignore unknown 
>> constraints is for the sake of meeting the WebIDL specification then 
>> rename the type from Dictionary to something else that allows you to 
>> throw an error on unknown constraints.
>
> I don't mean to pick on you, as you are hardly alone in this, but this 
> is the cavalier attitude towards browser implementation and webidl 
> that concerns me in this group, and it is evident in the spec as well. 
> We seem to have no webidl experts, yet this doesn't seem to bother 
> anyone, or prevent people from making stuff up.
>
> This is why I think we need to have our spec reviewed by a webidl 
> experts group.
>
> I'm no expert, but I can tell you from having implemented JavaScript 
> browser-objects so far, that it is not a picnic without a webidl 
> compiler that generates secure bindings. The reason is that with JS 
> being such a dynamic language, there is no guarantee what a content JS 
> object contains or what it will do (what code will run) once invoked. 
> As a result, even for non-malicious content JS, there can be dire 
> side-effects and even security issues depending on exactly how or even 
> how many times a browser interacts with a content JS object in an API 
> call. This is why every webidl type I've seen implemented has a 
> well-defined processing model, e.g. an agreed-upon pattern of access 
> that is universal in the browser.
>
> Take the processing-model for dictionaries, for instance. Whenever a 
> dictionary is passed in through a webidl API, the original JS object 
> never even enters the secure area. Instead, a dedicated internal 
> dictionary of the same type is initialized from default webidl values, 
> and then for each known key in that dictionary, it queries the 
> blackbox content JS object for that property once. This is repeated 
> for each key. The normalized copy is then passed to the c++ or 
> browser-js. This provides important code-invariants to our 
> browser-code and minimizes chances of bugs and exploits. Conversely, 
> when a plain JS object is passed in, or a dictionary contains a plain 
> JS object as a member somewhere, even in a deeply nested dictionary, 
> then different binding code is generated for the entire API that 
> introduces tracking and rooting of that argument (or top argument if 
> it is nested) for garbage and cycle collection purposes. The c++ or 
> JSImpl code is then left to normalize and query the object itself, and 
> basically try to mimic the same processing model correctly, using 
> low-level JSAPI calls. The chance of parsing and pilot errors and 
> bugs goes up dramatically. Use of these APIs requires special 
> reviews from DOM people, and they generally tell you not to do this.

What is the practical impact of calling our variable a Dictionary but 
throwing an exception on unknown properties? What you said above is 
nice, but it doesn't indicate that anything would actually go wrong. It 
seems to me that ignoring unknown properties is purely related to future 
proofing and nothing else.

>> Consider the alternative:
>>
>> { audio: true, video: { mandatory: { maxW1dth: 320, maxHeight: 240 }}}
>>
>> Do you know how much of my life has been wasted dealing with these 
>> kinds of stupid typos?
>
> What about: { audio: true, video: { optional: { maxW1dth: 320, 
> maxHeight: 240 }}}  ?
>
> or: { audio: true, video: { mandtory: { maxW1dth: 320, maxHeight: 240 
> }}}  ?
>
> This is JavaScript, right?
>
> We're making one API, not fixing JavaScript. This is out of scope. We 
> should be creative in other areas IMHO.

I'm not advocating changing the language, just validating the function 
input. What happens when you pass an invalid value into XMLHttpRequest? 
Does it ignore it silently and fire the success callback? Or does it 
return an error indicating that the input was invalid? There is nothing 
in JavaScript which implies that unknown keys should be ignored silently.

Gili

Received on Friday, 6 December 2013 16:56:45 UTC