Re: [Bug 23933] Proposal: Change constraints to use WebIDL dictionaries

On 12/6/13 1:05 AM, cowwoc wrote:
> On 05/12/2013 11:30 PM, Jan-Ivar Bruaroey wrote:
>> Please understand that WebIDL dictionaries ignore unknown keys.
>>
>> From http://heycam.github.io/webidl/#idl-dictionaries :
>>> "A dictionary value of type D can have key–value pairs corresponding 
>>> to the dictionary members defined on D and on any of D’s inherited 
>>> dictionaries."
>>
>> This is on purpose for future-proofing, and all other web APIs (that 
>> use dictionaries, at least) are on board with this. AFAIK it is only 
>> our freaky mandatory constraints that fly in the face of this.
>
> I don't think you understood my point. There are two ways to support 
> "future proofing": One is to ignore unknown keys, and the other is to 
> support a mechanism for looking up supported keys. If you've got one, 
> you don't need the other. Anyone who wants to "future proof" the 
> dictionary just needs to use the function that returns all known 
> constraints as a filter (meaning, remove any key in the dictionary 
> that is not returned by the function). That's it, that's all. As such, 
> there is no reason to ignore unknown keys.

I'm glad we agree on having a mechanism for looking up supported keys, 
and given such a mechanism, you're right, we could go either way. 
However, picking your way:

  * does not fix the footgun,
  * is a missed opportunity to fix our bastardized WebIDL,
  * is more work (filtering; see the sketch below),
  * is redundant (two ways to detect browser support: directly, vs.
    indirectly with gUM), and
  * is more complicated (bailing early is simple, filtering is not.
    Dictionaries are well-defined, vs. ...)
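
To make the filtering cost concrete, here is a rough sketch. It assumes 
a hypothetical getSupportedConstraints() lookup; the name and shape are 
made up, since no such function is in our spec today:

  // Hypothetical: returns an array of the constraint keys this
  // browser knows about, e.g. ["width", "height", ...].
  var known = navigator.getSupportedConstraints();

  function filterConstraints(constraints) {
    var filtered = {};
    Object.keys(constraints).forEach(function (key) {
      if (known.indexOf(key) !== -1) { // drop keys the browser won't know
        filtered[key] = constraints[key];
      }
    });
    return filtered;
  }

Every webdev who wants future-proofing has to remember to write and 
apply something like this, whereas ignoring unknown keys gives the same 
result with no code at all.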


The only reason I can see to pick it would be to keep what I call the 
footgun: the default behavior where webdevs who don't think about the 
complicated unknown case make apps that (perhaps inadvertently) block 
everyone indiscriminately, legitimate and illegitimate users alike, 
until a browser is upgraded with a new constraint. Since the webdev can 
flip the default and the user cannot, I think we should default to the 
way that doesn't block the user. I already have evidence that webdevs 
aren't thinking ahead when they use mandatory.
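
For illustration, with a made-up future constraint name:

  // Under fail-on-unknown semantics, this call fails for every user
  // whose browser predates "someFutureConstraint", even though the
  // app might work fine without it. (onSuccess/onError stand in for
  // the app's own callbacks.)
  navigator.getUserMedia(
    { video: { mandatory: { someFutureConstraint: true } } },
    onSuccess, onError);

The webdev gets the new hint on day one; every user on an older browser 
gets blocked until they upgrade.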

>> Our spec is not implementable as it stands, because it speaks about 
>> mandatory constraints being dictionaries.
>> Our spec sorely needs review by the WebIDL experts group.
>
> I get your point about Dictionary as defined by WebIDL, but I think 
> you're splitting hairs. If the only reason you want to ignore unknown 
> constraints is for the sake of meeting the WebIDL specification, then 
> rename the type from Dictionary to something else that allows you to 
> throw an error on unknown constraints.

I don't mean to pick on you, as you are hardly alone in this, but this 
is the cavalier attitude towards browser implementation and WebIDL that 
concerns me in this group, and it is evident in the spec as well. We 
seem to have no WebIDL experts, yet this doesn't seem to bother anyone 
or prevent people from making stuff up.

This is why I think we need to have our spec reviewed by a WebIDL 
experts group.

I'm no expert, but I can tell you from having implemented JavaScript 
browser objects that it is not a picnic without a WebIDL compiler that 
generates secure bindings. The reason is that, with JS being such a 
dynamic language, there is no guarantee of what a content JS object 
contains or what it will do (what code will run) once invoked. As a 
result, even for non-malicious content JS, there can be dire 
side-effects and even security issues depending on exactly how, or even 
how many times, a browser interacts with a content JS object in an API 
call. This is why every WebIDL type I've seen implemented has a 
well-defined processing model, i.e. an agreed-upon pattern of access 
that is universal in the browser.
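
To see why the access pattern matters, consider a contrived content 
object with a getter (names made up):

  // Property access on a content JS object can run arbitrary content
  // code, with a different result on each read.
  var count = 0;
  var sneaky = {};
  Object.defineProperty(sneaky, "mandatory", {
    get: function () {
      count++;                    // side-effect on every read
      return count === 1 ? { width: 320 } : { width: 9999 };
    }
  });

If a browser reads the key twice, say once to validate and once to act, 
it can validate one value and act on another. A fixed processing model 
rules this class of bug out.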

Take the processing model for dictionaries, for instance. Whenever a 
dictionary is passed in through a WebIDL API, the original JS object 
never even enters the secure area. Instead, a dedicated internal 
dictionary of the same type is initialized from default WebIDL values, 
and then, for each known key in that dictionary, the binding code 
queries the blackbox content JS object for that property exactly once. 
The normalized copy is then passed to the C++ or browser-JS. This 
provides important code invariants to our browser code and minimizes 
the chances of bugs and exploits.

Conversely, when a plain JS object is passed in, or a dictionary 
contains a plain JS object as a member somewhere, even in a deeply 
nested dictionary, then different binding code is generated for the 
entire API, code that introduces tracking and rooting of that argument 
(or the top argument, if it is nested) for garbage- and 
cycle-collection purposes. The C++ or JSImpl code is then left to 
normalize and query the object itself, and basically try to mimic the 
same processing model correctly using low-level JSAPI calls. The 
chances of parsing errors, pilot errors, and bugs go up dramatically. 
Use of these APIs requires special review from DOM people, and they 
generally tell you not to do this.
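
In rough JS pseudocode, the dictionary conversion works something like 
this (a sketch of the general processing model, not any browser's 
actual binding code):

  // Sketch: the content JS object is read once per known key; only
  // the normalized copy crosses into browser code.
  function convertDictionary(contentObj, knownKeys, defaults) {
    var internal = {};
    knownKeys.forEach(function (key) {
      var value = contentObj[key]; // query the blackbox object once
      internal[key] = (value !== undefined) ? value : defaults[key];
    });
    return internal;               // unknown keys are never even read
  }

Note that unknown keys on the content object are simply never looked 
at, which is exactly where the ignore-unknown-keys behavior of 
dictionaries comes from.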

> Consider the alternative:
>
> { audio: true, video: { mandatory: { maxW1dth: 320, maxHeight: 240 }}}
>
> Do you know how much of my life has been wasted dealing with these 
> kinds of stupid typos?

What about: { audio: true, video: { optional: { maxW1dth: 320, 
maxHeight: 240 }}} ?

or: { audio: true, video: { mandtory: { maxW1dth: 320, maxHeight: 240 }}} ?

This is JavaScript, right?

We're making one API, not fixing JavaScript. This is out of scope. We 
should be creative in other areas IMHO.

> Gili

.: Jan-Ivar :.
