Re: WebIDL usage for Algorithms

On Mon, Mar 17, 2014 at 6:55 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:

> On 3/17/14 9:37 PM, Ryan Sleevi wrote:
>
>> Just to make sure - are you advocating for "any" as the type, on this
>> basis, or are you still advocating for "object", for the reason below.
>>
>
> I think using "object" here would be the right thing to do.
>
>
>  I'm not sure I fully grok your concern here or what you're proposing as
>> an alternative.
>>
>
> If you look at the algorithm at
> https://dvcs.w3.org/hg/webcrypto-api/raw-file/tip/spec/Overview.html#dfn-SubtleCrypto-method-encrypt
> for example, it performs all the steps after step 3 asynchronously (using
> what task source?).  Step 6 involves invoking
> https://dvcs.w3.org/hg/webcrypto-api/raw-file/tip/spec/Overview.html#algorithm-normalizing-rules
> (I assume; the actual link in that step is broken), which can cause
> execution of arbitrary page script.
>
> If it's not critical that this happen asynchronously, it would be best to
> do this arbitrary script execution before returning from the encrypt()
> call, to avoid introducing races.  If it _is_ critical that this happen
> asynchronously, then we need to specify which task queue is used for the
> task that calls into script.


Yeah. The previous version of the spec was clear that there was a new
(crypto) task queue, but this got inadvertently reverted.

Algorithm *normalization*, as currently specified, could happen
synchronously, before the promise is returned; the asynchronous design was
an artifact of the event-based system. The step that genuinely needs to
remain asynchronous is detecting whether an algorithm is supported by the
underlying implementation. The event-based system had to distinguish a
fast-failure path from a deferred one, but in the Promise system we have a
bit more flexibility to simply reject later.
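
Roughly, the shape I have in mind (just a sketch, not spec text - the
helpers normalizeAlgorithm, structuredCloneOf, isSupported, queueCryptoTask
and doEncrypt below are all made-up names):

  function encrypt(alg, key, data) {
    // Synchronous part: read every member of alg now, so any getters the
    // page has defined run before encrypt() returns, and copy the input
    // bytes so later page script can't mutate them.
    var normalized = normalizeAlgorithm(alg);   // made-up helper
    var cloned = structuredCloneOf(data);       // made-up helper

    return new Promise(function (resolve, reject) {
      // Asynchronous part: support detection (and the work itself) can
      // still happen later, on whatever task queue the spec names.
      queueCryptoTask(function () {             // made-up task queue hook
        if (!isSupported(normalized)) {
          reject(new Error("NotSupportedError"));  // spec would use a DOMException
          return;
        }
        resolve(doEncrypt(normalized, key, cloned));
      });
    });
  }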

This definitely all goes out the window if we allow JS to register its own
polyfills as algorithms - as has been discussed in the past (e.g. for
allowing custom KDFs/digest operations). Of course, that's a "Version 3"
kind of feature (where "Version 2" just deals with the Streams API).


>
>
>  The asynchronous nature is already going to be a problem for
>> ArrayBuffers - which we MUST take a structured clone of the data BEFORE
>> we can return the promise
>>
>
> This is somewhat similar, yes.
>
>
>  On a spec level, can we not simply state that you invoke the structured
>> clone algorithm on /alg/, letting o be the internal object created from
>> such clone. Then, at later points, as needed, you convert o to the IDL
>> dictionary type D, then normalize over D?
>>
>
> Even if you create a structured clone, as long as the result of the clone
> operation is being created in the same global getting properties from it
> can invoke arbitrary script.  Consider, for example:
>
>   Object.defineProperty(Object.prototype, "name", {
>       get: function() { /* Now we get to do whatever we want */ }
>     });
>
> and then passing an empty object literal (so {}) as the algorithm.  The
> structured clone will create a new object with no properties in the same
> global, and getting .name on it will invoke the getter defined on
> Object.prototype.
>
> You could structured clone into a clean global, I guess.  Is that what
> you're talking about here?  If so, can we guarantee that the clean global
> will never leak out observably?
>
> -Boris
>

Yeah, I think we *could* get away with a clean global, since it's just used
as a place-holder to defer IDL conversions, and never exposed.
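
As a purely illustrative sketch of what I mean by a clean global (not how
an engine would actually implement it), the copy's prototype chain would
live in a realm the page can't have poisoned:

  function copyIntoCleanRealm(alg) {
    var frame = document.createElement("iframe");
    document.documentElement.appendChild(frame);
    var clean = frame.contentWindow;

    // The copy's prototype is the clean realm's Object.prototype, so later
    // property reads by the implementation never hit getters the page has
    // installed on its own Object.prototype.
    var copy = new clean.Object();
    Object.getOwnPropertyNames(alg).forEach(function (k) {
      copy[k] = alg[k];   // shallow; any getters on alg itself run here, synchronously
    });
    return copy;
  }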

I'll try to put together a spec that attempts to *synchronously* handle the
IDL conversion into the target/derived type prior to returning the promise,
which should hopefully address this. It won't happen until after the LC
version is published, unfortunately.
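
For concreteness, this is the kind of race that doing the conversion before
returning avoids (contrived, and assuming a CryptoKey `key` is already in
scope):

  var view = new Uint8Array(16);   // the plaintext we pass in

  Object.defineProperty(Object.prototype, "name", {
    get: function () {
      // If normalization ran later, on some task, this could fire at an
      // arbitrary point after encrypt() had already returned - and could
      // even rewrite the data if it hadn't been cloned yet.
      view[0] = 0xFF;
      return "AES-GCM";
    }
  });

  crypto.subtle.encrypt({}, key, view);   // which bytes actually get encrypted?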

Received on Tuesday, 18 March 2014 02:30:52 UTC