Re: Regarding Issue-24: Defining a Synchronous API

There are no plans for any new synchronous APIs in the W3C. That decision
applies to all APIs, not just this one.

It does not force iteration to be done via recursive calls. Your concerns
about stack size are unfounded.
On Oct 23, 2014 3:30 PM, "B Galliart" <bgallia@gmail.com> wrote:

> If I read the thread of Issue-24 correctly, the feeling of the working
> group is that any synchronous API would lead to significant performance
> concerns.
>
> However, there are some use cases for digest, importKey and sign where
> it is not unreasonable to expect the operation to complete within very
> tight time/processor constraints, even on smartphones that are over a
> year old.  I would like to propose a method which allows the crypto API
> provider to specify the limits of what those use cases can be.
>
> Consider the following addition to the SubtleCrypto interface:
>
> unsigned long syncMaxBytes(DOMString method, AlgorithmIdentifier algorithm);
>
> So, if someone calls syncMaxBytes('digest', { name: 'SHA-1' }) and it
> returns 4096, then the script knows that synchronous SHA-1 digest calls
> will require the CryptoOperationData to be less than or equal to 4096
> bytes.  On a different provider the value returned may be only 1024 due
> to limited resources, or it may have enough resources to return 8192.
> Also, if the webcrypto provider decides that a call must always go
> through the Promise API, then it can return a max of 0.  So,
> syncMaxBytes('digest', { name: 'SHA-512' }) may return 0 on a mobile
> browser that still supports SHA-512 through the asynchronous API but not
> via a synchronous call.
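>
> As a sketch of how a script might consume this (digestSync is purely
> hypothetical here, standing in for whatever synchronous entry point
> might be defined alongside syncMaxBytes):
>
>   var data = new Uint8Array(1024);  // example input buffer
>   var limit = crypto.subtle.syncMaxBytes('digest', { name: 'SHA-1' });
>   if (limit >= data.byteLength) {
>     // within the provider's declared limit: a synchronous call is safe
>     var digest = crypto.subtle.digestSync({ name: 'SHA-1' }, data);
>   } else {
>     // too large (or the limit is 0): fall back to the Promise API
>     crypto.subtle.digest({ name: 'SHA-1' }, data).then(function (d) {
>       // handle the digest asynchronously
>     });
>   }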
>
> Likewise, for the importKey and sign methods, as long as the key and
> CryptoData lengths are kept limited, the time constraints on the call
> should be reasonable.
>
> The biggest problem I have with the current API is, if I understand it
> correctly, that it forces iteration to be done with recursive function
> calls, which are limited by the maximum size of the call stack.  I have
> found that in some cases the call stack may be as small as 1,000 frames,
> yet there are several cases where the recommended number of iterations
> for uses of hash and HMAC is 5,000 or 10,000.
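>
> To illustrate the pattern I mean, here is a minimal sketch against the
> current Promise-only API: each round can only be scheduled from the
> previous round's callback, so the iteration ends up written as a
> recursive function instead of a for loop.
>
>   function iterate(data, rounds) {
>     if (rounds === 0) return Promise.resolve(data);
>     // the next round must be scheduled from this round's callback
>     return crypto.subtle.digest({ name: 'SHA-256' }, data)
>       .then(function (digest) { return iterate(digest, rounds - 1); });
>   }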
>
> For example, the current versions of Chrome provide generateKey and sign
> for performing an HMAC but not deriveKey/deriveBits for performing
> PBKDF2.  Once the HMAC part of PBKDF2 is taken care of, the rest of the
> function is largely XORs and moving memory around.  Hence, using
> nodejs's crypto module (which does allow synchronous function calls) to
> do the HMAC, the rest of PBKDF2 returns results fairly quickly.  Doing
> the same in Chrome, despite it having the API functions to perform HMAC,
> is impossible.
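>
> For the curious, a minimal sketch of that remainder using nodejs's
> synchronous crypto module, assuming HMAC-SHA-256 as the PRF (this is
> one output block of PBKDF2 per RFC 2898, not a full implementation):
>
>   var crypto = require('crypto');
>
>   // T_i = U_1 xor U_2 xor ... xor U_c, where U_1 = HMAC(P, S || INT(i))
>   // and U_j = HMAC(P, U_{j-1}).
>   function pbkdf2Block(password, salt, blockIndex, iterations) {
>     var idx = new Buffer(4);
>     idx.writeUInt32BE(blockIndex, 0);
>     var u = crypto.createHmac('sha256', password)
>         .update(Buffer.concat([salt, idx])).digest();
>     var t = new Buffer(u);                  // running XOR accumulator
>     for (var j = 1; j < iterations; j++) {  // a classic for loop
>       u = crypto.createHmac('sha256', password).update(u).digest();
>       for (var k = 0; k < t.length; k++) t[k] ^= u[k];
>     }
>     return t;
>   }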
>
> One might suggest just waiting for Chrome to provide deriveKey, but this
> isn't the only function impacted.  What if a PBKDF3 is released which
> requires only minor tweaks but still has HMAC as the most costly part?
> Should there really be no way to provide an alternative implementation
> for browsers that do not currently support the new method and will not
> be updated to do so?
>
> How about one-time passwords, where one of the recommended methods is
> multi-round re-hashing?  Or generating a SHA-256 crypt() hash on the
> client side based on Ulrich Drepper's (of Red Hat) specification?
>
> Just because synchronous APIs can be abused with large amounts of data
> which take a long time to process doesn't mean the standard should throw
> the baby out with the bath water.  It shouldn't be an all-or-nothing
> deal.  There must be some compromise where synchronous API use is kept
> limited to non-abusive situations so that iterative calls can still be
> done using a classic for loop.
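>
> For instance (again with the hypothetical digestSync, gated by the
> provider's declared limit), multi-round re-hashing could stay a plain
> loop:
>
>   var h = crypto.getRandomValues(new Uint8Array(32));  // small input
>   if (crypto.subtle.syncMaxBytes('digest', { name: 'SHA-256' }) >= 32) {
>     for (var i = 0; i < 5000; i++) {
>       h = crypto.subtle.digestSync({ name: 'SHA-256' }, h);
>     }
>   }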
>

Received on Thursday, 23 October 2014 22:36:43 UTC