
Re: [W3C Web Crypto WG] Deciding if we need a discovery mechanism

From: Mark Watson <watsonm@netflix.com>
Date: Mon, 4 Jun 2012 17:48:00 +0000
To: GALINDO Virginie <Virginie.GALINDO@gemalto.com>, "public-webcrypto@w3.org" <public-webcrypto@w3.org>
Message-ID: <C97711EF-3D23-40D6-9CE4-6862C480C63F@netflix.com>
For completeness, the other option for algorithm discovery is trial-and-error (unsupported algos return an "unsupported" error and the script tries a different one).

The disadvantages of this are obvious.

The advantage is that it adds no additional complexity to the API.

The disadvantages are mitigated if the set of alternative algorithms that a *script may wish to use* is small, even if the set of possible algorithms that a client may support is large. In practice, sites probably support only a small set of different algorithms: probably the minimal set needed to get the browser/device coverage they want. So the trial-and-error process is probably limited to two or three "tries".
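The trial-and-error process described above might be sketched like this. The helper below is purely illustrative (the WG had not settled the actual API surface at this point); `probe` stands in for whatever operation would reject with an "unsupported" error, e.g. an attempt to generate a key with the candidate algorithm:

```javascript
// Hypothetical sketch of trial-and-error algorithm discovery.
// `algorithms` is the site's small preference-ordered list; `probe` is an
// async function that rejects when the implementation does not support the
// algorithm (e.g. a wrapper around a key-generation call).
async function firstSupported(algorithms, probe) {
  for (const alg of algorithms) {
    try {
      await probe(alg);   // rejects with an "unsupported" error if unavailable
      return alg;         // first algorithm the implementation accepts
    } catch (e) {
      // fall through and try the next candidate
    }
  }
  return null;            // none of the candidates are supported
}

// Example: a site needing only two or three "tries".
const preferences = ["AES-GCM", "AES-CBC", "AES-CTR"];
```

The loop terminates after at most `algorithms.length` attempts, which is the point of Mark's argument: the cost is bounded by the site's short preference list, not by the client's full algorithm set.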


On May 15, 2012, at 9:10 AM, Jarred Nicholls wrote:

On Tue, May 15, 2012 at 10:59 AM, GALINDO Virginie <Virginie.GALINDO@gemalto.com> wrote:
Dear all,

Some people mentioned that a webapp may be able to discover the algorithms supported by the environment it is running in, thus identifying the algorithms available through the Web Crypto API. There are several means to do that: (1) an actual discovery mechanism sending back the entire list of algorithms, or (2) creating 'cipher suites'. In addition to that, we may be able to mandate a minimum set of algorithms that should be supported when implementing the API.

I would like to get your feedback on that one, (or any alternative proposal).


I'd say a combination of 1 and 2 is satisfying.  If there's a way to use 1 and then create/discover a particular permutation of a 'cipher suite', as well as a way to request it via a cipher-suite-identifying string for example, then I think we get the best of both worlds in terms of robustness and discoverability.
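Option (2) could look something like the sketch below. The identifier format and field names are entirely invented for illustration; the WG had not defined any cipher-suite naming scheme:

```javascript
// Purely illustrative: a 'cipher suite' string naming one permutation of
// algorithms, which a script could request in a single call instead of
// enumerating each algorithm separately. The "cipher/hash" format here is
// an assumption, not anything proposed on the list.
function parseSuite(id) {
  // e.g. "AES-128-GCM/SHA-256" -> { cipher: "AES-128-GCM", hash: "SHA-256" }
  const [cipher, hash] = id.split("/");
  if (!cipher || !hash) {
    throw new Error("malformed suite identifier: " + id);
  }
  return { cipher, hash };
}
```

A discovery mechanism (option 1) could then return a list of such identifiers, giving both the full enumeration and the single-string request path Jarred describes.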

Once again, it is my opinion that mandating a minimum set of algorithms is aggressive.  While it does have its benefits in the short term like predictability for authors and establishing a deterministic test suite for the WG, there are a number of reasons I am against it:

  *   It could cause bad decisions in the API that take advantage of knowing a particular algorithm is meant to be implemented, or even worse, are decided solely for a particular algorithm.  The temptation could get the best of us ;)
  *   It means that vendors will have to come to consensus (assuming they all care about being compliant with the spec) on that set of algorithms, and that can be a slippery slope and a time sink because there might be differences in requirements between some vendors and/or the locales they support.
  *   Authors will undoubtedly write code that is not future proof, always assuming certain support with no feature detection. This will make it harder for vendors to deprecate & remove algorithm support in the future when e.g. the algorithm is no longer considered strong enough in the post-quantum cryptography world.  If we can nail down easy discoverability and feature detection, then library authors will code defensively (as they should) without a lot of effort.
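The defensive pattern the last point argues for might look like the following. This is a hedged sketch of what a library author could write, not part of any proposed API; `probe` stands in for whatever call would reject on an unsupported algorithm:

```javascript
// Hypothetical sketch: feature-detect an algorithm once and cache the
// answer, rather than assuming support because a spec mandated it. This is
// the "code defensively" approach for library authors, so algorithm support
// can be removed later without breaking callers that never checked.
const supportCache = new Map();

async function isSupported(alg, probe) {
  if (!supportCache.has(alg)) {
    let ok = true;
    try {
      await probe(alg);          // rejects if the implementation lacks `alg`
    } catch (e) {
      ok = false;                // treat any rejection as "unsupported"
    }
    supportCache.set(alg, ok);   // remember the answer; probe only once
  }
  return supportCache.get(alg);
}
```

A library built this way degrades gracefully when a vendor deprecates an algorithm: the check starts returning `false` and the caller falls back, instead of failing at first use.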

Received on Monday, 4 June 2012 17:48:30 UTC
