[Bug 25985] WebCrypto should be inter-operable

https://www.w3.org/Bugs/Public/show_bug.cgi?id=25985

--- Comment #1 from Ryan Sleevi <sleevi@google.com> ---
There are types of interoperability that WebCrypto has tried to capture
normatively:

- Defining, explicitly, the structure of data inputs and outputs. This may seem
pedantic, but real-world APIs have expected inputs in a variety of forms,
causing application developers all sorts of pain. An oft-surprising example is
that the inputs to CryptoAPI (Windows' cryptographic services) for signature
verification must be byte-reversed from the standard, on-the-wire form (and
the form that virtually every other cryptographic API expects); a sketch of
this quirk follows the list.

- Providing a consistent name for each algorithm, as well as defining what is
configurable within that algorithm (see the sketch after this list).

- Defining the structure of CryptoKeys and how they're serialized/deserialized
into different formats (also shown in the sketch after this list).
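
To make the byte-order pitfall above concrete, here is a minimal sketch (the
helper name is ours, purely for illustration): a signature in its standard,
on-the-wire byte order has to be reversed before CryptoAPI will accept it.

    // Illustrative helper: reverse a signature from its standard,
    // on-the-wire byte order into the reversed order CryptoAPI expects.
    function reverseForCryptoAPI(wireBytes: Uint8Array): Uint8Array {
      const reversed = new Uint8Array(wireBytes.length);
      for (let i = 0; i < wireBytes.length; i++) {
        reversed[i] = wireBytes[wireBytes.length - 1 - i];
      }
      return reversed;
    }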
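And a minimal sketch of the second and third points above, assuming the
Promise-based form of the API: the algorithm dictionary carries the normalized
name plus its configurable parameters, and the resulting CryptoKey can be
serialized into the defined formats.

    // The algorithm dictionary names the algorithm ("RSASSA-PKCS1-v1_5")
    // and spells out exactly what is configurable for it.
    async function demo(): Promise<void> {
      const keyPair = await crypto.subtle.generateKey(
        {
          name: "RSASSA-PKCS1-v1_5",
          modulusLength: 2048,
          publicExponent: new Uint8Array([0x01, 0x00, 0x01]), // 65537
          hash: "SHA-256",
        },
        true, // extractable, so the key can be exported below
        ["sign", "verify"]
      );

      // Serialize the public key into two of the defined formats.
      const jwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);
      const spki = await crypto.subtle.exportKey("spki", keyPair.publicKey);
      console.log(jwk.kty, spki.byteLength); // "RSA", DER length
    }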


However, there are types of interoperability that are more problematic.
Cryptography is an extremely special case, for a variety of reasons. Some of
them include:

Regulatory - The export of cryptography requires special licensing/approval
from many governments.

Legal - The use of certain algorithms, key sizes, or 'purposes' is restricted,
by force of law, in some jurisdictions. Likewise, the use of some algorithms
for some purposes may be encumbered in some jurisdictions.

Political - Likewise, some algorithms are mandated at a national level but,
within the cryptographic community and within implementations, are generally
not available outside that country (e.g. SEED, GOST).

Administrative - Most cryptographic libraries allow administrative control over
the set of algorithms permitted to be used. This may be required for compliance
with a sector-specific regime (e.g. FIPS 140-2, PCI DSS), or may be motivated
by an organization's security posture.

Practical - Historically speaking, user agents do not ship cryptography
themselves. They interface with some cryptographic library - either one
provided by the system (e.g. IE, Safari on Windows, Safari on OS X) or a
'third-party' library (e.g. Firefox, Chrome, Opera, the GTK WebKit port). If
you're surprised to see Firefox on this list, it's because Mozilla distributes
a baseline module but allows distros to remove that module or point it at a
system module, and in some cases (Firefox in FIPS mode) even encourages people
to replace that module themselves.

This is also similar to browsers' existing TLS stacks - administrators can (and
do) change or disable the cipher suites used to negotiate with a server. Even
the SSL 'mandatory to implement' cipher may be disabled.

Thus, when it comes to cryptographic algorithms, there are a variety of reasons
why, even if we can all agree on what to call an algorithm and how it behaves,
there is no intrinsic guarantee that it will be available.

With a normative, mandatory-to-implement suite, user agents in these
circumstances are faced with a choice: enable all of the feature or none of
it. Since they cannot enable all of it in a variety of circumstances (note
that NSS, the library underlying Firefox, still lacks support for a number of
the algorithms documented here that are already available on Windows and in
Safari), the interoperable choice would be enabling none. That, however, would
be akin to throwing the baby out with the bathwater.

It also presents an unfortunate situation for the world going forward: as has
been noted, algorithms only get weaker over time. As more and more users
disable weak algorithms, user agents would be forced to remove *all*
cryptographic capabilities (or wait for the WG to standardize a new version -
which would, in effect, make that algorithm 'optional' by virtue of which
"version" of the spec a UA has implemented).

At the end of the day, this does place the burden on the application developer
either to design their system to work with a set of algorithms that offer
comparable security strength (in the hope that one is supported), or to
support only user agents configured with their necessary suite. A given web
application will not, generally speaking, require ALL of the algorithms ALL of
the time - and may not require some of them EVER.
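
As a minimal sketch of the first approach (the candidate list and function
name are illustrative, not a recommendation), an application can walk a
preference-ordered list of comparable-strength algorithms and use the first
one the UA accepts:

    // Preference-ordered candidates of (roughly) comparable strength.
    const candidates: (RsaHashedKeyGenParams | EcKeyGenParams)[] = [
      { name: "ECDSA", namedCurve: "P-256" },
      { name: "RSASSA-PKCS1-v1_5", modulusLength: 2048,
        publicExponent: new Uint8Array([0x01, 0x00, 0x01]),
        hash: "SHA-256" },
    ];

    async function generateSigningKey(): Promise<CryptoKeyPair> {
      for (const alg of candidates) {
        try {
          return await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
        } catch {
          // NotSupportedError (or a policy rejection): try the next one.
        }
      }
      throw new Error("no mutually supported signature algorithm");
    }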


If we accept this, then the next step is the question of whether the User
Agent can allow users to enable/disable algorithms directly (e.g. within the
UA, without disabling them at the OS or cryptographic-library level). If we
accept that, then we get to the point where UAs may allow the user to
automatically disable certain algorithms under certain conditions - it's
simply automating a manual task.

If we can get that far, then the choice of UA itself can imply the user's
choice to automatically disable certain algorithms under certain conditions.
For example, a UA may market itself as a "secure" UA, or at least one whose
focus is security, in which case the use of algorithms under insecure
conditions would violate that focus.

Received on Thursday, 5 June 2014 01:40:11 UTC