- From: Harry Halpin <hhalpin@w3.org>
- Date: Mon, 08 Oct 2012 22:51:14 +0200
- To: Vijay Bharadwaj <Vijay.Bharadwaj@microsoft.com>
- CC: "public-webcrypto@w3.org" <public-webcrypto@w3.org>, David Dahl <ddahl@mozilla.com>, Emily Stark <estark@mit.edu>, Wan-Teh Chang <wtc@google.com>, GALINDO Virginie <Virginie.GALINDO@gemalto.com>, Ryan Sleevi <sleevi@google.com>
- Message-ID: <50733CC2.6020702@w3.org>
On 10/08/2012 09:22 PM, Vijay Bharadwaj wrote:
>
> > Then, what threat model does crypto in JS make sense for at all?
> > Obviously, when there's some lack of trust in the server *or* the
> > connection to the server that can be ameliorated by public-key crypto.
>
> Harry asked the above on a different email thread. This is an
> important question. But first we should be precise about what we're
> asking. WebCrypto is not (only) about "crypto in JS". It is about
> giving JS access to the crypto capabilities of the underlying
> platform. This includes cases in which the actual crypto is done
> elsewhere such as in a smart card.
>
> So when does it make sense to give JS access to the platform's crypto
> capabilities? In my mind, there are a few possible answers.
>
> It makes sense when one wants a trusted piece of JS in a trusted UA to
> interact with a less trusted data store. This is the Facebook use
> case. It is also David's cloud storage use case if the script is
> retrieved from somewhere other than the server that keeps the data.
>
> It makes sense when one wants a trusted piece of JS in a trusted UA to
> be able to interoperate with a server using an existing protocol (e.g.
> sign requests to access-controlled REST APIs, JimD's use cases on
> authenticating to government services).
>
> It makes sense when a server wants to deliver JS that uses a trusted
> piece of pre-provisioned crypto hardware to establish end-to-end trust
> independent of the UA (e.g. using a smart dongle for online banking,
> some of the Netflix use cases).
>
> There may be others, and I'd love to hear what others think.
>
> It's important to note that the "trusted UA" assumption is not as
> outlandish as it might seem at first; as Ryan points out on other
> threads, we routinely make an assumption that the OS is trusted when
> talking about native apps. One does not argue that including crypto
> APIs in operating systems is futile just because malware and rootkits
> exist. Many methods exist to improve the trust in the UA, including
> the use of non-browser JS implementations. One could also argue that
> various crypto primitives -- notably hash and RNG -- are only
> meaningful if one accepts this assumption.
>
I agree with all of the above, and thanks for listing them out; I think
they are all quite valid. Again, most of the critiques we've received on
the API are about not having any trust in the JS at all. In general, I
would maintain that arguments about servers being compromised are
similar to arguments about rootkits at the OS level; it's just that XSS
is generally easier to pull off than a rootkit.
I guess what some developers want is:
1) The ability to write new secure protocols in JS for use with WebApps,
using primitives such as digital signatures (a sketch of what I mean
follows this list). This would be very useful for a whole range of flows
involving servers beyond the same origin, such as OpenID Connect, where
the browser passes a signed token to an identity provider, which can
then pass it on to a relying party in order to access personal data.
2) Developers want the crypto API to be a silver bullet for security, as
they tend to assume "access to crypto functions = secure", but of course
in reality there are quite a few more bases to cover. Off the top of my
head, the developer should use CSP combined with HSTS, Certificate
Transparency and key pinning for TLS (anything missing here?); example
headers also follow below. That's about as close as we're going to get
to letting them create secure protocols in a reasonable manner for
WebApps.
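To make (1) a bit more concrete, here is the rough shape of what a
developer might write. This is only a sketch: the promise style, method
names and algorithm parameters are placeholders I'm using for
illustration, not the exact surface of the current draft.

  // Sketch only: assumes a promise-style subtle-crypto surface with
  // generateKey()/sign(); names and parameters are illustrative.
  async function signToken(payload) {
    // Generate a non-extractable signing key pair; the private key
    // material stays inside the UA (or the underlying platform/token).
    const keyPair = await crypto.subtle.generateKey(
      { name: "RSASSA-PKCS1-v1_5", modulusLength: 2048,
        publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
      false,                          // extractable: false
      ["sign", "verify"]);

    // Sign the token that will be handed to the identity provider.
    const data = new TextEncoder().encode(JSON.stringify(payload));
    const signature = await crypto.subtle.sign(
      "RSASSA-PKCS1-v1_5", keyPair.privateKey, data);

    return { publicKey: keyPair.publicKey, signature };
  }

The point is that the signature is produced without the private key ever
being exposed to the script or shipped to any of the servers involved.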
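For (2), the kind of baseline I have in mind looks roughly like the
following response headers (directive values purely illustrative);
pinning and Certificate Transparency live at the TLS/PKI layer rather
than in simple response headers, so they aren't shown:

  Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'
  Strict-Transport-Security: max-age=31536000; includeSubDomains

None of this is specific to our API, but without it script-level crypto
doesn't buy the developer much.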
Also, it's unclear whether the server and client should be considered
*one* application, as is traditional in web apps. The spec is not yet
stable enough to cover a use case where the user can have a private key,
store it, and not have that secret key material arbitrarily replaced by
key material from the same origin. I'd personally like to see that as a
possibility, as it would enable use cases where the server might not be
entirely trusted.
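To illustrate what I mean, something like the following; again, a sketch
under the assumption that a key can be generated non-extractable and its
handle persisted via structured clone into IndexedDB (the object store
name and record key here are made up):

  // Sketch only: persist an opaque, non-extractable key handle so the
  // raw key material itself is never visible to script.
  async function storeOriginKey(db) {
    const keyPair = await crypto.subtle.generateKey(
      { name: "ECDSA", namedCurve: "P-256" },
      false,                         // extractable: false
      ["sign", "verify"]);

    const tx = db.transaction("keys", "readwrite");
    tx.objectStore("keys").put(keyPair, "user-signing-key");
    await new Promise((resolve, reject) => {
      tx.oncomplete = resolve;
      tx.onerror = () => reject(tx.error);
    });
    // Nothing above stops later same-origin script from calling put()
    // again with a different key pair -- that replacement problem is
    // exactly the gap I'd like the spec to be able to close.
  }

Whether the UA can give such a stored key any protection against being
silently swapped out by later script from the same origin is the open
question.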
cheers,
harry
> Since this question seems to keep coming up in feedback, maybe we
> should develop a position on it as a group. Does anyone else have any
> thoughts on the matter?
>
>
Received on Monday, 8 October 2012 20:51:27 UTC