Re: Usefulness of WebCrypto API

On Mon, Oct 8, 2012 at 2:32 PM, Seetharama Rao Durbha
<S.Durbha@cablelabs.com> wrote:
>
> On 10/8/12 1:22 PM, "Vijay Bharadwaj" <Vijay.Bharadwaj@microsoft.com> wrote:
>
> > Then, what threat model does crypto in JS make sense for at all?
> > Obviously, when there's some lack of trust in the server *or* the
> > connection to the server that can be ameliorated by public key crypto.
>
> Harry asked the above on a different email thread. This is an important
> question. But first we should be precise about what we’re asking. WebCrypto
> is not (only) about “crypto in JS”. It is about giving JS access to the
> crypto capabilities of the underlying platform. This includes cases in which
> the actual crypto is done elsewhere, such as in a smart card.

To be fair to both Firefox and Chromium, I think there's a little bit
of a disconnect, since the crypto of the platform (e.g. Windows, Mac,
iOS, Android) may be different from the crypto of the browser (e.g.
NSS, OpenSSL). But yes, I would certainly agree that it's about giving
web apps more access (equivalent to what native apps have), much like
WebGL gives more access to 3D capabilities, or the WebRTC/MediaStream
proposals give more access to audio/video capture capabilities.
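
For concreteness, that access looks roughly like the sketch below. The
API surface is still in flux, so treat the names and the Promise-style
shape here as illustrative rather than normative:

    // Hash some bytes using whatever crypto backs the UA
    // (NSS, CNG, CommonCrypto, etc.). Illustrative sketch only.
    var data = new TextEncoder().encode("hello world");
    window.crypto.subtle.digest("SHA-256", data).then(function (hash) {
      // hash is an ArrayBuffer holding the SHA-256 digest
      console.log(new Uint8Array(hash));
    });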

>
> So when does it make sense to give JS access to the platform’s crypto
> capabilities? In my mind, there are a few possible answers.
>
> It makes sense when one wants a trusted piece of JS in a trusted UA to
> interact with a less trusted data store. This is the Facebook use case. It
> is also David’s cloud storage use case if the script is retrieved from
> somewhere other than the server that keeps the data.
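
To make that case concrete: the script can encrypt locally, so the data
store only ever sees ciphertext. A hedged sketch (the key handling and
names are illustrative, not lifted from the draft):

    // Encrypt client-side before storing remotely; the key itself
    // never leaves the UA. Illustrative only.
    var iv = crypto.getRandomValues(new Uint8Array(12));
    var plaintext = new TextEncoder().encode("private note");
    crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        false,                         // non-extractable
        ["encrypt", "decrypt"]
    ).then(function (key) {
      return crypto.subtle.encrypt({ name: "AES-GCM", iv: iv },
                                   key, plaintext);
    }).then(function (ciphertext) {
      // upload iv + ciphertext to the (less trusted) data store
    });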
>
> It makes sense when one wants a trusted piece of JS in a trusted UA to be
> able to interoperate with a server using an existing protocol (e.g. sign
> requests to access-controlled REST APIs, JimD’s use cases on authenticating
> to government services).
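
That case, in turn, reduces to something like HMAC-signing a canonical
request string. Again a rough sketch; the secret, the canonical form,
and the header placement are all illustrative:

    // Sign a request to an access-controlled REST API. Illustrative.
    var enc = new TextEncoder();
    crypto.subtle.importKey(
        "raw", enc.encode("demo-shared-secret"),   // demo key only
        { name: "HMAC", hash: "SHA-256" },
        false, ["sign"]
    ).then(function (key) {
      // sign the canonicalized request (method, path, date, ...)
      return crypto.subtle.sign(
          "HMAC", key, enc.encode("GET /v1/widgets\n2012-10-08"));
    }).then(function (sig) {
      // base64-encode sig and attach it, e.g. in an Authorization
      // header, per whatever protocol the service specifies
    });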
>
> It makes sense when a server wants to deliver JS that uses a trusted piece
> of pre-provisioned crypto hardware to establish end-to-end trust independent
> of the UA (e.g. using a smart dongle for online banking, some of the Netflix
> use cases).
>
> There may be others, and I’d love to hear what others think.
>
> It’s important to note that the “trusted UA” assumption is not as outlandish
> as it might seem at first; as Ryan points out on other threads, we routinely
> make an assumption that the OS is trusted when talking about native apps.
> One does not argue that including crypto APIs in operating systems is futile
> just because malware and rootkits exist. Many methods exist to improve the
> trust in the UA, including the use of non-browser JS implementations.
>
> <snip>
> I am not sure I can agree with this. I think the whole confusion so far
> has been regarding our position on the trustability of the UA and the
> JS it is running. I personally think that we should steer away from the
> responsibility of providing a trusted UA. What is a trusted UA, BTW? A
> server has no way to tell that it is communicating with a trusted UA.
> There is also a difference between JS running within a browser on the
> far end of the world and a native application a user is using. As I
> pointed out earlier: in the former case, the trust is between the
> server application and the client UA/JS. In the latter case, the trust
> is between the human user using the app and the app itself. Apples and
> oranges.
>
> We will have to convince others that this API is not about trust; as
> you said earlier, it is a gateway into the crypto functionality
> provided by the platform, one that is stronger and more uniform.
> Whether the keys are preexisting or newly created, there is an element
> of user education when it comes to implementations. For example,
> verifying the URL of the web site before accepting a signing request.
> If an implementation is so bad as to allow injection of malicious JS
> on its sites, too bad.
> </snip>
>
> One could also argue that various crypto primitives – notably hash and RNG –
> are only meaningful if one accepts this assumption.
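
Those two are about the simplest things the API surfaces, along the
lines of:

    var rnd = crypto.getRandomValues(new Uint8Array(16)); // UA's CSPRNG
    crypto.subtle.digest("SHA-256", rnd)                  // platform hash
      .then(function (digest) { /* digest is an ArrayBuffer */ });

and, as noted, even these are only as meaningful as the trust one
places in the UA executing them.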
>
> Since this question seems to keep coming up in feedback, maybe we should
> develop a position on it as a group. Does anyone else have any thoughts on
> the matter?
>

+1 to clarifying a position as a group, both as an introduction for
reviewers and to make sure expectations are set appropriately.

However, just so it's not missed, I do think the security
considerations reviewers have raised come into play when we talk about
platform crypto (pre-existing keys) vs. web crypto (web-provisioned
keys), so we can't quite ignore those either. As web apps become more
akin to native apps, we don't want the security model to permit
drive-by malware that would otherwise be "prevented" by native app
security boundaries.

I think Seetharama's point about the two types of trust is relevant to
that discussion, particularly when we talk about the user interaction
model. Much as in the OS case, we presume some degree of "secure"
entry or interaction for certain operations.

For example, on the OS side, Windows has its unspoofable secure
desktop for password/PIN entry, OS X has the Keychain Access dialogs,
and, on some Intel hardware, there are even drivers that integrate
directly with the hardware to provide TPM-backed secure entry. On the
browser side, the equivalent would be whatever the particular user
agent deems "unspoofable" (different UAs have different definitions
according to their particular browser chrome/UI, but functionally
they're the same).

Such screens provide a means of trusted entry (or acknowledgement of
permissions, selection of certs, etc.) for users, but they do not
provide any means of "trust" for the web application.

Received on Monday, 8 October 2012 21:47:42 UTC