Re: Usefulness of WebCrypto API

On Wed, Oct 10, 2012 at 7:47 AM, Vijay Bharadwaj
<Vijay.Bharadwaj@microsoft.com> wrote:
> So it seems to me that there are at least two aspects of "security" and/or "trust" at play here:
>
> 1. Server wants to know that client is executing the right script.
> 2. User wants to know that the script won't harm them.
>
> I think #1 can only be solved by using a crypto API in conjunction with some form of pre-provisioning. The only way for a server to deal with a potentially misbehaving client is to construct a protocol that will fail when the client misbehaves. For example, Netflix appears to be doing this by baking keys into known good client platforms, so that possession of such a key implies something about the platform's trustworthiness. Another way to achieve this may be a smart card used with an external reader that has its own display for the user to verify the transaction being performed.
>
> #2 includes the set of concerns around privacy. We can solve for some of these - for example, we can have the UA display prompts before allowing discovery of, or access to, keys from other origins. Other concerns are not so easy to solve (e.g. UA correctly executes script that allows you to access your health records, but then sends copies of everything to an attacker). Either way, mitigating these concerns requires trusting the client to do the "right thing" for the user. One could layer more stuff on top of the client platform (e.g. DDahl's proposal for having script signed by a store) but the fact remains that you assume a trustworthy execution environment for the script to verify these signatures, for example.
>
> I think part of the disconnect is because developers (as represented in our use cases) are looking at #1 as the primary problem solved by this API, while reviewers are focused on #2. For example, the Facebook use case assumes that the browser is trustworthy though its WebStorage may not be. Consequently, delivering a script to the browser with a public key and some signatures is enough to ensure the browser executes the right script. This improves user responsiveness and prevents one class of attack on the user. The fact that it doesn't prevent other classes of attack (such as Trojans in the browser) is understood and potentially addressed in other ways. From the user's perspective, when used correctly, the API delivers a better web experience without significant increase in risk.

The point you're making here isn't immediately clear to me.

Do you consider the Facebook use case as relying on pre-provisioned
keys (what you called out in #1)?
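
That is, the pattern I take from that description is roughly the
sketch below - the public key arrives with the script itself rather
than being pre-provisioned, and anything pulled out of the (less
trusted) WebStorage is verified before use. To be clear, the
promise-style subtle.importKey/verify calls and the RSA/SHA-256
choice are just illustrative assumptions on my part, not what
Facebook actually does nor what the final API shape will be:

  // Illustrative only: verify data pulled back out of WebStorage
  // against a public key that was delivered with the page itself.
  // The exact API surface here is assumed, not settled.
  async function verifyCached(publicKeyJwk, signatureBuf, dataBuf) {
    const key = await crypto.subtle.importKey(
      'jwk',
      publicKeyJwk,
      { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
      false,            // not extractable
      ['verify']
    );
    // Only trust the cached bytes if the signature checks out.
    return crypto.subtle.verify(
      'RSASSA-PKCS1-v1_5',
      key,
      signatureBuf,
      dataBuf
    );
  }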

>
> So maybe we should set expectations that implementing a WebCrypto API is not by itself a means of improving the security of web browsers; perhaps that is even a non-goal. The point of having a WebCrypto API is to make it possible to do things in browsers that were hard or impossible to do before, allowing developers to make the right tradeoffs with respect to risk management. We have plausible use cases where such tradeoffs can be made, and the fact that there are many use cases where it doesn't make sense is not necessarily an argument against the API.

Absolutely agree with this point. This is not trying to be the
solution that "solves" all of the (security) concerns with JS, even if
it is a security API, and even if some of these concerns apply to
cryptography. These concerns equally apply to non-cryptographic uses,
so trying to say that we have to solve them first to solve
cryptography is... misleading.

>
> So I'm not suggesting that we ignore the security considerations that differentiate web apps from native apps, I'm suggesting that perhaps we should try and convey that WebCrypto is just another API, not "security magic" that renders existing security considerations irrelevant. Leading with the use cases might help reinforce that.


Yes, but I do think we need to carefully consider what the security
of the web platform currently offers, to make sure that the
claims/needs/desires of the API can actually match reality. I've
tried to highlight this several times (eg: when talking about
multi-origin key access and same-origin policies), and to point out
how some desired security properties may only be achievable using
out-of-band means.

That said, I think there is still *significant* value in this "as an
API" - as demonstrated by Facebook's use case, as demonstrated by the
"encrypted IndexedDB" use case I'd raised previously, as demonstrated
by systems like Mozilla's Persona.
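
To make the IndexedDB case concrete, below is a rough sketch of the
shape I have in mind: the key material stays non-extractable inside
the UA's crypto layer, and only ciphertext (plus an IV) ever touches
the less-trusted storage. The promise-style subtle.* calls, the
AES-GCM parameters, and the 'messages' object store are illustrative
assumptions - how keys are persisted or looked up across visits is
deliberately omitted, and none of this is meant as the final API:

  // Rough sketch: encrypt a record before it is put into IndexedDB.
  // Assumes `db` is an open IDBDatabase with an autoIncrement
  // 'messages' object store.
  async function storeEncrypted(db, record) {
    // Generate (or look up) a key that never leaves the crypto layer.
    const key = await crypto.subtle.generateKey(
      { name: 'AES-GCM', length: 256 },
      false,                      // not extractable by script
      ['encrypt', 'decrypt']
    );

    const iv = crypto.getRandomValues(new Uint8Array(12));
    const plaintext = new TextEncoder().encode(JSON.stringify(record));
    const ciphertext = await crypto.subtle.encrypt(
      { name: 'AES-GCM', iv: iv },
      key,
      plaintext
    );

    // Only ciphertext and IV reach the (less trusted) storage layer.
    const tx = db.transaction('messages', 'readwrite');
    tx.objectStore('messages').put({ iv: iv, ciphertext: ciphertext });
    return new Promise(function (resolve, reject) {
      tx.oncomplete = resolve;
      tx.onerror = function () { reject(tx.error); };
    });
  }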

>
> Does that address your points?
>
> -----Original Message-----
> From: Ryan Sleevi [mailto:sleevi@google.com]
> Sent: Monday, October 8, 2012 2:47 PM
> To: Seetharama Rao Durbha
> Cc: Vijay Bharadwaj; public-webcrypto@w3.org; David Dahl; Emily Stark; Wan-Teh Chang; GALINDO Virginie; Harry Halpin
> Subject: Re: Usefulness of WebCrypto API
>
> On Mon, Oct 8, 2012 at 2:32 PM, Seetharama Rao Durbha <S.Durbha@cablelabs.com> wrote:
>>
>>
>> On 10/8/12 1:22 PM, "Vijay Bharadwaj" <Vijay.Bharadwaj@microsoft.com> wrote:
>>
>> Then, what threat model does crypto in JS make sense for at all?
>> Obviously, when there's some lack of trust in the server *or* the
>> connection to the server that can be ameliorated by public key crypto.
>>
>> Harry asked the above on a different email thread. This is an
>> important question. But first we should be precise about what we're
>> asking. WebCrypto is not (only) about "crypto in JS". It is about
>> giving JS access to the crypto capabilities of the underlying
>> platform. This includes cases in which the actual crypto is done elsewhere such as in a smart card.
>
> To be fair to both Firefox and Chromium, I think there's a little bit of a disconnect, since the crypto of the platform (eg: Windows, Mac, iOS, Android) may be different than the crypto of the browser (eg:
> NSS, OpenSSL). But yes, I would certainly agree that it's about giving more access (equivalent to native apps) to web apps, much like WebGL gives more access to the 3D capabilities, or the WebRTC/MediaStream proposals give more access to the audio/video capture capabilities.
>
>>
>>
>>
>> So when does it make sense to give JS access to the platform's crypto
>> capabilities? In my mind, there are a few possible answers.
>>
>>
>>
>> It makes sense when one wants a trusted piece of JS in a trusted UA to
>> interact with a less trusted data store. This is the Facebook use
>> case. It is also David's cloud storage use case if the script is
>> retrieved from somewhere other than the server that keeps the data.
>>
>>
>>
>> It makes sense when one wants a trusted piece of JS in a trusted UA to
>> be able to interoperate with a server using an existing protocol (e.g.
>> sign requests to access-controlled REST APIs, JimD's use cases on
>> authenticating to government services).
>>
>>
>>
>> It makes sense when a server wants to deliver JS that uses a trusted
>> piece of pre-provisioned crypto hardware to establish end-to-end trust
>> independent of the UA (e.g. using a smart dongle for online banking,
>> some of the Netflix use cases).
>>
>>
>>
>> There may be others, and I'd love to hear what others think.
>>
>>
>>
>> It's important to note that the "trusted UA" assumption is not as
>> outlandish as it might seem at first; as Ryan points out on other
>> threads, we routinely make an assumption that the OS is trusted when talking about native apps.
>> One does not argue that including crypto APIs in operating systems is
>> futile just because malware and rootkits exist. Many methods exist to
>> improve the trust in the UA, including the use of non-browser JS implementations.
>>
>> <snip>
>> I am not sure I can agree with this. I think the whole confusion so
>> far has been regarding our position on the trustability of the UA and
>> the JS it is running. I personally think that we should steer away from
>> the responsibility of providing a trusted UA. What is a trusted UA,
>> BTW? A server has no way to say that it is communicating with a trusted UA.
>> There is also a difference between JS running within a browser on the
>> far end of the world and a native application a user is using. As I
>> pointed out earlier - in the former case, the trust refers to the one
>> between the server application and the client UA/JS. In the latter
>> case, the trust refers to the one between the human user using the app and the app itself. Apples and oranges.
>>
>> We will have to convince others that this API is not about trust - as
>> you said earlier, it is a gateway into the crypto functionality
>> (provided by the platform) - stronger and uniform. Whether it be
>> preexisting keys or newly created ones, there is an element of user
>> education when it comes to implementations. For example, verifying the
>> URL of the web site before accepting a signing request. If the
>> implementation is so bad as to allow injection of malicious JS on their
>> sites, too bad.
>> </snip>
>>
>> One could also argue that various crypto primitives - notably hash and
>> RNG - are only meaningful if one accepts this assumption.
>>
>>
>>
>> Since this question seems to keep coming up in feedback, maybe we
>> should develop a position on it as a group. Does anyone else have any
>> thoughts on the matter?
>>
>>
>>
>>
>
> +1 to clarifying a position as a group, as an introduction for
> reviewers and for making sure expectations are set appropriately.
>
> However, just so it's not missed, I do think the security considerations reviewers have raised come into play when we talk about platform crypto (pre-existing) vs web crypto (web-provisioned), so we can't quite ignore those either. As web apps become more akin to native apps, we don't want the security model to permit drive-by malware that would otherwise be "prevented" by native app security boundaries.
>
> I think Seetharama's point about two types of trust is relevant to that discussion, particularly when we talk about the user interaction model. Much like the OS use case, we assume some degree of "secure" entry or interaction for some cases.
>
> For example, on the OS side, Windows has its unspoofable security screen to be used for password/PIN entry, OS X has the Keychain Access dialogs, and, on some Intel hardware, there are even drivers that integrate directly with the hardware to provide TPM secure entry. On the browser side, the equivalent would be whatever the particular user agent deems "unspoofable" (different UAs have different definitions according to their particular browser chrome/UI, but functionally they're the same).
>
> Such screens provide a means for trusted entry (or acknowledgement of permissions, selection of certs, etc) for users, but they do not at all provide any means of "trust" for the web application.
>

Received on Wednesday, 10 October 2012 18:31:37 UTC