Re: [from ekr] More on key isolation/netflix use case

On Tue, May 15, 2012 at 11:26 AM, Seetharama Rao Durbha <
S.Durbha@cablelabs.com> wrote:

> The question is ‘how is the server going to know that a signature
> originated from within the browser implementation’. This goes back to
> EKR's point about a compliant browser and how the server would know it
> is talking to one. As far as the server is concerned, it cannot tell
> the difference between a message received from a custom client and one
> received from a browser. Unless, as mentioned by Mitch Zollinger, there
> is a pre-authentication step using some pre-stored keys – in which case
> operations (like signature or HMAC) using those keys should suffice, in
> my opinion. Any keys generated by the JS itself are, for all practical
> purposes, untrustworthy (meaning any client can replicate such
> generation).
>

Ah, I'm not clear on what attacks we're trying to prevent here. I was
imagining a situation in which the only attacker we're worried about is
some malicious JS that gets run at some point after key generation. For
example, maybe a website generates a key for each user at registration
time, and if the website becomes aware of a compromise that started at some
time, it knows that only users who registered after that time were
vulnerable to key theft. (Assuming that signing requires some genuine user
interaction.) Or maybe the website is built in such a way that an XSS that
allows the attacker to sign messages is reasonably likely, but an XSS that
allows the attacker to hijack the key generation process is not at all
likely. (Maybe the key generation process is heavily sandboxed or
something; I'm not sure I've thought that through entirely.) Is there
something I'm missing that suggests the key is still vulnerable even under
this threat model? Or do you believe that this threat model is just not
particularly useful?
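
To make this concrete, here is a rough sketch of the kind of flow I have
in mind. None of this is real or proposed API; generateSigningKey and
key.sign are invented purely for illustration:

   // Hypothetical API, for illustration only.
   // At registration time, the page asks the browser to generate a
   // signing key whose private half is never exposed to script.
   var key = window.crypto.generateSigningKey({ alg: "RSA-SHA256" });

   // Later, any script on the page (including an XSS payload) can at
   // most ask the browser for signatures; the browser prompts the user
   // before each signing operation.
   key.sign(message, function (signature) {
       // The attacker only ever sees signatures, never the key itself,
       // so the worst case is a user-mediated signing oracle.
       sendToServer(signature);
   });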


> Following your example, if the JS were to generate a signing key that
> requires some user interaction – the question is, who is processing the
> user interaction – the browser or the JS itself? If the latter, the JS
> already has the essential seeding material for the key. If the former, I
> am not sure the API will support it. Even if the API does support such
> functions, what prevents the JS itself from generating the key (not
> using the API) – if the JS were so untrustworthy?
>

This UI would definitely have to be in the browser and not controlled (or
forge-able, or clickjack-able) by the JS.

Emily


> From: Emily Stark [mailto:estark@mit.edu]
> Sent: Tuesday, May 15, 2012 9:01 AM
> To: Seetharama Rao Durbha
> Cc: public-webcrypto@w3.org
> Subject: Re: [from ekr] More on key isolation/netflix use case
>
> On Tue, May 15, 2012 at 10:07 AM, Seetharama Rao Durbha <
> S.Durbha@cablelabs.com> wrote:
>
>
> I have a similar observation about the discussion of key isolation, or
> hiding the key from JS, in yesterday's call. In my opinion, any key
> generated by the JS (as a result of invoking the API) should be
> accessible to the JS.
>
>
> This seems overly permissive to me. Maybe geolocation-style access-control
> UI is out of scope for this API, but it seems conceivable to me that an
> application could, for example, want to have the JS generate a signing key
> that requires some user interaction to use. It would be useful to be able
> to guarantee that an XSS or other malicious JS could only use the key as a
> signing oracle (which requires user interaction), and not be able to
> actually steal the key itself.
>
> Emily
>
>
> -----Original Message-----
> From: Wendy Seltzer [mailto:wseltzer@w3.org]
> Sent: Monday, May 14, 2012 11:53 AM
> To: public-webcrypto@w3.org
> Subject: Fwd: [from ekr] More on key isolation/netflix use case
>
> (I'm not sure why this didn't go through directly, since Eric is
> subscribed as an Invited Expert -- perhaps with a different email
> address?)
>
> -------- Original Message --------
> Date: Mon, 14 May 2012 16:07:28 +0000
> From: Eric Rescorla <ekr@rtfm.com>
>
>
> The Netflix use case document posted by Mitch shows an example of a DH key
> exchange designed to create a secure key between Alice and Bob without the
> JS getting it.
>
>    To support Diffie-Hellman key exchange using WebCrypto, we might do
> something like this:
>
>    // In this example, we use the following webcrypto APIs:
>    // DiffieHellman object ctor
>    //   DiffieHellman(p, g)
>    //
>    // (member function) generate() internally creates 'a' & returns 'A'
>    // 'a' is never visible in Javascript
>    //   generate()
>    //
>    // (member function) computeSS() takes 'B' & calculates 'ss'
>    //   computeSS(B)
>
>    // example usage of above APIs to create 'ss'
>    var dh = new DiffieHellman(p, g);
>    var A = dh.generate();
>    // we now send 'p', 'g', and 'A' to the server which responds with 'B'
>    // after receiving 'B' we generate 'ss' which stays inside our dh object
>    dh.computeSS(B);
>
>    At this point, we have created a shared secret which is inaccessible
>    to Javascript, but we can't yet do anything useful with it. In order
>    to transform the shared secret into something usable we need to use a
>    key derivation algorithm (RFC 2631? or something simpler?) to compress
>    or expand the keying material 'ss' to keying data which is the
>    appropriate size for some other algorithm.
>
> I agree that this creates a shared secret not known to the JS, but what
> stops the JS from mounting a MITM attack? I.e., it generates its own DH
> key pair (c, C) and provides C to both the local browser and the remote
> end. At this point, it shares K_ac with the browser and K_bc with Bob.
> Absent some method for verifying that a DH share came out of a compliant
> browser, it's not clear to me what security benefit has been achieved here.
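>
> Concretely, the attack might look something like this (purely
> illustrative; randomExponent and modPow just stand in for bignum
> arithmetic that the hostile script can carry along with it):
>
>    // The script generates its own DH key pair (c, C) in plain JS,
>    // so it knows the private value 'c'.
>    var c = randomExponent();
>    var C = modPow(g, c, p);
>
>    // It still drives the honest API on the local side...
>    var dh = new DiffieHellman(p, g);
>    var A = dh.generate();
>    // ...but sends C (not A) to the server, which replies with B.
>    dh.computeSS(C);                // the browser's hidden secret is K_ac
>
>    // The script can compute both session keys itself:
>    var K_ac = modPow(A, c, p);     // shared with the local browser
>    var K_bc = modPow(B, c, p);     // shared with the remote end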
>
> -Ekr
>
>
>
>
> ****
>
> ** **
>

Received on Saturday, 19 May 2012 05:41:02 UTC