Re: Usefulness of WebCrypto API

On Tue, Oct 9, 2012 at 9:22 AM, Seetharama Rao Durbha
<S.Durbha@cablelabs.com> wrote:
> As Vijay originally said, we probably cannot escape answering this question
> around trusted JS.
>
> As I was thinking about it, the API can fall into two categories –
>
> One dealing with complete in-memory crypto operations - RNG, key
> derivation/creation, encryption/decryption/signing, etc., all done within
> the browser's memory (see the sketch below)
> Another dealing with crypto operations on 'external' devices, or
> storing/retrieving keys in/from external devices/storage (including
> browser storage)
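>
> (For illustration, a rough sketch of a purely in-memory operation of the
> first kind, assuming a promise-based crypto.subtle interface rather than
> the current event-based draft; all parameters are illustrative:)
>
>     // Derive an AES key from a password entirely in browser memory.
>     async function deriveAesKey(password, salt) {
>       const material = await crypto.subtle.importKey(
>         "raw", new TextEncoder().encode(password),
>         "PBKDF2", false, ["deriveKey"]);
>       return crypto.subtle.deriveKey(
>         { name: "PBKDF2", salt: salt, iterations: 100000, hash: "SHA-256" },
>         material,
>         { name: "AES-GCM", length: 256 },
>         false, ["encrypt", "decrypt"]);
>     }
>
> No key material or plaintext leaves the page; nothing here touches a
> device or persistent storage.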
>
> The latter ones require the JS to be downloaded over SSL. The former ones
> do not.
>
> Can we recognize this in our API? That is, some parts of the API do not
> require JS to be trusted, and other parts do?
>
> So, the trust models we support will be
>
> In-memory crypto operations – NONE
> Device/Storage operations – user trusts the browser application he/she is
> using; browser trusts the JS invoking the API; (Of course, browser trusts
> the platform and vice versa :) -  hoping both browser and platform are not
> susceptible to malware )
>
>
> I know that this could be more complicated than the above paragraph, but
> just wanted to put it out there.
>
> Thanks,
> Seetharama


While I appreciate what you're trying to propose here, I do not
believe the distinction is at all relevant to the security properties.
The same concerns regarding the malleability of the environment exist
for both "in-memory" operations and external devices.

We support requiring SSL and CSP for both options. There's no added
benefit to doing SSL-only (without CSP) in the case of "external"
devices, and there's no "inherent reduced security" in the
non-external situation.
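
(For concreteness, a hypothetical Node-style middleware sketch of what
"SSL plus CSP" delivery could look like; the header values are
illustrative, and CSP header naming is still settling across browsers:)

    // Serve the crypto-using script only over HTTPS, with CSP
    // restricting where code may be loaded from, and HSTS keeping
    // subsequent loads on TLS.
    function secureHeaders(req, res, next) {
      res.setHeader("Content-Security-Policy",
        "default-src 'self'; script-src 'self'");
      res.setHeader("Strict-Transport-Security",
        "max-age=31536000; includeSubDomains");
      next();
    }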

Thus, I think your split is neither accurate nor reasonable grounds for
a security distinction, at least not where you've drawn it. Both use
cases want the JS to be trusted.

>
>
> On 10/8/12 7:12 PM, "Mountie Lee" <mountie.lee@mw2.or.kr> wrote:
>
> Hi.
> I think this issue is important.
>
> We need consensus on how to secure JS code.
>
> Harry's point about the "ability to write new secure protocols in JS" is a
> good consideration.
>
> In detail:
>
> Signed JS
> (http://www.w3.org/2012/webcrypto/wiki/Use_Cases#Signed_web_applications) is
> already listed in our use cases.
> To sign and verify JS, generating a hash of the JS code is also required.
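>
> (A rough sketch of computing such a code hash, again assuming a
> promise-based crypto.subtle.digest and a fetch-style loader, neither of
> which is in the current draft:)
>
>     // Fetch the script's bytes and compute their SHA-256 digest,
>     // which a signer could sign and a verifier could recompute.
>     async function hashScript(url) {
>       const response = await fetch(url);
>       const bytes = await response.arrayBuffer();
>       const digest = await crypto.subtle.digest("SHA-256", bytes);
>       return new Uint8Array(digest);
>     }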
>
> Can we give these features more focus?
>
> regards
> mountie.
>
>
> On Tue, Oct 9, 2012 at 5:51 AM, Harry Halpin <hhalpin@w3.org> wrote:
>>
>> On 10/08/2012 09:22 PM, Vijay Bharadwaj wrote:
>>
>> > Then, what threat model does crypto in JS make sense for at all?
>> > Obviously, when there's some lack of trust in the server *or* the
>> > connection to the server that can be ameliorated by public key crypto.
>>
>> Harry asked the above on a different email thread. This is an important
>> question. But first we should be precise about what we’re asking. WebCrypto
>> is not (only) about “crypto in JS”. It is about giving JS access to the
>> crypto capabilities of the underlying platform. This includes cases in which
>> the actual crypto is done elsewhere, such as in a smart card.
>>
>>
>>
>> So when does it make sense to give JS access to the platform’s crypto
>> capabilities? In my mind, there are a few possible answers.
>>
>>
>>
>> It makes sense when one wants a trusted piece of JS in a trusted UA to
>> interact with a less trusted data store. This is the Facebook use case. It
>> is also David’s cloud storage use case if the script is retrieved from
>> somewhere other than the server that keeps the data.
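>>
>> (As an illustration of that case, a hypothetical sketch, assuming a
>> promise-based API and an already-established AES-GCM key:)
>>
>>     // Encrypt data client-side before handing it to the less
>>     // trusted store; only ciphertext and IV leave the browser.
>>     async function sealForStorage(key, data) {
>>       const iv = crypto.getRandomValues(new Uint8Array(12));
>>       const ciphertext = await crypto.subtle.encrypt(
>>         { name: "AES-GCM", iv: iv }, key, data);
>>       return { iv: iv, ciphertext: ciphertext };
>>     }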
>>
>>
>>
>> It makes sense when one wants a trusted piece of JS in a trusted UA to be
>> able to interoperate with a server using an existing protocol (e.g. sign
>> requests to access-controlled REST APIs, JimD’s use cases on authenticating
>> to government services).
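>>
>> (Hypothetically, signing a request to an access-controlled REST API
>> might look like this, assuming a promise-based API and a shared secret
>> already held as bytes:)
>>
>>     // Import the shared secret and sign the request body
>>     // with HMAC-SHA-256.
>>     async function signRequest(secretBytes, requestBytes) {
>>       const key = await crypto.subtle.importKey(
>>         "raw", secretBytes,
>>         { name: "HMAC", hash: "SHA-256" },
>>         false, ["sign"]);
>>       return crypto.subtle.sign("HMAC", key, requestBytes);
>>     }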
>>
>>
>>
>> It makes sense when a server wants to deliver JS that uses a trusted piece
>> of pre-provisioned crypto hardware to establish end-to-end trust independent
>> of the UA (e.g. using a smart dongle for online banking, some of the Netflix
>> use cases).
>>
>>
>>
>> There may be others, and I’d love to hear what others think.
>>
>>
>>
>> It’s important to note that the “trusted UA” assumption is not as
>> outlandish as it might seem at first; as Ryan points out on other threads,
>> we routinely make an assumption that the OS is trusted when talking about
>> native apps. One does not argue that including crypto APIs in operating
>> systems is futile just because malware and rootkits exist. Many methods
>> exist to improve the trust in the UA, including the use of non-browser JS
>> implementations. One could also argue that various crypto primitives –
>> notably hash and RNG – are only meaningful if one accepts this assumption.
>>
>>
>> I agree with all of the above, and thanks for listing them out, as I think
>> they are all quite valid. Again, most of the critiques we've gotten on the
>> API are about not having trust in the JS at all, yet in general I would
>> maintain that arguments about servers being compromised are similar to
>> arguments about rootkits at the OS level - it's just that XSS is generally
>> easier than rootkits.
>>
>> I guess what some developers want is:
>>
>> 1) The ability to write new secure protocols in JS for use with WebApps,
>> with functions such as digital signatures. This would be very useful for a
>> whole range of functions involving multiple servers beyond the same-origin,
>> such as OpenID Connect flows, where one passes a signed token from a browser
>> to an identity provider, who can then pass it to a relying party in order to
>> access personal data.
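>>
>> (For example, producing such a signed token might look roughly like
>> this, assuming a promise-based API and an ECDSA private key already in
>> hand:)
>>
>>     // Sign a token client-side so a relying party can verify it
>>     // later, independent of the channel it traveled over.
>>     async function signToken(privateKey, tokenBytes) {
>>       return crypto.subtle.sign(
>>         { name: "ECDSA", hash: "SHA-256" },
>>         privateKey, tokenBytes);
>>     }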
>>
>> 2) Developers want the crypto API to be a silver bullet for security, as
>> they assume that access to crypto functions means the result must be secure,
>> but of course in reality there are quite a few more bases to be covered. Off
>> the top of my head, the developer should use CSP combined with
>> HSTS/Certificate Transparency/pinning for TLS (anything missing here?).
>> That's about as close as we're going to get to allowing them to create
>> secure protocols in a reasonable manner for WebApps.
>>
>> Also, it's unclear if the server and client should be considered *one*
>> application, as is traditional in web apps. The spec is not yet stable
>> enough to support a use case where the user can have a private key, store
>> it, but not let the secret key material be arbitrarily replaced by key
>> material from the same origin. I'd personally like to see that as a
>> possibility, as it would enable use cases where the server might not be
>> entirely trusted.
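>>
>> (An extractability flag covers part of this - a sketch, assuming a
>> promise-based generateKey; note that it prevents reading key material
>> out, but not, by itself, same-origin replacement:)
>>
>>     // Generate a signing key whose private half is non-extractable,
>>     // so script cannot read the raw key material back out.
>>     async function makeSigningKey() {
>>       return crypto.subtle.generateKey(
>>         { name: "ECDSA", namedCurve: "P-256" },
>>         false,             // extractable: false
>>         ["sign", "verify"]);
>>     }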
>>
>>    cheers,
>>        harry
>>
>>
>>
>>
>>
>> Since this question seems to keep coming up in feedback, maybe we should
>> develop a position on it as a group. Does anyone else have any thoughts on
>> the matter?
>>
>>
>>
>>
>>
>>
>
>
>
> --
> Mountie Lee
>
> PayGate
> CTO, CISSP
> Tel : +82 2 2140 2700
> E-Mail : mountie@paygate.net
>
> =======================================
> PayGate Inc.
> THE STANDARD FOR ONLINE PAYMENT
> for Korea, Japan, China, and the World
>

Received on Tuesday, 9 October 2012 16:46:45 UTC