[Bug 25721] extractable keys should be disabled by default

https://www.w3.org/Bugs/Public/show_bug.cgi?id=25721

--- Comment #26 from Harry Halpin <hhalpin@w3.org> ---
(In reply to Richard Barnes from comment #25)
> (In reply to Harry Halpin from comment #24)
> > The existence of extractable keys has some use cases that the Working Group
> > has already gone over in detail. For example, backing up keys. 
> 
> For another example, I've spoken with multiple developers who intend to extract
> keys and wrap them with PBKDF2 for storage.  This is actually safer in the
> face of an adversary with the ability to read the local disk (but without
> the ability to hook the browser).  There's a trade-off here between
> protecting against script injection and protecting against local processes;
> disabling extractable keys just forces developers to accept the local
> attackers.

All solid points, although again it all depends on your threat model.
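
For concreteness, here is a minimal sketch of that pattern with the Web Crypto
API; the function name, iteration count, and salt/IV handling are illustrative
choices on my part, not anything the WG has agreed on:

  // Minimal sketch, assuming the browser Web Crypto API. Derive an AES-GCM
  // wrapping key from a password via PBKDF2, then wrap an extractable
  // private key before writing it to storage.
  async function wrapForStorage(privateKey, password, salt) {
    const enc = new TextEncoder();
    // Import the password as a PBKDF2 base key (must be non-extractable).
    const baseKey = await crypto.subtle.importKey(
      "raw", enc.encode(password), "PBKDF2", false, ["deriveKey"]);
    // Derive the wrapping key from the password.
    const wrappingKey = await crypto.subtle.deriveKey(
      { name: "PBKDF2", salt: salt, iterations: 100000, hash: "SHA-256" },
      baseKey,
      { name: "AES-GCM", length: 256 },
      false,
      ["wrapKey", "unwrapKey"]);
    // wrapKey() only succeeds if privateKey was created with extractable: true.
    const iv = crypto.getRandomValues(new Uint8Array(12));
    const wrapped = await crypto.subtle.wrapKey(
      "pkcs8", privateKey, wrappingKey, { name: "AES-GCM", iv: iv });
    return { wrapped: wrapped, iv: iv };
  }

The matching unwrapKey() call needs the same salt and IV, so both would be
stored alongside the wrapped key.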

> 
>  
> > Also, Ryan, Tom, and Elijah all agree that currently with the design of
> > Javascript and Web Crypto,  private keys can always be extracted by the
> > server, even if the keys are marked unextractable. Does anyone have any text
> > to put under the definition of "extractable" (currently just "Whether or not
> > the raw keying material may be exported by the application") to help Web
> > developers understand that keys marked unextractable may not actually give
> > protection of private key material from the server? 
> 
> Isn't the definition of "extractable" just "exportKey() and wrapKey() work"?
> 
> In any case, I'm not clear what you mean by "private keys can always be
> extracted by the server, even if the keys are marked unextractable".  I'm
> assuming that by "server" here, you mean "JS".   If the JS calls
> generateKey() with extractable == false, then it certainly cannot access the
> private key material.

Sounds like a test case to me (a rough sketch is below). My point is that some
text highlighting the problems inherent in trying to do end-to-end, user-centric
encryption on the Web (where the server cannot decrypt the user's data without
the user's knowledge) could keep app designers from being misled about the
security properties of their WebApps. I'm thinking of you, Protonmail :)
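
Here is a rough sketch of what such a test case could look like, assuming
ECDSA P-256 (an arbitrary choice; any key type should behave the same way):

  // A key generated with extractable == false should make exportKey() reject.
  async function testUnextractablePrivateKey() {
    const keyPair = await crypto.subtle.generateKey(
      { name: "ECDSA", namedCurve: "P-256" },
      false,                    // extractable == false
      ["sign", "verify"]);
    try {
      await crypto.subtle.exportKey("pkcs8", keyPair.privateKey);
      console.log("FAIL: private key material was exported");
    } catch (e) {
      // The spec calls for an InvalidAccessError here.
      console.log("PASS: export rejected (" + e.name + ")");
    }
  }

Of course, this only shows that the JS served today cannot export the key; it
says nothing about the JS the server might serve tomorrow, which is exactly the
end-to-end problem above.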

Again, this seems to boil down to the problem of verifying the JS code the
server actually delivers. Independent code verification and auditing would
probably help here in cases where the user really wants to be assured of the
code the server is running.

> 
> 
> > I think the question is: what are the use cases for truly non-extractable
> > keys that cannot be accessed by the server, so that the server has no way
> > of decrypting the data? The obvious use case is applications that require
> > genuine user-to-user (end-to-end) encryption, where the server cannot
> > retrieve the private keys (at least without the user's knowledge). Right now
> > on the Web, the server that serves the code is *always* trusted (Trent), as
> > it can serve whatever code it wants, and can thus always modify the code to
> > get the keys. So Ryan is right, this is basically impossible.
> 
> I'm not as negative as Ryan on this.  Even if the server can modify the
> code, if the code that creates a key the first time is good, then at the
> very least, the server has to re-key the browser in order to have an
> extractable private key.  And that action is visible to anyone that the
> endpoint corresponds with, so you can apply techniques analogous to
> Certificate Transparency.

This sounds like an idea for a new standard :) Ben Laurie has a mailing list on
this, and I'm sure W3C would be happy to see a draft of something in this
space. 

Again, would this response satisfy the reviewers? 

> 
> (And this is not to mention things like sub-resource integrity, which can
> prevent JS from changing without authorization.)
> 
> 
> > My observation is that while there are valid use cases for user-to-user
> > (end-to-end) encryption, Ryan is right insofar as it currently seems
> > impossible to build these types of applications on the Web. However, it
> > seems desirable for the Web to support such use cases in the future. Thus,
> > we are hoping that broaching this topic with the wider WebAppSec group at
> > W3C, and perhaps later with other relevant standards bodies, would be at
> > least a start.
> > 
> > Would this satisfy the reviewers? 
> > 
> > (In reply to Tom Lowenthal from comment #23)
> > > Ryan, I chose my words carefully. I said “trustworthy” not “secure”. I think
> > > that the option of extractable keys makes it harder for applications built
> > > on this API to be worthy of users' trust.
> > > 
> > > As you say — if someone wants to make a key which they can extract, they can
> > > do that right now. My objection is based on the firm belief that the ability
> > > to extract keys is a harmful design pattern. I think that this choice would
> > > give developers enough rope to shoot themselves in the foot, which would
> > > be harmful to web security.
