- From: Katharine Berry <katharine@getpebble.com>
- Date: Wed, 11 Jun 2014 05:32:35 -0700
- To: Mike West <mkwst@google.com>
- Cc: Zack Weinberg <zackw@cmu.edu>, Brian Smith <brian@briansmith.org>, "public-webappsec@w3.org" <public-webappsec@w3.org>
- Message-Id: <DBC6D545-1DE7-4B97-BAE7-8BCE58A4426F@getpebble.com>
On 11 Jun 2014, at 02:40, Mike West <mkwst@google.com> wrote:

> The nice thing from my perspective is that users who don't fall into
> the category of developers who need to access local devices from a
> public website would retain all their status-quo protections

As a minor addendum, I would like to step back from “developers”; this use case is more broadly something like “webapp on a public site would like to configure a user’s private, network-accessible devices.” For us, “user” is (loosely) “developer” and “configure” is “run code on”, but I doubt this is the only use for such a setup (even for us).

> (I realized while writing that bit that I don't actually understand
> how Pebble's app opens a WebSocket connection in the first place. How
> do you know what IP address to connect to?)

We have two paths for this:

a) The user looks at their phone, which displays the address. The user types the address into the web app. We store the address locally to save future typing.

b) 0) The phone registered itself with a Pebble service during onboarding, when the user logged in with the same credentials used on the webapp (OAuth both times). This registration includes the phone’s human-readable name and a push token.
   1) The webapp retrieves and presents the user’s registered devices in a dropdown list.
   2) The user selects the device from the list; the webapp arranges for a message containing a unique token to be pushed to the app via Google or Apple’s push messaging services.
   3) The app receives the message and responds with its local IP address and that token.
   4) The webapp temporarily stores that address locally because this dance is relatively slow.

After that the interaction between webapp and phone is the same in each case. We have found that users very frequently fail the first path (IP addresses are hard), but the second path is pretty reliable and we default to it when available (it isn’t always).
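(For concreteness, here is a rough sketch of the token check at the end of path (b). Everything here, from the function names to the WebSocket port, is illustrative rather than our actual implementation.)

```typescript
// Illustrative sketch of path (b)'s token handshake; the names and the
// port number are made up for this example, not Pebble's real API.

interface DeviceReply {
  token: string;    // the unique token the webapp pushed to the phone
  localIp: string;  // the phone's address on the user's local network
}

// Step 2: generate the unique token that will be pushed to the app.
function makeToken(): string {
  return Math.random().toString(36).slice(2);
}

// Steps 3-4: only accept a reply that echoes the token we pushed,
// then build the WebSocket URL the webapp will connect to.
function acceptReply(expected: string, reply: DeviceReply): string | null {
  return reply.token === expected ? `ws://${reply.localIp}:9000/` : null;
}
```

The webapp would then cache the resulting address locally, since the push round-trip is slow.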
The existence of the second path is also why we already have much of the infrastructure required to support Zack’s suggestion.

> Note also that, because of the pharming attack, we really need
> mutual authorization here.
>
> I do agree. That's one of the reasons I find it difficult to allow
> arbitrary access to pieces of an internal network without the user
> actively doing something in order to enable that ordinarily dangerous
> connection.

“Arbitrary” access, sure, but I don’t think anyone is suggesting that - if both ends have explicitly indicated their interest in making the connection, it’s not clear to me why the user needs to cast the deciding vote. I’m not sure that you actually disagree with me here, since we could (hypothetically) just kill TLS and then a CORS-like or similar solution would work fine on its own.

> Manually installing a certificate that Pebble provides seems like the
> right solution for the TLS side of the problem. That was rejected as
> being too much work for the developer. Personally, I'd prefer that we
> reevaluate that objection.

I’m not actually rejecting it as “too much work” so much as some combination of “they probably can’t do it” on their side and “demanding this is embarrassing” on ours. At least anecdotally, I have seen (proportionally) many users struggle with certificate installation in an environment where an organisational root CA was used universally, and few succeed.

If necessary we will consider this, but I think it is more likely we will either attempt to subvert existing PKI (we have already discussed this and are willing to try but are kinda disgusted by the idea) or just throw our hands up and turn off TLS on the main site, isolating that as much as possible from the rest of our services (we generally dismiss this out of hand, but it can and may well be our final fallback if necessary).
> Something CORS-like seems like the right solution to the private-IP
> side of the problem, but I'm quite open to other proposals.

While I prefer Zack’s suggestion in that it solves all of our problems in one shot, and in a manner that looks to provide security in both the general case and the mixed public/private case, I cannot disagree with the complexity (or “PITA”) comment. Failing that, the CORS-like preflight seems pretty reasonable to me as a way to specifically solve the public/private issue.

> The two of those together would provide mutual authorization: the user
> expects connections to a device matching the cert they installed into
> their trust store, and the device expects to service connections from
> the outside world.

Again, I’m not sure that the *user* is the right party to be performing this authorisation; I imagine the device and webapp together know much better than the user what they expect to happen. At best, the user is blindly guided by one or both of those parties trying to encourage them to do the things that make them happy, whatever those ultimately end up being.

– Katharine
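P.S. For concreteness, here is roughly what I imagine the device’s side of such a CORS-like preflight could look like. The header names, the opt-in list, and the origin below are purely illustrative assumptions, not taken from any published specification:

```typescript
// Purely illustrative sketch of a CORS-like preflight check running on
// the device; the non-standard header name and the origin list are
// assumptions invented for this example.

interface PreflightRequest {
  origin: string;                  // the public site making the request
  requestsPrivateNetwork: boolean; // hypothetical opt-in signal
}

// Origins the device (or its owner) has explicitly allowed.
const allowedOrigins = new Set(["https://example-webapp.invalid"]);

// Returns response headers for an allowed preflight, or null to refuse,
// so the browser never exposes the device to arbitrary public sites.
function handlePreflight(req: PreflightRequest): Record<string, string> | null {
  if (!req.requestsPrivateNetwork || !allowedOrigins.has(req.origin)) {
    return null;
  }
  return {
    "Access-Control-Allow-Origin": req.origin,
    "Access-Control-Allow-Private-Network": "true", // hypothetical header
  };
}
```

The point being: the device and the webapp can express mutual interest entirely between themselves, with the browser enforcing the refusal, and no user prompt in the loop.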
Received on Wednesday, 11 June 2014 12:33:08 UTC