- From: Katharine Berry <katharine@getpebble.com>
- Date: Thu, 5 Jun 2014 22:13:09 -0700
- To: Zack Weinberg <zackw@cmu.edu>
- Cc: mkwst@google.com, brian@briansmith.org, public-webappsec@w3.org
On 5 Jun 2014, at 17:50, Zack Weinberg <zackw@cmu.edu> wrote:

> On Thu, Jun 5, 2014 at 8:48 AM, Mike West <mkwst@google.com> wrote:
>> https://github.com/w3c/webappsec/commit/d635094f4e6f6a27fd565f63c9570858de27172b
>> is a first pass at making this change. The draft at
>> http://w3c.github.io/webappsec/specs/mixedcontent/ has been updated
>> accordingly; it's probably easier to read there. :)
>
> On a more substantive note, I'm aware of one scenario where being able
> to refer from a public to a private origin is desirable: suppose you
> have a network-attached home device (which, in the US anyway, will be
> on a private-use IP address behind NAT, accessible from a browser on
> the same NAT, but not by the public Internet), the vendor's website
> might like to offer a configuration interface to that device. I know
> one developer in particular who has been very frustrated with
> Firefox's existing restrictions on that sort of thing; I could invite
> her to explain further if it would be helpful. Some sort of opt-in
> mechanism from the device side (reuse Access-Control-Allow-Origin,
> perhaps?) might thread the gap between "can't be done" and "drive-by
> pharming: game on!" (Obviously doesn't have to be in level 1 of the
> spec.)

Hey there; I’m the developer Zack referred to. In short: the change proposed here is effectively designed to break our use case, which involves websockets from a secure, public origin to an insecure, private origin. We control the software on both ends, and the user is aware of the action. The security of the client origin may be changed (we’d prefer not to); the rest most likely cannot be.

For more detail, I will first give some context on what exactly we’re trying to achieve and why we need to be able to use mixed content; I will also mention the issues we have with the existing TLS requirements, since the two are tied together.

I work at Pebble; we make watches that connect to your phone and on which developers can run code. Specifically, our communication scheme for watch development is as follows: the watch connects over Bluetooth to an app running on the user’s phone; the phone provides a websocket server; and our development tools connect to that server. We use this connection to relay control messages, logs, binaries, etc. bidirectionally. The phone mostly acts as a proxy, forwarding messages to and from the watch without inspection or modification.

Historically, our standard development environment was a set of command-line tools, but those are not user-friendly and are difficult to use on Windows. We therefore also provide a web-based tool, CloudPebble, as a complementary development solution. The key point is that CloudPebble must be able to contact the watch, and hence must be able to communicate with the websocket server on the phone. Our problem is that CloudPebble is a secure, public origin, while the phones constitute an insecure, private origin. Since the majority of our developers choose this solution over the command-line tools, we would very much appreciate being able to keep it working, and we are open to making reasonable changes on both ends to do so (QA/release cycles notwithstanding).

The current standards already effectively forbid this, on the basis that a secure origin may not open a websocket to an insecure endpoint. This is enforced by Firefox and IE, but currently ignored by Chrome, Safari and Opera.
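For concreteness, here is a minimal browser-side sketch, written as TypeScript rather than our actual CloudPebble code, of the connection in question. The phone address, port and message bytes below are placeholders, not our real configuration:

    // Sketch only: what a page served from our secure public (https) origin
    // does to reach the developer's phone. The address and port stand in for
    // whatever the developer's phone reports on the local network.
    const phoneAddress = "192.168.1.42"; // private address behind the user's NAT
    const socket = new WebSocket(`ws://${phoneAddress}:9000`); // insecure ws:// to a private origin

    socket.binaryType = "arraybuffer";

    socket.onopen = () => {
      // Control messages, logs and app binaries are relayed bidirectionally;
      // the phone forwards them to the watch over Bluetooth without inspection.
      socket.send(new Uint8Array([0x01 /* placeholder control opcode */]).buffer);
    };

    socket.onmessage = (event: MessageEvent) => {
      // Under Firefox and IE this handler is never reached from an https page:
      // the ws:// connection above is refused as mixed content.
      console.log("frame from watch:", event.data);
    };

    socket.onerror = () => {
      console.warn("connection to the phone was blocked or failed");
    };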
Given that most of our users apparently use Chrome anyway, we have thus far mostly ignored the issue and not had any proper Firefox support (sorry Mozilla!). We have a couple of workarounds for that problem, both of which are unfortunate hacks or loophole abuse. Alternatively, we could disable TLS completely, but that is certainly not an improvement: it would expose a lot of presently safe information that never goes over the insecure websocket connection.

That’s the context. Since the mobile apps sit on the user’s internal network, we now have *two* closely tied problems, and our existing workarounds cannot avoid the issue this proposal creates. Firstly, to the best of my knowledge we cannot reasonably provide a valid certificate for our websocket servers: as noted above, they live at private addresses on the user’s own network, for which we cannot obtain a publicly trusted certificate. Secondly, the change being discussed here is explicitly designed to break this use case.

I do understand why this is a desirable change. However, it breaks our app and ruins an experience that currently works well for many of our developers. It was also, as far as I’m aware, legitimate functionality to have relied on up to this point.

As for what we can do on our side: we control the code running on the websocket servers, and can insist that people update if changes to the servers are necessary. However, that would require the server having some way to make this decision, or to affirm preemptively to the client that the connection is intended behaviour; under the current draft no such option exists.

- Katharine Berry
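P.S. To make the kind of opt-in I mean concrete, here is a hypothetical sketch written against Node’s "ws" package purely for illustration; our real server is embedded in the native phone apps, and reusing Access-Control-Allow-Origin is Zack’s suggestion from above, not anything in the current draft. The port and allowed origin are placeholders:

    // Hypothetical sketch only: a websocket relay that explicitly opts in to
    // being contacted from a specific secure public origin.
    import { WebSocketServer } from "ws";
    import type { IncomingMessage } from "http";

    const ALLOWED_ORIGIN = "https://ide.example.com"; // placeholder public origin

    const wss = new WebSocketServer({ port: 9000 }); // placeholder port

    // Advertise consent during the handshake, reusing the CORS header as suggested.
    wss.on("headers", (headers: string[], request: IncomingMessage) => {
      headers.push(`Access-Control-Allow-Origin: ${ALLOWED_ORIGIN}`);
    });

    wss.on("connection", (socket, request: IncomingMessage) => {
      // Refuse connections from origins the device owner has not opted in to.
      if (request.headers.origin !== ALLOWED_ORIGIN) {
        socket.close(1008, "origin not permitted"); // 1008 = policy violation
        return;
      }
      // Otherwise relay frames to and from the watch (relay logic omitted here).
      socket.on("message", (data) => socket.send(data));
    });

Something along these lines would at least give the software on the device side a way to state, at handshake time, that it expects to be contacted from a particular secure public origin.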
Received on Friday, 6 June 2014 07:59:34 UTC