- From: Mike West <mkwst@google.com>
- Date: Fri, 6 Jun 2014 13:42:31 +0200
- To: Katharine Berry <katharine@getpebble.com>, Joel Weinberger <jww@google.com>
- Cc: Zack Weinberg <zackw@cmu.edu>, Brian Smith <brian@briansmith.org>, "public-webappsec@w3.org" <public-webappsec@w3.org>
- Message-ID: <CAKXHy=dtNeVGZWDd2tv6nfPXtdO2dUKCf=ea0S7PjFKYQPTYww@mail.gmail.com>
On Fri, Jun 6, 2014 at 11:14 AM, Katharine Berry <katharine@getpebble.com> wrote:

> More generally, we can’t do USB (the device has no USB port), and bluetooth pairing with computers tends towards painful – especially since it requires repeatedly switching paired host for testing.

The phone has USB though. Rather than routing requests through a WebSocket server on the phone, you could poke at the app via USB in some vague, hand-wavey way (I know next to nothing about the APIs offered via https://developer.chrome.com/apps/app_usb; I just know that they exist).

> Again, we are trying to offer a seamless experience as much as possible.

Understood, but remember that a seamless developer experience can't be an overriding concern. When defining the security parameters within which user agents operate, developers are a) a vanishingly small group, and b) the single group that's _best capable_ of finding their own workarounds that change the default behaviors (installing local proxy servers, for instance).

> One is already implemented and live (this fixed both Chrome Canary and current Firefox; I haven’t yet checked IE), but the fact that it works at all feels like a browser flaw in itself.

If it's worker-based weirdness or sandboxed <iframe> origin confusion, then yes, it's a browser bug. If it's not one of those bugs, then it's still probably a browser bug. The intent is to block insecure connections.

> An alternative, absurd hack we have is to actually enable TLS in the websocket servers, use a wildcard cert pointing to a domain that will resolve to arbitrary IPs, and provide the private key to the mobile apps as necessary. This solution has a bunch of obvious issues.

I think a self-signed certificate that you ask your developers to accept is more likely to be a reasonable solution than a wildcard cert with wildcard DNS. That limits the risk associated with the certificate to those folks who explicitly trust Pebble.

> Finally, we can just turn off TLS on the main site, which would have the unfortunate side effect of potentially exposing users’ login sessions, source code, etc. that were previously safe.

That would be an unfortunate result indeed. More TLS is better TLS.

> In general, this restriction does not feel like it is actually *enhancing* security; instead, it pressures us to eliminate it.

Another way of looking at it is that the general push to eliminate mixed content enhances security on the web as a whole at the expense of a few outlying use-cases (which themselves are only supported in some browsers).

> Regardless of how we cope with it, it doesn’t help with the public/private discussion that is the core of this thread.

Indeed. However, I don't think that leaving things as wide open as they are today in Chrome is an option.

> In my experience, getting users to correctly install a certificate, regardless of whether they understand the implications, is tricky.

For Chrome, if you present a self-signed certificate and the user navigates through the interstitial that Chrome presents, I believe we'll save that certificate state for some period of time. +Joel, who did some work on that recently. I don't know what Mozilla's behavior here is.

> I was specifically referring to the public->private connections; I’m aware that the TLS->non-TLS connections have been banned forever. If something has actually called public->private websockets out as being illegal, I haven’t run across them previously – have they?
Probably not WebSockets in particular, but RFC 1918 defined private networks in the mid 90's. User agents like IE and Opera ran with those definitions to block access to private networks from public networks. The fact that none of this worked in browsers like Firefox or IE probably suggested to you that it wasn't well-supported by the platform. :)

> If not, I think it would be ideal to avoid breaking something that was historically legal and has (at least what we believe to be) legitimate use without providing any recourse.

With the caveat that I haven't given a lot of thought to the impacts, a CORS-like approach with a preflight request to a `/.well-known/...` URL sounds like a possible way of enabling these kinds of connections.

-mike
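For concreteness, a rough sketch of how the `/.well-known/...` preflight idea above might look on the device side of the connection. Everything specific in it (the well-known path, the opt-in response header, the allowed origin, and the port) is invented purely for illustration; none of it is specified anywhere.

```typescript
// Hypothetical sketch: a server on a private network answers a CORS-like
// preflight at a well-known URL and opts in to connections from a public origin.
import * as http from "http";

// Hypothetical: public origins this private-network endpoint is willing to serve.
const ALLOWED_PUBLIC_ORIGINS = new Set(["https://ide.example.com"]);

const server = http.createServer((req, res) => {
  // Hypothetical well-known resource a browser would fetch before letting a
  // page on a public origin open a connection to this private address.
  if (req.url === "/.well-known/public-network-access") {
    const origin = req.headers.origin ?? "";
    if (ALLOWED_PUBLIC_ORIGINS.has(origin)) {
      res.writeHead(204, {
        "Access-Control-Allow-Origin": origin,
        // Hypothetical opt-in signal; not a real header.
        "Allow-Public-Network-Access": "true",
      });
    } else {
      res.writeHead(403);
    }
    res.end();
    return;
  }

  // Everything else (e.g. the actual WebSocket upgrade) would only be reachable
  // from a public page after a successful preflight.
  res.writeHead(404);
  res.end();
});

server.listen(9000); // placeholder port for the device-local server
```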
Received on Friday, 6 June 2014 11:43:19 UTC