Re: [MIX]: Expand scope beyond TLS/non-TLS (Re: "Mixed Content" draft up for review.)

On 2014-06-06 1:13 AM, Katharine Berry wrote:
> On 5 Jun 2014, at 17:50, Zack Weinberg <> wrote:
>> On a more substantive note, I'm aware of one scenario where being able
>> to refer from a public to a private origin is desirable: suppose you
>> have a network-attached home device (which, in the US anyway, will be
>> on a private-use IP address behind NAT, accessible from a browser on
>> the same NAT, but not by the public Internet), the vendor's website
>> might like to offer a configuration interface to that device.
> In short: the change proposed here is effectively designed to break
> our use-case, which involves websockets from a secure, public origin
> to an insecure, private origin. We control the software on both ends,
> and the user is aware of the action. The security of the client origin
> may be changed (we’d prefer not to); the rest most likely cannot be.
> I work at Pebble; we make watches that connect to your phone, and on
> which developers can run code. To be specific, our communication
> scheme for watch development is that the watch connects to an app
> running on the user’s phone via Bluetooth. The phone provides a
> websocket server, and our development tools connect to that. We use
> this connection to relay control messages, logs, binaries, etc.
> bidirectionally. The phone mostly acts as a proxy, forwarding messages
> to/from the watch without inspection or modification.

Thinking out loud:

Presupposition: as far as the reference monitor in the browser is 
concerned, communication is with the phone.  More precisely, if the 
websockets channel were to be secured, you would terminate TLS on the 
*phone*, not the watch, and rely on Bluetooth link encryption to protect 
that dialogue.  Is that right, Katharine?  (I bring this up mainly 
because pretending the watch doesn't exist simplifies reasoning from the 
browser's perspective, but also because the phone is probably capable of 
doing more work if we need it to, whereas the watch might not be.)

Now, it seems to me that this scenario presents two intertwined, but 
logically distinct, problems.  Pebble needs to communicate with a device 
whose address is in private IP space, from a program whose Web security 
origin is in public IP space.  It also needs to refer to a device that 
has no globally unique name and therefore can't be given a TLS server 
certificate even if Pebble could find a CA that would cooperate; so it 
can't expose wss:// or https:// communications ports.
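To make the first of those problems concrete: Python's standard `ipaddress` module already draws roughly the address-space line a browser's reference monitor would have to enforce (the helper name here is made up for illustration, and the rule is a sketch, not any spec's exact definition):

```python
import ipaddress

def is_private_target(host: str) -> bool:
    """True if `host` sits in address space that a public web origin
    should not be reaching by default (RFC 1918 private space,
    loopback, link-local).  Illustrative sketch only."""
    addr = ipaddress.ip_address(host)
    return addr.is_private or addr.is_loopback or addr.is_link_local

# The NATed phone lives in private space; a public web origin does not.
print(is_private_target("192.168.1.20"))  # True
print(is_private_target("8.8.8.8"))       # False
```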

The challenge for the committee is that both of these legitimate needs 
on Pebble's part are computationally indistinguishable from things 
malicious or clueless actors might try to do: we know that it's possible 
for a malicious website to exploit an enormous number of home "routers" 
by blind CSRF attacks, and we also know that if we allow 
connecting to ws:// from https://, some ignoramus will load scripts over 
that and feed them to eval() and then deny there is a security hole in 
their webapp.

With my security hat on, I really really don't want to relax the 
mixed-content restriction on WebSockets.  The public-to-private 
restriction, however, I am less attached to.  In particular, I think 
private-device developers could be allowed to opt into being accessible 
from a specific public origin.  But, of course, if we do *that*, we have 
to confront the problem of not having a globally unique name for the 
private device, that can be assigned a certificate.

But I think it can be done.  Here is a strawman protocol, featuring 
three parties: Device (the host on a private IP address; it is assumed 
that Device can contact the public Internet as a web client; this would 
be the phone in the Pebble scenario); Browser (the browser, which can 
communicate with Device as a client); and Mothership (a public website, 
operated by the Device's vendor or an aftermarket service).

0) Device registration.  Device generates a self-signed TLS certificate 
with a special CN/SAN.  Device contacts 
Mothership directly, and submits its certificate, which Mothership 
stores, associated with a particular user account.  Device records 
Mothership's end-entity certificate at the same time.
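The Mothership side of step 0 is just bookkeeping.  A sketch in Python, with every name made up for illustration (a real service would persist this and authenticate the account, of course):

```python
import hashlib

class MothershipRegistry:
    """Sketch of the Mothership's side of step 0: store each Device's
    self-signed certificate against a user account."""

    def __init__(self):
        self._certs = {}  # account id -> list of DER-encoded certs

    def register_device(self, account: str, cert_der: bytes) -> str:
        self._certs.setdefault(account, []).append(cert_der)
        # Return a fingerprint the UI could show the user to confirm
        # that the right device registered itself.
        return hashlib.sha256(cert_der).hexdigest()

    def expected_certs(self, account: str) -> list:
        """What step 1 will later hand to the Browser."""
        return list(self._certs.get(account, []))
```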

1) Setup.  Browser connects to Mothership, logs in as a particular 
end-user, and receives a JS program that is going to talk to Device. 
Mothership also transmits the expected certificate for Device.  The JS 
program pokes this certificate into the browser.

Browser generates an ephemeral client certificate and submits it to 
Mothership; Mothership signs it with its end-entity key (yes, willful 
violation of TLS PKI here).
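The signing half of step 1 could look something like this.  This sketch assumes the third-party `cryptography` package and an EC end-entity key; in reality the browser would generate the ephemeral key natively, and all names here are invented:

```python
# Sketch of step 1: Mothership signs the Browser's ephemeral public key
# with its own end-entity key -- the willful TLS PKI violation noted above.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_ephemeral_client_cert(mothership_key, mothership_name, browser_pubkey):
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, u"ephemeral-browser")]))
        .issuer_name(mothership_name)
        .public_key(browser_pubkey)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        # Short lifetime: the client certificate is ephemeral by design.
        .not_valid_after(now + datetime.timedelta(hours=1))
        .sign(mothership_key, hashes.SHA256())
    )
```

Device can then check, in step 2, that the client certificate's signature verifies under the Mothership public key it recorded at registration time.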

2) Handshake.  Browser, under control of Mothership's program, makes a 
TLS (https: or wss:) connection to Device.  This connection is allowed 
to go into private IP space, and normal PKI validation rules are 
suspended.  Instead, Device accepts the connection only if Browser 
submits a TLS client certificate signed with Mothership's end-entity 
certificate, and Browser accepts the connection only if Device submits 
the server certificate that Mothership told it to expect.
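Stripped of the TLS machinery, the two acceptance rules in step 2 are simple predicates.  A sketch (the actual X.509 signature check on the Device side is elided behind a callback, and all names are made up):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def browser_accepts(server_cert_der: bytes, expected_cert_der: bytes) -> bool:
    """Browser side: normal PKI validation is suspended; the only rule
    is that Device presents exactly the certificate Mothership said
    to expect."""
    return fingerprint(server_cert_der) == fingerprint(expected_cert_der)

def device_accepts(client_cert_der: bytes, verify_sig) -> bool:
    """Device side: accept only if the client certificate verifies
    under the Mothership end-entity key recorded at registration.
    `verify_sig` stands in for a real signature check."""
    return verify_sig(client_cert_der)
```

The connection proceeds only if both predicates hold; a malicious public website can satisfy neither, since it holds neither Device's expected certificate nor a client certificate signed by Mothership.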


This monkeywrenches our way around the inability to assign a "real" TLS 
server certificate to a host with no global DNS name.  It also protects 
Device from malicious websites other than the Mothership that it 
registered itself with (which it trusts, by assumption).  I know all too 
well that it's going to be a PITA to implement in the browser, but I 
don't see anything simpler that gives us the security properties we need.

Katharine: How infeasible would the Device/Mothership behavior here be 
for Pebble to implement?  (Assume all the browser APIs you need are available.)

webappsec: Poke holes in my strawman, please.


Received on Monday, 9 June 2014 21:38:17 UTC