Re: Drastically cutting primary features [was Re: Last call for public comments on Web Crypto charter]

Mitch Zollinger wrote:
> We've spent the last 4 years finding that a secure protocol without
> TLS is a Really Good Thing for our use cases. I can break the issues
> down into two main categories: operational issues and performance
> issues.

It is one thing to have a secure proprietary protocol and another thing for the browser to know your proprietary protocol is secure. If your protocol operates over plain HTTP (not HTTP over TLS) or plain websockets (ws:// not wss://), then the browser is going to assume that it is insecure, full stop. That means, when you are watching a video streamed using such a protocol on https://netflix.com/, the browser is going to block the video because it is HTTP content embedded in HTTPS content ("mixed content"). (Note: currently we, Firefox, don't do this blocking, but that is not something that people should be relying on because we're aiming to make improvements in this area.)

Now, Netflix could avoid that by just serving its whole website over http:// instead of https://. But that is exactly the opposite of what we (some security people at Mozilla and elsewhere) want to happen. At least some of us want to get to the point where the browser explicitly says "This website is *definitely* insecure" when it sees http:// instead of https://.

This is why I say that I don't see the output of this working group allowing web developers to produce substitutes for TLS--not because individual web developers couldn't build better secure protocols, but because the browser wouldn't be able to know it is as good as TLS, and would therefore make decisions on the assumption that it isn't.

So, I think there are one or two things that need to happen: (1) our TLS implementation needs to get better, and (2) if those improvements aren't enough, then we have to develop some new secure protocol *that the browser understands to be secure*. AFAICT, that means the browser has to implement it itself, and that means the protocol has to be standardized.

The rest of this email is (off-topic?) replies along the lines of how to make TLS faster and more reliable. I would really like to see a description of Netflix's new transport security protocol, and how it beats well-tuned TLS implementations. (To be perfectly frank, Mozilla's current TLS implementation has a lot of room for improvement. This is something I am actively working on.)

> We've had TLS failures on our devices because of these operational
> issues which are out of our control. (Example: a CDN decides to change
> CA provider and doesn't tell us.)

The CA, and perhaps even the exact specifications of the certificate used, should be part of the contract that a website signs with a CDN. Besides changing CAs, some CDNs will issue a certificate that is valid for your hostname AND other hostnames--including your competitors and/or untrustworthy websites; this practice increases the risk that the certificate for the CDN will need to be revoked, and it potentially opens you up to new attacks with SPDY features (see the discussions on spdy-dev), at least.
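
To see exactly what a CDN is deploying for you, it is easy to pull the end-entity certificate apart and look at the subject alternative names. Here is a rough sketch in Go (the "cdn-cert.pem" file name is just a placeholder, not anything a particular CDN actually hands out) that lists every DNS name a certificate is valid for:

    // Sketch: parse a PEM certificate (e.g. the one a CDN deploys for you)
    // and list the DNS names it is valid for. "cdn-cert.pem" is a placeholder.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("cdn-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:", cert.Subject.CommonName)
        for _, name := range cert.DNSNames {
            fmt.Println("valid for:", name)
        }
    }

If names you don't recognize show up in that list, you are sharing a certificate (and therefore its revocation fate) with strangers.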

> Performance:
> * Assuming you get through all the issues above (which we have) you'll
> find out that when you want a really high performance user experience,
> it's just not going to happen in many cases.
> * CRL / OCSP retrieval & response issues. As mentioned above, we have a
> CRL distribution point managed by a major CA provider, used by a major
> CDN that simply fails to respond sometimes. Let's say for the sake of
> argument, the thing fails for 5% of all requests during peak Netflix
> viewing hours. That means that if I watch movies & TV during peak hours,
> 1 in 20 times I use my device I will actually hit my socket timeout
> value (1 minute on a lot of devices). I'm going to sit twiddling my
> thumbs wondering why things are so slow and this will happen
> non-deterministically.

Choose a CA that only requires you to use a three-level certificate chain (CA -> intermediate -> EE).
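
As a quick sanity check of how deep the chain you are actually serving is, you can connect and count the certificates in the chain the client verifies. A minimal sketch in Go (the hostname is a placeholder):

    // Sketch: count the certificates in the chain the client verifies.
    // With a three-level chain, len(chain) should be 3 (EE, intermediate, root).
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        conn, err := tls.Dial("tcp", "www.example.com:443", &tls.Config{})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        for _, chain := range conn.ConnectionState().VerifiedChains {
            fmt.Println("chain length:", len(chain))
            for _, cert := range chain {
                fmt.Println("  ", cert.Subject.CommonName)
            }
        }
    }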

For the EE certificate: OCSP stapling. We (Mozilla) are planning to finally add OCSP stapling support soon. The next release of Apache will have OCSP stapling and could use some people pounding on it to work out the bugs. OCSP stapling seems like a good idea for most implementations, given the current state of things.
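
For what it's worth, it is also easy to check whether a given server is already stapling. For example, Go's TLS client asks for a stapled response by default (via the status_request extension), so a sketch like this (again with a placeholder hostname) tells you whether one came back in the handshake:

    // Sketch: check whether a server staples an OCSP response. Go's TLS client
    // requests one by default; ConnectionState().OCSPResponse is non-empty if
    // the server stapled it in the handshake.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        conn, err := tls.Dial("tcp", "www.example.com:443", &tls.Config{})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        if resp := conn.ConnectionState().OCSPResponse; len(resp) > 0 {
            fmt.Printf("stapled OCSP response: %d bytes\n", len(resp))
        } else {
            fmt.Println("no stapled OCSP response; the client has to fetch one itself")
        }
    }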

For the intermediate certificate: Use a CA that is very likely to have its intermediates already cached in the browser; i.e. use the same CA that Facebook and Twitter and everybody else is using. We (Mozilla) are planning to improve our intermediate CA OCSP response caching to make this work better than it currently does.

That is what we can do with 5+-year-old specifications. There is also a proposed new extension for stapling intermediate certificate OCSP responses in the handshake, so that you don't have to rely on the browser "probably" having the intermediate CA OCSP response cached.

> In the case of Netflix, we want startup to happen in a second or less
> (imagine if we were BETTER than digital cable. That's a worthwhile
> goal, yes?) and using TLS means we can't get there.

When all the OCSP responses are stapled into the handshake, then the performance cost is the latency of the TLS handshake itself. 

In an embedded device, you could store the session ID/ticket and master secret persistently, and do the same on the server, so that nearly every handshake is a resumption handshake, avoiding all of the above the vast majority of the time, and with less TLS handshake latency. (Note also that you can avoid your sessions becoming too stale by creating new sessions, with full handshakes, in the background, to be used in the next connection.)
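
As an illustration of the resumption part (not of the persistent storage, which would take more plumbing), here is a sketch in Go that shares a session cache across connections so that the second handshake is an abbreviated one; the hostname is a placeholder:

    // Sketch: share a client-side session cache across connections so repeat
    // handshakes are abbreviated (resumption) handshakes. Persisting the cache
    // across device restarts, as suggested above, would need extra plumbing
    // that is not shown here.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        cfg := &tls.Config{
            ClientSessionCache: tls.NewLRUClientSessionCache(32),
        }
        for i := 1; i <= 2; i++ {
            conn, err := tls.Dial("tcp", "www.example.com:443", cfg)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("connection %d: resumed=%v\n", i, conn.ConnectionState().DidResume)
            conn.Close()
        }
    }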

As for the clock management issue: Again, it seems like it wouldn't be hard to do for time what OCSP stapling does for certificate status, and simply have the server provide a signed (by some trusted time authority) attestation of the time to the client in the TLS handshake. If that wouldn't work, then we should work together on some kind of standard protocol for finding out what time it is in a secure way. Several applications, including many user authentication protocols, require accurate clocks.
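
To be clearer about what I have in mind, here is a purely hypothetical sketch (in Go; the token format, the client nonce, and the Ed25519 authority key are all invented for illustration, since no such protocol exists) of what verifying a signed time attestation might look like on the client:

    // Purely hypothetical sketch: verify a signed time attestation from a
    // trusted time authority. The message format (client nonce || 8-byte
    // big-endian Unix time) and the Ed25519 authority key are invented for
    // illustration; no such standard protocol exists today.
    package timeattest

    import (
        "crypto/ed25519"
        "encoding/binary"
        "errors"
        "time"
    )

    // VerifySignedTime checks that the authority signed our nonce together with
    // the timestamp; the nonce keeps a stale attestation from being replayed at
    // a device whose clock is wrong.
    func VerifySignedTime(authorityKey ed25519.PublicKey, nonce, timestamp, sig []byte) (time.Time, error) {
        if len(timestamp) != 8 {
            return time.Time{}, errors.New("bad timestamp length")
        }
        msg := append(append([]byte{}, nonce...), timestamp...)
        if !ed25519.Verify(authorityKey, msg, sig) {
            return time.Time{}, errors.New("bad signature on time attestation")
        }
        return time.Unix(int64(binary.BigEndian.Uint64(timestamp)), 0), nil
    }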

If all of this is done and it still isn't enough, then I would rather we standardize an improvement or successor to TLS that better meets websites' requirements. Again, it would be helpful to see what Netflix's optimized protocol does differently than TLS so that we could see what (if anything) we could standardize. (Perhaps this is something better discussed on tls@ietf.org or in private.)

I believe Netflix and others have content protection (DRM) concerns that are not addressed by TLS. I think that is one area that we are less likely to standardize and one area where the type of work we are discussing is going to be quite key. I could see how it might be easier to mix in the content protection stuff with the rest of the security framework, but I would like to try a scheme where application developers layer their custom content protection on top of TLS (or a successor) first, if possible, so that the browser doesn't have to implement the content protection scheme itself.

- Brian

Received on Friday, 25 November 2011 17:21:23 UTC