Re: [MIX] 4 possible solutions to the problem of Mixed Content Blocking stalling HTTPS deployment

On 2 February 2015 at 18:21, Peter Eckersley <pde@eff.org> wrote:
> 0. Really force millions of site operators to edit all their code.  If
> we're going to do this and expect to win, we had better provide much,
> much better tools for doing it.  Perhaps Let's Encrypt can enable
> report-only CSP and host analytics somewhere to tell webmasters where
> things are going wrong.  Perhaps, instead of a mysterious MCB shield,
> browsers could provide a list of required edits to HTML and JS source
> files to fix MC.  Even with measures like this, I believe option 0 would
> leave far too much of the work to be done by millions of fallible
> humans.

I think this will be necessary for people whose first priority is to
provide a secure browsing experience to their users.  I expect that
Google's effort to move to HTTPS everywhere was not a half-dozen
edits, but a pretty laborious process.  But it was what was necessary.
And (like I mentioned in the other thread) I think a CSP mechanism for
"Tell me about the non-SSL includes I have" is a great idea.

That said, I think there is a larger group of people who would be
willing to do some, but not as much, work to provide a secure browsing
experience to a large percentage of their users.  And that's where
(3) will come in.


> 1. Try to deploy automated serverside fixes.  Webservers, or the Let's
> Encrypt agent, could ship with a proxy that it commonly interposes
> between the HTTP server and TLS termination, which parses HTML, JS, etc,
> and tries to identify and fix same-origin mixed content and (where
> possible, perhaps by using HTTPS Everywhere rulesets) cross-origin mixed
> content.
>
> This could work, but will be a shamefully janky solution.

Yup.  And it's going to miss some things sometimes.  I would be happy
if someone created a few examples of these and put them up on the web,
but no org is going to find an off-the-shelf solution that fits them,
so they'd have to do a lot of the heavy lifting themselves.  So I'm
pretty 'meh' about this solution.
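
To make the jankiness concrete, the heart of such a proxy might be
little more than this (Python; the origin here is hypothetical, and
note that every URL assembled in JavaScript sails straight past it):

  import re

  ORIGIN = "example.com"  # hypothetical first-party origin

  def upgrade_same_origin(html):
      # Rewrite same-origin http:// subresource references to https://.
      # URLs built dynamically in JS, or pointing at third parties
      # without a known HTTPS mirror, are missed entirely.
      pattern = r'http://((?:www\.)?' + re.escape(ORIGIN) + r'/)'
      return re.sub(pattern, r'https://\1', html)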


> 2. Alter their HSTS implementations, so that those help solve rather than
> exacerbate mixed content situations.  Perhaps this is only realistic
> within first party origins, plus those third party origins that have
> themselves enabled HSTS.  Though I do think we should consider a more
> ambitious option like this:
>
>  - If a site sets HSTS, all third party resource fetches are attempted
>    over HTTPS before being blocked, but if this is happening on 3rd
>    party domains that haven't themselves set HSTS, the site gets the
>    angry crossed-out HTTPS UI to warn the user and admin that something
>    squirrely and potentially dangerous is occurring.  For my money, this
>    would be an appropriate response to the conceivable but probably very
>    rare case that 3rd party HTTPS upgrades are actually a security
>    problem because the 3rd party server does something weird with them.
>
> I know there have been some arguments made against solution 2,
> summarised for instance by agl in this November thread...

For this reason, and the ones I put in the other thread, I think
changing the semantics of HSTS is a non-starter.  (But adding a
directive is fine...)
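
(For concreteness, what option 2 would mean, with a hypothetical
third party: a page on an HSTS origin containing

  <img src="http://static.thirdparty.example/logo.png">

would have that fetch silently retried as
https://static.thirdparty.example/logo.png before any blocking.  It's
precisely that silent change to what an existing header means that
makes it a non-starter for me.)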


> 3. Add a new directive to the HSTS header, which sites (and the Let's
> Encrypt agent working on behalf of sites) can set.  It could be called
> the "easy" or "helpful" bit.  In slogan form, the semantics of this
> would be "if you're a modern client that knows how to do Helpful HSTS,
> this site should be entirely HTTPS; if you're an old client that has
> MCB/HSTS but doesn't know about Helpful HSTS, leave the site on HTTP".

I don't think the answer to "if you're an old client with (current)
MCB/HSTS" is "leave the site on HTTP" but rather "keep the main site
on TLS and leave the subresources on HTTP".  After all, you have HSTS
on the main site: it should be served over TLS, HTTP->HTTPS redirects
should work fine, and maybe just the cdn.example.com subresources are
still HTTP.  (Unless that was what you meant, in which case: yes, I
agree.)
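
In header terms (the "helpful" token here is your proposed directive,
not anything specified today):

  Strict-Transport-Security: max-age=31536000; helpful

A new client that understands it upgrades every subresource on the
page; an old client ignores the unknown token per RFC 6797, keeps the
main site on TLS as it does now, and treats the HTTP subresources
exactly as its current MCB rules dictate.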

> Assuming that clients are following point 5 in Section 6.1 of RFC 6797
> (https://tools.ietf.org/html/rfc6797#section-6.1) correctly, it should
> be practical to make this kind of functionality part of HSTS.

Yup.
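
A sketch of that parsing rule, assuming nothing beyond what the RFC
requires (unknown directives are dropped, known ones still apply):

  def parse_sts(header_value):
      policy = {}
      for directive in header_value.split(";"):
          name, _, arg = directive.strip().partition("=")
          name = name.lower()
          if name == "max-age":
              policy["max_age"] = int(arg.strip().strip('"'))
          elif name == "includesubdomains":
              policy["include_subdomains"] = True
          # Unrecognised directives -- like a future "helpful" token --
          # fall through here and are simply ignored.
      return policy

So shipping a new directive degrades cleanly on conforming old
clients.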

> There's a question about how to get the HSTS Helpful bit to the
> client if the server is trying to leave the HTTP version of their site
> alive for clients with traditional MCB implementations.

Shouldn't be an issue.  If I understand what you're saying: there's a
site today that wants to keep people on HTTP because it references a
bunch of mixed content.  And yet, despite this, it has an HTTPS
version that serves an HSTS header.  That site is broken today and we
should not try to 'un-break' it.  All I need to do is 'CSRF' a user
into loading something from the HTTPS version of that site: they get
the HSTS header, and the next time they visit, it's broken.
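
(Concretely, with a hypothetical victim host: an attacker page
containing

  <img src="https://broken.example/favicon.ico">

is enough.  That response carries the HSTS header, the UA pins
broken.example, and the user's next visit to http://broken.example
gets upgraded straight into the mixed-content breakage the operator
was trying to avoid.)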

> The Helpful bit should probably also have a way for site operators to
> request and control automatic upgrades of embedded third party
> resources.  That could range from "try every third party resource over
> HTTPS, all of the time", through a whitelist of domains for which
> this should happen, through to the full gory details of putting
> something flexible like HTTPS Everywhere rulesets in the HSTS directive.

I would assume it would actually just force _all_ requests from that
page to be done over TLS. What's the point of opting out for a
specific third party but requiring it for all the rest?
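
(If a whitelist were really wanted, I suppose the directive could take
a parameter -- entirely hypothetical syntax:

  Strict-Transport-Security: max-age=31536000;
      helpful="cdn.example.com static.example.net"

but, per the above, I don't see what the partial version buys you over
a bare "upgrade everything" bit.)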



> Okay, so which of these options should we pick?

0 gets my first vote, 3 gets my second vote.

-tom

Received on Wednesday, 4 February 2015 02:39:02 UTC