W3C home > Mailing lists > Public > public-webappsec@w3.org > March 2015

[UPGRADE] On testing upgrades for breakage (was Re: Plan B)

From: Peter Eckersley <pde@eff.org>
Date: Tue, 10 Mar 2015 18:29:36 -0700
To: Mike West <mkwst@google.com>
Cc: Daniel Kahn Gillmor <dkg@fifthhorseman.net>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Eric Mill <eric@konklone.com>
Message-ID: <20150311012936.GM7934@eff.org>
On Tue, Mar 10, 2015 at 08:58:15AM +0100, Mike West wrote:
 
> I do agree that user agents should experiment with this kind of behavior
> for blockable mixed content.

Glad to hear that you're open to that experiment.  One path would be to
view UPGRADE itself as the experiment, and to design it with a path to
turning a subset of it on by default.

Another path would be to take a population of opted-in test users, have
their browsers make BOTH the HTTP and HTTPS requests, and send reports
if the HTTP response body is constant while the HTTPS one is different.  Any
browser devs interested in trying that?
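As a rough sketch of what such an opted-in experiment client might do (this is my own illustration, not any browser's implementation; the reporting endpoint and exact comparison policy are placeholders):

```python
# Sketch of the dual-fetch experiment: fetch both the http:// and
# https:// versions of a URL and classify whether a naive upgrade
# would have been transparent. Hypothetical helper names throughout.
import hashlib
import urllib.request


def compare_bodies(http_body: bytes, https_body: bytes) -> str:
    """Classify a pair of response bodies.

    "match"  -> naive upgrade would likely be transparent
    "differ" -> the interesting case to report back for analysis
    """
    return "match" if http_body == https_body else "differ"


def fetch_body(url: str, timeout: float = 10.0) -> bytes:
    # Plain stdlib fetch; a real client would also want to record
    # status codes, redirects, and certificate errors separately.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()


def upgrade_report(host: str, path: str = "/") -> dict:
    try:
        http_body = fetch_body(f"http://{host}{path}")
        https_body = fetch_body(f"https://{host}{path}")
    except OSError as exc:  # DNS, TLS, or connection failure
        return {"host": host, "result": "error", "detail": str(exc)}
    return {
        "host": host,
        "result": compare_bodies(http_body, https_body),
        # Hashes rather than bodies would be enough to report upstream.
        "http_sha256": hashlib.sha256(http_body).hexdigest(),
        "https_sha256": hashlib.sha256(https_body).hexdigest(),
    }
```

To really detect "HTTP constant while HTTPS differs" as described above, the client would need to fetch the HTTP version at least twice (to establish that it is stable across requests) before treating an HTTP/HTTPS mismatch as meaningful; dynamic pages differ between any two fetches regardless of scheme.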

> 
> However, I don't think that's going to be enough to enable publisher
> migration, for three reasons:
> 
>    1. The amount of work that has gone into HTTPS Everywhere regexen shows
>    that HTTP and HTTPS versions of a host often have complex relationships
>    with each other. Forbes.com is the canonical example, but it's certainly
>    not the only one. Browser vendors will need to carefully evaluate whether
>    automagically switching to HTTPS for some requests will do more harm than
>    good. I don't think that's going to happen quickly.

It's hard to map the complexity of the HTTPS Everywhere ruleset library
onto an answer for the question of how often we expect the upgrading
behaviour to be worse in any sense than MCB.

I took a random sample of 20 rulesets¹ and manually classified them:

4 added a www. to a site's top-level domain while rewriting
1 added path elements and changed domains while rewriting
2 excluded particular paths from upgrades
13 just rewrote http -> https for some domains

So in 13 out of 20 cases a naive upgrade would have succeeded; in 5
cases we could probably expect some requests to fail, mostly with a cert
warning because of a lack of cert Subject Alternative Name fields for
all of the domains in question.  

The two path exclusions are harder to classify; those paths can point at
web apps that can't handle HTTPS, but exclusions are also commonly used
to avoid triggering mixed content breakage :)

So this sample suggests we'd get roughly a two-thirds success rate from
simple upgrading, and could get to 85% success by accepting a
www.example.com cert for example.com.
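The arithmetic behind those two estimates, from the sample counts above:

```python
# Back-of-the-envelope success rates from the 20-ruleset sample.
sample = {
    "plain http -> https rewrite": 13,  # naive upgrade succeeds
    "adds www. while rewriting": 4,     # succeeds if www cert accepted
    "adds paths / changes domains": 1,  # likely fails
    "excludes particular paths": 2,     # ambiguous (see above)
}
total = sum(sample.values())  # 20

naive = sample["plain http -> https rewrite"] / total
with_www = (sample["plain http -> https rewrite"]
            + sample["adds www. while rewriting"]) / total

print(f"naive upgrade success:      {naive:.0%}")
print(f"also accepting www. certs:  {with_www:.0%}")
```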

What this doesn't tell us, at all, is how often bad things would happen
as a result of the rewritten request succeeding but returning a success
code with the wrong content.  

Such cases are rare; it's possible, though unlikely, that sites with
radically different content on the HTTPS origin, like forbes.com or
(historically) livejournal.com, would not simply fail with a 40x but
would load incorrect CSS or scripts, or render unexpected images, in a
way that produces user-visible breakage.

In an even smaller subset of cases this breakage may create exploitable
vulnerabilities in web apps.  But AFAIK forbes.com has only been documented
to host malware on its HTTP origin,² not the HTTPS one :).


¹ Sampling methodology:
 
 https://github.com/EFForg/https-everywhere/blob/master/utils/rule-sample.py

 My exact sample is here:

 https://www.eff.org/files/sample.rulesets

² http://www.forbes.com/sites/thomasbrewster/2015/02/10/forbes-com-hacked-in-november-possibly-by-chinese-cyber-spies/

-- 
Peter Eckersley                            pde@eff.org
Technology Projects Director      Tel  +1 415 436 9333 x131
Electronic Frontier Foundation    Fax  +1 415 436 9993
Received on Wednesday, 11 March 2015 01:30:07 UTC

This archive was generated by hypermail 2.3.1 : Monday, 23 October 2017 14:54:11 UTC