W3C home > Mailing lists > Public > public-webappsec@w3.org > January 2015

optimistic HTTP → HTTPS [was: Re: Require HTTPS scripts to be able to anything HTTP scripts can do.]

From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
Date: Fri, 02 Jan 2015 17:42:50 -0500
Message-ID: <54A71EEA.1060902@fifthhorseman.net>
To: Brad Hill <hillbrad@gmail.com>, Tim Berners-Lee <timbl@w3.org>, public-webappsec@w3.org
On 01/02/2015 04:14 PM, Brad Hill wrote:
> If I might suggest a pivot here along the lines of the compatibility and
> path forward you (and we all) desire, perhaps we ought to discuss the
> possibility of automatic / optimistic upgrade from HTTP -> HTTPS for
> requests issued from a secure document context.  So if you load a script
> over https from a secure context, just auto-fixup all links to https.
> 
> We have always shied away from doing this in the past because there is no
> formal guarantee that the resource at an URL with the http scheme is
> semantically equivalent to one available at the "same" URL with the https
> scheme.
> 
> Perhaps that shyness is worth revisiting today in light of the broad push
> to move as much as possible to secure transports.  If resource authors
> simply started serving the same content over https as they do today over
> http, we could make vast improvements and avoid much of the pain
> mixed-content blocking creates for such transitions today.
> 
> The edge cases introduced by this kind of optimistic upgrade may very well
> be fewer and less harmful than those introduced by allowing insecure
> content into secure contexts.  In fact, the EFF probably already has a good
> amount of data on exactly this from the HTTPS Everywhere extension.
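The "auto-fixup" Brad describes amounts to a scheme rewrite on subresource URLs issued from a secure context. A minimal sketch of that rewrite (hypothetical helper name; a real browser would apply this inside its fetch machinery, not on the URL string alone) might look like:

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url):
    """Rewrite an http:// URL to https://, leaving other schemes alone.

    Sketch of the optimistic upgrade: only http URLs are touched;
    https, data:, blob:, etc. pass through unchanged.
    """
    parts = urlsplit(url)
    if parts.scheme != "http":
        return url  # nothing to upgrade
    # Drop an explicit :80, since the default port changes with the scheme.
    netloc = parts.netloc
    if netloc.endswith(":80"):
        netloc = netloc[:-3]
    return urlunsplit(("https", netloc, parts.path, parts.query,
                       parts.fragment))
```

This deliberately says nothing about what the resource at the rewritten URL contains, which is exactly the "no formal guarantee of semantic equivalence" concern above.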

I think this suggestion is worth exploring further.

We've discussed a similar tradeoff with regard to HSTS relatively
recently (see the "Interaction between HSTS and mixed content blocking"
thread starting on November 19th): HSTS link rewriting is currently done
after mixed-content checks.  For HSTS, IIRC, the reasons we've heard
have been:

 (a) applying HSTS rewrites before mixed-content would leak information
about whether a visitor has visited the HSTS-wrapped site before.

 (b) we can't guarantee that the content is the same across schemes.

 (c) sites will now randomly work (or not) depending on which of the
HSTS-covered sites the user has visited in the past.

 (d) browsers that don't implement HSTS will fail to work with
mixed-content sites that would work with HSTS-enabled browsers.

Having a broader policy of being generally willing to try an optimistic
http→https upgrade in a mixed-content case (presumably failing quietly
when an optimistic https pageload fails) would remove basically all of
these concerns.
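To make "failing quietly" concrete: the key property is that a failed optimistic load behaves like today's mixed-content block (the resource simply doesn't load), never like a fallback to cleartext. A sketch, with a hypothetical `fetcher` standing in for the network stack:

```python
def load_subresource(url, fetcher):
    """Sketch of optimistic upgrade for a would-be mixed-content load.

    `fetcher` is a stand-in for the network stack: it takes a URL and
    returns the response body, raising OSError on failure.  On a failed
    https attempt we return None -- the same outcome as blocking the
    load today -- rather than retrying over cleartext http.
    """
    if not url.startswith("http://"):
        return fetcher(url)  # not mixed content; load normally
    https_url = "https://" + url[len("http://"):]
    try:
        return fetcher(https_url)
    except OSError:
        # Fail quietly: no cleartext fallback.
        return None
```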

I'd like to hear concrete concerns about adopting the more general
optimistic HTTP → HTTPS upgrade in the case of what would otherwise be
mixed-content blocking.  Do we have real-world cases where this will
break things in a dangerous way?  We certainly have real-world cases
where https versions of sites don't work because of mixed content, which
causes people to use them via cleartext (arguably a "dangerous break" in
itself).

Are any of the active browser vendors willing to consider a
configuration switch that changes mixed-content blocking to an
opportunistic https upgrade for resources that would otherwise be blocked?

	--dkg


Received on Friday, 2 January 2015 22:43:20 UTC
