Re: An HTTP->HTTPS upgrading strawman. (was Re: Upgrade mixed content URLs through HTTP header)

On Tue, Feb 03, 2015 at 08:45:57PM +0100, Mike West wrote:
> On Tue, Feb 3, 2015 at 7:31 PM, Peter Eckersley <pde@eff.org> wrote:
> 
> > Why exclude navigational requests from the rewriting mechanism?  That's
> > going to force sites to enable HSTS if they want that behaviour, with
> > the associated problems from other forms of strictness.  Instead
> > consider a "subresourcesOnly" option for sites that want to stay in HTTP
> > by default but behave as well as possible if the top-level request is
> > HTTPS.
> >
> 
> 1. I excluded navigations from the strawman on my vague assumption that
> navigations often go to third-party sites for which the first-party can't
> make the same guarantees as it can for its own content. This might be
> another argument for the source list discussion at the bottom.

Oh, I agree that we should exclude navigations from the page that set
the policy to arbitrary third-party HTTP destinations.

Where I think navigation links should be upgraded is when they're
same-origin HTTP links, or when the navigation is inbound from some
other website to an HTTP URL on the domain that set the policy.
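
For concreteness (hostnames here are just placeholders): if
http://example.com sets the policy, I'd want a link on one of its
pages like

    <a href="http://example.com/about">

to navigate to https://example.com/about, while a link pointing at
http://third-party.example would be left untouched.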

> 
> 2. It's not clear to me that the HSTS strictness you're trying to avoid is
> something we actually want developers to avoid. Don't we want to encourage
> folks to use HSTS?

If field testing indicates that Let's Encrypt's automated selection of
which domains to get certs for, and its automated renewal of those
certs, are enough to reliably avoid triggering the strict, hard-fail
states that HSTS causes, then yes, we'll start enabling it easily or
even automatically for folks (the max-age will start small and grow in
the background on a cron job).
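
As a sketch of that ramp-up (durations purely illustrative), the
client tooling might initially send

    Strict-Transport-Security: max-age=3600

and then rewrite max-age upward on each successful cron run (86400,
then 604800, and so on) as renewals keep working.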

But it might be prudent to have a way to ease into that, forcing
everything to HTTPS but giving the user a way to proceed if (say)
there's a cert warning after everything has been HTTPSified.

> 
> 
> > The report-only mode should have a setting -- perhaps the default --
> > where the site operator gets reports only about upgrades that fail (TLS
> > connection error, cert warning, or an HTTP error code).  In a typical
> > deployment case, there will be an *enormous* number of successful
> > upgrades and the thing people will want to track most will be the failures.
> >
> 
> Hrm. Ok. What is the use case that you think reporting ought to address?
> 

Figuring out which (potentially obscure) first- and third-party
subresources are failing to load as a result of being upgraded to HTTPS, so
that the admin can chase them down and figure out what to do about them.
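
For example (assuming the strawman grows a report-only variant, and
reusing CSP's existing report-uri reporting directive), a site might
send something like

    Content-Security-Policy-Report-Only: upgrade-insecure-requests; report-uri /upgrade-reports

with the failures-only setting suppressing reports for the enormous
number of upgrades that succeed.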


> I would like it to be a mechanism by which administrators/authors can
> prioritize the small amount of time they want to spend making their sites
> work in legacy clients. If one resource causes X% of upgrades, maybe it's
> worth `sed`ing around to fix it.

I agree we should also support this.
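
(A hypothetical instance of that kind of fix, with a made-up hostname
and path:

    sed -i 's|http://static.example.com|https://static.example.com|g' templates/*.html

assuming the offending references live in the site's templates.)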
> 
> It sounds like you wish to address a different case, allowing
> administrators/authors to prioritize the time they spend fixing SSL errors.
> I'd suggest that this case is better dealt with via something like
> https://w3c.github.io/navigation-error-logging/. WDYT?
> 

A mechanism like Navigation Error Logging could also be a home for this
feature, though my admittedly hurried reading of the current draft
suggests that it doesn't anticipate logging errors that occur while
fetching subresources.

> 
> > It would be ideal to support a blacklist of origins for which the
> > upgrade mechanism does not apply.  That way, if you have N third parties
> > on a site, and (say) two of them provide images only, and don't support
> > HTTPS at all, you can use the upgrade mechanism for scripts on the other
> > N - 2 origins.
> 
> 
> Blacklists are the opposite of how the rest of CSP works. Turning the
> directive into a source list could certainly address a whitelist, however,
> which was also Eduardo's first comment. The currently specified behavior
> would hide behind `upgrade-insecure-requests *`, and you could get more
> granular from there.

Whitelists would be slightly less convenient for admins of large sites,
because they would require knowing who all of the site's third parties
are in order to list (per my example above) N-2 of them. The full set
of third parties on a very large site is often hard to determine.

But I don't know if that argument is important enough to override the desire
for consistency in CSP.  Probably it's not, and we should just go with
whitelisting, including *.
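
To sketch what that could look like (syntax extrapolated from Mike's
source-list suggestion above; nothing like this is specified yet), the
N-2 case might be:

    Content-Security-Policy: upgrade-insecure-requests widgets.example analytics.example ...

with `upgrade-insecure-requests *` recovering the blanket
upgrade-everything behavior.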

-- 
Peter Eckersley                            pde@eff.org
Technology Projects Director      Tel  +1 415 436 9333 x131
Electronic Frontier Foundation    Fax  +1 415 436 9993
