Re: An HTTP->HTTPS upgrading strawman. (was Re: Upgrade mixed content URLs through HTTP header)

On Wed, Feb 4, 2015 at 10:57 AM, Peter Eckersley <pde@eff.org> wrote:
>
> Where I think navigation links should be upgraded is if they're
> same-origin HTTP links


I can see this, I suppose. Added a note to the document:
https://github.com/w3c/webappsec/commit/55af145dd0c8cc38fd704b4376506075008b0ebf
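For concreteness, the kind of thing I have in mind (directive name is still
strawman-level, so treat it as purely illustrative):

    Content-Security-Policy: upgrade-insecure-requests

With that policy set on https://example.com/, a same-origin navigational
link like <a href="http://example.com/page"> would be requested as
https://example.com/page, while cross-origin navigations would be left
alone.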

> or if it's inbound navigation from a random
> website to an HTTP URL on the domain that set the policy.
>

This is exactly what HSTS is for, isn't it? I'm not a huge fan of creating
yet another pinning mechanism.
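(For reference: once a client has seen

    Strict-Transport-Security: max-age=31536000; includeSubDomains

from a host, it rewrites any http:// navigation to that host to https://
before the request ever hits the network, which covers the inbound-navigation
case above.)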


> If field testing indicates that Let's Encrypt's automated selection of
> which domains to get certs for, and automated renewal of certs, is
> enough to really avoid triggering the strict, hard-fail states that HSTS
> causes, then yes we'll start enabling it easily or even automatically
> for folks (the age will start small and grow in the background on a cron
> job).
>
> But it might be prudent to have a way to ease into that, forcing
> everything to HTTPS but giving the user a way to proceed if (say)
> there's a cert warning after everything has been HTTPSified.
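A rough sketch of that ratchet, in case it helps the discussion. The numbers,
the state file, and the doubling schedule are all illustrative; this is not a
claim about what Let's Encrypt's tooling actually does or will do:

    # Illustrative cron job: grow the advertised HSTS max-age over time,
    # starting small so a mistake doesn't lock users out for a year.
    import json

    STATE_FILE = "/etc/letsencrypt/hsts-age.json"  # hypothetical path
    ONE_YEAR = 31536000  # seconds

    def next_max_age():
        try:
            with open(STATE_FILE) as f:
                current = json.load(f)["max_age"]
        except (OSError, KeyError, ValueError, TypeError):
            current = 300  # first run: five minutes
        grown = min(current * 2, ONE_YEAR)
        with open(STATE_FILE, "w") as f:
            json.dump({"max_age": grown}, f)
        return grown

    if __name__ == "__main__":
        # The server config would then emit:
        #   Strict-Transport-Security: max-age=<printed value>
        print(next_max_age())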


I don't know about this. I understand the desire to make migration to HTTPS
and then HSTS a reasonable task for developers. It's not clear to me,
however, that creating multiple levels of strictness really improves the
state of affairs. At some point, we're just making things more complex, and
asking developers to make more security-relevant choices.

Splitting into "Insecure", "Pretty Secure For Newer Clients", and "Always
Secure Forever And Ever And Don't Let Anyone Touch It" seems like a
division folks can wrap their heads around. Adding more flags complicates
the model significantly.
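In header terms, that division is roughly the following (again, the first
directive name is still strawman-level):

    "Insecure": no policy at all.

    "Pretty Secure For Newer Clients":
        Content-Security-Policy: upgrade-insecure-requests
    (older clients simply ignore it and keep fetching HTTP)

    "Always Secure Forever And Ever":
        Strict-Transport-Security: max-age=31536000; includeSubDomains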

> A mechanism like Navigation Error Logging could also be a home to this
> feature, though my admittedly hurried reading of the current draft
> suggests that it doesn't anticipate logging of errors that occur while
> fetching subresources.
>

No, I don't think this use case is covered by the current draft. I'm
suggesting that if such a feature is something that would be useful to add,
it would be better to add it there than to CSP. :)
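If someone did want to pursue it there, I'd imagine a report shaped vaguely
like the following. This is purely hypothetical; nothing of the sort exists
in the draft today:

    {
      "documentURL": "https://example.com/",
      "subresourceURL": "http://cdn.example.com/widget.js",
      "errorType": "subresource-fetch-failed"
    }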


> Whitelists would be slightly less convenient for admins of large sites,
> because
> they would require them to know who all of their third parties are in order
> to make a list of (per my example above) N-2 of them.  Often the list of
> third parties on a very large site is hard to determine.
>
> But I don't know if that argument is important enough to override the
> desire
> for consistency in CSP.  Probably it's not, and we should just go with
> whitelisting, including *.
>

I'm not really sold yet on the idea of switching the policy to a whitelist
of hosts, but I think it's well worth considering. It would be good to get
feedback from admins of the large sites you're referring to in order to
figure out whether a whitelist would solve a real problem they face, or
whether the blunt approach of just upgrading everything off HTTP would be
enough.
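The contrast, with entirely hypothetical syntax, would be something like:

    Blanket upgrade (the current strawman):
        Content-Security-Policy: upgrade-insecure-requests

    Host whitelist (the alternative under discussion):
        Content-Security-Policy: upgrade-insecure-requests cdn.example.com *.partner.example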

Added an issue for discussion:
https://github.com/w3c/webappsec/commit/55af145dd0c8cc38fd704b4376506075008b0ebf

--
Mike West <mkwst@google.com>, @mikewest

Google Germany GmbH, Dienerstrasse 12, 80331 München,
Germany, Registergericht und -nummer: Hamburg, HRB 86891, Sitz der
Gesellschaft: Hamburg, Geschäftsführer: Graham Law, Christine Elizabeth
Flores
(Sorry; I'm legally required to add this exciting detail to emails. Bleh.)
