W3C home > Mailing lists > Public > public-webappsec@w3.org > March 2015

Re: [UPGRADE] Consider plan B for reduced complexity?

From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
Date: Mon, 09 Mar 2015 18:03:01 -0700
To: Mike West <mkwst@google.com>
Cc: "public-webappsec@w3.org" <public-webappsec@w3.org>, Peter Eckersley <pde@eff.org>, Eric Mill <eric@konklone.com>
Message-ID: <87mw3lll4q.fsf@alice.fifthhorseman.net>
On Mon 2015-03-09 08:43:55 -0700, Mike West wrote:

> It probably won't surprise you too much that I think the upgrade draft has
> some advantages over HSTS2. I'll walk through some of the line-by-line
> below, but at a high level, I think it's worthwhile to separate the
> concepts of a resource-representation-specific request to upgrade insecure
> subresources on the one hand, and a host-level opt-in to transport security
> on the other.

Yes, I think the resource vs. site-wide issue is at the heart of the
disagreement here.

> UPGRADE is, in my mind, little more than a catalyst for migration from HTTP
> to HTTPS. It removes one barrier (mixed content in old pages) that we've
> heard is practically insurmountable for a subset of important sites, and it
> attempts to do so in a fairly targeted way. Given that target audience, I
> think that it's somewhat counterproductive to tie the upgrade mechanism to
> HSTS. The CSP directive I've proposed is scoped to a specific resource
> representation, and can be rolled out in a very granular fashion, which
> allows authors to gradually gain confidence in both the mechanism and their
> own TLS configuration, by (for example) upgrading low-traffic pages first,
> measuring the response, and then moving to higher-traffic pages.

If the default for new user agents is to upgrade blockable mixed
content, then site operators don't have to do anything at all for those
resources on their site that only need an upgrade.

The issue for the site operator in this case is whether to *advertise*
https links to their content, which in many cases they can't know to do
because they may be placing these links in contexts where they don't
even get the browser's Prefer: header at all.  (Consider entries in a
search engine's index: should a crawler issue two separate requests to a
crawled site, one with Prefer: representation=secure and the other
without, then index the two views separately and select between them
depending on the Prefer: header sent by the UA querying the engine?)
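To make the double-indexing scenario concrete, here is a minimal sketch of what such a crawler-plus-query-time selection might look like. Everything here is hypothetical: `crawl`, `build_index`, and `select_view` are stand-ins invented for illustration, and only the `Prefer: representation=secure` token comes from the thread.

```python
# Hypothetical sketch: a search engine crawls each page twice (with and
# without Prefer: representation=secure), keeps both views, and serves
# the one matching what the querying UA advertised.

def crawl(url, send_prefer):
    """Stand-in for fetching `url`, optionally sending
    `Prefer: representation=secure`. Returns a fake indexed view."""
    return {"url": url, "secure_view": send_prefer}

def build_index(url):
    # Two requests per crawled resource: one with the header, one without.
    return {
        "secure": crawl(url, send_prefer=True),
        "legacy": crawl(url, send_prefer=False),
    }

def select_view(index, ua_prefer_header):
    # At query time, pick the indexed view matching the querying UA.
    if ua_prefer_header == "representation=secure":
        return index["secure"]
    return index["legacy"]

index = build_index("http://example.com/page")
assert select_view(index, "representation=secure")["secure_view"] is True
assert select_view(index, None)["secure_view"] is False
```

The doubled crawl cost and forked index are exactly the kind of downstream complexity the paragraph above is objecting to.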

> Host-level mechanisms don't allow this granularity, and I think that would
> inhibit the adoption of the upgrade mechanism. That would be unfortunate,
> since rapid adoption is the _only_ advantage of UPGRADE over just asking
> authors to fix their sites to work in today's browsers.

We can get the rapid adoption by simply making modern user agents do the
upgrade automatically instead of blocking.
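The proposed default behaves roughly as sketched below: on an https page, an http subresource URL is rewritten to https rather than blocked. This is an illustrative simplification (real UAs also distinguish blockable from optionally-blockable content; this sketch treats every http subresource as blockable).

```python
from urllib.parse import urlsplit, urlunsplit

def handle_subresource(page_scheme, subresource_url):
    """Sketch of the proposed default: on an https page, upgrade an
    http subresource URL to https instead of blocking it."""
    if page_scheme != "https":
        return subresource_url  # no mixed content on http pages
    parts = urlsplit(subresource_url)
    if parts.scheme == "http":
        # Rewrite the scheme, keeping host, path, query, and fragment.
        return urlunsplit(("https",) + tuple(parts[1:]))
    return subresource_url

assert handle_subresource("https", "http://cdn.example/app.js") == \
    "https://cdn.example/app.js"
assert handle_subresource("https", "https://cdn.example/app.js") == \
    "https://cdn.example/app.js"
```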

The only risk there is for subresources hosted on sites which serve
radically different content over https than over http.  This pretty
clearly goes against modern best practices, and I think we should
explicitly state (somewhere else? I'm not sure where this sort of
general guidance should go) that it is ill-advised to serve radically
different content on the HTTPS version of an HTTP resource.

> On Sun, Mar 8, 2015 at 10:51 PM, Daniel Kahn Gillmor <dkg@fifthhorseman.net> wrote:
>>  * avoid having to keep legacy headers around forever
> 1. Which legacy header does UPGRADE leave around forever? As much as I'd
> like CSP to be obsoleted, I don't see that happening in the near term.
> 2. We'll still have to support today's non-upgrading HSTS forever, right?

Origins won't need to send it unless they care about the decreasing
userbase of clients that do mixed content blocking (MCB) without doing
upgrade.  Given that "we expect browser update cycles to outpace the
ability of a website with a large body of content to go through and
upgrade the resource links" (quoting Eric Mill from the other thread),
this suggests that origins could just set HSTS2 and not bother with
Strict-Transport-Security unless they know their site will work with
legacy clients.

Upgrade-capable clients may still want to check for both (so that they
can interact with legacy sites that set Strict-Transport-Security but
not HSTS2), but this is just one extra strcmp, since they'll treat the
resultant headers in the same way.
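That "one extra strcmp" can be sketched as follows. HSTS2 is the strawman header from this thread, not a real header; its value syntax is assumed to match today's Strict-Transport-Security.

```python
def hsts_policy(headers):
    """Treat the strawman HSTS2 header and today's
    Strict-Transport-Security identically: the 'one extra strcmp'.
    Header names compare case-insensitively, as HTTP requires."""
    for name, value in headers.items():
        if name.lower() in ("strict-transport-security", "hsts2"):
            return value
    return None

assert hsts_policy({"HSTS2": "max-age=31536000"}) == "max-age=31536000"
assert hsts_policy({"Strict-Transport-Security": "max-age=300"}) == "max-age=300"
assert hsts_policy({"Content-Type": "text/html"}) is None
```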

>>  * origin operators can set HSTS headers unconditionally
> Assuming they're ready for supporting browsers to use TLS and only TLS for
> a host, they could. I worry that that is a commitment they wouldn't be
> willing to make.

They certainly won't be willing to set it if they have some resources
that require upgrade capability.  And the utility of setting it is
itself diminished if HSTS isn't something that works across clients and
can be re-used by external advisors like HTTPS-Everywhere or search
engines.

>>  * HSTS2 header is shorter (fewer bytes is a minor efficiency
>>    optimization)
> I like fewer bytes! Maybe we should rename `Content-Security-Policy` to
> `CSP` in the next iteration. Two characters shorter still!
> That said, I also like self-explanatory text. And HTTP/2's magical header
> compression dust should reduce the impact of header bloat, right? :)


>>  * large number of broken sites can get fixed without the operator
>>    learning and deploying CSP
> What's to learn? I think folks would be quite capable of copy/pasting
> `Content-Security-Policy: upgrade-insecure-requests` from the spec without
> understanding the nuances of the `child-src` directive. As no other
> directives are present, no other behavior is triggered. *shrug* They don't
> have to learn the rest if they don't want to.
> That said, I'd (selfishly!) claim that if we _did_ happen to inspire some
> folks to ask about the header, they'd be well-served by learning more. This
> might be a nice way of whetting their appetite for more. "If CSP can solve
> this problem for me," they'll say, "maybe it can help me with other things
> too! I'll go read that whole specification right now! Thanks, WebAppSec!"

I understand this perspective, but we're talking about under-resourced
sites, with few admin cycles to spare.  Making the default be to upgrade
otherwise-blocked content sounds like it will fix things much more
widely to me, with minimal interaction and thought on behalf of site
operators.
>>  * navigational upgrade can be handled for subdomains or not at the
>>    server operator discretion by choosing whether to set
>>    "includeSubdomains" in the HSTS2: header.
> If we end up running with https://w3c.github.io/webappsec/specs/csp-pinning/,
> I hope we'll be able to offer similar capability for CSP-delivered policy.

Sure, and now we have even more logic for a site administrator to bolt
on and understand.  Simpler approaches that require less of an
investment in permanent logic seem preferable to me.

>>  * Client implementations can be simplified because there is no need to
>>    decide whether to send Prefer or not.
> I don't understand this claim. Use case #4 in the other thread seems to
> require some sort of signal to the server in order to safely downgrade
> legacy user agents. I don't think HSTS2 presents any capabilities that
> would avoid that necessity. I believe that's what Peter was referring to in
> his response earlier in this thread.

Yeah, I hadn't thought of conditional https→http downgrade as a use case
to try to support when I cooked up the HSTS2 strawman.  I'm tempted to
say "don't do that", but I recognize that there are large origins today
that do *unconditional* https→http downgrade, which is strictly worse,
and I suppose we'd like a way to move them to conditional downgrades at
least.

As I understand it, this will require signalling from the client to the
server that mixed-content upgrades are supported.  And this signal will
need to happen on *every* navigational https request.  This sounds like
adding permanent headers to the stack to me, unless we can find a
compatible way for the origin to recommend a downgrade that existing
clients will follow but that newer upgrade-capable clients will know to
ignore.  I don't know what that would look like, unless we do something
barely recognizable as HTTP, like treating a 302 response over HTTPS
with a special header as though it were a 200 response
("Upgrade-Capable-Clients-Treat-Response-Code-As: 200" followed by the
full resource content? yikes!).
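For what it's worth, the client side of that "yikes" strawman would amount to something like this. To be clear, `Upgrade-Capable-Clients-Treat-Response-Code-As` is entirely hypothetical (the thread itself flags it as barely recognizable as HTTP), and this sketch is not a recommendation.

```python
def effective_status(status, headers, upgrade_capable):
    """Strawman: an upgrade-capable client seeing a 302 with the
    (hypothetical) override header treats the response as the indicated
    code and reads the body; a legacy client follows the redirect."""
    override = headers.get("Upgrade-Capable-Clients-Treat-Response-Code-As")
    if upgrade_capable and status == 302 and override is not None:
        return int(override)
    return status

hdrs = {"Upgrade-Capable-Clients-Treat-Response-Code-As": "200"}
assert effective_status(302, hdrs, upgrade_capable=True) == 200   # reads body
assert effective_status(302, hdrs, upgrade_capable=False) == 302  # follows redirect
```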

But this would all be to accommodate sites that need conditional
downgrades in a world where modern UAs would upgrade mixed content by
default in the first place.  Once those sites fixed their remaining
issues (dependencies on subresources that do not have https
representations), they could go back to normal HTTPS traffic, and the
common protocol sent on the wire would carry less permanent cruft.

If we want things simpler and cleaner in the long term, something like
HSTS2 still seems better to me.

Received on Tuesday, 10 March 2015 01:03:30 UTC
