Re: HSTS, mixed content, and priming

From: Brian Smith <brian@briansmith.org>
Date: Mon, 24 Aug 2015 23:24:14 -0700
Message-ID: <CAFewVt52UeD0Rj=KhSjySkxrHXJG-xRd3NDBuQUVhkfMwGx+jw@mail.gmail.com>
To: Richard Barnes <rbarnes@mozilla.com>
Cc: WebAppSec WG <public-webappsec@w3.org>
Richard Barnes <rbarnes@mozilla.com> wrote:

> 1. Discover HSTS support with "priming requests":
>   * When the browser encounters http://example.com/foo/bar.js on an HTTPS
> page...
>   * And example.com is not an HSTS host...
>   * Send a HEAD request to https://example.com/ with no cookies, etc.

Why not send a GET request to https://example.com/foo/bar.js as though it
was already upgraded via HSTS, and then use the response if the response
includes an HSTS header? This would save one request/response and would
avoid the practical problems with using HEAD. (Although servers are
supposed to return the same headers for HEAD requests that they return for
GET requests, in practice many do not.)
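For concreteness, the single-request variant I have in mind could look roughly like this (a Python sketch; `send_get`, `is_hsts_host`, and the overall flow are illustrative assumptions, not any browser's actual internals):

```python
# Sketch of the single-request variant: issue the GET as if the host
# were already HSTS, and keep the response only if it advertises HSTS.
# All helper names are illustrative, not real browser APIs.
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url):
    """Rewrite an http:// URL to https://, leaving everything else intact."""
    parts = urlsplit(url)
    return urlunsplit(("https",) + tuple(parts[1:]))

def fetch_subresource(url, is_hsts_host, send_get):
    """send_get(url) -> (status, headers, body); sent without cookies."""
    parts = urlsplit(url)
    if parts.scheme != "http" or is_hsts_host(parts.hostname):
        return send_get(url)  # no priming needed
    status, headers, body = send_get(upgrade_to_https(url))
    if any(k.lower() == "strict-transport-security" for k in headers):
        return status, headers, body  # one round trip: primed and fetched
    return None  # not HSTS; fall back to normal mixed-content handling
```

This saves the extra HEAD round trip whenever the upgraded fetch succeeds, and it never depends on HEAD/GET header parity.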

>   * See if the query returns HSTS headers
>   * If so, the browser loads https://example.com/foo/bar.js
>   * ... and don't consider it mixed content

> 2. Do not treat HSTS-upgraded requests as mixed content

You can do #2 without doing #1.

> As mentioned above, the primary value is to remove the indeterminacy
> around HSTS upgrades, so that it's safe to treat HSTS upgrades as not mixed
> content.

There is no safety issue here.

Consider http://foo.example.org/ which embeds a subresource from
http://bar.example.org/. Assume bar.example.org is HSTS. Then the browser
will already request https://bar.example.org/ instead of
http://bar.example.org/.

The fact that the same doesn't happen for https://foo.example.org/ (i.e.
the mixed content case) is mostly due to the fact that the mixed content
blocking decision is made before HSTS upgrades are done instead of after.
In particular, if Firefox had done HSTS rewriting before it did mixed
content checks then I am pretty sure nobody would have done extra work to
reverse the order of those checks.
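To make the ordering point concrete, here is a toy sketch (Python, with illustrative names only) showing that the same subresource is or isn't "mixed content" purely depending on whether the HSTS rewrite runs before or after the check:

```python
# Illustrative sketch (not browser code): whether an HSTS-eligible
# http:// subresource counts as mixed content depends entirely on the
# order of the HSTS rewrite and the mixed-content check.

HSTS_HOSTS = {"bar.example.org"}  # assumed cached/preloaded HSTS state

def hsts_upgrade(url):
    host = url.split("/")[2]
    if url.startswith("http://") and host in HSTS_HOSTS:
        return "https://" + url[len("http://"):]
    return url

def is_mixed_content(page_url, sub_url):
    return page_url.startswith("https://") and sub_url.startswith("http://")

page = "https://foo.example.org/"
sub = "http://bar.example.org/widget.js"

# Check first, upgrade second: flagged as mixed content.
blocked_if_check_first = is_mixed_content(page, sub)                 # True
# Upgrade first, check second: never seen as mixed content.
blocked_if_upgrade_first = is_mixed_content(page, hsts_upgrade(sub)) # False
```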

> So relative to u-i-r, this reduces uncertainty for site operators, and gets
> more HTTPS faster (since it's a partial upgrade).  It seems like these two
> are complementary in much the same way that HTTPS and HSTS are -- you can
> turn on HTTPS for some parts of your site, then turn on HSTS to lock it
> in.  Rely on priming to upgrade what can be upgraded of your site on day
> 0, then once you're sure that all your sub-resources can upgrade properly,
> turn on u-i-r.

Neither "priming" nor u-i-r is secure against an active MitM, so websites
cannot rely on them for security. Websites need to use https:// subresource
links to actually be secure.

> ## Is this something developers will understand?

Given the amount of confusion in this thread already...

> In terms of "expense": It's worth noting that HSTS priming would only be
> done for potentially mixed-content requests, in cases where the HSTS state
> of the remote host is unknown.

Actually, a browser is free to always ping the https:// server to see if it
is HSTS, and the browser should probably do that, not only in the case of
mixed content, but also (especially) in the case where the user navigated
via the address bar (e.g. typed "example.org" into the address bar) and
other cases.
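For example (a hypothetical sketch; `send_probe` and the cache shape are my assumptions, not anything specified), a browser could prime on typed navigations like this:

```python
# Hypothetical sketch of priming on a typed navigation: probe the
# https:// origin in the background and cache any advertised HSTS
# policy before deciding how to navigate. Names are illustrative.

def prime_on_navigation(typed_host, send_probe, hsts_cache):
    """send_probe(url) -> response headers dict; sent without cookies."""
    headers = send_probe("https://" + typed_host + "/")
    for name, value in headers.items():
        if name.lower() == "strict-transport-security":
            hsts_cache[typed_host] = value  # e.g. "max-age=31536000"
            break
    return typed_host in hsts_cache
```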

Received on Tuesday, 25 August 2015 06:24:42 UTC