Re: HSTS, mixed content, and priming

On Tue, Aug 25, 2015 at 2:24 AM, Brian Smith <brian@briansmith.org> wrote:

> Richard Barnes <rbarnes@mozilla.com> wrote:
>
>> 1. Discover HSTS support with "priming requests":
>>   * When the browser encounters http://example.com/foo/bar.js on an
>> HTTPS page...
>>   * And example.com is not a known HSTS host...
>>   * Send a HEAD request to https://example.com/ with no cookies, etc.
>>
>
> Why not send a GET request to https://example.com/foo/bar.js as though it
> was already upgraded via HSTS, and then use the response if the response
> includes an HSTS header? This would save one request/response and would
> avoid the practical problems with using HEAD. (Although servers are
> supposed to return the same headers for HEAD requests that they return for
> GET requests, in practice many do not.)
>

I think the worry here is leakage -- if the HTTPS site is actually
different, then you're leaking request context in the GET.  Of course, as
mnot points out, any flavor of HSTS lets the HTTPS site "claim" the HTTP
site.
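The priming flow quoted above can be sketched roughly as follows. This is an illustrative sketch only, not browser code; `fetch_head` stands in for a real credential-less HEAD request, and all names here are made up for the example:

```python
# Hypothetical sketch of the HSTS priming flow under discussion.
# fetch_head(url) stands in for a cookie-less HEAD request and
# returns the response headers as a dict.

def prime_and_upgrade(url, hsts_hosts, fetch_head):
    """Return (url_to_load, treat_as_mixed_content)."""
    scheme, rest = url.split("://", 1)
    host = rest.split("/", 1)[0]
    if scheme != "http":
        return url, False                  # already https: nothing to do
    if host in hsts_hosts:
        return "https://" + rest, False    # known HSTS host: upgrade, not mixed
    # Unknown host: send a credential-less HEAD to https://host/ (priming).
    headers = fetch_head("https://" + host + "/")
    if "strict-transport-security" in {k.lower() for k in headers}:
        hsts_hosts.add(host)
        return "https://" + rest, False    # primed: upgrade, not mixed
    return url, True                       # no HSTS: remains mixed content
```

Note that the sketch only sends the priming request when the host's HSTS state is unknown, which is the case the proposal targets.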



>   * See if the query returns HSTS headers
>>   * If so, the browser loads https://example.com/foo/bar.js
>>   * ... and don't consider it mixed content
>>
>
>
>> 2. Do not treat HSTS-upgraded requests as mixed content
>>
>
> You can do #2 without doing #1.
>

That's true in principle, but in past discussions, indeterminacy had been a
blocker.



> As mentioned above, the primary value is to remove the indeterminacy
>> around HSTS upgrades, so that it's safe to treat HSTS upgrades as not mixed
>> content.
>>
>
> There is no safety issue here.
>
> Consider http://foo.example.org/ which embeds a subresource from
> http://bar.example.org/. Assume bar.example.org is HSTS. Then the browser
> will already request https://bar.example.org/ instead of
> http://bar.example.org.
>
> The fact that the same doesn't happen for https://foo.example.org/ (i.e.
> the mixed content case) is mostly due to the fact that the mixed content
> blocking decision is made before HSTS upgrades are done instead of after.
> In particular, if Firefox had done HSTS rewriting before it did mixed
> content checks then I am pretty sure nobody would have done extra work to
> reverse the order of those checks.
>
> So relative to u-i-r, this reduces uncertainty for site operators, and
>> gets more HTTPS faster (since it's a partial upgrade).  It seems like these
>> two are complementary in much the same way that HTTPS and HSTS are -- you
>> can turn on HTTPS for some parts of your site, then turn on HSTS to lock it
>> in.  Rely on priming to upgrade what can be upgraded of your site on day
>> 0; then, once you're sure that all your sub-resources can upgrade properly,
>> turn on u-i-r.
>>
>
> Neither "priming" nor u-i-r are secure against an active MitM so websites
> cannot rely on them for security. Websites need to use https://
> subresource links to actually be secure.
>
>
>> ## Is this something developers will understand?
>>
>
> Given the amount of confusion in this thread already...
>
>
>> In terms of "expense": It's worth noting that HSTS priming would only be
>> done for potentially mixed-content requests, in cases where the HSTS state
>> of the remote host is unknown.
>>
>
> Actually, a browser is free to always ping the https:// server to see if
> it is HSTS and the browser should probably do that, not only in the case of
> mixed content, but also (especially) in the case where the user navigated
> via the address bar (e.g. typed "example.org into the address bar) and
> other cases.
>

Sure, individual browsers could do things unilaterally.  The idea of doing
a spec would be to have consistent behavior across browsers, particularly
with respect to the mixed content treatment of HSTS.
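For what it's worth, the ordering point Brian makes above (mixed content decided before vs. after the HSTS rewrite) can be sketched like this. Neither function is real browser code; both are illustrative:

```python
# Illustrative only: the two possible orderings of HSTS rewriting
# and mixed-content blocking for an http:// subresource.

def load_block_first(url, page_is_https, hsts_hosts):
    # Mixed-content decision made BEFORE the HSTS upgrade: an http://
    # subresource on an https:// page is blocked even though HSTS
    # would have upgraded it anyway.
    host = url.split("://", 1)[1].split("/", 1)[0]
    if page_is_https and url.startswith("http://"):
        return "blocked"
    if url.startswith("http://") and host in hsts_hosts:
        return "https://" + url[len("http://"):]
    return url

def load_upgrade_first(url, page_is_https, hsts_hosts):
    # HSTS rewrite done BEFORE the mixed-content check: the same
    # subresource is upgraded and never looks like mixed content.
    host = url.split("://", 1)[1].split("/", 1)[0]
    if url.startswith("http://") and host in hsts_hosts:
        url = "https://" + url[len("http://"):]
    if page_is_https and url.startswith("http://"):
        return "blocked"
    return url
```

On an http:// page (no mixed-content check), both orderings upgrade the subresource; on an https:// page they diverge, which is exactly the inconsistency under discussion.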

--Richard


>
> Cheers,
> Brian
> --
> https://briansmith.org/
>
>

Received on Tuesday, 25 August 2015 15:03:10 UTC