
Re: HSTS, mixed content, and priming

From: Eric Mill <eric@konklone.com>
Date: Tue, 25 Aug 2015 11:41:44 -0400
Message-ID: <CANBOYLU=Zp4QBOkg7CgGnQiN21PGdDq4NP+1onB691dtbStnpQ@mail.gmail.com>
To: Brian Smith <brian@briansmith.org>
Cc: Richard Barnes <rbarnes@mozilla.com>, WebAppSec WG <public-webappsec@w3.org>
On Tue, Aug 25, 2015 at 2:24 AM, Brian Smith <brian@briansmith.org> wrote:

> Richard Barnes <rbarnes@mozilla.com> wrote:
>
>> 1. Discover HSTS support with "priming requests":
>>   * When the browser encounters http://example.com/foo/bar.js on an
>> HTTPS page...
>>   * And example.com is not an HSTS host...
>>   * Send a HEAD request to https://example.com/ with no cookies, etc.
>>
>
> Why not send a GET request to https://example.com/foo/bar.js as though it
> was already upgraded via HSTS, and then use the response if the response
> includes an HSTS header? This would save one request/response and would
> avoid the practical problems with using HEAD. (Although servers are
> supposed to return the same headers for HEAD requests that they return for
> GET requests, in practice many do not.)
>

You're more likely to get the HSTS headers set on the root than on the
asset files themselves (which might be served using a totally different
configuration).

But also, it's okay if this doesn't work all the time. This is about
creating a mechanism that lets informed resource owners resolve mixed
content problems for the websites that embed their resources. If a resource
owner intends to do this, they'll make sure the root has the HSTS header.
If they don't, then nothing good or bad happens.
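To make the mechanism concrete, the browser-side check on the priming
response might look like the following. This is only a minimal sketch;
the function name and the header parsing are illustrative, not taken from
any spec draft:

```python
# Sketch of the priming check: given the headers from a HEAD request to
# the site root, decide whether http:// subresource URLs may be upgraded.
# Names here are illustrative, not from any specification.

def hsts_priming_allows_upgrade(root_response_headers):
    """Return True if the HTTPS root response carried a valid HSTS header."""
    # Header names are case-insensitive (RFC 7230).
    for name, value in root_response_headers.items():
        if name.lower() == "strict-transport-security":
            # Any max-age > 0 marks the host as an HSTS host.
            for directive in value.split(";"):
                key, _, val = directive.strip().partition("=")
                val = val.strip('"')
                if key.lower() == "max-age" and val.isdigit() and int(val) > 0:
                    return True
    return False

# The root sets HSTS, so http://example.com/foo/bar.js could be upgraded:
print(hsts_priming_allows_upgrade({"Strict-Transport-Security": "max-age=31536000"}))  # True
# No HSTS header: fall back to normal mixed content handling.
print(hsts_priming_allows_upgrade({"Content-Type": "text/html"}))  # False
```

Note the fallback: when the root doesn't opt in, the result is exactly
today's behavior, which is the "nothing good or bad happens" case.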


>
> Neither "priming" nor u-i-r are secure against an active MitM so websites
> cannot rely on them for security. Websites need to use https://
> subresource links to actually be secure.
>

I'm not sure this is accurate enough to be helpful. In principle, a MitM
could block the priming request/response. In practice, since it's an HTTPS
request, I'm not sure how it would be distinguished from the other
requests/responses the site is making.

A MitM can always block HTTPS requests if it feels like it, but if a MitM
did manage to reliably isolate priming requests and block them without
blocking other requests, then the site reverts to the current behavior -
mixed content warnings/blocking, which protects or informs users of the
dangers.

That's likely to be rare enough not to dissuade content owners from doing
it, and non-catastrophic enough for the user (because mixed content
blocking would kick in) that no one's going to be endangered by it any more
than they are today. The worst that happens is that the HTTPS site doesn't
work right for attacked users (because its resources got blocked), but that
would only happen to targets of sophisticated attacks -- in which case, the
user is already better off for being on a now-HTTPS (if non-working) site.

Before going farther down this line of thought, I'd want to see evidence
that MitM targeting priming requests/responses is even viable.


> ## Is this something developers will understand?
>>
>
> Given the amount of confusion in this thread already...
>

The confusion is centered around stuff no developer will have to understand
in the real world. From a developer perspective: "If you turn on HSTS for
the root of your site, anyone with old http: links to your stuff won't get
mixed content warnings anymore". That's a nice situation that browsers
could create, that doesn't exist today.
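The developer-facing effect described above can be sketched in a few lines.
This is illustrative Python, not browser code; `upgrade_if_hsts` and the
in-memory set of HSTS hosts are hypothetical stand-ins for internal browser
state:

```python
# Sketch: once example.com is a known HSTS host (e.g. via priming), old
# http: subresource links get rewritten to https: before fetching.
from urllib.parse import urlsplit, urlunsplit

def upgrade_if_hsts(url, hsts_hosts):
    """Rewrite an http: URL to https: when its host is a known HSTS host."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        return urlunsplit(("https",) + tuple(parts)[1:])
    return url

print(upgrade_if_hsts("http://example.com/foo/bar.js", {"example.com"}))
# -> https://example.com/foo/bar.js  (no mixed content warning)
print(upgrade_if_hsts("http://other.example/x.js", {"example.com"}))
# -> http://other.example/x.js  (unchanged; normal mixed content handling)
```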

-- Eric



-- 
konklone.com | @konklone <https://twitter.com/konklone>
Received on Tuesday, 25 August 2015 15:42:49 UTC
