
Re: [SRI] Requiring CORS for SRI

From: Brad Hill <hillbrad@gmail.com>
Date: Wed, 06 May 2015 18:42:03 +0000
Message-ID: <CAEeYn8gWwGiXMDAkNiAaFGujYimqADoa1qRYoP_4gM1UmOOwVA@mail.gmail.com>
To: Tanvi Vyas <tanvi@mozilla.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>

Access-Control-Allow-Origin: * is actually rather safe.  It is hard to do
harm by setting it accidentally on resources that shouldn't be public,
because user agents won't actually make the resource available cross-origin
unless the request was also made explicitly in CORS mode without
credentials.

So you can't easily accidentally combine cookies or other common ambient
authority information with ACAO:*. You'll only ever expose the same version
of a resource that anyone on the internet could already see by making a
uniform request, e.g. with a server-side proxy.
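
Roughly, the guarantee above comes from the CORS check itself: ACAO:* never
matches a credentialed request, so it can only ever expose the uniform,
public view of a resource.  A simplified sketch of that check (hypothetical
names, not the Fetch spec's actual algorithm):

```python
def cors_allows_access(acao_header: str, request_origin: str,
                       credentials_sent: bool) -> bool:
    """Simplified model of whether a UA exposes a cross-origin response.

    Illustrative only: the real CORS check in the Fetch spec has more
    cases (e.g. Access-Control-Allow-Credentials), but the key property
    holds: "*" only matches when no credentials were sent.
    """
    if acao_header == "*":
        # "*" is refused for credentialed requests, so it can only
        # expose the response anyone on the internet could already see.
        return not credentials_sent
    return acao_header == request_origin

# A credentialed request against ACAO:* is blocked:
print(cors_allows_access("*", "https://attacker.com", True))   # → False
# A credential-less request against ACAO:* is allowed:
print(cors_allows_access("*", "https://attacker.com", False))  # → True
```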

There may be a few loopholes regarding client certificate resources or
network topology based ambient authority, but they are really edge cases,
and hopefully the risk of combining those scenarios with an admin who
thinks it is legitimate to set ACAO:* for the purposes of SRI is
vanishingly small.

One thing the spec doesn't make clear, and which I'm writing test cases for
at this very moment, is what happens if the fetch mode is CORS but the
resource is fetched with crossorigin='use-credentials'.  The spec doesn't
seem to explicitly forbid this, though it gives no examples of it.  Is it
intentional that this might be possible?  I'm not sure there is any
security impact (the resource's contents could already be read if the CORS
checks passed), but I'm curious.
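
For concreteness, whichever crossorigin mode is used, the integrity check
the UA performs is just a digest comparison over the response body.  A
rough sketch (illustrative helper names and script body, not the spec's
exact algorithm, which also allows multiple space-separated values):

```python
import base64
import hashlib

def sri_digest(body: bytes, alg: str = "sha384") -> str:
    """Compute an SRI integrity value: the algorithm name, a dash, and
    the base64-encoded digest of the response body."""
    h = hashlib.new(alg, body)
    return f"{alg}-{base64.b64encode(h.digest()).decode('ascii')}"

def integrity_matches(body: bytes, integrity_attr: str) -> bool:
    """Compare a response body against a single integrity value."""
    alg = integrity_attr.split("-", 1)[0]
    return sri_digest(body, alg) == integrity_attr

# Illustrative: an attacker probing identities (per the thread below)
# would precompute the digest of each candidate personalized script and
# test it via the integrity attribute until one loads.
body = b"var username = 'alice';"
print(sri_digest(body))
```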

On Wed, May 6, 2015 at 11:19 AM Tanvi Vyas <tanvi@mozilla.com> wrote:

>  As discussed in the conference call on Monday, in order for a subresource
> to go through an integrity check, the server that hosts the subresource
> must set a CORS header to allow access to the requesting origin.  If I
> understand correctly, the reason for this is that some resources contain
> confidential information.  We don't want the requesting origin to use brute
> force to extract the confidential information by trying to embed the
> subresource with multiple integrity checks until one succeeds.
>
> As an example, assume https://attacker.com embeds a script from
> https://social-network.com/script.js.  If the user is logged into
> social-network.com in their browser, script.js will include their
> username.  https://attacker.com could try to embed the script with
> numerous different hashes until it finds one that succeeds.  Now
> https://attacker.com knows exactly who is visiting their site.
>
> Requiring CORS is an unfortunate constraint because web developers cannot
> use SRI on all the third-party JavaScript embedded on their page.  They
> have to reach out to each third party and ask that they set the CORS
> header.  The third parties may then oblige and set the header, but if they
> don't fully understand what they are doing, they may end up setting
> Access-Control-Allow-Origin: * on resources that should not be public.
>
> What if instead we limit the number of failed integrity checks before we
> fail closed?  For example, if three integrity checks fail on attacker.com,
> the user agent will not compute any more integrity checks and will not load
> any more scripts on the page that have an integrity attribute.  We'd have
> to find a way to prevent attacker.com from refreshing the page or otherwise
> getting around this limit.
>
> The downside of this, besides the added complexity in user-agent code, is
> that a very targeted attack is still possible.  If attacker.com knows
> that the victim is 1 of 3 people, it can determine who it is.  If
> attacker.com has a good idea of who the victim is, it can confirm with
> one integrity check.
>
> Thoughts?
>
> ~Tanvi
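
The fail-closed limit proposed above could be sketched as a per-page
failure counter (class name and threshold are hypothetical, not anything
in the spec):

```python
# Rough sketch of the proposal: after a fixed number of failed integrity
# checks for a page, refuse to load any further subresources that carry
# an integrity attribute.
class IntegrityFailureLimiter:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def allow_load(self) -> bool:
        # Once the limit is hit, fail closed for all integrity-checked loads.
        return self.failures < self.max_failures

    def record_failure(self) -> None:
        self.failures += 1

limiter = IntegrityFailureLimiter()
for _ in range(3):
    if limiter.allow_load():
        limiter.record_failure()  # pretend each check failed
print(limiter.allow_load())  # → False once three checks have failed
```

As the proposal itself notes, per-page state like this could be reset by
refreshing, so a real implementation would need longer-lived state, e.g.
keyed by embedding origin.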
Received on Wednesday, 6 May 2015 18:42:31 UTC
