
Re: [SRI] Requiring CORS for SRI

From: Devdatta Akhawe <dev.akhawe@gmail.com>
Date: Wed, 6 May 2015 17:01:39 -0700
Message-ID: <CAPfop_0RFSxLwMRrbUQ_XqDebv+gpZ-sVAC8L09-cnWxV2PaHg@mail.gmail.com>
To: Brad Hill <hillbrad@gmail.com>
Cc: Tanvi Vyas <tanvi@mozilla.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>
>
> One thing the spec doesn't make clear, which I'm actually working on test
> cases for at this very moment, is what if the fetch mode was CORS, but was
> fetched with crossorigin='use-credentials'?  This doesn't seem to be
> explicitly forbidden by the spec, though there are no examples of this.  Is
> it intentional that this might be possible?  I'm not sure there is any
> security impact (the resource's contents could already be viewed if the
> checks passed) but am just curious.
>
>
Intentional on my part at least. Both this and ACAO * basically rely on the
same security guarantee: "this is not insecure, because this page could have
just XHR'ed the resource and read it anyhow". The reason none of the examples
include this is that I don't think it will be a common use case.

Of course, there is still the concern that Tanvi brought up, namely that
people will start setting this header without understanding it. Thankfully,
doing ACAO * is actually pretty safe, as you pointed out. I am personally not
a fan of spec'ing stuff like rate-limiting checks: this is a decision that I
think UAs can make based on what they are seeing in the wild.

cheers
Dev



> On Wed, May 6, 2015 at 11:19 AM Tanvi Vyas <tanvi@mozilla.com> wrote:
>
>>  As discussed in the conference call on Monday, in order for a
>> subresource to go through an integrity check, the server that hosts the
>> subresource must set a CORS header to allow access to the requesting
>> origin.  If I understand correctly, the reason for this is that some
>> resources contain confidential information.  We don't want the requesting
>> origin to use brute force to extract the confidential information by trying
>> to embed the subresource with multiple integrity checks until they find one
>> that succeeds.
>>
>> As an example, assume https://attacker.com is embedding a script from
>> https://social-network.com/script.js.  If the user is logged into
>> social-network.com on their browser, script.js will include their
>> username.  https://attacker.com could try to embed the script with
>> numerous different hashes until it finds one that succeeds.  Now
>> https://attacker.com knows exactly who is visiting their site.
>>
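[The brute-force oracle described above can be made concrete. This is a minimal Python sketch, not part of Tanvi's mail; the usernames and script body are hypothetical. Per the SRI draft, an integrity value is the base64 encoding of a cryptographic digest (e.g. SHA-384) of the resource body, so an attacker who can observe which integrity value succeeds learns which candidate body was served:]

```python
import base64
import hashlib

def sri_digest(body: bytes, alg: str = "sha384") -> str:
    """Compute an SRI-style metadata string ("sha384-<base64 digest>")."""
    h = hashlib.new(alg, body).digest()
    return f"{alg}-{base64.b64encode(h).decode()}"

# Hypothetical personalized script body, as in the example above: the
# attacker embeds the script repeatedly with a digest for each guessed
# username and sees which integrity check succeeds.
candidates = ["alice", "bob", "carol"]
actual_body = b'var username = "bob";'

guesses = {name: sri_digest(f'var username = "{name}";'.encode())
           for name in candidates}
matches = [name for name, digest in guesses.items()
           if digest == sri_digest(actual_body)]
print(matches)  # the one successful check identifies the logged-in user
```

[Each failed check costs the attacker nothing, which is why the spec gates integrity checking on CORS rather than trying to make the oracle expensive.]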
>> Requiring CORS is an unfortunate constraint because web developers cannot
>> use SRI on all the third-party javascript embedded on their page.  They
>> have to reach out to each third-party and ask that they set the CORS
>> header.  The third-parties may then oblige and set the header, but if they
>> don't fully understand what they are doing, they may end up setting an
>> Access-Control-Allow-Origin: * on resources that should not be public.
>>
>> What if instead we limit the number of failed integrity checks before we
>> fail closed?  For example, if three integrity checks fail on attacker.com,
>> the user agent will not compute any more integrity checks and will not load
>> any more scripts on the page that have an integrity attribute?  We'd have
>> to find a way to stop attacker.com from refreshing the page or otherwise
>> circumventing this limit.
>>
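[A rough Python sketch of the fail-closed policy proposed above; this is an illustration of the idea only, not user-agent code, and the per-origin failure budget of three is Tanvi's example figure. As noted, it does nothing about an attacker who resets state by reloading the page:]

```python
from collections import defaultdict

MAX_FAILURES = 3  # hypothetical budget from the proposal above

class IntegrityRateLimiter:
    """Track failed integrity checks per embedding origin; fail closed
    once an origin exhausts its failure budget."""

    def __init__(self, limit: int = MAX_FAILURES):
        self.limit = limit
        self.failures = defaultdict(int)  # origin -> failed checks seen

    def allow_check(self, origin: str) -> bool:
        # False once the budget is spent: no more integrity checks, and
        # no more loads of integrity-carrying scripts from this origin.
        return self.failures[origin] < self.limit

    def record_result(self, origin: str, passed: bool) -> None:
        if not passed:
            self.failures[origin] += 1

limiter = IntegrityRateLimiter()
for _ in range(3):
    limiter.record_result("https://attacker.com", passed=False)

print(limiter.allow_check("https://attacker.com"))  # False: fail closed
print(limiter.allow_check("https://example.com"))   # True: unaffected origin
```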
>> The downside of this, besides the added complexity in user-agent code, is
>> that a very targeted attack is still possible.  If attacker.com knows
>> that the victim is 1 of 3 people, they can determine who it is.  If
>> attacker.com has a good idea of who the victim is, they can confirm with
>> one integrity check.
>>
>> Thoughts?
>>
>>
>> ~Tanvi
>>
>
Received on Thursday, 7 May 2015 00:02:28 UTC
