
[SRI] Requiring CORS for SRI

From: Tanvi Vyas <tanvi@mozilla.com>
Date: Wed, 06 May 2015 11:17:41 -0700
Message-ID: <554A5AC5.7090601@mozilla.com>
To: "public-webappsec@w3.org" <public-webappsec@w3.org>

As discussed in the conference call on Monday, in order for a 
subresource to go through an integrity check, the server that hosts the 
subresource must set a CORS header to allow access to the requesting 
origin.  If I understand correctly, the reason for this is that some 
resources contain confidential information.  We don't want the 
requesting origin to use brute force to extract the confidential 
information by trying to embed the subresource with multiple integrity 
checks until they find one that succeeds.
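For reference, the markup shape this implies looks roughly like the following; the hash value is a made-up placeholder, not the digest of any real script:

```html
<!-- The crossorigin attribute puts the fetch into CORS mode; without it,
     the integrity attribute cannot be enforced under the current draft.
     The sha384 value below is a placeholder, not a real digest. -->
<script src="https://social-network.com/script.js"
        integrity="sha384-BASE64-ENCODED-DIGEST-GOES-HERE"
        crossorigin="anonymous"></script>
```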

As an example, assume https://attacker.com is embedding a script from 
https://social-network.com/script.js.  If the user is logged into 
social-network.com on their browser, script.js will include their 
username.  https://attacker.com could try to embed the script with 
numerous different hashes until it finds one that succeeds. Now 
https://attacker.com knows exactly who is visiting their site.
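To make the brute force concrete, here is a rough Python sketch of what the attacker computes offline (the script bodies and usernames are invented for illustration; an SRI integrity value is just the base64-encoded digest prefixed with the hash algorithm's name):

```python
import base64
import hashlib

def sri_sha384(body: bytes) -> str:
    """Compute an SRI integrity value ("sha384-" + base64 digest) for a resource body."""
    digest = hashlib.sha384(body).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical: the attacker guesses which username script.js embeds
# for the logged-in user, and precomputes one integrity value per guess.
candidates = ["alice", "bob", "carol"]
guesses = {
    name: sri_sha384(f'var user = "{name}";'.encode())
    for name in candidates
}

# attacker.com then embeds the script once per guess; the one whose
# integrity check passes reveals who is logged in.
for name, integrity in guesses.items():
    print(name, integrity)
```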

Requiring CORS is an unfortunate constraint because web developers 
cannot use SRI on all the third-party JavaScript embedded in their 
pages.  They have to reach out to each third party and ask that they set 
the CORS header.  The third parties may then oblige and set the header, 
but if they don't fully understand what they are doing, they may end up 
setting Access-Control-Allow-Origin: * on resources that should not 
be public.

What if instead we limit the number of failed integrity checks before we 
fail closed?  For example, if three integrity checks fail on 
attacker.com, the user agent will not compute any more integrity checks 
and will not load any more scripts on the page that have an integrity 
attribute.  We'd have to find a way to prevent attacker.com from 
refreshing the page, or using other tricks, to get around this limit.
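A back-of-the-envelope sketch of what such a limit might look like inside a user agent (all names and the storage model are hypothetical; note that a plain in-memory counter is exactly what a page refresh would reset, which is the weakness just mentioned):

```python
# Hypothetical per-origin failure budget; nothing here is from the SRI
# spec. A real user agent would need persistent, refresh-proof storage
# for the counter to make this meaningful.
MAX_FAILED_CHECKS = 3

failed_checks: dict[str, int] = {}  # top-level origin -> failed integrity checks

def may_attempt_integrity_check(origin: str) -> bool:
    """False once the origin has exhausted its failure budget."""
    return failed_checks.get(origin, 0) < MAX_FAILED_CHECKS

def record_integrity_failure(origin: str) -> None:
    failed_checks[origin] = failed_checks.get(origin, 0) + 1

# attacker.com burns through its three guesses...
for _ in range(3):
    if may_attempt_integrity_check("https://attacker.com"):
        record_integrity_failure("https://attacker.com")

# ...after which the user agent fails closed for that origin.
print(may_attempt_integrity_check("https://attacker.com"))  # False
```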

The downside of this, besides the added complexity in user-agent code, 
is that a very targeted attack is still possible.  If attacker.com knows 
that the victim is 1 of 3 people, they can determine who it is.  If 
attacker.com has a good idea of who the victim is, they can confirm with 
one integrity check.

Thoughts?

~Tanvi
Received on Wednesday, 6 May 2015 18:18:08 UTC