Re: [SRI] Requiring CORS for SRI

On Fri, May 8, 2015 at 2:06 PM, Joel Weinberger <> wrote:

> I'm with Anne on this. All of the issues that we're discussing here are
> the entire reason CORS headers exist, and I'm afraid that any
> implementation we come up with that has the full security properties we
> want will be a reimplementation of CORS.
> I think we should continue this conversation, but I believe that any
> solution other than requiring CORS will require a broader conversation than
> just this list as it has the potential to break SOP.

As I understand it, there are really only two major ones: stopping access to
intranet resources that are inadequately access-controlled, and stopping
access to credentialed responses (which is pretty dangerous and never really
necessary). I don't believe any of the use cases for CORS apply to SRI, for
the reasons I described.

> On Thu, May 7, 2015 at 6:06 PM Austin William Wright <> wrote:
>> I don't want to give the impression that CORS is preventing breakage or
>> securing anyone. Worms and other malicious programs already have access to
>> a local intranet.
>  I don't think I understand your point here. I think if you're making an
> equivalence between visiting a random website and installing malware, we're
> on a very different page.

My point is that CORS is not an excuse to write intranet applications
without access controls. CORS may keep Web browsers from being the vector
that compromises the intranet application, but those systems remain
vulnerable nonetheless.

Likewise, I have a PR against SRI essentially saying that it's not safe to
use SRI as an excuse to relax existing security precautions: SRI supplements
existing security; it doesn't replace it.

I'm coming from the perspective of the HTTP server in a conversation that
seems to be aimed at user agent design. If *I* were to disable access
controls on my server, it would be for some (inexplicable) purpose. Other
servers might be doing it by mistake, but our efforts to protect them
shouldn't be limited to damage control; we should be encouraging them to
secure their systems in a responsible manner.

> One of the great benefits of the Same Origin Policy is exactly the point
> that a user agent should be able to load a random page without worrying
> about cross-origin content leaking (modulo several obvious issues, a la
> <img>, stylesheets, etc.). I think Anne's post, referenced earlier,
> covers this nicely.

Could you elaborate a little on exactly what you mean by "cross-origin
content leaking"? (I tried to address most of these concerns.)

I'm familiar with Anne's post, and I get the purpose of CORS, but it's
missing responses to a few key objections.

Namely, Web browsers can already load content anonymously via a public
proxy (or a so-called "CORS proxy", for which a Google search links me to
numerous implementations). So as I reason, at least for the public
Internet, there's no additional leakage for most applications if the script
is loaded anonymously (no credentials), which is what I propose. Is this
line of reasoning incorrect?
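To make the proxy point concrete, here is a minimal sketch of the only two
header rewrites a so-called "CORS proxy" performs: strip ambient credentials
on the way out, and open the response to every origin on the way back. (The
function name and header selection are hypothetical, not any particular
proxy's code.)

```python
def proxy_headers(client_request_headers, upstream_response_headers):
    """Forward a request anonymously and open the response to any origin.

    A sketch of what a hypothetical CORS proxy does; header names are
    illustrative, not exhaustive.
    """
    # Strip ambient credentials so the upstream sees an anonymous request.
    forwarded = {k: v for k, v in client_request_headers.items()
                 if k.lower() not in ("cookie", "authorization")}
    # Declare the (already public) response readable by every origin.
    opened = dict(upstream_response_headers)
    opened["Access-Control-Allow-Origin"] = "*"
    return forwarded, opened

fwd, opened = proxy_headers(
    {"Cookie": "session=abc", "Accept": "*/*"},
    {"Content-Type": "application/javascript"},
)
print("Cookie" in fwd)                        # False: credentials stripped
print(opened["Access-Control-Allow-Origin"])  # *
```

Since any page can route a request through such a proxy, an anonymous
(credential-free) fetch of a public resource leaks nothing the proxy path
doesn't already leak.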

It gets a little more complex for intranet applications, but I'd be happy
to entertain objections to my earlier line of reasoning.



>> If you don't have access control within your intranet, you're *already
>> broken*.
>> And not just that, I would argue we're actually doing people a disservice
>> by not exposing their *existing* security holes that will, inevitably, fail
>> (if not to web browsers). For instance, I have a feeling we're going to see
>> similar problems when people upgrade their NAT-ed networks to IPv6 without
>> realizing that _all_ their devices, NAS servers, etc, now have a public IP
>> address.
>> ~~~
>> Here are my thoughts on CORS from an HTTP server design perspective. Much
>> of this should be obvious from the conversation so far, but allow me to
>> elaborate for the public, if not the WG:
>> If CORS simply means "I'm public Internet accessible" (which is all that
>> it can mean, given a web browser's ability to hit a public proxy), I don't
>> see too much issue -- everyone on the public Internet *should* have
>> `Access-Control-Allow-Origin: *` set (i.e. if you're using CORS for access
>> control, you're fooling yourself).
>> At the same time, I think that allowing a remote resource to request
>> third-party resources to embed scripts, themselves requested with the
>> user's credentials, violates access-control assumptions. (Let's assume
>> First party: me; Second party: cat photos website; Third party: social
>> media share button.)
>> That is, the second party is acting on my behalf, in my name, instead of
>> on behalf of itself. This problem has been mitigated by heavily
>> sandboxing the third-party content from the second party (typically
>> responses are not accessible to the second party, so even when second
>> parties make requests for sensitive information, they can't read it), but
>> this is becoming increasingly hard to box in as new features are exposed
>> (the response necessarily has side effects on the page, so image
>> dimensions are accessible, etc.).
>> Recall the concern here is that this second party, Cute Kitten Photos,
>> might be able to determine if I'm logged into Social Network Site by seeing
>> if a request to a script fails or not. (This is a simplification of the
>> problem for the sake of argument, but this is still a concern; I don't want
>> Cute Kitten Photos to know I also have an account with rival site Cute
>> Puppy Photos -- or worse.)
>> By default, user agents currently assume that we aren't privileged to see
>> the contents of a cross-origin resource unless it specifically says
>> that's acceptable.
>> With `integrity`, literally the only thing the third party script would
>> be able to do is return an error instead of the expected contents. Any
>> variation in contents will be equivalent to a network error. If the third
>> party server is HTTP compliant, all it can use the credentials for is
>> access control, logging, or rate limiting.
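The check the user agent performs can be sketched as follows (simplified to
a single sha384 token; a real implementation also handles option lists and
multiple hash algorithms):

```python
import base64
import hashlib

def matches_integrity(body: bytes, integrity: str) -> bool:
    """Return True iff a response body matches a sha384 `integrity` value.

    A minimal sketch of the SRI comparison, not a complete implementation.
    """
    algo, _, expected = integrity.partition("-")
    if algo != "sha384":
        raise ValueError("sketch only handles sha384")
    actual = base64.b64encode(hashlib.sha384(body).digest()).decode()
    return actual == expected

script = b"console.log('hello');"
tag_value = "sha384-" + base64.b64encode(hashlib.sha384(script).digest()).decode()

print(matches_integrity(script, tag_value))         # True: the script runs
print(matches_integrity(script + b" ", tag_value))  # False: treated as a network error
```

Any variation in the body, down to a single byte, fails the comparison, so
from the embedding page's perspective the only observable outcomes are
"expected content" and "error".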
>> Logging and rate limiting are not very compelling use cases given the
>> highly cacheable nature of these resources: a cache server won't even hit
>> my origin for me to log anything.
>> Access control to sensitive resources (assuming >128 bits unknown to an
>> attacker) is defeated because we already have the hash. One of the major
>> points of cryptography is taking big secrets and making them little
>> secrets: taking a many-TB drive full of secrets, turning it into a
>> pseudorandom string of bits, and carrying the 128-bit secret key in your
>> pocket. For hashing, however, this tends to have the side effect of
>> making secrets more accessible. If the secret data is available on a DHT,
>> then people no longer have to copy the full TB, just a 256-bit reference
>> to the TB of data, and your secret is out (this is exactly what
>> BitTorrent is). Therefore, authorization isn't a compelling use case
>> either.
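The "big secrets become little secrets" point can be seen directly (a
sketch; the sizes here are arbitrary stand-ins):

```python
import hashlib
import os

# A hash compresses an arbitrarily large secret into a short, fixed-size
# reference. Anyone who later obtains the data can verify it against that
# reference -- the content-addressing model that DHTs and BitTorrent rely on.
secret_blob = os.urandom(1_000_000)  # stand-in for "a drive full of secrets"
reference = hashlib.sha256(secret_blob).digest()

print(len(secret_blob))  # 1000000 bytes of secret material
print(len(reference))    # 32 bytes: the only thing you need to carry or share
```

Publishing the 32-byte reference (as an `integrity` value does) is exactly
what makes the data retrievable and verifiable without any further secret.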
>> And access control to sensitive resources with <128 bits unknown is
>> susceptible to a brute force attack as described by Tanvi.
>> With respect to CORS, if you have the hash, then the second party
>> presumably already knows the contents - meaning the major point of CORS has
>> been defeated.
>> Given these issues, I don't see any good reason to send credentials with
>> an `integrity` attribute. `integrity` should imply no credentials with the
>> request, and shouldn't require CORS.
>> Austin Wright.
>> On Thu, May 7, 2015 at 3:16 AM, Anne van Kesteren <>
>> wrote:
>>> On Thu, May 7, 2015 at 12:14 PM, Wendy Seltzer <> wrote:
>>> > Sure firewalls are the problem. So say that those behind firewalls
>>> > should fix their resource control in a way that doesn't require those
>>> in
>>> > the open to add headers to make their resources truly open.
>>> Yes, let's break all the things!
>>> --

Received on Friday, 8 May 2015 21:59:42 UTC