W3C home > Mailing lists > Public > public-webappsec@w3.org > May 2015

Re: [SRI] Requiring CORS for SRI

From: Austin William Wright <aaa@bzfx.net>
Date: Thu, 7 May 2015 18:03:49 -0700
Message-ID: <CANkuk-WUAmascJM649JzBr+yzfcGr3T0fLaysJprv6u44Uh4=Q@mail.gmail.com>
To: Anne van Kesteren <annevk@annevk.nl>
Cc: Wendy Seltzer <wseltzer@w3.org>, Frederik Braun <fbraun@mozilla.com>, WebAppSec WG <public-webappsec@w3.org>

I don't want to give the impression that CORS is preventing breakage or
securing anyone. Worms and other malicious programs already have access to
a local intranet.

If you don't have access control within your intranet, you're *already
broken*.

More than that, I would argue we're actually doing people a disservice by
not exposing their *existing* security holes, which will inevitably be
exploited (if not via web browsers, then by something else). For instance,
I suspect we're going to see similar problems when people upgrade their
NAT-ed networks to IPv6 without realizing that _all_ their devices, NAS
servers, etc., now have a public IP address.

~~~

Here are my thoughts on CORS from an HTTP server design perspective. Much
of this should be obvious from the conversation so far, but allow me to
elaborate for the public, if not for the WG:

If CORS simply means "I'm public Internet accessible" (which is all that it
can mean, given a web browser's ability to hit a public proxy), I don't see
much of an issue -- everyone on the public Internet *should* have
`Access-Control-Allow-Origin: *` set (i.e. if you're using CORS for access
control, you're fooling yourself).
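Concretely, "truly public" is a one-line declaration on the response; a
sketch of the relevant headers (only the `Access-Control-Allow-Origin` line
is load-bearing here, the rest is illustrative):

```http
HTTP/1.1 200 OK
Content-Type: application/javascript
Access-Control-Allow-Origin: *
```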

At the same time, I think that allowing a remote resource to embed
third-party scripts, themselves requested with the user's credentials,
violates access control assumptions. (Let's assume First party: me; Second
party: cat photos website; Third party: social media share button.)

That is, the second party is acting in my name, instead of on behalf of
itself. This problem has been mitigated by heavily sandboxing the third
party content from the second party (typically responses are not accessible
to the second party, so even though second parties make requests for
sensitive information, they can't read it), but this is becoming
increasingly hard to box in as new features are exposed (the response
necessarily has side effects on the page, so image dimensions are
accessible, and so on).

Recall the concern here is that this second party, Cute Kitten Photos,
might be able to determine if I'm logged into Social Network Site by seeing
if a request to a script fails or not. (This is a simplification of the
problem for the sake of argument, but this is still a concern; I don't want
Cute Kitten Photos to know I also have an account with rival site Cute
Puppy Photos -- or worse.)
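As a sketch of the kind of leak in question (hostname and path
hypothetical): the second party embeds a script that the social site serves
only to logged-in users, and simply watches whether it loads:

```html
<!-- Hypothetical probe on the Cute Kitten Photos page: social.example
     serves this script only when a logged-in session cookie is sent, so
     load vs. error reveals the user's login state. -->
<script src="https://social.example/script_if_user_is_logged_in.js"
        onload="console.log('user is logged in')"
        onerror="console.log('user is not logged in')"></script>
```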

By default, user agents currently assume that we aren't privileged to see
the contents of <http://kittens.example.com/script_if_user_is_logged_in.js>
unless it specifically says that's acceptable.

With `integrity`, literally the only thing the third party server would be
able to do is return an error instead of the expected contents. Any
variation in contents will be equivalent to a network error. If the third
party server is HTTP compliant, all it can use the credentials for is
access control, logging, or rate limiting.
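For reference, an integrity value is nothing more than a base64-encoded
digest of the exact response bytes; a minimal sketch in Python (function
name and script contents mine):

```python
import base64
import hashlib

def sri_digest(content: bytes) -> str:
    # An SRI integrity value: the algorithm prefix plus the
    # base64-encoded digest of the exact response body.
    return "sha384-" + base64.b64encode(
        hashlib.sha384(content).digest()).decode()

expected = sri_digest(b"window.share = function () { /* ... */ };")
tampered = sri_digest(b"window.share = function () { /* evil */ };")

# Any variation in the bytes yields a different digest, which the user
# agent must treat the same as a network error.
assert expected != tampered
```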

Logging and rate limiting are not very compelling use cases given the
highly cacheable nature of these resources -- a cache server won't even hit
my origin for me to log anything.

Access control to sensitive resources (assuming >128 bits unknown by an
attacker) is defeated because we already have the hash: one of the major
points of cryptography is taking big secrets and making them little
secrets -- taking a many-TB drive full of secrets, turning it into a
pseudorandom string of bits, and carrying the 128-bit secret key in your
pocket. For hashing, however, this tends to have the side-effect of making
secrets *more* accessible. If the secret data is available on a DHT, then
people no longer have to copy the full TB, just a 256-bit reference to the
TB of data, and your secret is out (this is exactly how BitTorrent works).
Therefore, authorization isn't a compelling use-case either.
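The "hash as reference" point can be sketched as a toy content-addressed
store (the same idea BitTorrent builds on; the store and helper are mine,
for illustration):

```python
import hashlib

# Toy content-addressed store: the key *is* the SHA-256 of the content,
# so whoever holds the 256-bit key holds a durable reference to the data.
store: dict[str, bytes] = {}

def put(content: bytes) -> str:
    key = hashlib.sha256(content).hexdigest()
    store[key] = content
    return key

secret = b"many terabytes of secrets, in miniature"
ref = put(secret)

# The 64-hex-character reference is enough to fetch (and verify) the
# whole secret -- publishing the hash effectively publishes the data.
assert store[ref] == secret
```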

And access control to sensitive resources with <128 bits unknown is
susceptible to a brute force attack as described by Tanvi.

With respect to CORS, if you have the hash, then the second party
presumably already knows the contents -- meaning the major point of CORS
has been defeated.

Given these issues, I don't see any good reason to send credentials with an
`integrity` attribute. `integrity` should imply no credentials with the
request, and shouldn't require CORS.

Austin Wright.

On Thu, May 7, 2015 at 3:16 AM, Anne van Kesteren <annevk@annevk.nl> wrote:

> On Thu, May 7, 2015 at 12:14 PM, Wendy Seltzer <wseltzer@w3.org> wrote:
> > Sure firewalls are the problem. So say that those behind firewalls
> > should fix their resource control in a way that doesn't require those in
> > the open to add headers to make their resources truly open.
>
> Yes, let's break all the things!
>
>
> --
> https://annevankesteren.nl/
>
>
Received on Friday, 8 May 2015 01:04:17 UTC
