Re: Why the restriction on unauthenticated GET in CORS?

From: Eric Rescorla <ekr@rtfm.com>
Date: Thu, 19 Jul 2012 07:06:33 -0700
Message-ID: <CABcZeBPReUms59DKsmE_Ee_Ts8yqYQAzC5LgRsmpJcxuYjBdaQ@mail.gmail.com>
To: Anne van Kesteren <annevk@annevk.nl>
Cc: Henry Story <henry.story@bblfish.net>, Cameron Jones <cmhjones@gmail.com>, Ian Hickson <ian@hixie.ch>, public-webapps <public-webapps@w3.org>, public-webappsec@w3.org
On Thu, Jul 19, 2012 at 6:54 AM, Anne van Kesteren <annevk@annevk.nl> wrote:
> On Thu, Jul 19, 2012 at 2:43 PM, Henry Story <henry.story@bblfish.net> wrote:
>> If a mechanism can be found to apply restrictions for private IP ranges then that
>> should be used in preference to forcing the rest of the web to implement CORS
>> restrictions on public data. And indeed the firewall servers use private ip ranges,
>> which do in fact make a good distinguisher for public and non public space.
> It's not just private servers (there's no guarantee those only use
> private IP ranges either). It's also IP-based authentication to
> private resources as e.g. W3C has used for some time.

Moreover, some companies have public IP ranges that are
firewall-blocked. It is not, in general, possible for the browser
to distinguish publicly accessible IP addresses from non-publicly
accessible ones.
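To make the failure mode concrete, here is a minimal sketch (in Python, with an illustrative function name of my choosing) of the "private IP range" heuristic proposed above. It classifies RFC 1918 addresses as internal, but a firewalled *public* corporate range is indistinguishable from the open web, which is exactly the problem:

```python
import ipaddress

def looks_internal(ip: str) -> bool:
    """Illustrative heuristic: True if ip is in a private/reserved range.

    This is the distinguisher proposed in the quoted message, not
    anything a browser actually implements.
    """
    return ipaddress.ip_address(ip).is_private

# The heuristic fires for classic intranet addresses...
print(looks_internal("10.0.0.1"))     # True
print(looks_internal("192.168.1.5"))  # True
# ...but a public address that happens to sit behind a corporate
# firewall looks exactly like the open Internet to the browser.
print(looks_internal("8.8.8.8"))      # False
```

Note that the heuristic has no way to know whether a "public" address is reachable by the user but not by the attacker, which is the IP-based-authentication case mentioned above.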

More generally, CORS is designed to replicate the restrictions that the
pre-CORS same-origin policy already imposes on browsers. Currently, browsers
prevent JS from reading the result of this kind of cross-origin GET, so CORS
retains that restriction. This is consistent with the general policy of not
adding new features to browsers that would break people's existing security
models, no matter how broken one might regard those models as being.
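The retained restriction can be sketched as a read-permission check: the cross-origin request may still be *sent*, but the response body is only exposed to JS if the server opts in via Access-Control-Allow-Origin. This is a deliberately simplified sketch (it ignores credentialed requests and the other Access-Control-* headers); the header names are real, the function is illustrative:

```python
from typing import Optional

def response_readable(request_origin: str,
                      allow_origin_header: Optional[str]) -> bool:
    """May cross-origin JS read this response body? (Simplified sketch.)"""
    if allow_origin_header is None:
        # No CORS opt-in: the pre-CORS restriction stands and JS
        # never sees the response body.
        return False
    if allow_origin_header == "*":
        # The resource has explicitly declared itself public.
        return True
    return allow_origin_header == request_origin

print(response_readable("https://attacker.example", None))  # False
print(response_readable("https://app.example", "*"))        # True
```

The default (no header, no read) is what preserves existing security models: servers that never heard of CORS get exactly the old behavior.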

I believe the WG already has consensus on this point.

Received on Thursday, 19 July 2012 14:07:49 UTC