
Re: Why the restriction on unauthenticated GET in CORS?

From: Henry Story <henry.story@bblfish.net>
Date: Sat, 21 Jul 2012 06:41:27 +0200
Cc: Adam Barth <w3c@adambarth.com>, Cameron Jones <cmhjones@gmail.com>, Anne van Kesteren <annevk@annevk.nl>, Ian Hickson <ian@hixie.ch>, public-webapps <public-webapps@w3.org>, public-webappsec@w3.org
Message-Id: <8A6D6FDA-688C-46ED-ADBE-731AC335D038@bblfish.net>
To: Jonas Sicking <jonas@sicking.cc>

On 21 Jul 2012, at 05:39, Jonas Sicking wrote:

> On Fri, Jul 20, 2012 at 11:58 AM, Henry Story <henry.story@bblfish.net> wrote:
>> 
>> On 20 Jul 2012, at 18:59, Adam Barth wrote:
>> 
>>> On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones <cmhjones@gmail.com> wrote:
>>>> On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth <w3c@adambarth.com> wrote:
>>>>> On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones <cmhjones@gmail.com> wrote:
>>>>>> So, this is a non-starter. Thanks for all the fish.
>>>>> 
>>>>> That's why we have the current design.
>>>> 
>>>> Yes, I note the use of the word "current" and not "final".
>>>> 
>>>> Ethics are a starting point for designing technology responsibly. If
>>>> the goals can not be met for valid technological reasons then that is
>>>> an unfortunate outcome and one that should be avoided at all costs.
>>>> 
>>>> The costs of supporting legacy systems have real financial implications
>>>> notwithstanding an ethical ideology. If those costs become too great,
>>>> legacy systems lose their impenetrable pedestal.
>>>> 
>>>> The architectural impact of supporting non-maintained legacy
>>>> systems is that web proxy intermediates are something we will all have
>>>> to live with.
>>> 
>>> Welcome to the web.  We support legacy systems.  If you don't want to
>>> support legacy systems, you might not enjoy working on improving the
>>> web platform.
>> 
>> Of course, but you seem to want to support hidden legacy systems, that is, systems none of us know about or can see. It is still a worthwhile inquiry to find out how many systems there are for which this is a problem, if any. That is:
>> 
>>  a) systems that use non-standard internal IP addresses
>>  b) systems that use IP-address provenance for access control
>>  c) potentially other issues that we have not covered
> 
> One important group to consider is home routers. Routers are often
> secured only by checking that requests are coming through an internal
> connection. I.e. either through wifi or through the ethernet port. If
> web pages can place arbitrary requests to such routers it would mean
> that they can redirect traffic arbitrarily and transparently.

The proposal is that requests to machines on private IP ranges - i.e. machines
on 192.168.x.x and 10.x.x.x addresses in IPv4, or on IPv6 addresses from 
the unique local address space [1] - would still require the full CORS 
handshake as currently described. The proposal only affects GET requests 
requiring no authentication, made to machines with public IP addresses: the 
responses to these requests would be allowed through to a CORS JavaScript 
request without requiring the server to add the Access-Control-Allow-Origin 
header to its response. Furthermore, it was added that the browser should 
still send the Origin: header. 
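To illustrate the distinction the proposal draws (this is my own sketch, not part of the proposal text), the address test a browser would have to perform can be written in a few lines of Python with the standard ipaddress module. The ranges listed are the ones named above; the function name is hypothetical:

```python
import ipaddress

# Ranges that would still require the full CORS handshake under the
# proposal: RFC 1918 IPv4 ranges named above, plus IPv6 unique local
# addresses (fc00::/7).
PRIVATE_NETWORKS = [
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("fc00::/7"),
]

def requires_full_cors_handshake(ip):
    """True if an unauthenticated GET to this address would still need
    the full CORS handshake; False if the response could pass through."""
    addr = ipaddress.ip_address(ip)
    return any(
        addr.version == net.version and addr in net
        for net in PRIVATE_NETWORKS
    )

print(requires_full_cors_handshake("192.168.1.1"))   # True: private range
print(requires_full_cors_handshake("93.184.216.34")) # False: public address
```

Note that a real implementation would have to apply this test to the resolved address of the target host, not just to literal IPs in URLs, since a public hostname can resolve to a private address.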

The argument is that machines on such public IP addresses that would 
respond to such GET requests would be accessible via the public internet, 
and so would in any case be accessible via a CORS proxy.

This proposal would clearly not affect home routers as currently deployed. The 
dangerous requests to those routers are the ones made via the 192.168.x.x 
IP address range (or the 10.x.x.x one). If a router were insecure when 
reached via its public hostname and IP address, then it would simply be 
an insecure router.

I agree that some risk is being taken in making this decision; the above 
does not quite follow analytically from first principles. It is possible that 
internal networks use public IP addresses for their own machines - they might 
need to do this because the 10.x.x.x address space was too small, or the 
IPv6 equivalent was. In doing so they would make access to public sites in 
those IP ranges impossible (since traffic would be redirected to the internal 
machines). My guess is that networks with this type of setup don't allow just 
anybody to open a connection inside them. That seems very likely to be so for 
IPv4 at least. I am not sure what the situation with IPv6 is, or what it 
should be (I am reasoning by analogy there). Machines on IPv6 addresses would 
be deployed by experienced people, who would probably be able to change their 
software to respond differently to GET requests arriving on internal networks 
with an Origin: header whose value was not an internal machine.
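Such a server-side defence could be quite small. As a rough sketch of my own (the function name is hypothetical, and it only handles origins given as IP literals - a real server would also resolve hostnames before deciding):

```python
import ipaddress
from urllib.parse import urlparse

def origin_is_internal(origin_header):
    """Rough check: does the Origin header name an internal machine?

    Only handles literal-IP origins; is_private covers the RFC 1918
    ranges among others. Hypothetical sketch, not from the thread.
    """
    host = urlparse(origin_header).hostname
    if host is None:
        return False
    try:
        return ipaddress.ip_address(host).is_private
    except ValueError:
        # A DNS name rather than an IP literal: treat as external here.
        return False

# A server on an internal network could then refuse cross-origin GETs
# whose Origin is not an internal machine, e.g.:
#   if not origin_is_internal(request.headers.get("Origin", "")):
#       return a 403 instead of the resource
```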

Henry

[1] http://www.simpledns.com/private-ipv6.aspx


> 
> / Jonas

Social Web Architect
http://bblfish.net/
Received on Saturday, 21 July 2012 04:42:01 GMT
