- From: Henry Story <henry.story@bblfish.net>
- Date: Sat, 21 Jul 2012 17:25:14 +0200
- To: Eric Rescorla <ekr@rtfm.com>
- Cc: Jonas Sicking <jonas@sicking.cc>, Adam Barth <w3c@adambarth.com>, Cameron Jones <cmhjones@gmail.com>, Anne van Kesteren <annevk@annevk.nl>, Ian Hickson <ian@hixie.ch>, public-webapps <public-webapps@w3.org>, public-webappsec@w3.org
On 21 Jul 2012, at 15:02, Eric Rescorla wrote:

> Henry,
>
> In my opinion as Chair, there has been broad consensus in the
> WebAppSec WG that one of the basic design constraints of
> CORS is that introducing CORS features into browsers not create
> new security vulnerabilities for existing network deployments.

I understand that concern completely.

> What you are proposing would have that result.

Well, that was what was in question. For example, Jonas Sicking clearly
misunderstood the proposal, since he believed it would affect the security
of home routers. Other responses seemed to believe that security via
ip-address selection would be affected - not so for internal ip-addresses,
as argued below.

> You are of course free to believe that that consensus is wrong,

I understand the consensus, and I think as a general policy it is a good
one. I assume policies are general guides that have to be wielded with
care, and not used just to shut down interesting improvements that may
look like they are close to the borderline. Often the interesting ideas
are those that look as if they are breaking and contradicting a number of
deeply held beliefs.

> but I do not believe that discussing this further serves any purpose.

Frankly, I was not going to add anything myself after my previous e-mail.
I was just responding to what I thought were misunderstandings of a
possibility I had seen. If you look carefully at this thread, I was
initially satisfied with the first answer to the problem. Then a new
possibility came up, proposed by another member of this group, Cameron
Jones, which we were considering.

> Please take this discussion elsewhere.

I have other things to do than discuss CORS. I have built a proxy to
bypass the limitations (a sketch of the technique appears at the end of
this message), and have some other ideas on how to get things done better.
I was just sending some feedback to this group, at the cost of my own
time, as I thought it could be of interest.

All the best with getting through to final recommendation,

Henry

> -Ekr
>
>
> On Fri, Jul 20, 2012 at 9:41 PM, Henry Story <henry.story@bblfish.net> wrote:
>>
>> On 21 Jul 2012, at 05:39, Jonas Sicking wrote:
>>
>>> On Fri, Jul 20, 2012 at 11:58 AM, Henry Story <henry.story@bblfish.net> wrote:
>>>>
>>>> On 20 Jul 2012, at 18:59, Adam Barth wrote:
>>>>
>>>>> On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones <cmhjones@gmail.com> wrote:
>>>>>> On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth <w3c@adambarth.com> wrote:
>>>>>>> On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones <cmhjones@gmail.com> wrote:
>>>>>>>> So, this is a non-starter. Thanks for all the fish.
>>>>>>>
>>>>>>> That's why we have the current design.
>>>>>>
>>>>>> Yes, I note the use of the word "current" and not "final".
>>>>>>
>>>>>> Ethics are a starting point for designing technology responsibly. If
>>>>>> the goals cannot be met for valid technological reasons then that is
>>>>>> an unfortunate outcome, and one that should be avoided at all costs.
>>>>>>
>>>>>> The costs of supporting legacy systems have real financial
>>>>>> implications, notwithstanding an ethical ideology. If those costs
>>>>>> become too great, legacy systems lose their impenetrable pedestal.
>>>>>>
>>>>>> The architectural impact of supporting non-maintained legacy
>>>>>> systems is that web proxy intermediaries are something we will all
>>>>>> have to live with.
>>>>>
>>>>> Welcome to the web. We support legacy systems. If you don't want to
>>>>> support legacy systems, you might not enjoy working on improving the
>>>>> web platform.
>>>>
>>>> Of course, but you seem to want to support hidden legacy systems,
>>>> that is, systems none of us know about or can see. It is still a
>>>> worthwhile inquiry to find out how many systems there are for which
>>>> this is a problem, if any. That is:
>>>>
>>>> a) systems that use non-standard internal ip addresses
>>>> b) systems that use ip-address provenance for access control
>>>> c) ? potentially other issues that we have not covered
>>>
>>> One important group to consider is home routers. Routers are often
>>> secured only by checking that requests are coming through an internal
>>> connection, i.e. either through wifi or through the ethernet port. If
>>> web pages can place arbitrary requests to such routers it would mean
>>> that they can redirect traffic arbitrarily and transparently.
>>
>> The proposal is that requests to machines on private ip-ranges - i.e.
>> machines on 192.168.x.x and 10.x.x.x addresses in IPv4, or in IPv6 in
>> the unique local address space [1] - would still require the full CORS
>> handshake as currently described (see the address-classification sketch
>> at the end of this message). The proposal only affects GET requests
>> requiring no authentication, made to machines with public ip addresses:
>> the responses to these requests would be allowed through to a CORS
>> javascript request without requiring the server to add the
>> Access-Control-Allow-Origin header to its response. Furthermore it was
>> added that the browser should still send the Origin: header.
>>
>> The argument is that machines on such public ip addresses that would
>> respond to such GET requests would be accessible via the public
>> internet, and so would in any case be accessible via a CORS proxy.
>>
>> This proposal would clearly not affect home routers as currently
>> deployed. The dangerous access to those is always via the 192.168.x.x
>> ip address range (or the 10.x.x.x one). If a router were insecure when
>> reached via its public name space and ip address, then it would simply
>> be an insecure router.
>>
>> I agree that some risk is being taken in making this decision. The
>> above does not quite follow analytically from first principles. It is
>> possible that internal networks use public ip addresses for their own
>> machines - they would need to do this because the 10.x.x.x address
>> space, or the IPv6 equivalent, was too small. Doing this they would
>> make access to public sites in those ip-ranges impossible (since the
>> traffic would be redirected to the internal machines). My guess is
>> that networks with this type of setup don't allow just anybody to open
>> a connection on them. At least that seems very likely to be so for
>> IPv4. I am not sure what the situation with IPv6 is, or what it should
>> be. (I am thinking by analogy there.) Machines on IPv6 addresses would
>> be machines deployed by experienced people, who would probably be able
>> to change their software to respond differently to GET requests
>> arriving on internal networks with an Origin: header whose value was
>> not an internal machine.
>>
>> Henry
>>
>> [1] http://www.simpledns.com/private-ipv6.aspx
>>
>>
>>>
>>> / Jonas
>>
>> Social Web Architect
>> http://bblfish.net/

Social Web Architect
http://bblfish.net/
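The private-range classification that the quoted proposal turns on is mechanical enough to show in code. Nothing like the following appears in the thread; it is a minimal sketch in Python with an invented function name, using the standard library's `ipaddress` module, whose `is_private` test covers the 192.168.x.x and 10.x.x.x ranges and the IPv6 unique local space (fc00::/7) discussed above.

```python
import ipaddress

def requires_full_cors_handshake(ip_string: str) -> bool:
    """Hypothetical helper: True when the target falls in a private
    range, where the quoted proposal would keep the full CORS
    handshake; False for public addresses, where an unauthenticated
    GET would be let through without Access-Control-Allow-Origin.

    Note: is_private is a superset of the ranges named in the thread;
    it also flags loopback (127.0.0.0/8) and link-local addresses.
    """
    return ipaddress.ip_address(ip_string).is_private

# The cases discussed in the thread:
assert requires_full_cors_handshake("192.168.1.1")        # home router
assert requires_full_cors_handshake("10.0.0.5")           # private IPv4
assert requires_full_cors_handshake("fd12:3456::1")       # IPv6 unique local
assert not requires_full_cors_handshake("93.184.216.34")  # public address
```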
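Henry also mentions having built a proxy to bypass the limitations. His implementation is not shown anywhere in the thread, so the following is only a sketch of the generic technique, assuming an invented `/?url=...` interface: a server you control fetches the public resource and relays it with the one header the upstream omitted.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

class CorsProxyHandler(BaseHTTPRequestHandler):
    """Relay unauthenticated GETs to public URLs, adding the
    Access-Control-Allow-Origin header that the upstream omits."""

    def do_GET(self):
        # Expect requests of the form /?url=http://example.org/resource
        target = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if not target:
            self.send_error(400, "missing url parameter")
            return
        try:
            with urlopen(target) as upstream:  # unauthenticated GET only
                body = upstream.read()
                ctype = upstream.headers.get("Content-Type",
                                             "application/octet-stream")
        except (OSError, ValueError) as e:  # URLError is an OSError subclass
            self.send_error(502, str(e))
            return
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        # The one header CORS needs and the upstream never sent:
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CorsProxyHandler).serve_forever()
```

This is also the crux of the argument quoted above: a resource on a public ip address is reachable through such a proxy anyway, so the browser-side restriction buys nothing there. A deployed version would want to refuse targets in private ranges (for instance with the classification sketch above), precisely so the proxy cannot be used to reach machines behind a firewall.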
Received on Saturday, 21 July 2012 15:25:50 UTC