- From: Brad Hill <hillbrad@gmail.com>
- Date: Tue, 17 Sep 2013 14:08:24 -0600
- To: Austin William Wright <aaa@bzfx.net>
- Cc: Anne van Kesteren <annevk@annevk.nl>, "Hill, Brad" <bhill@paypal-inc.com>, WebAppSec WG <public-webappsec@w3.org>
- Message-ID: <CAEeYn8j0dQgCoRMSDAtC3MqaZeu2o3vb5YpurCx0suNtQAXtyQ@mail.gmail.com>
Austin,

The Web is the way it is. It works the way it works. It has worked that way for a long time and been quite successful. It was designed to be deeply connected, has grown organically in ways that its original design never could have anticipated, and there are some consequences for security and privacy that have come with that deep interconnection. But it's not as clear as you claim that these are vulnerabilities that need to be fixed, regardless of the consequences to existing content and applications. There's an equally valid perspective that resources that place themselves into the HTTP and browser ecosystem must be aware of how it works and how to operate securely according to its rules.

If those rules don't work for your particular application, I don't think you'll find anyone on this list who will discourage you from experimenting with different ways of doing things that may be subtly or radically different from the current security model of the Web. But if you're proposing changes that will break huge swaths of the existing Web and require radical re-tooling of the fundamental security model of browsers, cramming them at the last minute into a specification that's been stable for years isn't the right approach.

The right approach is to experiment, to demonstrate that your model is valuable and viable, and to establish a consensus in the industry that the benefits of adopting your model outweigh the costs. Write a plugin or fork a browser and do some research. Learn, understand, and be able to clearly explain the impact and consequences of your proposal, and the full scope of the changes necessary to make it really work. Think hard about how it might be possible to adopt it incrementally without breaking existing content.
And in the meantime, if you just want to write your application, I will again suggest that you look into the sandbox directive of Content Security Policy, implemented in all major browsers, which allows you to place every resource in a unique origin and in so doing "opt out" of the origin security model.

Sincerely,

Brad Hill

On Fri, Sep 13, 2013 at 6:55 PM, Austin William Wright <aaa@bzfx.net> wrote:

> On Wed, Sep 11, 2013 at 3:25 AM, Anne van Kesteren <annevk@annevk.nl> wrote:
>
>> These seem unrelated to CORS.
>
> I'm trying to establish the premise for why specifications are modular and narrow in scope, by describing my use case.
>
> Specifically, my original post describes the flaws of the origin security model (and most of the flaws I listed directly stem from this policy), why the same-origin policy is not suitable for many wide classes of applications including mine, and how CORS can (in part) help mitigate, and migrate to, a better, more secure, more flexible model.
>
> My apologies for writing a lengthy essay, if you're crunched for time to grok it.
>
>> > Overall, the course of action here would be to raise an issue with the appropriate WG.
>>
>> The things you "identified" cannot be fixed as content relies on them not being fixed. The way we solve them is by providing sites additional hooks.
>
> We never settle for "insecure by default" - if either party fails to opt in (or an attacker causes one party to opt out when they otherwise wouldn't), the application becomes vulnerable. Again, security trumps backward compatibility. I provided a number of examples of features that have been outright removed, often breaking Web applications (and rightfully so!), because they posed security problems. I'm just not sure why the problems I list are overlooked, while others (often more subtle) are hastily fixed.
>
>> > Are these two notes something that can be added?
>>
>> How does http://www.w3.org/TR/cors/#security not cover this?
>
> Can you please make specific objections to my two suggestions? I'm not sure how I'm supposed to prove a negative... All I can say is that I've read it thoroughly, multiple times, and I cannot find any recommendations on policy or administration, which is the purpose of the section.
>
> Could you point out where the report explains that `Access-Control-Allow-Credentials` is often very dangerous, and will expose CSRF tokens, a security feature also suggested in the same section? I mean, there is literally nothing describing the header's possible side effects -- there's the definition of the header and that's it.
>
> Could you point out where it describes how an application author might ensure that scripts in untrusted resources cannot make requests with user credentials, or if they do, that the response does not leak CSRF tokens?
>
> If it wasn't clear, I'm not proposing changes to any normative text.
>
> Austin.
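[Editorial note: the `Access-Control-Allow-Credentials` hazard discussed above can be sketched as a simplified model of the browser-side CORS check. This is not normative text from the CORS specification; the function and parameter names are illustrative only.]

```python
def response_exposed_to_script(request_origin, credentials_included,
                               allow_origin, allow_credentials):
    """Simplified model of the browser's CORS resource-sharing check.

    request_origin:       origin of the requesting page, e.g. "https://app.example"
    credentials_included: True when cookies or HTTP auth accompany the request
    allow_origin:         value of the Access-Control-Allow-Origin response header
    allow_credentials:    value of the Access-Control-Allow-Credentials header
    """
    if allow_origin is None:
        return False  # no CORS headers: the response stays opaque to the script
    if credentials_included:
        # With credentials, the wildcard "*" is rejected: the server must echo
        # the exact origin AND send Access-Control-Allow-Credentials: true.
        return allow_origin == request_origin and allow_credentials == "true"
    return allow_origin == "*" or allow_origin == request_origin

# A server that reflects arbitrary origins while allowing credentials hands
# any site the authenticated response body, including embedded CSRF tokens.
leaks = response_exposed_to_script(
    "https://attacker.example", True,
    "https://attacker.example", "true")
```

This is the leak Austin describes: a server that blindly echoes the request's `Origin` header into `Access-Control-Allow-Origin` while also sending `Access-Control-Allow-Credentials: true` effectively surrenders same-origin protection for its authenticated responses.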
Received on Tuesday, 17 September 2013 20:08:52 UTC