- From: Mike West <mkwst@google.com>
- Date: Tue, 25 Feb 2014 14:48:01 +0100
- To: Sigbjørn Vik <sigbjorn@opera.com>
- Cc: Dan Veditz <dveditz@mozilla.com>, Egor Homakov <homakov@gmail.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Michal Zalewski <lcamtuf@google.com>, "Eduardo' Vela" <evn@google.com>
- Message-ID: <CAKXHy=dqGh2bJG4_LGWqR9JAcnME3CHN3WHsMjy4-L4T-XOCog@mail.gmail.com>
Hi! Thanks for continuing the discussion!

On Mon, Feb 24, 2014 at 5:19 PM, Sigbjørn Vik <sigbjorn@opera.com> wrote:

> This seems to combine the worst of the other suggestions. It only
> removes part of the leakage (e.g. logged-in detection is still
> possible), and it risks losing all of the CSP protection if there are
> open redirects allowed.

I don't agree. I don't think it's an elegant or pretty solution, but I think it's lower risk than either of the two it combines:

1. Logged-in status would be detectable only in cases where the logged-in/out user was redirected across origins (`img-src example.com` would catch a redirect to `mikewest.example.com`), which is a significant reduction in attack surface. The majority of the risk seems to be wrapped up in reading path information cross-origin, which allowing redirects would resolve.

2. I think Egor's claim that "example.com/path/to/static/js/" is much less likely to contain open redirects than "example.com/*" is pretty reasonable. For instance, it would seem to solve the Google use-cases that Michal and Eduardo noted above.

> It is confusing for a web author, if trying to
> secure his web page by making CSP more strict (enabling paths), he might
> accidentally make his web page less secure instead (because of unrelated
> open redirects).

I agree that this introduces (more!) complexity into the policy language. Authors will almost certainly be confused at one time or another about when redirects would be followed and when they wouldn't. I'd suggest that it's a fairly simple rule once understood, but I accept your point that it won't initially be obvious.

> It is the most complex solution[1], and there will be side channels
> (e.g. timing). It will be implementable though, this is known as the
> same-origin-policy, and web browsers have a long history of implementing
> this, despite the challenges involved. There will be fewer side channels
> than b-2 (b-2 has side channels baked in as a feature), none that don't
> already exist today (and then no worse), and none that cannot be
> protected against if a website so wishes.

You've said this a few times, and I still don't understand it. How can a website protect itself against this style of attack, other than by simply not redirecting (which would mitigate both the CSP and non-CSP versions)?

> Even if not implemented
> perfectly, it will still be both more secure and easier to understand
> than b-2.
>
> If there are any particular side channels you are concerned about, that
> are worse than the side channels built into b-2, please mention them, so
> they can be considered explicitly.

"Worse" is hard to define. Aren't timing attacks bad enough? More to the point, please assume that I and everyone like me is a terrible programmer. :) I'm 100% certain I'd introduce new and interestingly detectable behaviors while trying to pretend that a resource loaded when it didn't.

--
Mike West <mkwst@google.com>
Google+: https://mkw.st/+, Twitter: @mikewest, Cell: +49 162 10 255 91

Google Germany GmbH, Dienerstrasse 12, 80331 München, Germany
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
Geschäftsführer: Graham Law, Christine Elizabeth Flores
(Sorry; I'm legally required to add this exciting detail to emails. Bleh.)
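A minimal sketch of the logged-in-detection scenario described in point 1 above. The endpoint `https://example.com/account/avatar` and its behaviour of redirecting logged-out users to `login.example.com` (a different origin) are hypothetical; the point is only that under the proposal the host check still applies after a redirect even though the path would be ignored, so the leak exists only when the redirect crosses origins:

```html
<!-- Attacker-controlled page. The policy allows images from example.com
     on any path; under the proposal, paths are ignored after a redirect,
     but the host check still applies. -->
<meta http-equiv="Content-Security-Policy" content="img-src example.com">
<script>
  // Hypothetical probe: example.com/account/avatar returns an image for
  // logged-in users and redirects logged-out users to login.example.com.
  var probe = new Image();
  probe.onload = function () {
    // Load succeeded: no cross-origin redirect, so the user appears logged in.
    console.log("probe loaded - user appears to be logged in");
  };
  probe.onerror = function () {
    // A redirect to login.example.com violates `img-src example.com`, the
    // load is blocked, and the error event fires - user appears logged out.
    console.log("probe blocked - user appears to be logged out");
  };
  probe.src = "https://example.com/account/avatar";
</script>
```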
Received on Tuesday, 25 February 2014 13:48:56 UTC