
Re: Remove paths from CSP?

From: Sigbjørn Vik <sigbjorn@opera.com>
Date: Tue, 25 Feb 2014 16:01:22 +0100
Message-ID: <530CB042.1020408@opera.com>
To: Mike West <mkwst@google.com>
CC: Dan Veditz <dveditz@mozilla.com>, Egor Homakov <homakov@gmail.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Michal Zalewski <lcamtuf@google.com>, Eduardo' Vela <evn@google.com>
On 25-Feb-14 14:48, Mike West wrote:
> Hi! Thanks for continuing the discussion!
> 
> On Mon, Feb 24, 2014 at 5:19 PM, Sigbjørn Vik <sigbjorn@opera.com> wrote:
> 
>     This seems to combine the worst of the other suggestions. It only
>     removes part of the leakage (e.g. logged-in detection is still
>     possible), and it risks losing all of the CSP protection if there are
>     open redirects allowed. 
> 
> I don't agree. I don't think it's an elegant or pretty solution, but I
> think it's lower risk than either of the two it combines:
> 
> 1. Logged-in detection would be detectable only in cases where the
> logged-in/out user was redirected across origins (`img-src example.com`
> would catch a redirect to `mikewest.example.com`), which is a
> significant reduction in attack surface.

For services such as Gmail and Hotmail, the login happens on a different
domain than the service itself. This is an extremely common setup,
including on high-value targets, and one that was safe from this kind of
detection prior to CSP. I consider this a significant increase in attack
surface. Personally, I consider any solution which instantly reveals
logged-in status on such services to be a security flaw and a
non-starter.
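The leak described here can be sketched as a small simulation. This is a
drastic simplification of CSP source matching (single host, no schemes,
wildcards, or paths), and the domains are hypothetical, but it shows why
a login redirect to a second domain becomes observable the moment a
policy is in force:

```python
from urllib.parse import urlparse

def csp_allows(allowed_host: str, url: str) -> bool:
    """Very simplified host-source check: the URL's host must equal the
    single allowed host. Real CSP matching is far richer than this."""
    return urlparse(url).netloc == allowed_host

def probe_logged_in(allowed_host: str, redirect_chain: list) -> bool:
    """An attacker page carrying `img-src allowed_host` loads a resource.
    If any hop of the redirect chain leaves the allowed host, the load is
    blocked, and the attacker can observe the failure (e.g. via the
    image's error event). Whether the block fires reveals which redirect
    the server issued, and hence the user's login state."""
    return all(csp_allows(allowed_host, hop) for hop in redirect_chain)

# Logged-in user: the resource is served directly from the service domain.
logged_in_chain = ["https://mail.example.com/avatar.png"]
# Logged-out user: the service redirects to its separate login domain.
logged_out_chain = ["https://mail.example.com/avatar.png",
                    "https://login.example.net/signin"]

print(probe_logged_in("mail.example.com", logged_in_chain))   # True: loads
print(probe_logged_in("mail.example.com", logged_out_chain))  # False: blocked
```

A single such probe, with no timing measurement at all, distinguishes
the two redirect chains.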

> 2. I think Egor's claim that "example.com/path/to/static/js/" is much
> less likely to contain open redirects than "example.com/*" is pretty
> reasonable. For instance, it would seem to solve the Google use-cases
> that Michal and Eduardo noted above.

It is a tradeoff: a reduction in surface area in exchange for an
increase in the complexity of understanding what the policy protects
against.
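For concreteness, the two kinds of source expression under discussion
look roughly like this (hypothetical header values):

```
Content-Security-Policy: img-src example.com
Content-Security-Policy: img-src example.com/path/to/static/js/
```

The first matches any resource on the host, open redirectors included;
the second is meant to match only the static-content path, at the cost
of authors having to reason about how path matching interacts with
redirects.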

>     It is the most complex solution[1], and there will be side channels
>     (e.g. timing). It will be implementable though, this is known as the
>     same-origin-policy, and web browsers have a long history of implementing
>     this, despite the challenges involved. There will be fewer side channels
>     than b-2 (b-2 has side channels baked in as a feature), none that don't
>     already exist today (and then no worse), and none that cannot be
>     protected against if a website so wishes.
> 
> You've said this a few times, and I still don't understand it. How can a
> website protect itself against this style of attack, other than by
> simply not redirecting (which would mitigate both the CSP and non-CSP
> versions)?

If you tell me which kind of attack you are worried about, I can tell
you how to protect against it.

Timing attacks are generally protected against by ensuring operations
take equally long regardless of the input. While not redirecting might
reduce the timing difference, it is not a generic protection. In crypto
there is an entire field dedicated to timing attacks, and browsers
already have many built-in protections against them; see e.g. Lucky 13
for a recent and well-known example. Protection against timing attacks
is a solved problem, though often not applied, due to cost/risk
analysis. Websites which want to protect against this can do so, though.
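The generic defense alluded to above, making an operation's duration
independent of its input, can be sketched as follows. This is a
server-side illustration, not production code; `hmac.compare_digest` is
Python's standard-library constant-time comparison, and `padded_response`
is a hypothetical helper showing the duration-equalizing idea:

```python
import hmac
import time

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    """Naive comparison: returns as soon as a byte differs, so the
    response time leaks how long the matching prefix is."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    """Examines every byte regardless of where the first mismatch is,
    so the duration no longer depends on the input."""
    return hmac.compare_digest(secret, guess)

def padded_response(handler, floor_seconds=0.05):
    """Pad a request handler to a fixed minimum duration, so that fast
    and slow code paths (e.g. logged-in vs. logged-out) take the same
    wall-clock time from the client's point of view."""
    start = time.monotonic()
    result = handler()
    remaining = floor_seconds - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    return result
```

The same idea applies at the page level: if both redirect branches are
served in the same, fixed amount of time, the timing side channel
disappears, at the cost the text mentions.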

>     Even if not implemented
>     perfectly, it will still be both more secure and easier to understand
>     than b-2.
> 
>     If there are any particular side channels you are concerned about, that
>     are worse than the side channels built into b-2, please mention them, so
>     they can be considered explicitly.
> 
> "Worse" is hard to define. Aren't timing attacks bad enough?

Timing attacks on logged-in vs. not-logged-in pages are currently not
considered very serious in general. They depend on heuristics and many
retries, and are susceptible to failure due to e.g. noise, background
processes, and user location. An attack requires close statistical study
of the target first, and must be updated whenever the target changes.

Being able to tell for certain whether users are logged in, on most
high-value targets, with a single request, without pre-study, and with
no noise, is in a completely different league.

> More to the point, please assume that I and everyone like me is a
> terrible programmer. :)

I do :) Which is why I would much rather have the most complex parts
handled centrally, in browsers. Browsers have a solid code-review
process, automated tests, and people who generally have a clue what they
are doing. When security flaws are found, the vendors are notified and
able to fix them, often quickly.

Web authors, on the other hand, typically have no code review, few
tests, and are often happy with whatever "works". When security flaws
are found, they are blissfully unaware of them. Leaving them to work out
the intricacies of paths and open redirects in CSP is a recipe for
disaster :)

-- 
Sigbjørn Vik
Opera Software
Received on Tuesday, 25 February 2014 15:01:55 UTC
