Re: CSRF: alternative solutions

On Tue, Jun 9, 2009 at 9:55 AM, Giovanni
Campagna<scampa.giovanni@gmail.com> wrote:
> 2009/6/9 Adam Barth <w3c@adambarth.com>:
>> I recommend reading
>> http://www.adambarth.com/papers/2008/barth-jackson-mitchell-b.pdf for
>> examples of why this is often sufficient for the attacker to perform
>> misdeeds.
>
> I read it, and I must say that it is worse than I expected, but I
> believe that most of the harm of login CSRF would be reduced if the
> user were always informed of his current credentials.
> In other words, every page should visibly show "You are logged in as
> XXX". This solves the PayPal problem, for example, and may solve the
> Search History problem (by saying "All your searches are being logged
> to XXX.")

Informing the user doesn't help if the login CSRF leads to XSS, because
the attacker can simply rewrite that part of the page.

> I meant to change behavior only for the cases in which browsers
> currently don't send Referer (i.e., for data and ftp URIs). They
> could send Referer as <scheme>://<host>:<port>/ or send a null
> Referer. In both cases we don't change the current semantics and we
> remain backward compatible (because a server that assumed nothing and
> simply lived without the header would just ignore it).

This is being discussed in the HTTP working group.  The current
proposal is to recommend that user agents send the string
"about:blank" when they don't have a better idea of what Referer to
send (e.g., for requests generated from data URLs).  If you're
interested in this idea, I'd encourage you to get involved with the
discussion there.
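As a rough sketch (mine, not part of the working-group proposal), a
server that wanted to tolerate the proposed "about:blank" value might
validate Referer along these lines. The host list, function name, and
strict/lenient policy are all made up for illustration:

```python
from urllib.parse import urlparse

# Hypothetical set of hosts the protected site trusts as referrers.
TRUSTED_HOSTS = {"www.mybank.com"}

def is_request_allowed(referer, strict=True):
    """Decide whether to accept a state-changing request.

    referer: value of the Referer header, or None if the header is absent.
    strict:  if True, treat a missing or "about:blank" Referer as suspect;
             if False, accept it for backward compatibility with clients
             that suppress the header.
    """
    if referer is None or referer == "about:blank":
        # Under the proposal, "about:blank" means the user agent had no
        # meaningful referrer to send (e.g., a request from a data URL).
        return not strict
    host = urlparse(referer).hostname
    return host in TRUSTED_HOSTS
```

The design trade-off is the usual one: strict mode blocks some
legitimate clients that strip Referer, while lenient mode leaves a gap
for exactly the requests this thread is worried about.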

>> http://www.cs.purdue.edu/homes/ninghui/papers/csrf_fc09.pdf
>
> That document was quite interesting, and also showed a case in which
> both Origin and secret tokens fail. Anyway, it showed that heuristics
> do work to determine if requests are intended or not, and in the
> latter case the browser may strip authentication info or just ask the
> user.

I'm glad you enjoyed the paper.

>>>> I'm not sure what UI you have in mind here.  In general, we should be
>>>> careful about relying on the user to make correct security decisions.
>>>
>>> 1)
>>> ========================================================
>>> | This website, "Title of malicious page", at          |
>>> | http://www.dangerous.com/index.html                  |
>>> | tried to perform a POST request to                   |
>>> | http://www.mybank.com/pay?to=attacker                |
>>> | A similar site was last seen as                      |
>>> | "My bank account - userId - Transaction Confirm"     |
>>> |                                                      |
>>> | This request includes authentication info, and       |
>>> | thus may involve processing not desired on the       |
>>> | behalf of the user (for example sending an email)    |
>>> |                                                      |
>>> |         ((Allow))          ((Deny))                  |
>>> |                                                      |
>>> | ( ) Always for these origins only                    |
>>> | ( ) Always from www.dangerous.com                    |
>>> | ( ) Always to www.mybank.com                         |
>>> | ( ) Only this time                                   |
>>> ========================================================
>>
>> We could evaluate the usability of this UI with a user study, but I
>> suspect users won't understand the consequences of this decision.
>> More likely, users will click allow to get their work done.  It's
>> hard enough to get users not to click through warnings that say
>> "this site will install malicious software on your machine."
>
> This depends on the exact text of the dialog and the relative
> frequency of legitimate requests among all cross-site requests (i.e.,
> the frequency of "Allow"s). I guess that many users will see the
> dialog just a few times, and yet they will be protected, because they
> will have clicked "Always from this site" for the sites they have
> discovered to be safe.

Yes.  These are all good things to evaluate:

A) False positives (i.e., how often the warning appears when there is
no attack).
B) False negatives (i.e., how often an attack fails to trigger the warning).
C) Usability (i.e., whether users actually make effective security
decisions when prompted with this warning).

I'd encourage you to study all three of these questions and present
your findings to this working group.

>> Perhaps, but we might be better off with a solution that doesn't
>> require the user to make a security decision.
>
> I don't agree with you. I think ours are two completely different
> ways of looking at security, and I'm not sure they're compatible.

This is a largely empirical question that we can have a more informed
discussion about once you gather data on (A), (B), and (C).

Adam

Received on Tuesday, 9 June 2009 18:18:03 UTC