
Re: Content Security Policy

From: Yoav Weiss <yoav@yoav.ws>
Date: Tue, 18 Jun 2013 11:10:28 +0200
Message-ID: <CACj=BEhqGRg=KdE=1zrHW2vX4FXCJNZVghQCUttV+Fd-t4WvOw@mail.gmail.com>
To: Mountie Lee <mountie@paygate.net>
Cc: Евнгений Яременко <w3techplayground@gmail.com>, Brad Hill <hillbrad@gmail.com>, Bryan McQuade <bmcquade@google.com>, Joel Weinberger <jww@chromium.org>, Neil Matatall <neilm@twitter.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>
Some of the previous comments in this thread made me realize a significant
problem with current nonces & caching.
If an author adds nonces to their HTML, they must render it uncacheable (not
even privately cached). Resource validation [1] will also be broken for
that resource.
This will hurt performance in cases where the HTML was previously
cacheable, when one of the goals of adding nonces is to enable inlining for
performance purposes.
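To make the constraint concrete, here's a minimal sketch (Python; the
Cache-Control and Content-Security-Policy header names are real, but the
function and its shape are illustrative, not from the spec) of what
nonce-based CSP forces on a server:

```python
import base64
import os

def make_nonce_headers():
    # A fresh, unguessable nonce must be minted for every single response;
    # the same value must also appear in each inline <script> tag.
    nonce = base64.b64encode(os.urandom(16)).decode("ascii")
    return {
        "Content-Security-Policy": "script-src 'nonce-%s'" % nonce,
        # Because the nonce differs per response, the HTML carrying it
        # cannot be cached -- not even privately -- without reusing nonces.
        "Cache-Control": "no-store",
    }
```

Any cached copy would either replay an old nonce (making it guessable) or
lose it entirely, which is exactly the problem below.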

But since authors have a poor track record of actually setting the right
caching headers, a different, significant problem arises.

Let's say an author forgot to make their HTML uncacheable, and it lacks
explicit caching headers. In this case, any caching proxy along the way can
assign a heuristic expiration [2] to this resource.
If that proxy decides to serve content based on heuristic expiration, and
this content contains nonces, it has three options:
1. Serve the old CSP headers along with the content, making the nonce
guessable after a while.
2. Serve the content without the CSP nonce, breaking the content.
3. Avoid heuristic expiration when nonces are present.

Since at the moment the CSP headers are defined as end-to-end headers, a
compliant proxy will serve the stale nonces upon heuristic expiration, which
will enable XSS on such pages for this proxy's users. If there are many of
these users (e.g. a large ISP), that can be a nice target for an attacker.
IMO, this issue needs to be addressed. I'm not sure that modifying the spec
to render nonce-based content uncacheable by definition would be enough to
avoid such a vulnerability.

Hashes are much preferable to nonces in this respect, since the content can
remain safely cacheable.
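For illustration, a hash-based source expression can be derived entirely
from the script bytes (sketch in Python; the 'sha256-<base64>' shape follows
the syntax direction being discussed in this thread, which was not yet
final):

```python
import base64
import hashlib

def hash_source(inline_script):
    # The value depends only on the script content, not on the response,
    # so both the policy header and the HTML stay byte-identical across
    # requests and can be cached safely.
    digest = hashlib.sha256(inline_script.encode("utf-8")).digest()
    return "'sha256-%s'" % base64.b64encode(digest).decode("ascii")
```

Note that any byte-level change, even stripped whitespace or a re-encoding,
produces a different hash, which is the fragility discussed elsewhere in
this thread.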

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.3
[2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.2.2


On Tue, Jun 18, 2013 at 2:19 AM, Mountie Lee <mountie@paygate.net> wrote:

> Hi.
> could you point me to the details, or any other link, about the text you
> mentioned (that web standard policy is "user intent first")?
>
> regards
> mountie.
>
>
> On Tue, Jun 18, 2013 at 4:29 AM, Евнгений Яременко <
> w3techplayground@gmail.com> wrote:
>
>> Pros:
>> - easy server-side implementation
>> - highly compatible with templates
>> - since only the page render engine needs an upgrade, this method of
>> verification can be implemented in CMSes and frameworks and distributed
>> transparently to all developers with new versions
>> - to some extent it can be implemented with an optimizing proxy like
>> "PageSpeed Service", with automatic JS marking and hashing
>> - doesn't clutter the page source
>> - doesn't break caching
>> - can be implemented via a JS shim
>> - to enforce uniformity, we can build the hash upon parsed and tokenized
>> sources
>>
>> Some cons:
>> The hash can be broken. It's even worse since web standard policy is "user
>> intent first", so we can't hide the hash function or its parameters, or
>> even add a salt, because the hashing needs to be recreated on the client
>> side; the attacker has all the data he needs to "eventually" forge code
>> that will pass verification. But with a big hash, or even the raw source
>> code in a response header instead of a hash, we will pay a penalty in
>> bandwidth and hit size limitations. And if the hash function is too
>> complex, it will block page rendering, eat device battery, etc.
>>
>> So it might be reasonable to make the hash function parameterizable by
>> complexity, bandwidth, and time to expire. For example, the corporate
>> sector or e-trade might prefer security, while sites for handheld mobile
>> devices might prefer speed and bandwidth.
>> One way to make hash faking nearly impossible is to use lossy compression
>> instead of a fixed hash. The server would provide the compression ratio in
>> the header.
>>
>> Or even some hybrid approach: the response header provides a code
>> verification service address (it might be the source server or a third
>> party) and a security level. When the browser detects a code block, it
>> parses the JS, evaluates potential risks (is it purely a CSS/DOM
>> manipulation script? Any requests? Access to fields?), and either allows
>> it to run or sends it in some form to the verification server. It could
>> also raise an alarm when some new type of code is detected. This approach
>> may result in page locks, but some optimizations could make it viable. For
>> example, if it's a pure async function call without requests, the browser
>> engine can run and verify it in a non-blocking way. Or if it's DOM
>> manipulation code, the browser can run it in parallel with verification
>> and build an effects frame that is applied only after verification. A
>> processor pipeline works in a similar way.
>>
>> PS. Added as requested.
>>
>>
>> On 17 June 2013 21:12, Brad Hill <hillbrad@gmail.com> wrote:
>>
>>> I would suggest that using the RFC 6920 syntax (
>>> http://tools.ietf.org/html/rfc6920) as a source expression is a good
>>> fit with the current pattern of using nonces.  Nicholas Green had a
>>> start at a proposal (
>>> http://lists.w3.org/Archives/Public/public-webappsec/2013Feb/0052.html),
>>> but I think it needs to be updated significantly given the direction we've
>>> taken with nonces.  A partial list of what remains to be done to spec out
>>> inline hashes is:
>>>
>>> Specify which hash algorithms CSP 1.1 would require support for.
>>>
>>> Specify whether and to what extent truncation is allowed.
>>>
>>> Specify what to do with the content-type attribute of ni: URIs if we
>>> allow this to be used for non-inline content... or should this be used to
>>> determine the type (css, js, vbs, etc.) of the inline resource?
>>>
>>> Specify an algorithm to exactly determine the bytes-to-be-hashed in a
>>> reliable and cross-browser manner.  I would suggest that this should be
>>> defined in terms of the HTML5 parsing algorithm, with some restrictions
>>> such as requiring any resource employing hash sources declare an explicit
>>> encoding.
>>>
>>> *shudder* Is canonicalization necessary?  I hope not.
>>>
>>> Think about and determine what needs to be covered by the
>>> bytes-to-be-hashed:
>>>    - should attributes of the script tag be included?  (e.g. whether it
>>> is javascript, vbscript, ruby or json?)
>>>
>>> Specify algorithm agility behavior:
>>>    - what to do if a policy specifies only SHA4 hashes and a user agent
>>> doesn't understand SHA4?  Fail?  Fall back to unsafe-inline?
>>>    - possibly: if a policy specifies SHA1 and SHA3 hashes of the same
>>> content, what should user agent behavior be?  Allow all as valid?  Only
>>> trust the strongest hashes it understands how to process in a given policy
>>> string?  In the composite policy?
>>>
>>> -Brad
>>>
>>>
>>> On Mon, Jun 17, 2013 at 10:56 AM, Bryan McQuade <bmcquade@google.com> wrote:
>>>
>>>> Brad and I just chatted offline & concluded that an attack vector does
>>>> exist for static HTML with unsafe-inline: it would be possible for an
>>>> inline block with a vulnerability to be used to create another inline
>>>> script block that executes some exploit.
>>>>
>>>>
>>>>
>>>> On Mon, Jun 17, 2013 at 1:40 PM, Brad Hill <hillbrad@gmail.com> wrote:
>>>>
>>>>> Just to play devil's advocate, if the HTML is truly being served in a
>>>>> completely static manner, is "unsafe-inline" actually unsafe?  (there
>>>>> should be no inline-content injection vulnerabilities in such a resource)
>>>>>
>>>>>
>>>>> On Mon, Jun 17, 2013 at 10:36 AM, Bryan McQuade <bmcquade@google.com> wrote:
>>>>>
>>>>>> Does CSP support inline scripts and styles in statically served HTML
>>>>>> files? My impression was that nonce only works for dynamic serving. If
>>>>>> that's the case then IMO hashes are warranted to support the static case
>>>>>> alone.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Jun 17, 2013 at 1:22 PM, Joel Weinberger <jww@chromium.org> wrote:
>>>>>>
>>>>>>> I'm not particularly against hashes, but I'm naturally hesitant to
>>>>>>> add more constructs to CSP, especially since the use of nonces seems to
>>>>>>> completely overlap with the use cases for hashes. I think the concerns
>>>>>>> about nonce abuse that Yoav pointed out are valid, but I'd be hesitant to
>>>>>>> add a new construct just to cover that particular concern. Put differently,
>>>>>>> I don't see any dramatically different uses for hashes than for nonces.
>>>>>>> --Joel
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Jun 17, 2013 at 4:09 AM, Yoav Weiss <yoav@yoav.ws> wrote:
>>>>>>>
>>>>>>>> +1 for discussing it further.
>>>>>>>>
>>>>>>>> The advantages I see:
>>>>>>>> * The author is authorizing a *specific* script/style and can do so
>>>>>>>> using static configuration.
>>>>>>>>   - No need for a dynamic backend that changes the nonce for each
>>>>>>>> request.
>>>>>>>>   - This can simplify deployment, resulting in more people using it.
>>>>>>>> * I'm afraid of authors abusing nonces, sending the same nonce over
>>>>>>>> and over as a means to "bypass" CSP.
>>>>>>>>   - Offering an alternative to nonces can reduce that risk.
>>>>>>>>
>>>>>>>> The complications I can think of:
>>>>>>>> * Make sure that hashes don't break with small whitespace
>>>>>>>> removals, text encoding changes, etc.
>>>>>>>>   - An alternative is tooling that can give authors the resulting
>>>>>>>> hash for a specific script/style (e.g. inside the Web Inspector tools).
>>>>>>>> That may be more fragile, though.
>>>>>>>>
>>>>>>>> All in all, I think hashes can make it easier for "copy&paste"
>>>>>>>> authors to integrate CSP. They can also make deployment of third party
>>>>>>>> scripts easier.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sat, Jun 15, 2013 at 8:00 AM, Neil Matatall <neilm@twitter.com> wrote:
>>>>>>>>
>>>>>>>>> This is the script-hash proposal. I would love it if we discussed
>>>>>>>>> this more as it has numerous benefits over a nonce as well as complications
>>>>>>>>> :)
>>>>>>>>> On Jun 15, 2013 1:11 AM, "Евнгений Яременко" <
>>>>>>>>> w3techplayground@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Is it possible to verify (whitelist) an inline script block via a
>>>>>>>>>> checksum of its logic (uniform), as an alternative to "Nonce"? I.e.,
>>>>>>>>>> send the checksum of the allowed script via a header, and if the
>>>>>>>>>> inlined script's checksum is the same, it's allowed to execute.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> Mountie Lee
>
> PayGate
> CTO, CISSP
> Tel : +82 2 2140 2700
> E-Mail : mountie@paygate.net
>
>  =======================================
> PayGate Inc.
> THE STANDARD FOR ONLINE PAYMENT
> for Korea, Japan, China, and the World
>
>
>
Received on Tuesday, 18 June 2013 09:10:55 UTC

This archive was generated by hypermail 2.3.1 : Monday, 23 October 2017 14:54:02 UTC