Re: Content Security Policy

From: Евнгений Яременко <w3techplayground@gmail.com>
Date: Mon, 17 Jun 2013 22:29:20 +0300
Message-ID: <CADZ8+p1DcnpChSvrP4n5qH0bMi7SzKoZ5B=QaWO-0jFep_qEZg@mail.gmail.com>
To: Brad Hill <hillbrad@gmail.com>
Cc: Bryan McQuade <bmcquade@google.com>, Joel Weinberger <jww@chromium.org>, Yoav Weiss <yoav@yoav.ws>, Neil Matatall <neilm@twitter.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>
Pros:
- easy server-side implementation
- highly compatible with templates
- since only the page-render engine needs an upgrade, this method of
verification can be implemented in CMSes and frameworks and distributed
transparently to all developers with new versions
- to some extent it can be implemented with an optimisation proxy like
"PageSpeed Service", with automatic JS marking and hashing
- doesn't clutter page source
- doesn't break caching
- can be implemented via a JS shim
- to enforce uniformity, we can build the hash upon parsed and tokenized
sources
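As a sketch of the server-side half of this idea (the 'sha256-...' source-expression syntax used here is an assumption, not anything CSP currently specifies; it just illustrates hashing the exact bytes of an inline block and advertising the digest in a header):

```python
import base64
import hashlib

def csp_hash_source(inline_script: str) -> str:
    """Hash the exact bytes of an inline script block and format the
    digest as a hypothetical CSP hash-source expression."""
    digest = hashlib.sha256(inline_script.encode("utf-8")).digest()
    return "'sha256-" + base64.b64encode(digest).decode("ascii") + "'"

# A template engine could call this while rendering the page, collecting
# the hash of every inline <script> it emits into the policy header.
script = "alert('hello');"
header = "Content-Security-Policy: script-src 'self' " + csp_hash_source(script)
print(header)
```

Because the digest depends on the exact bytes, even a one-character change to the script (whitespace, encoding) produces a different source expression, which is why the "parsed and tokenized" normalisation above matters.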

Some cons:
A hash can be broken. It's even worse since the web standard policy is
"user intent first", so we can't hide the hash function or its parameters,
or even add a salt, because the hash needs to be recreated on the client
side; the attacker therefore has all the data needed to "eventually" craft
code that passes verification. But with a big hash, or raw source code in
the response header instead of a hash, we pay a penalty in bandwidth and
hit size limitations. And if the hash function is too complex, it will
block page rendering, drain device batteries, etc.
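A toy experiment showing why a short (truncated) digest is breakable when the attacker knows the full hashing scheme: with the digest cut to two bytes, enumerating harmless-looking payload variants finds a collision almost immediately.

```python
import hashlib
import itertools

def truncated_hash(code: str, nbytes: int) -> bytes:
    # Truncating the digest shrinks the attacker's search space
    # exponentially: nbytes=2 leaves only 2**16 possible values.
    return hashlib.sha256(code.encode("utf-8")).digest()[:nbytes]

target = truncated_hash("alert('safe');", 2)  # the whitelisted script

# Enumerate variants of a hostile payload (padding a comment with a
# counter) until one collides with the truncated target.
for n in itertools.count():
    candidate = "doEvil();//%d" % n
    if truncated_hash(candidate, 2) == target:
        break

print(candidate)  # a colliding "malicious" script, found after ~2**16 tries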

So it might be reasonable to make the hash function parameterizable by
complexity, bandwidth cost and time to expire. For example, the corporate
sector or e-commerce might prefer security, while sites targeting handheld
mobile devices might prefer speed and bandwidth.
One way to make hash faking nearly impossible is to use lossy compression
instead of a fixed hash. The server would provide the compression ratio in
the header.

Or even some hybrid approach: the response header provides a code
verification service address (it might be the source server or a third
party) and a security level. When the browser detects a code block, it
parses the JS and evaluates the potential risks (is it purely CSS/DOM
manipulation? any requests? access to form fields?) and either allows it
to run or sends it in some form to the verification server. It can also
raise an alarm when some new type of code is detected. This approach may
result in page locks, but some optimizations could make it viable. For
example, if it's a pure async function call without requests, the browser
engine can run and verify it in a non-blocking way. Or if it's DOM
manipulation code, the browser can run it in parallel with verification
and build an effects frame that is applied only after verification
succeeds. Processor pipelines work in a similar way.
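A toy sketch of that risk-triage step (the category names and regex heuristics are invented for illustration; a real implementation would classify a parsed AST, not raw source text):

```python
import re

# Hypothetical risk heuristics: does the script do more than DOM/CSS work?
RISKY = {
    "network": re.compile(r"\b(XMLHttpRequest|fetch|WebSocket)\b"),
    "eval": re.compile(r"\beval\s*\(|\bFunction\s*\("),
    "form_access": re.compile(r"\.value\b|\bdocument\.forms\b"),
}

def triage(script: str) -> set:
    """Return the set of risk categories the script trips. An empty set
    would mean 'run locally in a non-blocking way'; anything else means
    'defer to the verification server'."""
    return {name for name, pattern in RISKY.items() if pattern.search(script)}

print(triage("document.body.style.color = 'red';"))    # set() -> run locally
print(triage("fetch('/steal?d=' + document.cookie);")) # {'network'} -> defer
```

The pipeline analogy holds: the cheap local classifier is the speculative fast path, and the verification server is the slow path that can squash the speculative result.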

PS. Added as requested.


On 17 June 2013 21:12, Brad Hill <hillbrad@gmail.com> wrote:

> I would suggest that using the RFC 6920 syntax (
> http://tools.ietf.org/html/rfc6920) as a source expression is a good fit
> with the current pattern of using nonces as such.  Nicholas Green had a
> start at a proposal, (
> http://lists.w3.org/Archives/Public/public-webappsec/2013Feb/0052.html)
> but I think it needs to be updated significantly given the direction we've
> taken with nonces.  A partial list of what remains to be done to spec out
> inline hashes is:
>
> Specify which hash algorithms CSP 1.1 would require support for.
>
> Specify whether and to what extent truncation is allowed.
>
> Specify what to do with the content-type attribute of ni: URIs if we allow
> this to be used for non-inline content... or should this be used to
> determine the type (css, js, vbs, etc..) of the inline resource?
>
> Specify an algorithm to exactly determine the bytes-to-be-hashed in a
> reliable and cross-browser manner.  I would suggest that this should be
> defined in terms of the HTML5 parsing algorithm, with some restrictions
> such as requiring any resource employing hash sources declare an explicit
> encoding.
>
> *shudder* Is canonicalization necessary?  I hope not.
>
> Think about and determine what needs to be covered by the
> bytes-to-be-hashed:
>    - should attributes of the script tag be included?  (e.g. whether it is
> javascript, vbscript, ruby or json?)
>
> Specify algorithm agility behavior
>    - what to do if a policy specifies only SHA4 hashes and a user agent
> doesn't understand SHA4?  fail?  fallback to unsafe-inline?
>    - possibly: if a policy specifies SHA1 and SHA3 hashes of the same
> content what should user agent behavior be?  allow all as valid?  only
> trust the strongest hashes it understands how to process in a given policy
> string?  In the composite policy?
>
> -Brad
> <http://tools.ietf.org/html/rfc6920>
>
>
> On Mon, Jun 17, 2013 at 10:56 AM, Bryan McQuade <bmcquade@google.com> wrote:
>
>> Brad and I just chatted offline & concluded that an attack vector does
>> exist for static HTML with unsafe-inline: it would be possible for an
>> inline block with a vulnerability to be used to create another inline
>> script block that executed some exploit.
>>
>>
>>
>> On Mon, Jun 17, 2013 at 1:40 PM, Brad Hill <hillbrad@gmail.com> wrote:
>>
>>> Just to play devil's advocate, if the HTML is truly being served in a
>>> completely static manner, is "unsafe-inline" actually unsafe?  (there
>>> should be no inline-content injection vulnerabilities in such a resource)
>>>
>>>
>>> On Mon, Jun 17, 2013 at 10:36 AM, Bryan McQuade <bmcquade@google.com> wrote:
>>>
>>>> Does CSP support inline scripts and styles in statically served HTML
>>>> files? My impression was that nonce only works for dynamic serving. If
>>>> that's the case then IMO hashes are warranted to support the static case
>>>> alone.
>>>>
>>>>
>>>>
>>>> On Mon, Jun 17, 2013 at 1:22 PM, Joel Weinberger <jww@chromium.org> wrote:
>>>>
>>>>> I'm not particularly against hashes, but I'm naturally hesitant to
>>>>> add more constructs to CSP, especially since the use of nonces seem to
>>>>> completely overlap with the use cases for hashes. I think the concern about
>>>>> nonce abuse as Yoav pointed out are valid concerns, but I'd be hesitant to
>>>>> add a new construct just to cover that particular concern. Put differently,
>>>>> I don't see any dramatically different uses for hashes from nonces.
>>>>> --Joel
>>>>>
>>>>>
>>>>> On Mon, Jun 17, 2013 at 4:09 AM, Yoav Weiss <yoav@yoav.ws> wrote:
>>>>>
>>>>>> +1 for discussing it further.
>>>>>>
>>>>>> The advantages I see:
>>>>>> * The author is authorizing a *specific* script/style and can do so
>>>>>> using static configuration
>>>>>>   - No need for a dynamic backend that changes the nonce for each
>>>>>> request..
>>>>>>   - This can simplify deployment, resulting in more people using it
>>>>>> * I'm afraid of authors abusing nonces, sending the same nonce over
>>>>>> and over as means to "bypass" CSP
>>>>>>   - Offering an alternative to nonce can reduce that risk
>>>>>>
>>>>>> The complications I can think of:
>>>>>> * Make sure that either hashes don't break with small white-spaces
>>>>>> removals, text encoding changes, etc.
>>>>>>   - An alternative is tools that can give authors the resulting hash
>>>>>> for a specific script/style. (e.g. inside the Web inspector tools). That
>>>>>> may be more fragile, though.
>>>>>>
>>>>>> All in all, I think hashes can make it easier for "copy&paste"
>>>>>> authors to integrate CSP. They can also make deployment of third party
>>>>>> scripts easier.
>>>>>>
>>>>>>
>>>>>> On Sat, Jun 15, 2013 at 8:00 AM, Neil Matatall <neilm@twitter.com> wrote:
>>>>>>
>>>>>>> This is the script-hash proposal. I would love it if we discussed
>>>>>>> this more as it has numerous benefits over a nonce as well as complications
>>>>>>> :)
>>>>>>> On Jun 15, 2013 1:11 AM, "Евнгений Яременко" <
>>>>>>> w3techplayground@gmail.com> wrote:
>>>>>>>
>>>>>>>> Is it possible to verify (whitelist) an inline script block via a
>>>>>>>> checksum of its logic (uniform), as an alternative to a nonce? I.e.,
>>>>>>>> send the checksum of the allowed script via a header, and if the
>>>>>>>> inlined script's checksum is the same, it's allowed to execute.
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
Received on Monday, 17 June 2013 19:29:49 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 18:54:33 UTC