[SRI] review note 2

http://w3c.github.io/webappsec/specs/subresourceintegrity/

> 3.2.2 Priority

> User agents must provide a mechanism of determining the relative priority
> of two hash functions and

> return the empty string if the priority is equal.

What's the justification for this?

What should this function do for "hash functions" of unknown strength?

Is this function accessible to content, or is it an algorithm detail?

> That is, if a user agent implemented a function like
> getPrioritizedHashFunction(a, b)
> it would return the hash function the user agent considers the most
> collision-resistant.
> For example, getPrioritizedHashFunction('SHA-256', 'SHA-512')
> would return 'SHA-512' and
> getPrioritizedHashFunction('SHA-256', 'SHA-256')
> would return the empty string.

Could there be a designated "weakest hash function" value that unknown
hash functions are mapped to?

Or at least you should say that unknown hash functions should be rejected
along with known weak functions...
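
For concreteness, here's a sketch of the rejection behaviour I have in
mind (the allowlist, the ordering, and the error handling are my
assumptions, not spec text):

    // Sketch: priority via an explicit allowlist. Anything not on the
    // list (unknown, or known-weak like MD5 and SHA-1) is rejected
    // outright rather than being given an implicit priority.
    const HASH_PRIORITY: { [alg: string]: number } = {
      'SHA-256': 1,
      'SHA-384': 2,
      'SHA-512': 3,
    };

    function getPrioritizedHashFunction(a: string, b: string): string {
      const pa = HASH_PRIORITY[a];
      const pb = HASH_PRIORITY[b];
      if (pa === undefined || pb === undefined) {
        throw new Error('unsupported hash function');
      }
      if (pa === pb) return '';   // equal priority: empty string
      return pa > pb ? a : b;     // the more collision-resistant one
    }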

> 3.3.5 Does resource match metadataList?
> If resource’s URL’s scheme is about, return true.

The about: scheme isn't always trusted content...

> 3.6 The integrity attribute

> In order for user agents to remain fully forwards compatible with future
> options, the user agent must ignore all unrecognized option-expressions

Missing period.
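
For what it's worth, ignoring unrecognized options could look something
like this (the token shape with "?"-separated option-expressions follows
the draft's grammar; the parsing details are my own sketch):

    // Sketch: parse one hash-with-options token, dropping any
    // option-expressions ("?..." suffixes) we don't recognize instead
    // of invalidating the whole token.
    function parseHashWithOptions(token: string):
        { alg: string; digest: string } | null {
      const [hashExpr] = token.split('?'); // options silently ignored
      const match = /^(sha256|sha384|sha512)-(.+)$/.exec(hashExpr);
      if (match === null) return null;     // unknown algorithm: skip token
      return { alg: match[1], digest: match[2] };
    }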

> 3.8 Handling integrity violations

> NOTE
> On a failed integrity check, an error event is thrown.
> Developers wishing to provide a canonical fallback resource (e.g. a
> resource not served from a CDN, perhaps from a secondary,
> trusted, but slower source) can catch this error event and provide an
> appropriate handler to replace the failed resource with a different one.

Shouldn't this mention how one can identify the node that failed the
check? (A forward link is fine.)
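
For what it's worth, the error event is dispatched at the element whose
check failed, so event.target should be enough to identify it. A sketch
(the URLs are placeholders and the digest is elided):

    // Sketch: recognize the failing node via event.target and swap in
    // a fallback copy from a trusted (if slower) origin.
    const script = document.createElement('script');
    script.src = 'https://cdn.example.com/library.min.js';
    script.integrity = 'sha256-...'; // real digest elided
    script.crossOrigin = 'anonymous';
    script.addEventListener('error', (event) => {
      const failed = event.target as HTMLScriptElement;
      const fallback = document.createElement('script');
      fallback.src = 'https://example.com/library.min.js'; // canonical copy
      failed.replaceWith(fallback);
    });
    document.head.append(script);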

> 5.1 Non-secure contexts remain non-secure

> Integrity metadata delivered to a context that is not a secure context,
> such as an only protects an origin

This sentence doesn't parse; "such as an" is left dangling.

> against a compromise of the server where an external resources is hosted.

> 5.2 Hash collision attacks

> Digests are only as strong as the hash function used to generate them.
> User agents should refuse to support known-weak hashing functions like
> MD5 or SHA-1,
> and should restrict supported hashing functions to those known to be
> collision-resistant.
> At the time of writing, SHA-256 is a good baseline.
> Moreover, user agents should re-evaluate their supported hash functions
> on a regular basis,
> and deprecate support for those functions shown to be insecure.

I'd insert "should" after "and" in the last sentence; you used "should"
more than once earlier, so the parallelism fits.

Received on Wednesday, 27 May 2015 01:21:25 UTC