RE: [Feature Proposal] New attributes "library" and "version" on script tags

From: François REMY <francois.remy.dev@outlook.com>
Date: Sun, 11 Aug 2013 15:15:53 -0700
Message-ID: <DUB406-EAS2738EF89E52D89D4F55AFDAA55A0@phx.gbl>
To: "'Nathanael D. Jones'" <nathanael.jones@gmail.com>
CC: "'Patrick H. Lauke'" <redux@splintered.co.uk>, "'HTML WG LIST'" <public-html@w3.org>, "'Glenn Adams'" <glenn@skynav.com>
> Francois, please do research before spreading FUD
> about hashes; they're already poorly understood
> by the general public. 

Just as a little background: one of the authors of SHA-3 (Gilles Van Assche) gave courses at my university, and I had the opportunity to discuss with him the reasons behind the creation of SHA-3. So please keep your ad hominem attacks for someone else.



> TLDR; We'd have to store 1 trillion petabytes per atom
> on earth to have a 1 in a trillion chance at a random collision
> in a 512-bit space. 

That is just as true as my own statement. The hidden assumption in yours is "RANDOM": you assume SHA-2 will never be successfully cryptanalyzed. I see no reason to believe that; given a sufficient amount of research and creativity, every algorithm gets broken at some point.
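To make that hidden assumption concrete: the quoted "1 in a trillion" figure follows from the birthday bound, which only holds if SHA-2 outputs behave like random values. A minimal sketch (my own numbers, purely illustrative) of how many stored hashes that bound actually requires:

```python
import math

# Birthday bound for a b-bit hash, ASSUMING outputs are effectively random:
# with n distinct inputs, P(collision) ~= n^2 / 2^(b+1).
# Solve for the n that gives a 1-in-a-trillion collision chance at b = 512.
b = 512
p = 1e-12
log2_n = (math.log2(p) + (b + 1)) / 2  # log2 of the required number of hashes
print(round(log2_n, 1))  # 236.6, i.e. roughly 2^236 stored hashes
```

If the "random" assumption falls to cryptanalysis, this bound says nothing; that is the entire disagreement.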



> Since this is an opt-in feature, high-security sites can always
> choose to not use it.... but they'd have to be rather superstitious
> to believe that the first SHA-2 hash collision would be used to
> exploit their site instead of achieving worldwide fame.

To get worldwide fame, you need to disclose your method, right? How long after that before someone targets an actual website with it?



> If a fundamental weakness is ever found in SHA-2, browsers can simply disable the optimization. 

Yes, that's true; you have a point there. But I didn't say SHA-2 was insecure: my proposal uses it as well. My issue with this proposal is that it's not the role of HTML to deal with transport-layer issues. Besides, a browser can never trust a server to compute the hash correctly, because otherwise servers could lie about the hashes to poison the cache...



My real point is: no browser should ever load a resource without asking the server that hosts it for authorization to do so, and for the metadata (CORS, CSP, ...) under which that server operates. I'm not saying SHA-2 is insecure; I'm saying that loading a resource based only on a claim (an attribute on the <script> tag) which may be sent by someone other than the website hosting the resource, and which is subject to XSS attacks, is a bad idea. Additionally, transmitting the hash of every file as part of the URL (or as an attribute in the HTML, or whatever) is also a bad idea. The identifier of a resource should never include content-based information, because there's always a risk of that information being out of sync.

My higher-order belief is that this "super-cache" feature should be built into the HTTP layer and reuse HTTP caching semantics; it should not be defined at the HTML level, because it deals with a transport-layer issue.
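For comparison, an HTTP-layer design would let the hosting server itself declare how its resource may be cached, using the headers that already exist. A purely illustrative fragment (the hash-shaped ETag value is hypothetical, not any standard scheme):

```
HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: public, max-age=31536000, immutable
ETag: "sha256-<hex-digest-of-body>"
```

Here the authorization and the content identifier both come from the server that hosts the file, not from an attribute in someone else's HTML.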
Received on Sunday, 11 August 2013 22:16:31 UTC
