- From: Adam Langley <agl@google.com>
- Date: Tue, 4 Nov 2014 17:58:52 -0800
- To: Mark Watson <watsonm@netflix.com>
- Cc: Mike West <mkwst@google.com>, Frederik Braun <fbraun@mozilla.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>
On Tue, Nov 4, 2014 at 5:46 PM, Mark Watson <watsonm@netflix.com> wrote:

> I assumed the script was going to provide the hashes, since the content
> would be coming over HTTP.

That's a simple solution, but it wasn't what I had in mind at the time.

Consider an HD movie that's 10GiB in size. Chunks of data cannot be processed before they have been verified, and we don't want to add too much verification latency. So let's posit that 16KiB chunks are used. If all the hashes for those chunks were sent upfront in the HTML, then there are 10 * 2^30 / 2^14 = 655,360 chunks, and at 32 bytes per hash with a 4/3 base64 expansion that's ~27MB of hashes to send to the client before anything else.

With the Merkle tree construction, the hash data can be interleaved in the 10GiB stream so that the hashes are only downloaded as needed. The downside is that you either need a server capable of doing the interleaving dynamically, or you need two copies of the data on disk: one with interleaved hashes and one without. (Unless the data format is sufficiently forgiving that you can get away with serving the interleaved version to clients that aren't doing SRI processing.)

Cheers

AGL
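For concreteness, the back-of-the-envelope above as a quick Python check; the 10GiB / 16KiB / SHA-256 / base64 figures are the ones from the message, and nothing else is assumed:

    GiB, KiB = 2**30, 2**10

    content_size = 10 * GiB     # the 10GiB movie
    chunk_size = 16 * KiB       # 16KiB verification chunks
    hash_size = 32              # SHA-256 digest, in bytes
    base64_expansion = 4 / 3    # base64 encodes 3 bytes as 4 characters

    chunks = content_size // chunk_size   # 655,360 chunks
    upfront_bytes = chunks * hash_size * base64_expansion
    print(f"{chunks} chunks, {upfront_bytes / 2**20:.1f} MiB upfront")
    # -> 655360 chunks, 26.7 MiB upfront (the ~27MB above)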
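The message leaves the Merkle tree construction itself to earlier discussion, so purely as an illustration of the "hashes ride along in the stream" idea, here is a minimal Python sketch of the degenerate (unary) case: a hash chain in which each chunk is followed by the digest covering the next chunk-plus-digest, so only the 32-byte root needs to be known upfront. The function names and the exact chunk||hash layout are assumptions of the sketch, not anything specified in this thread.

    import hashlib

    CHUNK = 16 * 1024   # 16KiB chunks, matching the figures above
    DIGEST = 32         # SHA-256 digest length

    def interleave(data, chunk=CHUNK):
        # Producer side. Working backwards from the last chunk, lay the
        # stream out as c_0 || h_1 || c_1 || h_2 || ... || c_n, where
        # h_i = SHA-256(c_i || h_{i+1}) and the hash after the final
        # chunk is empty. Returns (root, stream); the root h_0 is all
        # the client needs upfront.
        chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        stream, h = b"", b""
        for c in reversed(chunks):
            stream = c + h + stream
            h = hashlib.sha256(c + h).digest()
        return h, stream

    def verify_stream(root, stream, chunk=CHUNK):
        # Consumer side. Yields each chunk only after it has verified,
        # so no unauthenticated byte is ever processed, and only one
        # 32-byte digest of verification state is held at a time.
        expected, pos = root, 0
        while pos < len(stream):
            c = stream[pos:pos + chunk]
            pos += len(c)
            nxt = stream[pos:pos + DIGEST] if pos < len(stream) else b""
            pos += len(nxt)
            if hashlib.sha256(c + nxt).digest() != expected:
                raise ValueError("chunk failed integrity check")
            yield c
            expected = nxt

    # Round trip: only `root` travels out-of-band (e.g. in an integrity
    # attribute); the per-chunk digests ride inside the stream at
    # 32 bytes per 16KiB, ~0.2% overhead instead of ~27MB upfront.
    root, stream = interleave(b"x" * 100_000)
    assert b"".join(verify_stream(root, stream)) == b"x" * 100_000

A chain like this forces strictly sequential consumption; a full Merkle tree would interleave interior-node digests instead, paying slightly more overhead in exchange for log-depth proofs and the ability to seek.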
Received on Wednesday, 5 November 2014 01:59:38 UTC