- From: Devdatta Akhawe <dev.akhawe@gmail.com>
- Date: Wed, 8 Jan 2014 16:23:59 -0800
- To: Michal Zalewski <lcamtuf@coredump.cx>
- Cc: Joel Weinberger <jww@chromium.org>, Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Frederik Braun <fbraun@mozilla.com>
Hi

[separate thread for fingerprinting discussion]

> Also, to circle back to the fingerprinting angle: the logged-in state
> aside, let's say that there's a HTML page or a JSON response that is
> mostly static, except for a first name, e-mail address, or a phone
> number somewhere in the body. Further, for the sake of simplicity,
> let's say that it's cacheable on the client.

Exactly. That is definitely a concern. Personally, I think one possible
mitigation is this:

The web platform guarantees (well, tries to guarantee) that HTML, JSON,
and similar files are secret by default, while scripts/images/CSS have
the weird "run but not read" semantics. Maybe integrity verification
should follow the same split: sub-resource integrity verification would
only work directly for files whose explicit MIME type marks them as
JS/CSS/images, etc. With our advocacy of code/data separation in CSP, I
imagine there could be lots of (small) JSON files with secret data that
we don't want leaking.

For all other cases (JSON/HTML files), we can require the relevant
Access-Control-Allow-Origin header whitelisting the current document
origin (a concrete sketch is in the PS below).

The more extreme version of this is to require the CORS headers for all
resources that go through integrity verification. But I believe that is
just throwing the baby out with the bathwater.

What do you think?

Thanks
Dev
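
PS: To make the MIME-type idea concrete, here is a rough sketch. The
integrity attribute syntax below is illustrative only (the actual draft
syntax may well differ), and cdn.example.com is a made-up host:

    <!-- OK: the response carries an explicit JS MIME type, so the
         "run but not read" rules already apply and an integrity
         check leaks nothing new -->
    <script src="https://cdn.example.com/jquery.js"
            integrity="sha256-[base64 digest]"></script>

    <!-- Not OK by default: an HTML/JSON response is secret by
         default, so a bare integrity check on it would become a
         guessing oracle for the secret bytes in the body -->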
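And for the JSON/HTML case, the CORS opt-in would just be the server
whitelisting the document's origin in the usual way (hypothetical hosts
again):

    GET /profile.json HTTP/1.1
    Host: api.example.org
    Origin: https://app.example.org

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: https://app.example.org
    Content-Type: application/json

With that header present, the response is readable by the document
anyway, so letting it pass or fail an integrity check reveals nothing
the page could not already learn.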
Received on Thursday, 9 January 2014 00:24:46 UTC