- From: Sigbjørn Vik <sigbjorn@opera.com>
- Date: Tue, 03 Jun 2014 11:32:07 +0200
- To: Mike West <mkwst@google.com>
- CC: Daniel Veditz <dveditz@mozilla.com>, Joel Weinberger <jww@chromium.org>, "Oda, Terri" <terri.oda@intel.com>, Michal Zalewski <lcamtuf@coredump.cx>, Egor Homakov <homakov@gmail.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Eduardo' Vela <evn@google.com>
On 03-Jun-14 10:52, Mike West wrote:
> Reporting may possibly be solved in other ways.
>
> Would you like to put together a more concrete proposal here? I'm
> interested in more detail around what you think we can safely report,
> and how we might go about doing it.

To be honest, not really :P

E.g. a domain could use something similar to crossdomain.xml to set
which other domains reports may be sent to (a rough sketch is appended
below). Webmasters will then get reports from all their expected
domains, and blank data for unexpected domains, which should be enough
for them to detect anomalies.

> I'll put together some sort of sampling proposal for the current spec,
> something along the lines of "User agents MAY choose to send only a
> subset of reports, [insert explanation here]."

I am not entirely convinced of the merit of such a proposal. Even if a
user agent decides once for a given domain not to report, an attacker
might try from a bunch of other (sub)domains until he gets one that
does report. And even if one user agent doesn't report data, another
one will, which might still be sufficient for an attacker. One fancy
technique for brute-forcing logins on sites which only allow three
login attempts is to keep the password static, e.g. "password1", and
brute-force usernames instead - the attacker doesn't always care which
user he exploits. If user agents only send some reports, and are
resistant to attacks from multiple domains, that will reduce the
problem significantly, but it will also reduce the reporting value.

> Sorry, I was unclear. I don't think there are new _forms_ of side
> channel leakage. I do suspect, however, that replacing a document with a
> blank document (or image with a blank image, etc) will be detectable via
> the existing forms of side channel leakage (e.g. filters on an image).

Images can normally be served statically from a site, so they are not
the main problem, and are no different from the existing problem. For
documents, the question is whether the "blank" page can be
distinguished from the normal login page. For a suitable definition of
"blank", this should be hard, and it should be possible for webmasters
to make their login pages look just like the "blank" page to further
minimize that chance.

One of my concerns is that we will open a new hole which webmasters
cannot close. A solution might be to add a CSP HTTP header to
cross-domain requests which could be used for redirection detection
(also sketched below). This would enable webmasters so inclined to
detect such requests and always give the same response. I haven't
analyzed this in detail yet, but combined with unsafe-redirect, this
might be enough to allay my concerns:

* Redirection detection is made easier (but it is relatively easy
  already anyway), yet it will be possible for webmasters to close the
  new hole (just as they can close the old hole).
* User confusion is reduced, and the solution is redirection-safe by
  default.

-- 
Sigbjørn Vik
Opera Software
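To make the crossdomain.xml-style idea above concrete, here is a minimal
sketch of how a user agent might consult such a policy before including
cross-origin detail in a violation report. The policy location, its JSON
shape, and the "allow-reports-to" field are invented for illustration;
neither this thread nor any spec defines them.

# Illustrative sketch only: the policy path, format, and field names
# below are hypothetical, not taken from the thread or any spec.
import json
import urllib.request

POLICY_PATH = "/.well-known/csp-report-policy"  # hypothetical location

def fetch_report_policy(resource_origin):
    """Fetch the resource origin's (hypothetical) report policy, which
    lists the origins allowed to receive detailed reports about it."""
    try:
        with urllib.request.urlopen(resource_origin + POLICY_PATH,
                                    timeout=5) as resp:
            # e.g. {"allow-reports-to": ["https://protected.example"]}
            return json.load(resp)
    except Exception:
        return {"allow-reports-to": []}  # no policy: default to blanking

def build_report(protected_origin, blocked_url, resource_origin):
    """Build a report for protected_origin about a blocked load of
    blocked_url served from resource_origin."""
    policy = fetch_report_policy(resource_origin)
    if protected_origin in policy.get("allow-reports-to", []):
        blocked = blocked_url  # expected pairing: full detail
    else:
        blocked = ""           # unexpected pairing: blank data, which is
                               # still enough to notice an anomaly
    return {"csp-report": {"document-uri": protected_origin,
                           "blocked-uri": blocked}}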
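And a minimal sketch of the cross-domain request header idea, seen from
the server side: a webmaster so inclined checks for the header and gives
one fixed response to every cross-domain request, so a redirect-probing
page learns nothing from the reply. The header name "Sec-CSP-Cross-Origin"
is a placeholder; the mail only proposes "a CSP HTTP header" without
naming or specifying it.

# Illustrative sketch only: "Sec-CSP-Cross-Origin" is an invented name
# for the header the mail suggests user agents could add to
# cross-domain requests.
from http.server import BaseHTTPRequestHandler, HTTPServer

LOGIN_PAGE = b"<html><body>Please log in.</body></html>"

class UniformResponseHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Sec-CSP-Cross-Origin") is not None:
            # Cross-domain request: always the same response, regardless
            # of path or authentication state, so it cannot be
            # distinguished from the "blank"/login page.
            body = LOGIN_PAGE
        else:
            # Same-origin request: normal handling (stubbed out here).
            body = b"<html><body>Application content.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), UniformResponseHandler).serve_forever()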
Received on Tuesday, 3 June 2014 09:32:47 UTC