- From: Tanvi Vyas <tanvi@mozilla.com>
- Date: Mon, 19 Sep 2016 15:11:33 -0700
- To: "Emily Stark (Dunn)" <estark@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>
- Cc: Mike West <mkwst@google.com>, Joel Weinberger <jww@google.com>
- Message-ID: <46d8c549-5f1b-540a-998c-63381ef14827@mozilla.com>
This is great! Thank you for putting it together! I have added some comments on individual sections below.

*Section 2, Examples 2 and 3*

You make a good point about window.opener! In the Containers feature, we check to ensure that the referrer is stripped when opening a link in a different type of container, but I'm not sure we disable the window.opener and window.open() references. I'll check that out and be sure to fix it if we don't.

*Section 2, Example 6* (and Section 4, Policy 2)

If a website says "isolate-me", is the website essentially also setting X-Frame-Options to SAMEORIGIN? In the Containers model (and in Tor's First Party Isolation), there are no framing restrictions. For example, if foo.com told the browser to "isolate-me", any top-level requests made to foo.com would be isolated with their own cookie jar. If foo.com were framed by bar.com, the framed foo.com wouldn't have access to the same set of cookies it would have had as a top-level request. Instead, it would start with a fresh cookie jar that could then be populated.

The above method reduces breakage; perhaps foo.com has unauthenticated content that it wants framed. On the other hand, if framed content did have access to a fresh cookie jar, the user could end up logging into foo.com via the iframe and thereby exposing themselves, despite foo.com's attempt to request isolation. So another option would be to allow framed content, but not give that content access to any cookie jar (i.e., sandboxed frames). What about other types of subresources -- e.g., non-same-origin image or script loads from the isolated domain?

*Section 3, Protection 1*

It is difficult to prevent XSS via navigations without restricting navigations. Artur brought this up to the Containers team as well: if the browser isolates bank.com, a user could still click on a maliciously crafted bank.com link that could send their data to an attacker. Hence, I understand the reason to restrict navigations.
But in practice, this may prompt the user to just copy/paste the link into the URL bar. If they see a link to an interesting article on isolated news.com, they don't want to visit news.com and then search for that article; they want to get to the article immediately. So if clicking the link doesn't work, they are likely to just copy/paste it. I wonder, then, whether restricting navigations is really going to prevent XSS, or just act as an unnecessary hurdle for users to jump through. Perhaps we could brainstorm to see if there are other alternatives.

*Section 3, Protection 5* (and Section 4, Policy 4)

Consider this scenario:

    Top level   - a.com
    Frame[0]    - b.com
    Frame[1]    - c.com
    Frame[1][0] - b.com (c.com creates a grandchild frame to b.com)

Should Frame[0] and Frame[1][0] share cookies, or each have their own isolated cookies? In the Containers approach, they would share cookies. In Tor's First Party Isolation approach, they would have separate cookies.

*Section 4, Policy 1*

If isolation is done properly, is SameSite a given? Is SameSite included as a policy here just to be explicit, or does it provide some additional benefit over the isolation described?

*Section 4, Policy 3*

What is this policy aiming to protect? Is it trying to prevent a third party from navigating the top-level page, or something else?

*Section 4, Policy 6*

What if the new window is same-origin? Should two isolated windows from the same domain have access to each other? Perhaps this should say: "When the isolated origin opens a new window to a different origin, disown/neuter the opened page's window.opener."

*Section 4, Policy 8*

How could this happen? Is this section meant to handle the foo.example.com and bar.example.com case, where one is isolated and the other is not?

As part of our work on Containers, we've had a lot of questions come up about what should and shouldn't be isolated. We try to weigh the benefits and risks when making these decisions, and have changed our minds a number of times.
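The cookie-sharing question in the frame scenario above comes down to how the browser keys its cookie jars. Here is a minimal, hypothetical sketch of the two outcomes; the function names are my own, and modeling the stricter behavior as "key by the full ancestor chain" is an assumption for illustration, not a description of Tor's actual implementation:

```python
# Hypothetical sketch of two cookie-jar keying strategies for the
# frame tree discussed above:
#   Frame[0]   : b.com, ancestors a.com
#   Frame[1][0]: b.com, ancestors a.com > c.com
# All names here are illustrative, not from any browser's source.

def containers_key(frame_origin, ancestors):
    # Containers-style: within a container, the jar is keyed by the
    # frame's own origin only, so both b.com frames share one jar.
    return (frame_origin,)

def ancestor_chain_key(frame_origin, ancestors):
    # Stricter keying (modeled as the full ancestor chain): the two
    # b.com frames get separate jars because their chains differ.
    return (frame_origin, tuple(ancestors))

frame_0 = ("b.com", ["a.com"])
frame_1_0 = ("b.com", ["a.com", "c.com"])

print(containers_key(*frame_0) == containers_key(*frame_1_0))          # True: shared
print(ancestor_chain_key(*frame_0) == ancestor_chain_key(*frame_1_0))  # False: separate
```

The sketch just makes the trade-off concrete: the first keying reduces breakage for legitimately shared embeds, while the second prevents a framed login from linking state across embedding contexts.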
We should be specific about what isolate-me isolates i) always, ii) never, and iii) at the discretion of the user agent. Examples below. (Note that if framing and subresource loads from the isolated site are disabled, as proposed, some of these are not applicable.)

- Permissions
- HSTS
- OCSP responses
- Security exceptions (e.g., certificate overrides)
- Passwords saved by the password manager
- User certificates
- Saved form data
- Cache

Thanks!
~Tanvi

On 9/16/16 8:15 AM, Emily Stark (Dunn) wrote:
> Hi webappsec! Mike, Joel, and I have been discussing an idea for a
> developer-facing opt-in to allow highly security- or privacy-sensitive
> sites to be isolated from other origins on the web.
>
> We wrote up the idea here to explain what we're thinking about, why we
> think it's important, and the major open questions:
> https://mikewest.github.io/isolation/explainer.html
>
> Please read and comment/criticize/etc. Thoughts welcome, either here
> in this thread or as GitHub issues. Especially interested to hear from
> Mozilla folks, as it relates to and is heavily inspired by Containers.
>
> Thanks!
> Emily
Received on Monday, 19 September 2016 22:12:14 UTC