Re: Isolate-Me explainer

We had a pretty encouraging discussion around the document Emily put
together at TPAC. Minutes here:
https://www.w3.org/2016/09/21-isolation-minutes.html.

-mike

On Wed, Sep 21, 2016 at 2:17 PM, Krzysztof Kotowicz <kkotowicz@gmail.com>
wrote:

>
>
> 2016-09-21 2:19 GMT+02:00 Emily Stark (Dunn) <estark@google.com>:
>
>> Hi Tanvi, thanks for the detailed feedback! Thoughts inline.
>>
>> On Tue, Sep 20, 2016 at 3:04 AM, Artur Janc <aaj@google.com> wrote:
>>
>>> On Tue, Sep 20, 2016 at 12:11 AM, Tanvi Vyas <tanvi@mozilla.com> wrote:
>>>
>>>> This is great!  Thank you for putting it together!  I have added some
>>>> comments on individual sections below.
>>>>
>>>> *Section 2, Example 2 and 3*
>>>> You make a good point about window.opener!  In the Containers feature,
>>>> we check to ensure that the referrer is stripped when opening a link in a
>>>> different type of container, but I'm not sure we disable the
>>>> window.opener and window.open() references.  I'll check that out and be sure to
>>>> fix it if we don't.
>>>>
>>>> *Section 2, Example 6* (and Section 4, Policy 2)
>>>> If a website says "isolate-me", is the website essentially also setting
>>>> X-Frame-Options to SAMEORIGIN?  In the Containers model (and in Tor's
>>>> First Party Isolation), there are no framing restrictions.
>>>>
>>>> For example, if foo.com told the browser to "isolate-me", any top
>>>> level requests made to foo.com would be isolated with their own cookie
>>>> jar.  If foo.com was framed by bar.com, then framed foo.com wouldn't
>>>> have access to the same set of cookies they would have had as a top level
>>>> request.  Instead, they would start with a fresh cookie jar, that could
>>>> then be populated.
>>>>
>>>> The above method reduces breakage; perhaps foo.com has unauthenticated
>>>> content that they want framed.  On the other hand, if framed content did
>>>> have access to a fresh cookie jar, the user could end up logging into
>>>> foo.com via the iframe and then exposing themselves, despite foo.com's
>>>> attempt to request isolation.  So another option would be to allow framed
>>>> content, but not give that content access to any cookie jars (i.e.
>>>> sandboxed frames).
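>>>>
>>>> As a rough sketch, that "no cookie jar" framing is close to what the
>>>> sandbox attribute gives today when 'allow-same-origin' is omitted
>>>> (the URLs here are just placeholders):
>>>>
>>>>   const frame = document.createElement('iframe');
>>>>   // No 'allow-same-origin': the framed document runs in an opaque
>>>>   // origin, with no access to foo.com's cookies or localStorage.
>>>>   frame.sandbox.add('allow-scripts');
>>>>   frame.src = 'https://foo.com/widget';
>>>>   document.body.appendChild(frame);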
>>>>
>>>
>> I was thinking that an isolated site should be treated as if it had
>> X-Frame-Options set to SAMEORIGIN. However, I could also get behind your
>> suggestion that an isolated site can be framed cross-origin but then does
>> not get access to cookie jars... with the caveat that it should also not
>> get access to localStorage, etc. (I'd like it if Isolate-Me protected sites
>> that store auth tokens in localStorage just as well as it protects sites
>> that store auth tokens in cookies.)
>>
>> [Note: I changed my mind about this below, at the very end of my email.]
>>
>>
>>>
>>>> What about other types of subresources - ex: non-same origin image or
>>>> script loads from the isolated domain?
>>>>
>>>
>> If all cookies on isolated origins are SameSite (and I think that for an
>> isolated origin, all cookies should be SameSite by default), then I think
>> we can safely allow these types of requests; do you agree? (Discussed more
>> below.)
>>
>> If we decide that isolated origins should be allowed to have non-SameSite
>> cookies... then we probably need to rethink this.
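>>
>> As a rough sketch of that SameSite-by-default behavior from the user
>> agent's side (names are hypothetical, and real browsers compare
>> registrable domains rather than full origins):
>>
>>   type Origin = string;  // e.g. "https://bank.example"
>>   interface Cookie { name: string; value: string; sameSite: boolean; }
>>
>>   // Would this cookie accompany a request from `initiator` to `target`?
>>   function shouldAttachCookie(initiator: Origin, target: Origin,
>>                               cookie: Cookie,
>>                               targetIsIsolated: boolean): boolean {
>>     // For isolated origins, treat every cookie as SameSite,
>>     // whatever flags the server actually set on it.
>>     const effectiveSameSite = targetIsIsolated || cookie.sameSite;
>>     return !effectiveSameSite || initiator === target;
>>   }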
>>
>>
>>>
>>>> *Section 3, Protection 1*
>>>> It is difficult to prevent XSS via navigations without restricting
>>>> navigations.  Artur brought this up to the Containers team as well; if the
>>>> browser isolates bank.com, a user could still click on a maliciously
>>>> crafted bank.com link that could send their data to an attacker.
>>>> Hence, I understand the reason to restrict navigations.  But in practice,
>>>> this may prompt the user to just copy/paste the link into the URL bar.  If
>>>> they see a link to an interesting article on isolated news.com, they
>>>> don't want to visit news.com and then search for that article; they
>>>> want to get to the article immediately.  So if clicking the link doesn't
>>>> work, they are likely to just copy/paste it.  So I wonder if restricting
>>>> navigations is really going to prevent XSS, or just act as an unnecessary
>>>> hurdle for users to jump through.  Perhaps we could brainstorm to see if
>>>> there are other alternatives.
>>>>
>>>
>>> The solution we've been talking about is to make navigation opt-in for
>>> the application. In that model, a user entering a link in the URL bar
>>> wouldn't navigate directly to that URL, but would instead tell the
>>> application the desired destination in a way that would require explicit
>>> agreement from the application (it could happen via a client-side
>>> message, in an HTTP header as Craig is suggesting below, or in some other
>>> way). The server could then have logic to decide if the request should be
>>> allowed, i.e. if it matches some application-dependent criteria then it
>>> would accept the navigation.
>>>
>>> There are two difficulties here:
>>> 1) Developers could shoot themselves in the foot by allowing all
>>> navigations, removing the security benefit of isolation. This would be a
>>> bit better than the current state because the developer would have to make
>>> two mistakes (have the XSS/CSRF bug in the first place, and write code to
>>> allow external navigations to arbitrary parts of their app), but it would
>>> still be possible to shoot yourself in the foot.
>>>
>>> This could likely be solved by adding constraints in the API which sends
>>> the "Navigate-Me" messages. For example, maybe the browser only allows a
>>> list of hardcoded URLs defined by the isolated app, or allows only paths
>>> (no query parameters), or something more reasonable.
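>>>
>>> As a sketch of what such a constrained check might look like (the API
>>> and the entry-point list are purely hypothetical):
>>>
>>>   // Entry points the isolated app has declared to the browser.
>>>   const allowedEntryPoints = ['/inbox', '/settings', '/article'];
>>>
>>>   // Run by the browser before honoring an external navigation.
>>>   function acceptNavigation(requested: URL): boolean {
>>>     // Paths only -- no query parameters or fragments -- so a
>>>     // reflected payload in the URL can't ride along.
>>>     return requested.search === '' && requested.hash === '' &&
>>>            allowedEntryPoints.includes(requested.pathname);
>>>   }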
>>>
>>> 2) This could be used to break hotlinking (which I think was raised as a
>>> concern with EPR). I believe Mike and Emily's solution to this -- which
>>> seems reasonable to me -- is to make isolation sufficiently powerful that
>>> an application which wants to break hotlinking to its resources would have
>>> to agree to a lot of other constraints on its behavior, making opting into
>>> isolation unattractive. Since breaking hotlinking is already pretty easy,
>>> this would "protect" isolation from being used for this purpose, simply
>>> because it would require more work on the part of the developer.
>>>
>>
>> Oh, good point about the temptation to copy/paste, Tanvi; I hadn't
>> thought of that. I had previously been mildly opposed to making navigations
>> opt-in, because of the first difficulty that Artur listed: if the developer
>> can make a mistake and introduce an XSS, why do we trust the developer to
>> correctly allow/disallow navigation requests? However, given the copy/paste
>> risk and Artur's idea about making the opt-in API constrained enough to be
>> as safe as possible, I think this might be the best option we have so far.
>>
>
> I think Isolate-Me needs to allow deep-linking one way or another; there
> are a lot of applications that would definitely benefit from isolation
> properties (e.g. CMSes, or any sort of management panel), but they need to
> be deep-linkable from e.g. an email message or other pages. Given how
> prevalent reflected XSS from URL parameters is, we could mitigate this by
> e.g. stripping query parameters and the fragment, but I feel this would be
> blocking for a lot of applications (and the possible workarounds developed
> would likely re-enable XSS anyway).
>
> I guess an opt-in mechanism to enable navigational requests is the way to
> go then. There should be a possibility of eventually navigating to a full
> URL, if the devs so wish. There aren't many possibilities for how to make
> it opt-in:
>
> a) trigger navigation to / and send a custom postMessage.
> b) trigger navigation to / and dispatch a new type of event.
> c) rely on the isolated site's Foreign Fetch service worker
> (https://github.com/w3c/ServiceWorker/blob/master/foreign_fetch_explainer.md)
> d) something completely new (e.g. based on HTTP request header)
>
> Of those, c) looks the most interesting, as the isolated site must specify
> valid origins, and the API is expressive enough to describe various
> policies.
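>
> Roughly, following the Foreign Fetch explainer (the API is still
> experimental, and the scopes/origins below are made up):
>
>   // In the isolated site's service worker.
>   self.addEventListener('install', (event: any) => {
>     event.registerForeignFetch({
>       scopes: ['/article/'],              // which URLs to expose
>       origins: ['https://mail.example'],  // who may request them
>     });
>   });
>
>   self.addEventListener('foreignfetch', (event: any) => {
>     // The worker sees the requesting origin and can apply
>     // application-specific policy before responding.
>     event.respondWith(
>       fetch(event.request).then(response => ({ response }))
>     );
>   });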
>
>>
>>
>>>
>>> Cheers,
>>> -A
>>>
>>>
>>>>
>>>> *Section 3, Protection 5* (and Section 4, policy 4)
>>>> Consider this scenario:
>>>> Top level - a.com
>>>> Frame[0] - b.com
>>>> Frame[1] - c.com
>>>> Frame[1][0] - c.com creates a grandchild frame to b.com
>>>>
>>>> Should Frame[0] and Frame[1][0] share cookies?  Or each have their own
>>>> isolated cookies?  In the Containers approach, they would share cookies.
>>>> In Tor's First Party Isolation approach, they would have separate cookies.
>>>>
>>>
>> I'm thinking that, to get the security properties we want, Frame[0] and
>> Frame[1][0] should *not* share cookies. The double-keyed storage is to
>> address Section 2 item 7: evil.com shouldn't be able to attack the
>> isolated site's third-party dependencies, which may have ambient authority
>> granted by the isolated site. Suppose that upon logging into a.com, a.com
>> communicates with b.com in some way that sets a b.com cookie
>> authenticating the user. I think that a.com framing c.com should not
>> allow c.com to attack the b.com cookie, just as a.com framing c.com
>> should not allow c.com to attack any of a.com's cookies.
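>>
>> A sketch of storage keying that yields that separation, keying a
>> frame's cookie jar by its whole ancestor chain rather than just the
>> top-level origin (names are hypothetical):
>>
>>   type Origin = string;
>>
>>   function cookieJarKey(ancestors: Origin[], frame: Origin): string {
>>     return [...ancestors, frame].join(' > ');
>>   }
>>
>>   cookieJarKey(['https://a.com'], 'https://b.com');
>>   // -> "https://a.com > https://b.com"                      (Frame[0])
>>   cookieJarKey(['https://a.com', 'https://c.com'], 'https://b.com');
>>   // -> "https://a.com > https://c.com > https://b.com"   (Frame[1][0])
>>
>> Keying only by (top-level origin, frame origin) would give both frames
>> the same jar, so the intermediate c.com has to figure into the key.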
>>
>>
>>>
>>>> *Section 4, Policy 1*
>>>> If isolation is done properly, is SameSite a given?  Is SameSite
>>>> included as a policy here just to be explicit, or does SameSite provide
>>>> some additional benefits over the isolation described?
>>>>
>>>
>> Hrm... I'm not sure I understand this question. By "done properly", do
>> you mean if the user agent implements it properly? My intention was that
>> when an app isolates itself, the browser automatically treats all its
>> cookies as if the SameSite flag were set. The goal is to address Section 2,
>> item 1 and item 6: a malicious site performing XSS, CSRF, HEIST, etc. by
>> loading authenticated cross-origin resources from the isolated site.
>>
>>
>>>
>>>> *Section 4, Policy 3*
>>>> What is this policy aiming to protect?  Is it trying to prevent a third
>>>> party from navigating the top level page, or something else?
>>>>
>>>
>> I was thinking of vulnerable postMessage APIs. Artur had also pointed me
>> to some other examples which I forgot to reference in the doc: for example,
>> a cross-origin site could traverse and count frames in the frame tree and
>> potentially learn something useful from that information.
>>
>>
>>>
>>>> *Section 4, Policy 6*
>>>> What if the new window is same origin?  Should two isolated windows
>>>> from the same domain have access to each other?  Perhaps this should say:
>>>> "When the isolated origin opens a new window to a different origin,
>>>> disown/neuter the opened page’s window.opener."
>>>>
>>>
>> Ah, thanks, just fixed that.
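>>
>> For cross-origin opens, this matches what pages can already request
>> explicitly today, e.g.:
>>
>>   // The opened page sees window.opener === null.
>>   window.open('https://other.example/', '_blank', 'noopener');
>>
>> The isolation policy would just make that the default.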
>>
>>
>>>
>>>> *Section 4, Policy 8*
>>>> How could this happen?  Is this section meant to handle the
>>>> foo.example.com and bar.example.com case, where one is isolated and
>>>> the other is not?
>>>>
>>>
>> Yep, that's right. Or foo.example.com is isolated and example.com is
>> not.
>>
>>
>>>
>>>> As part of our work on Containers, we've had a lot of questions come up
>>>> about what should and shouldn't be isolated.  We try to weigh the benefits
>>>> and risks when making these decisions, and have changed our minds a number
>>>> of times.  We should be specific about what isolate-me isolates: i) always,
>>>> ii) never, iii) at the discretion of the user agent.  Examples below.
>>>> (Note that if framing and subresource loads from the isolated site are
>>>> disabled, as proposed, some of these are not applicable):
>>>> Permissions
>>>> HSTS
>>>> OCSP Responses
>>>> Security Exceptions (ex: cert overrides)
>>>> Passwords saved by the Password Manager
>>>> User Certificates
>>>> Saved Form Data
>>>> Cache
>>>>
>>>
>> Now that I see this long scary list written out, I'm leaning back towards
>> restricting cross-origin framing entirely (that is, Isolate-Me implies
>> X-Frame-Options: SAMEORIGIN). Up above I mentioned that I could get behind
>> allowing cross-origin framing of isolated sites as long as there's no
>> access to localStorage or cookie jars. But now I'm thinking that the framed
>> isolated content also shouldn't have access to permissions or saved form
>> data or any number of other things that I can't think of right now. As you
>> noted, it sure would simplify things to just not allow framing isolated
>> sites. If a site wants to opt in to Isolate-Me, it's probably easy enough
>> for them to host any unauthenticated content that they want to be
>> frame-able on a separate origin; that's probably the least burdensome thing
>> that they have to do to make sure that their site still works after turning
>> on isolation.
>>
>>
>>>
>>>> Thanks!
>>>>
>>>> ~Tanvi
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 9/16/16 8:15 AM, Emily Stark (Dunn) wrote:
>>>>
>>>> Hi webappsec! Mike, Joel, and I have been discussing an idea for a
>>>> developer facing opt-in to allow highly security- or privacy-sensitive
>>>> sites to be isolated from other origins on the web.
>>>>
>>>> We wrote up the idea here to explain what we're thinking about, why we
>>>> think it's important, and the major open questions:
>>>> https://mikewest.github.io/isolation/explainer.html
>>>>
>>>> Please read and comment/criticize/etc. Thoughts welcome, either here in
>>>> this thread or as GitHub issues. Especially interested to hear from Mozilla
>>>> folks, as it relates to and is heavily inspired by Containers.
>>>>
>>>> Thanks!
>>>> Emily
>>>>
>>>>
>>>>
>>>
>>
>
>
> --
> Best regards,
> Krzysztof Kotowicz
> koto@ / Krzysztof Kotowicz / Google
>

Received on Wednesday, 21 September 2016 13:28:46 UTC