Re: Single Trust and Same-Origin Policy v2

(This is super-long, so I'm trimming things I think we agree upon. It's
probably worthwhile to start a separate thread exploring the "Shared Web
Credentials" / "SmartLock" similarities, for example. :))

On Mon, Mar 27, 2017 at 8:51 PM, John Wilander <wilander@apple.com> wrote:
>
> One way to use Single Trust and avoid new UI is to restrict sensitive APIs
> or browser functions to single trust pages. That way the user doesn’t have
> to know anything or make security decisions.
>

That would indeed avoid new UI. How would you suggest that we approach the
decision about whether to expose a feature in a single trust page or not?
Is there some set of criteria you have in mind?

It's not clear to me what single trust is meant to protect, honestly. For
example, https://w3c.github.io/webappsec-secure-contexts/ has the aim of
mitigating the risk of network attackers by ensuring transport security,
ensuring that the code you're executing is coming from an authenticated
origin and is delivered without alteration.

It sounds like single trust is meant to reduce the risk of data
exfiltration, by ensuring that the only endpoints accessible to a given
context are in some way owned by the origin at the top-level. Is that the
goal? If not, perhaps it would be helpful for you to spell out a threat
model more explicitly?


> Examples of what these restrictions could be: Autofill, Credential
> Management, file upload, integration with fingerprint readers or other
> device-specific tokens, and stickiness of granted permissions such as
> camera access.
>

At the risk of missing the forest for the trees, the APIs you list here
worry me. Autofill, credential management, and fingerprint
readers/tokens/webauthn, in particular, all feed a future that's less
reliant on passwords than today. Denying those kinds of features to sites
that load third-party resources seems overly draconian.


> User trust is an enabler, just like most of security. I believe the web is
> struggling in the trust space. Single trust, especially with some backing
> of liability (more on that below), can enable more trustworthy things to be
> done on the web.
>

Reading this, it sounds like a reasonable chunk of the technical
underpinnings of "enabling trustworthy things" might be well-addressed via
isolation mechanisms like those Emily et al. have proposed in
https://wicg.github.io/isolation/. Have you taken a look at her proposal?


>    Today there are various pushes to silo things out of the general web.
> It happens through native apps or WebViews in native apps. To some extent
> you see the mistrust in the use of private browsing or incognito mode. On
> the web there’s this constant worry that “I can’t have this conversation or
> transaction be between me and this one organization I trust."
>

As an aside, WebView has fairly terrible security properties, in that it
allows code injection by the embedding app. SafariViewController and Chrome
Custom Tabs are a significantly better model, though both blur the line
between your presence on the web and your presence in apps.


> Do you imagine single-trust as something users would be exposed to once in
> a blue moon, when a trust decision really matters (the "confidential news
> tips" case), or do you want to encourage it to be a pervasive expectation
> for most sites a user visits?
>
>
> If there’s a user signal tied to it I assume it would be seldom. But it
> could be recognizable – “My healthcare provider’s site should look this
> way.” I know from my work at the bank that recognizing the site is a strong
> trust signal with users
>

I think that intuition is somewhat in contrast with the study results we
discussed above; there's pretty good evidence for the claim that users
don't notice the absence of a security indicator. If the page looks like
their bank, they're pretty likely to treat it as their bank, regardless of
what's in the address bar. Phishing works, in other words.


> and it’s the thing that’s exploited in, for instance, tabnabbing.
>

Does single trust address attacks that reach through `window.opener`? Would
you restrict things like `window.open` or `target="_blank"`?
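For reference, the mitigation available to pages today is opt-in on the sender's side; the attribute values below are the standard ones, while the URL is illustrative:

```html
<!-- target="_blank" gives the opened page a window.opener handle back to
     this tab; a hostile target can use that handle to navigate this tab
     to a look-alike page (tabnabbing). rel="noopener" severs the handle. -->
<a href="https://partner.example.com" target="_blank" rel="noopener">Partner</a>
```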

> If single trust gives real advantage to developers, I worry that it will
> simply devolve into delegating a set of subdomains to a third-party (`
> ads.example.com`, `provider1.ads.example.com`, `provider2.ads.example.com`,
> and so on). Given revenue concerns, the slope doesn't seem that slippery. :)
>
>
> This will probably be the case.
>

From a technical perspective, I think this undermines the justification for
the restrictions above.


> What I mean by responsibility is liability. Today, third-party requests
> are bounced through highly dynamic redirect chains. Yes, the site could
> deploy CSP and get in control of where things are loaded from and that
> would convey some kind of intent or liability.
>

https://w3c.github.io/webappsec-csp/embedded/ might be helpful here.
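For what it's worth, that draft lets an embedder require a policy of the frames it loads, roughly like this (the URL and policy here are illustrative):

```html
<!-- The embedder asserts a policy on the frame via the `csp` attribute. -->
<iframe src="https://ads.example.com/widget" csp="script-src 'self'"></iframe>
```

The browser then forwards the required policy to the embedded origin in an `Embedding-CSP` request header, and the response must enforce that policy (or something stricter) for the load to proceed.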


> But every request going to a domain owned by the same entity is much
> stronger. Example Inc. can’t get away from badness happening from requests
> made to ads.example.com.
>

I'm also not a lawyer, so I won't weigh in too heavily with regard to
regulation. It would surprise me if changing the name of a bad origin from `
bad.com` to `bad.example.com` would have real regulatory impact, but many
things about the law surprise me. :)

Does single trust aim to provide regulatory hooks? Or is it intended to be
a technical barrier to badness?

> This worries me a bit. As Eduardo noted earlier, Google uses subdomains
> explicitly to break the integration between applications living on/under `
> google.com`. I think it's pretty unlikely that Google would opt-into a
> system that gave `accounts.google.com` script access to `youtube.com`
> (and would be even more unlikely to adopt the inverse), for instance.
>
>
> I never intended script access as in execution in e.g. YouTube’s context.
> The access I meant was access to state and simplified messaging.
>

That's good to hear! Still, sharing access to cookies and `localStorage`
would have similar effect for many applications. Docs stores sensitive
document data for offline use, for instance, and wouldn't be terribly
interested in exposing that to `built-by-a-contractor.google.com`.

Do you have specific use-cases in mind for shared storage? It's just not
clear to me that Google's products would opt into this mechanism.

> Do you have some concrete use-cases you'd like to enable here that are
> difficult in the status quo via `postMessage()`, etc? You suggest that we
> could make this "much more elegant", but I don't understand how. :)
>
>
> The cumbersome checking of who to send to and who sent a certain message
> can be simplified for messages across co-owned domains. The developer
> wouldn’t have to check them the same way, unless he/she wants to.
>

Have y'all heard requests from developers in this area? I didn't realize
that `postMessage` origin checks were cumbersome.
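To make sure we're talking about the same boilerplate: a minimal receiver today looks something like the sketch below, where the allowlisted origin is a made-up example.

```javascript
// A typical cross-document messaging receiver: before trusting event.data,
// the page checks event.origin against an allowlist it maintains itself.
const TRUSTED_ORIGIN = 'https://accounts.example.com';

function handleMessage(event) {
  // Drop anything from an origin we don't explicitly trust.
  if (event.origin !== TRUSTED_ORIGIN) return null;
  return event.data;
}

// In a page this is wired up as:
//   window.addEventListener('message', handleMessage);
// and a careful sender names its target explicitly rather than using '*':
//   otherWindow.postMessage('hi', 'https://receiver.example.com');
```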


> Also, we’re subjecting resources to unnecessary third-party treatment. You
> probably know about WebKit’s partitioning of third-party localStorage,
> sessionStorage, IndexedDB, etc. That should not be needed if the two
> domains have the same owner.
>

Whether or not two origins are treated as first- or third-party for the
purposes of a user agent's cookie controls is a very different question (in
my head, anyway!) than whether two origins have access to each other's
data. I can imagine that Google would opt into a system that removed some
of the hoops that `accounts.google.com` jumps through to do SSO across TLDs
(country codes and youtube), even though I'm skeptical above about the
additional implications it sounds like you want to draw from that
affiliation.

I wonder, though, how many companies/users this would really affect. As a
terrible strawman that no one should ever ship: if you hardcoded `google.*`
and `youtube.com` as being "the same", what % of the problem is addressed?

-mike

Received on Tuesday, 28 March 2017 12:29:02 UTC