Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

On Dec 17, 2009, at 9:15 AM, Kenton Varda wrote:

>
>
> On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak <mjs@apple.com>  
> wrote:
>
> I'm not saying that Alice should be restricted in who she shares the  
> feed with. Just that Bob's site should not be able to automatically  
> grant Charlie's site access to the feed without Alice explicitly  
> granting that permission. Many sites that use workarounds (e.g.  
> server-to-server communication combined with client-side form posts  
> and redirects) to share their data today would like grants to be to  
> another site, not to another site plus any third party site that the  
> second site chooses to share with.
>
> OK, I'm sure that this has been said before, because it is critical  
> to the capability argument:
>
> If Bob can access the data, and Bob can talk to Charlie *in any way  
> at all*, then it *is not possible* to prevent Bob from granting  
> access to Charlie, because Bob can always just serve as a proxy for  
> Charlie's requests.

Indeed, you can always act as a proxy and directly share the data  
rather than sharing the token. However, this is not the same as the  
ability to share the token anonymously. Here are a few important  
differences:

- As Ian mentioned, in the case of some kinds of resources, one of the  
service provider's goals may be to prevent abuse of their bandwidth.
- Service providers often like to know, for the sake of record-keeping,
who is using their data, even if they have no interest in restricting
it. Often, just creating an incentive to identify yourself and ask for
separate authorization is enough, even if proxy workarounds are
possible. The point below gives one such incentive.
- Proxying to subvert CORS would only work while the user is logged  
into both the service provider and the actually authorized service  
consumer who is acting as a proxy, and only in the user's browser.  
This limits the window in which to get data. Meanwhile, a capability  
token sent anonymously could be used at any time, even when the user  
is not logged in. The ability to get snapshots of the user's data may  
not be seen to be as great a risk as ongoing on-demand access.
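
To make the proxying point concrete, here is a rough sketch (entirely
my own construction; the endpoint name and URL are made up) of Bob's
server relaying a feed it can reach via a capability URL. Note that
this is the anonymous-token case; with CORS plus cookies, equivalent
relaying would have to go through the user's logged-in browser, per
the last point above:

  // Bob's server (Node 18+ for global fetch); sketch only.
  import express from "express";

  const app = express();

  // Capability URL the provider granted to Bob (placeholder value).
  const FEED_CAPABILITY_URL =
    "https://provider.example/feed?cap=TOKEN-GRANTED-TO-BOB";

  // Anyone who can reach this endpoint effectively gets the feed, but
  // only for as long as Bob chooses to keep relaying it.
  app.get("/relay-feed", async (_req, res) => {
    const upstream = await fetch(FEED_CAPABILITY_URL);
    res.type("application/json").send(await upstream.text());
  });

  app.listen(8080);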


I will also add that users may want to revoke capabilities they grant.  
This is likely to be presented to the user as a whitelist of sites to  
which they granted access, whether the actual mechanism is modifying  
Origin checks, or mapping the site to a capability token and disabling  
it.
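
As a rough illustration of that second mechanism (again a sketch of my
own, with made-up names), the provider could keep a per-user whitelist
mapping each granted site to its capability token, so revoking a site
just disables its token:

  interface Grant { site: string; token: string; revoked: boolean; }

  // Per-user whitelist of sites the user has granted access to.
  const grantsByUser = new Map<string, Grant[]>();

  function revokeSite(user: string, site: string): void {
    for (const grant of grantsByUser.get(user) ?? []) {
      if (grant.site === site) grant.revoked = true; // token stops working
    }
  }

  function isTokenStillValid(user: string, token: string): boolean {
    return (grantsByUser.get(user) ?? [])
      .some(g => g.token === token && !g.revoked);
  }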

>
> What CORS does do is make it so that Bob (and Charlie, if he is  
> proxying through Bob) can only access the resource while Alice has  
> his site open in her browser.  The same can be achieved with UM by  
> generating a new URL for each visit, and revoking it as soon as  
> Alice browses away.

How would the service provider generate a new URL for each visit to  
Bob's site? How would the service provider even know whether it's Bob  
asking for an update, or whether the user is logged in? If the  
communication is via UM, the service provider has no way to know. If  
it's via a hidden form post, then you are just using forms to fake the  
effect of CORS. Note also that such elaborations increase the
complexity of the protocol. To enable permissions to be revoked in a granular
way, you must vend different capability tokens per site. Given that,  
it seems only sensible to check that the token is actually being used  
by the party to which it was granted.
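
For concreteness, vending a different token per site and remembering
which origin each one was granted to might look roughly like this (my
own sketch; the names are illustrative):

  import { randomBytes } from "crypto";

  // token -> origin of the site it was granted to
  const tokenGrants = new Map<string, string>();

  function vendToken(consumerOrigin: string): string {
    const token = randomBytes(32).toString("hex"); // unguessable, per-site
    tokenGrants.set(token, consumerOrigin);
    return token;
  }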

>
>
>> Perhaps, though, you're suggesting that users should be able to  
>> edit the whitelist that is applied to their data, in order to  
>> provide access to new sites?  But this seems cumbersome to me --  
>> both to the user, who needs to manage this whitelist, and to app  
>> developers, who can no longer delegate work to other hosts.
>
> An automated permission grant system that vends unguessable URLs  
> could just as easily manage the whitelist. It is true that app  
> developers could not unilaterally grant access to other origins, but  
> this is actually a desired property for many service providers.  
> Saying that this feature is "cumbersome" for the service consumer  
> does not lead the service provider to desire it any less.
>
> You're right, the same UI I want for hooking up capabilities could  
> also update the whitelist.  But I still don't see where this is  
> useful, given the above.
>
>
>> (Of course, if you want to know the origin for non-security reasons  
>> (e.g. to log usage for statistical purposes, or deal with  
>> compatibility issues) then you can have the origin voluntarily  
>> identify itself, just as browsers voluntarily identify themselves.)
>>
>> 2) It provides additional defense if the "unguessable" URL is  
>> guessed, either because of the many natural ways URLs tend to leak,  
>> or because of a mistake in the algorithm that generates unguessable  
>> URLs, or because either Site B or Site A unintentionally discloses
>> it to a third party. By using an unguessable URL *and* checking  
>> Origin and Cookie, Site A would still have some protection in this  
>> case. An attacker would have to not only break the security of the  
>> secret token but would also need to manage a "confused deputy" type  
>> attack against Site B, which has legitimate access, thus greatly  
>> narrowing the scope of the vulnerability. You would need two  
>> separate vulnerabilities, and an attacker with the opportunity to  
>> exploit both, in order to be vulnerable to unauthorized access.
>>
>> Given the right UI, a capability URL should be no more leak-prone  
>> than a cookie.  Sure, we don't want users to ever actually see  
>> capability URLs since they might then choose to copy/paste them  
>> into who knows where, but it's quite possible to hide the details  
>> behind the scenes, just like we hide cookie data.
>
> Hiding capability URLs completely from the user would require some  
> mechanism that has not yet been proposed in a concrete form. So far  
> the ways to vend the URL to the service consumer that have been  
> proposed include user copy/paste, and cross-site form submission  
> with redirects, both of which expose the URL. However, accidental  
> disclosure by the user is not the only risk.
>
>> So, I don't think this "additional defense" is really worth much,  
>> unless you are arguing that cookies are insecure for the same  
>> reasons.
>
> Sites do, on occasion, make mistakes in the algorithms for  
> generating session cookies. Or for that matter for CSRF-prevention  
> secret tokens. Cookies have some protections that explicit secret  
> tokens do not. First, there is no need to ever embed them in a page.  
> This means they are not prone to be revealed to attacks that can  
> observe the page content but not intercept network traffic or inject  
> script. CSS injection is an example of such an attack vector. Secret  
> tokens are often embedded via <input type="hidden"> or in URI- 
> containing attributes on elements in the DOM.
>
> Second, cookies can further be marked HttpOnly, which makes them
> invisible to script; in such cases even a full XSS exploit cannot
> steal the cookie (short of some additional exploit to get the victim
> server to reflect it back).
>
> Finally, session cookies can be transparently reissued as often as
> the origin server cares to, thus limiting the time window for a
> potential attack based on stealing them.
>
> Now, similar protections could be provided for capability tokens.  
> It's hard to evaluate that kind of idea in the abstract, without a  
> concrete proposal. But I have a hard time seeing how to do it other  
> than by the browser adding tokens to requests passively, and  
> collecting them from the service provider passively. However, that  
> would create a form of ambient authority and thus presumably would  
> miss the point.
>
> Sites also have a stronger incentive to protect their own cookies  
> (to defend their own resources) than they do to protect capability  
> tokens received from a third party (which merely protect some third  
> party's resource).
>
> I agree you have valid points here, but they are implementation  
> issues that are fundamentally solvable with some engineering.  I  
> would not allow secret tokens to appear in page content, but instead  
> always fetch them using XHR or some such, so sniffing them would  
> require scripting.  I could even imagine engineering a way to send  
> the tokens in HTTP headers such that scripts cannot actually read  
> the values.

Before they become implementation issues, they are design issues. No  
one has yet proposed an actual design that would give the same level  
of protection to tokens as we have for cookies. If we had such a  
design on the table we could evaluate its merits. There's not much we  
can do with just a claim that it's possible. For reasons stated above,  
it's not obvious to me that it actually is possible without creating a  
new form of ambient authority.

Note that not allowing tokens to appear in page content (I'm not
really sure how this could be enforced) would make them unusable for
use cases like XBL or extended access to cross-site <img> or <video>,
since in such cases a URL in the page content is the only way to make
the request.
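
For example (with a purely illustrative URL), loading an image through
a capability URL necessarily puts the token into the page content,
where anything that can read the DOM can see it:

  const img = document.createElement("img");
  // The token is now part of the page content, readable via img.src.
  img.src = "https://provider.example/photo?cap=SECRET-TOKEN";
  document.body.appendChild(img);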

> Servers can re-issue cookies, but they can revoke and re-issue  
> capabilities too with the right design, so I don't think that's a  
> real benefit.

Updating cookies is much easier though - the server can do it on every  
page load. In any case, my goal was not to compare the relative merits  
of cookies and capability tokens.

My goal was merely to argue that adding an origin/cookie check to a  
secret-token-based mechanism adds meaningful defense in depth,  
compared to just using any of the proposed protocols over UM. I  
believe my argument holds. If the secret token scheme has any weakness  
whatsoever, whether in generation of the tokens, or in accidental  
disclosure by the user or the service consumer, origin checks provide  
an orthogonal defense that must be breached separately. This greatly  
reduces the attack surface. While this may not provide any additional  
security in theory, where we can assume the shared secret is generated  
and managed correctly, it does provide additional security in the real  
world, where people make mistakes.
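
For concreteness, the kind of check I have in mind is roughly the
following (a sketch of my own, not a concrete proposal; the parameter
names are made up). Even a valid token is rejected unless the
request's Origin matches the origin the token was vended to:

  // tokenGrants maps each vended token to the origin it was granted to.
  function isAuthorized(
    token: string | undefined,
    requestOrigin: string | undefined,
    tokenGrants: Map<string, string>
  ): boolean {
    if (!token || !requestOrigin) return false;
    return tokenGrants.get(token) === requestOrigin;
  }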

Regards,
Maciej

Received on Thursday, 17 December 2009 18:09:59 UTC