Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak <mjs@apple.com> wrote:

>
> On Dec 17, 2009, at 1:42 AM, Kenton Varda wrote:
>
> Somehow I suspect all this has been said many times before...
>
> On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak <mjs@apple.com> wrote:
>
>> CORS would provide at least two benefits, using the exact protocol you'd
>> use with UM:
>>
>> 1) It lets you know what site is sending the request; with UM there is no
>> way for the receiving server to tell. Site A may wish to enforce a policy
>> that any other site that wants access has to request it individually. But
>> with UM, there is no way to prevent Site B from sharing its unguessable URL
>> to the resource with another site, or even to tell that Site B has done so.
>> (I've seen papers cited that claim you can do proper logging using an
>> underlying capabilities mechanism if you do the right things on top of it,
>> but Tyler's protocol does not do that; and it is not at all obvious to me
>> how to extend such results to tokens passed over the network, where you
>> can't count on a type system to enforce integrity at the endpoints like you
>> can with a system all running in a single object capability language.)
>>
>
> IMO, this isn't useful information.  If Alice is a user at my site, and I
> hand Alice a capability to access her data from my site, it should not make
> a difference to me whether Alice chooses to access that data using Bob's
> site or Charlie's site, any more than it makes a difference to me whether
> Alice chooses to use Firefox or Chrome.  Saying that Alice is only allowed
> to access her data using Bob's site but not Charlie's is analogous to saying
> she can only use approved browsers.  This provides a small amount of
> "security" at the price of greatly annoying users and stifling innovation
> (think mash-ups).
>
>
> I'm not saying that Alice should be restricted in who she shares the feed
> with. Just that Bob's site should not be able to automatically grant
> Charlie's site access to the feed without Alice explicitly granting that
> permission. Many sites that today use workarounds (e.g. server-to-server
> communication combined with client-side form posts and redirects) to share
> their data would like a grant to go to another site alone, not to another
> site plus any third party that the second site chooses to share with.
>

OK, I'm sure that this has been said before, because it is critical to the
capability argument:

If Bob can access the data, and Bob can talk to Charlie *in any way at all*,
then it *is not possible* to prevent Bob from granting access to Charlie,
because Bob can always just serve as a proxy for Charlie's requests.
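
To make that concrete, here is a rough sketch of Bob-as-proxy (all names
and URLs are made up, and I'm using Node-style TypeScript purely for
illustration):

    // Bob's server. BOBS_CAPABILITY_URL stands in for whatever
    // authority Bob legitimately holds -- an unguessable URL under
    // UM, or a CORS-whitelisted resource fetched server-side.
    import * as http from "http";

    const BOBS_CAPABILITY_URL = "https://site-a.example/feed?key=SECRET";

    http.createServer(async (req, res) => {
      // Charlie asks Bob; Bob fetches with his own authority and
      // relays the bytes. Site A only ever sees requests from Bob.
      const upstream = await fetch(BOBS_CAPABILITY_URL);
      res.writeHead(upstream.status);
      res.end(await upstream.text());
    }).listen(8080);

No browser-enforced policy on Charlie's side changes what this code can do.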

What CORS does do is ensure that Bob (and Charlie, if he is proxying
through Bob) can only access the resource while Alice has Bob's site open
in her browser.  The same can be achieved with UM by generating a new URL
for each visit and revoking it as soon as Alice browses away.
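
A minimal sketch of that per-visit scheme on Site A's side (again,
hypothetical names):

    import { randomBytes } from "crypto";

    const liveTokens = new Set<string>();

    // Mint a fresh capability URL when Alice's visit starts.
    function mintCapabilityUrl(): string {
      const token = randomBytes(32).toString("hex"); // unguessable
      liveTokens.add(token);
      return "https://site-a.example/feed/" + token;
    }

    // Revoke it when she browses away (logout, session expiry).
    function revoke(token: string): void {
      liveTokens.delete(token);
    }

    // Every request to the feed checks the token is still live.
    function isAuthorized(token: string): boolean {
      return liveTokens.has(token);
    }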


>
> Perhaps, though, you're suggesting that users should be able to edit the
> whitelist that is applied to their data, in order to provide access to new
> sites?  But this seems cumbersome to me -- both to the user, who needs to
> manage this whitelist, and to app developers, who can no longer delegate
> work to other hosts.
>
>
> An automated permission grant system that vends unguessable URLs could just
> as easily manage the whitelist. It is true that app developers could not
> unilaterally grant access to other origins, but this is actually a desired
> property for many service providers. Saying that this feature is
> "cumbersome" for the service consumer does not lead the service provider to
> desire it any less.
>

You're right, the same UI I want for hooking up capabilities could also
update the whitelist.  But I still don't see how this is useful, given the
above.
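
To illustrate: the same user-approved grant gesture could drive both
mechanisms at once (hypothetical names again):

    import { randomBytes } from "crypto";

    const corsWhitelist = new Set<string>();    // origins Site A accepts
    const capabilityTokens = new Set<string>(); // live UM tokens

    // One user-approved grant can vend both kinds of access.
    function grantAccess(requestingOrigin: string): string {
      corsWhitelist.add(requestingOrigin);      // CORS-style grant
      const token = randomBytes(32).toString("hex");
      capabilityTokens.add(token);              // UM-style grant
      return "https://site-a.example/feed/" + token;
    }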


>
> (Of course, if you want to know the origin for non-security reasons (e.g.
> to log usage for statistical purposes, or deal with compatibility issues)
> then you can have the origin voluntarily identify itself, just as browsers
> voluntarily identify themselves.)
>
>
>> 2) It provides additional defense if the "unguessable" URL is guessed,
>> either because of the many natural ways URLs tend to leak, or because of a
>> mistake in the algorithm that generates unguessable URLs, or because either
>> Site B or Site A unintentionally disclose it to a third party. By using an
>> unguessable URL *and* checking Origin and Cookie, Site A would still have
>> some protection in this case. An attacker would have to not only break the
>> security of the secret token but would also need to manage a "confused
>> deputy" type attack against Site B, which has legitimate access, thus
>> greatly narrowing the scope of the vulnerability. You would need two
>> separate vulnerabilities, and an attacker with the opportunity to exploit
>> both, in order to be vulnerable to unauthorized access.
>>
>
> Given the right UI, a capability URL should be no more leak-prone than a
> cookie.  Sure, we don't want users to ever actually see capability URLs
> since they might then choose to copy/paste them into who knows where, but
> it's quite possible to hide the details behind the scenes, just like we hide
> cookie data.
>
>
> Hiding capability URLs completely from the user would require some
> mechanism that has not yet been proposed in a concrete form. So far the ways
> to vend the URL to the service consumer that have been proposed include user
> copy/paste, and cross-site form submission with redirects, both of which
> expose the URL. However, accidental disclosure by the user is not the only
> risk.
>
> So, I don't think this "additional defense" is really worth much, unless
> you are arguing that cookies are insecure for the same reasons.
>
>
> Sites do, on occasion, make mistakes in the algorithms for generating
> session cookies. Or for that matter for CSRF-prevention secret tokens.
> Cookies have some protections that explicit secret tokens do not. First,
> there is no need to ever embed them in a page. This means they are not prone
> to being revealed by attacks that can observe the page content but not
> intercept network traffic or inject script. CSS injection is an example of
> such an attack vector. Secret tokens are often embedded via <input
> type="hidden"> or in URI-containing attributes on elements in the DOM.
>
> Second, cookies can further be marked HttpOnly, which makes them invisible
> to script; in that case even a full XSS exploit cannot steal the cookie
> (short of some additional exploit to get the victim server to reflect it
> back).
>
> Finally, session cookies can be transparently reissued as often as the origin
> server cares to, thus limiting the time window for a potential attack based
> on stealing them.
>
> Now, similar protections could be provided for capability tokens. It's hard
> to evaluate that kind of idea in the abstract, without a concrete proposal.
> But I have a hard time seeing how to do it other than by the browser adding
> tokens to requests passively, and collecting them from the service provider
> passively. However, that would create a form of ambient authority and thus
> presumably would miss the point.
>
> Sites also have a stronger incentive to protect their own cookies (to
> defend their own resources) than they do to protect capability tokens
> received from a third party (which merely protect some third party's
> resource).
>

I agree you have valid points here, but they are implementation issues that
are fundamentally solvable with some engineering.  I would not allow secret
tokens to appear in page content, but instead always fetch them using XHR or
some such, so sniffing them would require scripting.  I could even imagine
engineering a way to send the tokens in HTTP headers such that scripts
cannot actually read the values.
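
For the XHR part, roughly like this on the client (the endpoint name is
made up):

    // The markup never contains the token; script fetches it on
    // demand from a same-origin, cookie-authenticated endpoint. An
    // attacker who can read page content (e.g. via CSS injection)
    // but cannot run script never sees it.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/session/capability-token");
    xhr.onload = () => {
      const token = xhr.responseText;
      // Use the token for subsequent requests; never write it into
      // the document where content-only attacks could read it.
    };
    xhr.send();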

Servers can re-issue cookies, but with the right design they can revoke and
re-issue capabilities too, so I don't think that's a real advantage for
cookies.
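
For instance, the server could rotate the capability on every use, much as
it transparently reissues a session cookie (a sketch, with hypothetical
names):

    import { randomBytes } from "crypto";

    const capabilities = new Map<string, string>(); // token -> resource

    // Retire the presented token and hand back a fresh one, so a
    // stolen token has only a narrow window in which it still works.
    function rotate(oldToken: string): string | null {
      const resource = capabilities.get(oldToken);
      if (resource === undefined) return null; // unknown or revoked
      capabilities.delete(oldToken);           // revoke
      const fresh = randomBytes(32).toString("hex");
      capabilities.set(fresh, resource);       // re-issue
      return fresh;
    }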
