
Re: comment on distributed capabilities

From: Mark S. Miller <erights@google.com>
Date: Fri, 12 Feb 2010 19:08:47 -0800
Message-ID: <4d2fac901002121908ic50a9a1t1c8239f9721f3db5@mail.gmail.com>
To: Jonathan Rees <jar@creativecommons.org>
Cc: noah_mendelsohn@us.ibm.com, www-tag@w3.org
When it comes to matters of terminology and history, a certain amount of
pedantry is called for. Please excuse any excess pedantry in the following.

On Fri, Feb 12, 2010 at 7:14 AM, Jonathan Rees <jar@creativecommons.org> wrote:

> I wanted to recognize and address a point you raised on the call
> yesterday, which is that "distributed capabilities" in the web-key
> sense are not the same as "distributed capabilities" in the sense used
> in some capability systems from the 1970s and 1980s. This is true. I
> think you were referring to systems in which each node in the network
> has a trusted capability kernel that all other nodes can trust.

Hold on, what systems are we talking about? The first distributed cap system
I am aware of is Jed Donnelley's DCCS from 1976 <
http://tools.ietf.org/html/rfc712>. DCCS, as well as Jed's 1979 system <
http://www.webstart.com/jed/papers/Components/>, made only minimal
assumptions of mutual trust, awaiting only a better understanding of
then-emerging modern cryptography to repair even those minimal trust
assumptions. IIRC, the only way the '79 system assumed mutual
trust was on the authentication side: when machine B dereferences a
capability to object Carol hosted on machine C, is the machine B contacts as
the alleged host of Carol in fact machine C, or an imposter?

Amoeba <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=> and
Secure Distributed Mach <
http://portal.acm.org/citation.cfm?doid=1013812.18202> are both from 1986.
Amoeba had the same weakness as Jed's systems -- only on the authentication
side, not on the authorization side. IIRC, Secure Distributed Mach had
proper cryptographic checks at both ends. It may have been the first system
to do so.

Tyler's web-keys are explicitly not a true cryptographic capability system,
for the same reason the first three of the above systems are not -- web-keys
follow capability logic on the authorization side but not on the
authentication side. Rather, web-keys rely on https authentication, and are
therefore vulnerable to the long list of CAs implicitly "trusted" by all
browsers. A cryptographic capability should be self-authenticating -- it
should provide all the information its wielder needs to know (beyond static
protocol definitions) to verify that it is speaking to the right party --
one authorized to host the object it designates. Tyler has separately coined
the term YURL for a URL with this property <
http://www.waterken.com/dev/YURL/>. A URL with both web-key authorization
nature and YURL authentication nature may well be a web-cap. We haven't been
vocal about YURLs or web-caps because we're picking our battles. The
authorization side of the web is much more clearly broken, and much more in
need of repair, than the authentication side.
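To make the distinction concrete, here is a minimal sketch in Python of what
such a combined "web-cap" might look like. The function names, the fragment
encoding, and the fingerprint scheme are all my own illustrative assumptions,
not Tyler's actual web-key or YURL formats; the point is only that the URL
itself carries both an unguessable secret (authorization) and a fingerprint
of the host's public key (authentication), so the wielder needs no CA list:

```python
import hashlib
from urllib.parse import urlsplit, parse_qs

def make_web_cap(host, swiss_number, host_public_key):
    """Build a hypothetical web-cap: web-key-style authorization (the
    unguessable swiss number) plus YURL-style authentication (a fingerprint
    of the host's public key). Encoding is illustrative only."""
    fingerprint = hashlib.sha256(host_public_key).hexdigest()[:32]
    # Both values go in the fragment so they are not sent in the HTTP
    # request line or leaked via Referer, as web-keys do with the secret.
    return f"https://{host}/#y={fingerprint}&s={swiss_number}"

def check_host_key(web_cap, presented_public_key):
    """Self-authentication: only the URL itself is needed to decide whether
    the presented key belongs to the party authorized to host the object."""
    fragment = urlsplit(web_cap).fragment
    expected = parse_qs(fragment)["y"][0]
    actual = hashlib.sha256(presented_public_key).hexdigest()[:32]
    return expected == actual

key = b"...example public key bytes..."
cap = make_web_cap("example.org", "mhbqcmmva5ja3", key)
assert check_host_key(cap, key)           # right host key: accepted
assert not check_host_key(cap, b"other")  # imposter's key: rejected
```

With plain https web-keys, the `y` check above is instead delegated to
whichever of the browser's many CAs vouches for the host -- which is exactly
the vulnerability described in the paragraph above.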

If I may, I suggest that people read chapter 7 of <
http://erights.org/talks/thesis/> for a compact self-contained description
of a simple full distributed cryptographic capability protocol, as well as a
description of how such protocols provide a distributed analog of pointer
safety.
> In
> this situation it is possible to some extent to limit copying by
> treating the right to copy as one of the rights controlled by
> capability discipline.

Hydra is the only old capability system I remember having copy-limited
capabilities motivated by security, but I would not be surprised if there
were others. (Mach's receive ports were non-copyable, but IIUC this was for
kernel scheduling reasons rather than security.) SPKI is a recent
cryptographic capability-like distributed security architecture with a
no-copy bit.

> Clearly this regime cannot apply in a web
> context where such general trust isn't available: an attacker can just
> run a kernel that ignores the directive not to copy.
> Even where you have trusted kernels, copy prevention may help prevent
> accidental leaks, but it can't really prevent attacks. (Suppose A
> shares a copy-limited capability X with an attacker B. B can share X
> with crony C just by setting up a proxy Y that forwards messages to
> and from X, and sharing Y with C.) Limits on copying only have
> significant effect in the total absence of side communication channels
> that would let B and C communicate, and that kind of confinement is
> too... confining to be useful in the computing contexts we've been
> talking about. Limits on the ability to copy individual capabilities
> have fallen out of favor in the capability community, with attention
> instead being shifted to more general mechanisms for leak control.

Yes. All modern capability systems I am aware of reject copy-limits for this
reason.
However, please do not confuse confinement with copy limitations! The first
OS to demonstrate a practical solution to the overt confinement problem was
KeyKOS. KeyKOS was inspired by Hydra but rejected copy limits for the
reasons you state above. Mutually suspicious machines on open networks
cannot be confined, so confinement is not a property of possible protocols
among such machines. However, within each of these machines, confinement can
be quite useful. For example, Caja supports internal confinement of objects
within a JavaScript context. Thus, I can give a Caja gadget authored by you
access to something that I do not wish you to have access to. Were Caja's
internal capabilities based on unguessable information, rather than
unforgeable references, such confinement would be impossible. Amoeba and
Jed's second system made this mistake. Likewise, Tyler's web-key-based
waterken web server can confine the Joe-E objects it hosts (Joe-E is an
object-capability subset of Java), but one waterken server can do nothing to
confine another.
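The difference between unguessable information and unforgeable references can
be shown in a few lines. This is an illustrative Python analogy of my own,
not Caja's actual mechanism: a confined gadget can only reach the outside
world through a channel that carries data. If capabilities are strings, they
*are* data and leak through that channel; if they are object references, the
same leak conveys nothing:

```python
import secrets

REGISTRY = {}  # string-key capabilities: any holder of the string has the power

def make_string_cap(target):
    key = secrets.token_hex(16)
    REGISTRY[key] = target
    return key  # the power travels as plain data

class RefCap:
    """Unforgeable-reference capability: the power is the object identity
    itself and cannot be reconstituted from any string."""
    def __init__(self, target):
        self._target = target
    def invoke(self):
        return self._target()

data_channel = []  # the only path out of the confined gadget

def confined_gadget(cap, leak):
    leak(str(cap))  # the gadget tries to exfiltrate its capability

secret = lambda: "the secret"

# String capability: the leaked string *is* the capability.
s = make_string_cap(secret)
confined_gadget(s, data_channel.append)
assert REGISTRY[data_channel[0]]() == "the secret"  # the crony wins

# Reference capability: str(cap) is inert data; the crony cannot turn it
# back into the object, so the leak conveys no authority.
r = RefCap(secret)
confined_gadget(r, data_channel.append)
assert data_channel[1] not in REGISTRY
```

This is the sense in which basing internal capabilities on unguessable
information, as Amoeba and Jed's second system did, forecloses confinement.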

Section 11.5 of <http://erights.org/talks/thesis/>, "The Limits of
Decentralized Access Control", gives more depth on
this confine-ability difference as well as explaining some other differences
between what is possible within a machine (or among a set of mutually
trusting machines) vs the limits of what is possible between mutually
suspicious machines on open networks. Note that these limits are not limits
only of capability-based decentralized access control. They are limits on
possible decentralized access control BY ANY PROTOCOL, under these mutual
suspicion constraints.

(Several historical systems do claim to provide decentralized confinement
under these constraints. Every one I am aware of is either confused or meant
something else by these terms. I would be happy to provide details.)

> Because of the de-emphasis of confinement and copy limitation, many
> people have been happy to drop the distinction between traditional
> capabilities and string-representable keys and use "capability"
> generically.

I think this gets the history backwards. Jed's first system, DCCS, had local
unforgeability and, with a proper crypto base, would have had distributed
unguessability. Although DCCS could have provided local confinement, it
wasn't until KeyKOS used local unforgeable capabilities for confinement in
the mid 80s that anyone could have known that. Not appreciating the power of
local unforgeability, Jed's 1979 system used string-representable
capabilities everywhere, and so lost the possibility of confinement that lay
unrecognized in his 1976 system.

I have read many excellent papers from the Amoeba group. However, I don't
recall any of them showing any awareness that confinement would have been
locally possible, within a machine, if they had made a different
architectural choice. (There is also an intriguing category breaker -- <
http://www.csse.monash.edu.au/~rdp/research/Papers/apwcs.pdf> -- which
achieves local confinement using what seem to be string-representable
capabilities. But the trick it uses does not work between mutually
suspicious machines.)
> We could argue about what is the proper application of the term
> "capability" but that's not important. I don't think anyone is trying
> to pull a fast one by using the word "capability", but if it's a
> sticking point for you we can agree to say that secret URIs such as
> web-keys are used analogously with capabilities (as opposed to being
> capabilities), or that the secret URI pattern is analogous to the
> capability pattern (as opposed to being an instance of it).

I do think words, history, and distinctions are very important, as there is
an access control literature in which many of these questions are not new.
As I explained above, web-keys are indeed explicitly not capabilities. They
are capability-like on their authorization side, *assuming* that the existing
CA-vulnerable system of https authentication is adequate.

> The question of how easy it is to copy a key, either by mistake or by
> attack or a combination, is relevant and we'll continue talking about
> it.


Received on Saturday, 13 February 2010 03:09:22 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 22:56:33 UTC