Hygiene for a computing pandemic: separation of VCs and ocaps/zcaps

Hello!  Sorry, this one is a bit long.  But this is important.  I spent
all day writing this, at a very busy time for me.  The topic couldn't be
more serious, as far as I'm concerned, if our work is truly to be
successful.

Unsurprisingly, as original co-author of the zcap-ld spec (along with
Mark Miller, and based on ideas from various places but also Alan Karp's
prior work), I Have Opinions (TM).  In fact, I have already written out
some of those opinions at length in the zcap-ld spec:

  https://w3c-ccg.github.io/zcap-ld/#relationship-to-vc

I'll rehash some of it though and try to make clear three things:

 - You *could* implement zcap-ld on top of VCs...

 - However, you're actually squishing together what should be a
   separation of concerns in a way that will become *unhygienic*.  Like
   a lack of proper biological hygiene, the result is sickness in our
   computing systems.  Unexpected bugs, poor behavior, unintended
   effects, many trips to the doctor, occasionally death (probably more
   commonly here in computing systems form, but hey, as computing
   systems are integrated with everything, there too).

 - The observation of "these things seem so similar though!" is true,
   but you can already make that claim even if you're just looking at
   the linked data proofs layer.  VCs and zcap-ld diverge from there
   for two very separate purposes: what is said, and what is done.

I say separation of concerns here is a hygiene issue, and if that isn't
clear to you, I bet you've heard a phrase before akin to "you don't X
where you eat."  You can fill in the X.  Today, with germ theory, we
understand why we need to have a barrier between these activities, even
if biologically they seem to be highly related.

Note that my comments on this being akin to hygiene and germ theory are
not an ignorant pot-shot; germ theory only seems "obvious" because of
the present era we live in.  At the time that germ theory was
developed, the *leading surgeons of the day* were strongly against
it... and handwashing between surgeries!  Was it because they were
fools?  I don't think so... again, they were the experts of the day, and
*I believe that they were the experts of their time*, as in, the
surgeons with most active-in-the-field-experience.  So I praise Ignaz
Semmelweis for his work to turn the tide, though I also understand,
having heard more of the history, why what feels obvious now was so
strongly opposed by the experts of the day.  A nice podcast summary here
for the curious:

  https://www.iheart.com/podcast/stuff-you-missed-in-history-cl-21124503/episode/ignaz-semmelweis-and-the-war-on-29118226/

I am now trying to convince you too of the importance of this hygiene
lesson.  Right now hygiene should be on the mind, in the midst of our
global biological pandemic... however, I am going to argue that
we are in the midst of a computing pandemic too.  The difference is that
the people on this list are in a position to make a major change in one
of them.  I'd argue it's our duty to understand and improve things for
users everywhere.

So let's think structurally: what *is* the separation of concerns here?
We are really caught between two worlds:

 - Action/command (ocaps/zcaps (same thing for our purposes here), functions)
 - Identity/description/judgement (VCs, DIDs, etc)

These worlds cannot be fully separated (neither can our biological food
input/output mechanisms, and the ecosystem surrounding them)... and one
approach is to fully integrate them.  A common version of this is Access
Control List systems, but the dangers of this are well understood, and
nicely explained by the paper "ACLs Don't":

  http://waterken.sourceforge.net/aclsdont/current.pdf

(Ocap-based operating systems are not widely available, but are known not
to have this problem.  The ideas I explore in the rest of this post do
work there too though!  Your whole OS could be safer!)

Here identity and action/command are directly tied together.  Am I
allowed to run this program?  Am I allowed to access this file?  Given
the way that we talk about things, those might actually be the questions
you or I ask out loud, so it might be natural to make the mistake that
we should design our computers this way.  But here we run into the same
problem that results in me not being able to trust my computer:
every program runs *as me*.  Solitaire, which should be a mundane
program on my computer, runs *as me*... whether buggy or maliciously
programmed, running Solitaire means that it can take my encryption keys
and upload them somewhere else, cryptolocker my computer, etc etc etc.

Linked data systems famously already follow the pattern, as TimBL says,
that "anyone can say anything about anything".  However, RDF triples and
quads couldn't actually tell you *who* was saying the thing!  That's
actually exactly what VCs provide, a way of knowing *who* said this set
of things.  A powerful addition, and one that surprisingly did not show
up earlier in RDF land (how did it take this long?).

So naturally, why not say, "Chris can run solitaire", and "Chris can
open this file"?  But then we end up in the world of ambient authority
and confused deputies, and suddenly solitaire runs as me, and suddenly
my whole computer is crypto-locker'ed, and I'm very, very sad.


So the alternative is the "object capability paradigm", and you've
probably seen it encoded in zcap-ld form (also known as "certificate
form" of ocaps), but I'm going to argue: that's not the best way to
think about it.  And it may be leading you down the wrong path.  So
let's look at three others, and then with a better mental model for
ocaps, we'll look at how to combine both worlds.


Baby steps towards ocaps/zcaps: car key metaphor
================================================

Okay, first let's start with version 0.5 of the idea, which is really
only half of a good example... but it does actually resemble zcap-ld a
lot.  It's still the wrong "mindset" for totally good hygiene, even
with zcap-ld, but it's a start.

So okay, here it is: the car key metaphor!  Instead of saying, "Chris
can drive the car!", and having the car scan my face and determine
"yes, this is Chris, you may begin driving"... well, for a long time we
didn't even have that option, but we did have car keys.
Aha... *possession and invocation of the car key in the ignition* is
what lets me drive the car!

The usual ocap thing is that we follow it up with yeah you can delegate
by copying a car key, yeah you can attenuate with a valet key, yeah you
can revoke (Alice can press a button and blow a fuse in the key and now
Bob can't drive anymore), blah blah blah... it gets one really good
thing right: we've now shifted to the perspective of "possession is
authority to act" instead of "identity is authority to act".


Teenager strides towards ocaps/zcaps: Ocap URIs
===============================================

One problem with the above metaphor is that we've done what ocap people
call "separating designation from authority".  Sometimes there's good
reason to do this, but even when you do it's best to conceptually frame
yourself within the realm of combining both.  Thankfully there's a
simple example of this: "Object Capability URIs":

  https://foo.example/obj/wy46gxdweyqn5m7ntzwlxinhdia2jjanlsh37gxklwhfec7yxqr4k3qd

The most famous example at the moment is Google Docs' share links, but there are
plenty of other examples.  This isn't quite perfect because ocaps, in
theory, are "unforgeable": if you don't possess them, you can't just
make them up.  Here we're relying on a large random number/string... so
we're down to mere "unguessability" (but hey, cryptographic keys work
that way too).  But, it's a good enough substitute: you shouldn't be
able to access the resource at that URL without the URL itself.

It's also not a perfect metaphor, due to quirks of the way the web has
rolled out: URIs can be shoulder-surfed, leak in logs, etc etc.  So
it's not perfect (though that might be different if browsers had been
designed to encapsulate ocap URIs from the get-go, etc etc... not the
world we live in though).  But it sets us up with the right paradigm
of reference-oriented authorization, with a nice example that doesn't
separate designation from authority.

By the way, something that's very close to ocaps is OAuth 2.X bearer
tokens.  Here we have separated designation (the URI) and authority (the
bearer token).  You could combine them though, and I wrote up an example
of how to do that:

  https://github.com/cwebber/rwot9-prague/blob/bearcaps/topics-and-advance-readings/bearcaps.md

I'm not arguing in this particular post that you want to do it, more
that it's possible in multiple contexts to squish together the
designation and authority as one package-deal in your mind.
I encourage you to do the same when thinking about zcap-ld capabilities,
even though they don't separate them there either (for good reasons).
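
For flavor, here's a rough sketch of the bearcap idea, with designation
and authority traveling as one value (the exact layout below is
illustrative; see the writeup linked above for the real proposal):

```javascript
// Illustrative sketch: a "bearcap"-style URI bundles designation (where
// the resource lives) and authority (the bearer token) into one value,
// so you can't accidentally hand out one without the other.
function makeBearcap(url, token) {
  return `bear:?u=${encodeURIComponent(url)}&t=${token}`;
}

function parseBearcap(uri) {
  const params = new URLSearchParams(uri.slice("bear:?".length));
  return { url: params.get("u"), token: params.get("t") };
}
```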

Still, we're not at the ideal path yet... but it turns out you already
know it.  You already think it's the best way to program, even.


Adult stroll: Ocaps/zcaps are just argument passing!
====================================================

Ocaps are just argument passing between functions (and object methods)!
OMG!  Think of it this way... imagine that your functions and objects
were born in a cold, sterile world where they had logic, yes, but no
access to interesting things like the network or the filesystem.

Think about it: how do your functions/objects usually get ahold of
things?  One of three ways, probably:

 - Either they were born with it (or Maybe It's Maybelline... sorry);
   ie, they were born with access to it in their scope

 - Or they didn't have it initially, but it was passed in to them
   *as an argument* (!!!)

 - Or they actually *made the thing*... constructed a new object or
   procedure within their scope.  Now they have the reference!  They can
   choose whether or not to pass it to anyone else.  Except, they can
   only cause effects using references they already have to things which
   cause effects.  (They can combine cool effect-causing things together
   in new ways though.)

For a brief moment, imagine the game of solitaire I was playing was a
function... and let's imagine it's born in a cold, sandboxed
environment... no access to the filesystem, no access to the network,
nothin'.

If I just run:

  solitaire()

It can't pwn my filesystem.  It can't steal my dogecoins.  It can't
cryptolocker my filesystem.

However, it actually can't do much of anything interesting... we can't
even see what it's doing!  So maybe we make a window with a canvas for
it to draw to, and we also give it access to the mouse and keyboard,
but only when the window has focus.

  [canvas, canvasIO] = makeCanvasWindow();

Okay now let's pass that to solitaire:

  solitaire(canvas, canvasIO);

Now Solitaire can read and write from our canvas, but it still can't pwn
our filesystem or access the network.

But maybe we want to allow it to access one *specific* file, the high
scores file.  We could make a capability to just that file:

  highScoreFile = openFile("/home/cwebber/.solitaire.txt")

Now let's pass it in:

  solitaire(canvas, canvasIO, highScoreFile);

Cool.  What can solitaire do?

 - It can display to *its own window*
 - It can access keyboard and mouse input *while its window is focused*
   (but can't keylog us while using other programs!)
 - It can read or write to the high scores input file

Are you spooked yet?  Well, I'm not!  Because those are a reasonable
set of things for it to do!  But it can't upload my private keys, it
can't cryptolocker me... etc etc etc.  This is what you hear ocap
people throw around as "the Principle of Least Authority" (POLA).
That's why it's cool.
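
The snippets above combine into a runnable sketch (makeCanvasWindow and
openFile are stand-ins from the prose, not a real API):

```javascript
// Sketch of the prose above: solitaire is born with *no* authority;
// everything it can affect arrives as an argument.
function makeCanvasWindow() {
  const pixels = [];
  const canvas = { draw: (what) => pixels.push(what), pixels };
  const canvasIO = { onKey: (handler) => { /* only fires when focused */ } };
  return [canvas, canvasIO];
}

function openFile(path) {
  // A capability to one specific file -- not the whole filesystem.
  let contents = "";
  return {
    read: () => contents,
    write: (text) => { contents = text; },
  };
}

function solitaire(canvas, canvasIO, highScoreFile) {
  canvas.draw("ace of spades");
  highScoreFile.write("cwebber: 9001");
  // There is simply no reference in scope to the network or to other
  // files, so "pwn the filesystem" is not an expressible program here.
}
```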

Ocaps are already good programming hygiene.  They resemble the way you
like to program: passing around arguments, etc.

We can layer this same idea:

 - Why not treat programs like functions/objects?  Programs start with
   no authority for any interesting effects, and we explicitly pass in
   as much filesystem/network/display/input access as we deem
   appropriate.  This is how ocap filesystems work!

 - Why not treat modules like functions/objects?  Instead of them being
   able to "reach out" for filesystem, network access, etc, they have to
   have such access *passed in*.  See also Kate Sills' article:
   "POLA Would Have Prevented the Event-Stream Incident":
     https://medium.com/agoric/pola-would-have-prevented-the-event-stream-incident-45653ecbda99
     
Lexical scope is now your programming paradigm.  Beautiful!  Lambda: the
Ultimate Security Paradigm!  (More on this in Jonathan Rees' incredibly
fun "A Security Kernel Based on the Lambda Calculus" where Bart, Lisa,
and Marge try to figure out if they can safely compute together:)

  http://mumble.net/~jar/pubs/secureos/secureos.html 


ACLs Considered Harmful
=======================

Ocaps *really are* "good hygiene" for programming, and I'll demonstrate
it to you by showing you that they're the opposite of what you consider
"bad hygiene": mutating global variables, using GOTOs everywhere, etc.

"GOTOs Considered Harmful" therefore pointed part of the way towards a
more hygienic future.  Ocaps are, "more of that please!"

Reference-oriented / possession-oriented authority is what you want.  As
for ACLs... well, I've already linked to ACLs Don't.  I've already
explained the solitaire problem.  But... we could make things even
worse... and in fact, the proposal to combine VCs and zcap-lds will
result in greater sickness and suffering than ACLs have already
provided.  I will explain, but to get there we have to dig deeper...


Why encoding zcap-ld *in linked data* makes this more confusing
===============================================================

zcap-ld looks like data.  In fact, it's a way of encoding ocaps in data
form, aka certificate ocaps.  There are some serious benefits (and
tradeoffs) to certificate ocaps relative to the other approaches listed
above (though language-style ocaps are always preferable, when
available).  However, there is one curious aspect about them: since we
immersed them in data, we made them quite readable.

zcap-ld is a bit like data.  But in another sense, it's more like
*writing code*... the "text" above for passing around arguments to the
solitaire function is maybe a bit closer.

And actually, zcap-ld looks incredibly high-level... but that's kind of
a "trick of the eye".  It's really closer to low-level VM instructions
than it is to interesting readable data.  We made it highly readable so
we could *immerse* it in a data-oriented context.  But really, it
should *conceptually* be thought of as closer to passing around
arguments to functions.

This is confusing!  It's confusing for two reasons: we *do* need to
combine with the world of identity/claims/credentials/judgement (but we
have a way to do that!).  And it's confusing because we haven't learned
how to look at time yet.  To do that we're going to need a time-lord to
help us out.  But let's take these one at a time... how to safely
combine them first.


Pure cities and their highways
==============================

Ocap folks aren't the only ones with a notion of a "pure execution
environment".  You might also happen to know one of those "pure
functional programmer" folks, who gush on and on about how much they
love Haskell, a language that strictly forbids "side effects" like
beeping and booping and drawing on the screen and database access.  Yet
still, their programs manage to do such things.  What the heck???  How
do they do it without compromising their purity?!

Here's what they do: they have a beautiful functional city, free and
pure from the world of side effects.  But there's a world outside called
"the effects wasteland" (which, despite being toxic to the functional
programming world, appears to be mostly composed of the things you and I
*want* in the end, like beeps and boops and glowing pixels... maybe it's
not such a wasteland after all).  The Haskell folks have a clever trick:
a highway system.  A truck can enter from one end of the city, drive
down the monad highway, and exit the other end, taking in inputs and
leaving outputs, without the integrity of the functional system ever
being compromised.  Way cool!

Well, the ocap people seem to have similar allergies (and I, for one,
seem to have developed them through osmosis): the outside wasteland is
one called "identity and judgement".  What a weird thing to be allergic
to... once again, it appears to be exactly full of the things we want!
So it isn't really a wasteland... it's very desirable... but for some
reason we have to keep them separate at least.  I guess all those ocap
folks are confused.  Wait what?  We have a solution?!

Well, the ocap people just used the same damn trick.  They set up a
beautiful identity-free city and left all the identity interactions to
the entrance and exits.  The internal plumbings of the system are all
the beautiful reference-passing programming style, ocap invocations
everywhere, hooray!  But yeah actually, for this damn system to *do* or
*mean* anything useful, it pretty much needs a hook into identity.

And it turns out, it's a great system.  It works beautifully.  Some
people call that highway "Horton" but I'm going to give you a more
particular version appropriate here.  The city is zcap-ld and the
outside world (okay, we've established now it wasn't a
wasteland!... don't tell the people living inside) is VCs (DIDs end up
spanning both worlds in a different way and aren't really relevant
here).

So how to combine it, in practice?  Well I've already written this whole
thing up but okay okay let's do it again:

  https://w3c-ccg.github.io/zcap-ld/#relationship-to-vc

 - Carol wants to hire a sysadmin, and is interested in whether or not
   to hire Alice.

 - Alice presents VCs showing:
   - Her university experience
   - Some references
   - That she has done jobs like this before

 - Carol hires Alice, and hands Alice a zcap-ld capability to let
   her begin to administrate the machines!  (Even better: instead
   of authorizing Alice, it authorizes a key generated by Alice
   for this purpose.)

So that's the input, or entry-point, of the city, as far as Alice's
ability to begin performing tasks is concerned.  Now we are in the
middle of the city: Alice is doing various tasks.

So now: what about the exit point?

 - Carol might find that something bad is happening to her computers.
   She eventually determines that Alice is responsible (perhaps Alice
   was wasting computer resources to mine bitcoins).  This could happen
   through whatever accountability structure is in place.  Either we
   could review zcap-ld invocations, or we could look at logs triggered
   by the invocation.

 - Regardless, Carol decides that Alice is no longer welcome as an
   employee, and revokes Alice's capability.

It is very important to note: every time the computing cluster handles
an invocation from Alice, as far as the mere initialization of the
zcap-ld invocation is concerned... *it does not pause to ponder the
nature of identity and think about whether or not Alice is on the list
of approved sysadmins*.  It merely checks that the certificate chain is
valid.  (*After that step* it could make some judgement calls if it
likes but not while checking that the invocation is valid... this is the
equivalent to Alice holding onto a function and invoking that
function... the logic of the function that Alice invokes might decide to
halt activity, but the VM does not pause at the invocation of the
function and decide whether or not such a function invocation is a good
idea.)

However, as part of that step, it might log information about Alice.  In
fact, since this is a *certificate* ocap system, we can reuse the
capability certificate for that step.  However, I really want to
emphasize that this is a *quirk* of ocap certificate systems that we can
do this reuse, and is part of where I think people get confused.
Other ocap systems are still able to do accountability without
certificates at all, by firing off a log event associated with such a
user as Alice *as part of execution*.
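
As a toy model of that "dumb" check (invented names, and stand-in
"signatures"; real zcap-ld verifies cryptographic proofs): the verifier
walks the delegation chain and checks each link, never consulting any
list of approved identities.

```javascript
// Toy model of the "dumb" invocation check.  Links and "keys" here are
// plain objects standing in for signed zcap-ld certificates.
function makeRoot(target, invokerKey) {
  return { target, parent: null, invoker: invokerKey };
}

// Delegation: a new link in the chain, pointing back at its parent.
function delegate(cap, newInvokerKey) {
  return { target: cap.target, parent: cap, invoker: newInvokerKey };
}

const revoked = new Set();

function verifyInvocation(cap, presentedKey) {
  if (cap.invoker !== presentedKey) return false; // wrong key presented
  // Walk up the chain: every link must be intact (no fuse blown).
  for (let link = cap; link !== null; link = link.parent) {
    if (revoked.has(link)) return false;
  }
  // The chain is valid.  Note what did NOT happen: no list of approved
  // sysadmins was consulted, no pondering of identity occurred.
  return true;
}
```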

Nonetheless we have, like the Haskell folks, developed a clean
separation of concerns, without introducing the same dangers posed by
ACLs and similar issues.

But... why is it that we need this separation of concerns?  It is
one thing to learn about confused deputies and ambient authority.  It is
another thing to understand the mechanisms that lead to them as emergent
behavior.

In order to understand this, we're going to need to become
engineers... of space-time itself.


The time lord's view of the car
===============================

Consider a simple car.  In fact, an extremely simple car (perhaps one
that visually resembles a "spherical cow").  It only has the following
features:

 - A gas tank, which when the car rolls off the lot is full but is
   depleted while driving.  However, the car is completely sealed so
   that we cannot visually inspect the state of the gas tank.

 - A fuel inlet, where we can pump in gas (never mind the issues of gas
   cost, fungibility, and consumption for this example; for those who
   want to simulate such a thing in an ocap system, check out ERTP)

 - An ignition system, which when called with the "on" method, starts
   driving at exactly the same speed, consuming gas at a consistent rate
   over time

 - An odometer, which tells us how many miles have *allegedly* been driven

 - A fuel tank, which tells us how much gas is *allegedly* in the tank

This is a used car, and we would like to buy it from a used car
salesman.  But we have exactly enough money to buy the car and not to
refill it; we have to make money at our job tomorrow to refill the tank,
which would take half a tank of gas all around.

Cars also don't last forever; this one is known to last for exactly 200k
miles, no more, no less... which is good, because we've calculated that
we'd be able to stay employed, keep a roof over our heads, and keep
ourselves fed... but only if this car can drive us at least 75k miles.

So we need to know... exactly how many miles has this car been driven?
How much gas is in the tank?  How do we find out?

Our first instinct is: let's check the odometer and the fuel tank.

So we ask the Odometer.  "Odometer, how many miles has this car been
driven?"  "I assert this car has been driven 100k miles, no more, no
less!" says the Odometer.  "Thanks, Odometer!" we say!  That should be
okay according to our long-term budget!

So we turn to the Fuel Tank and say, "Fuel Tank, how much gas is in
the tank?"  "I assert that this car has 3/4 of a tank of gas!"  "Oh
great, thank you Fuel Tank!  That should be just enough for me to get to
work tomorrow!"

These are claims being made by Odometer and Fuel Tank and we have
verified that they are the ones who said it, but do we *believe* them?
What if the Used Car Salesman fiddled with them?  Hm, we could check the
Used Car Salesman's credentials.  Hm, but what if a *previous owner*,
trying to get a better deal, fiddled with them?  We don't even know who
that would be.

Suddenly we remember that our next door neighbor is a time-lord who
knows a bit about traveling machinery.  We call her up and she pops over
in her blue police box.  "I have a better idea, why don't we *watch* the
car being driven over time?"

So we travel all the way back in time, and now we can follow the car and
watch every interaction: from the moment it rolls off the lot, to when
it's turned on, to when it's turned off, to when it's refilled, and so
on and so on and so on... and we can calculate, deterministically, the
exact state of the car.  All of these operations are invocations of
capabilities, they are *actions* happening within the *composed pathways
of the universe* (program flow!) and we can watch them unfold over time
and in that way, the universe is a mere VM... by replaying all messages,
we are privy to all state (not unlike how blockchains work!).
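
That replay idea can be sketched directly (toy numbers of my own): fold
over the event log and the state falls out deterministically, with no
trust in any gauge required.

```javascript
// Sketch of the time-lord's view: replaying every invocation
// deterministically reconstructs the car's state -- we don't *believe*
// the odometer, we recompute it.
function replayCar(events) {
  let miles = 0;
  let gas = 1.0;               // rolls off the lot with a full tank
  const MILES_PER_TANK = 400;  // assumed constant consumption rate
  for (const ev of events) {
    if (ev.type === "drive") {
      miles += ev.miles;
      gas -= ev.miles / MILES_PER_TANK;
    } else if (ev.type === "refill") {
      gas = Math.min(1.0, gas + ev.amount);
    }
  }
  return { miles, gas };
}
```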

Except, oh, we just remembered we don't actually have such a
time-traveling next door neighbor, so uh, we check the credentials of
the used car salesman and say okay, those check out I guess, he hasn't
lost his license *yet*... he probably has a pretty good eye for tampered
cars... I guess I'll just trust these readings and roll the dice.

Gosh.  We just saw how our world works.

 - Moving forward in time, our world is like a virtual machine,
   replaying instructions and producing state.  Barring quantum
   uncertainty, it really seems like we live in a deterministic
   universe, and many philosophers toss and turn in their beds over the
   consequences of this for free will, but we really do seem to be able
   to construct state.  Well if we're the gods authoring this VM, we
   probably should pick a pretty good design, and it doesn't seem to be
   GOTOs or dynamic scope or etc etc so I guess we'll make our physics
   out of argument passing and lexical scope.

 - But we mostly don't live in a god-like state, so those of us on the
   ground are stuck *listening* to people and deciding how much we trust
   them and what to do and do I really believe my doctor that this cyst
   is benign or should I get a second opinion... but we're all left in
   the world of verifiable credentials and identity and judgement.  But
   it isn't so bad because actually this is kind of cool, this is also
   the world of language and communication and storytelling, and hey
   when you look at it this way I guess we want some of that stuff
   around!


Should you add GDPR to your programming language VM?
====================================================

And ye o reader of this post, ye who decideth the nature of your
computing system, o yea, thou shalt make mighty a decision that doth
affect the safety and security of thy users, and what decideth thee?

I hope you decideth: use some damn good hygiene.  Recognize each piece
for what it's worth.

zcap-lds seem barren: they have just a few fields, and they seem kinda
sorta but not really like VCs, but maybe they're close enough, so maybe
*actually* we ought to build them *on top of* VCs!  Well, I had a call
with Alan Karp and he asked me to explain what I thought would happen
and I said "I bet people won't understand *why* zcap-ld doesn't include
all those other fields, why we've tried very hard to leave it to being
more like a VM operating on instructions rather than add these other
fields you have to think about and ponder in there, so I'm pretty sure
even if we rewrote the spec and layered it on top of VCs (which
wouldn't reduce our work by a spec, btw; you'd still have two specs,
you'd just change the dependency topology), people would add all those
other fields
and muck it up.  And identity has already mucked up computing for quite
some time so I'm pretty sure we'd end up with ACLs and worse things."
(Okay this is a paraphrasing.)  And sure enough, that's exactly what was
proposed in this thread: adding these other fields as a feature!  Even
worse, adding *more fields than identity*, but which go into the realm
of identity-judgement!

ACLs alone have plagued computing, introducing ambient authority
problems and confused deputies for decades, and that's why I'm paranoid
about using my computer all the time and I don't want to be damnit.  And
that was merely by introducing identity into the equation!

I mentioned that I tricked you: zcap-ld is closer to VM instructions
than interesting data to read, but it's immersed in a data environment,
so it's tempting to mix up more data in there.  But let's think about
why the moving-forward-in-time VM execution is separated from the
allegations-of-information retrospective that is "claims" and
"credentials".  And it comes down to this phrase:

Your VM is dumb.

It turns out this is fine, on its own layer.  Imagine playing forward
the state of a blockchain that had squishy judgement-call information in
it as it went.  It's true that the agents in the system are making
judgement calls, but the VM itself is not, it's executing the will of
the agents.  The VM is not deciding whether the Used Car Salesman is
trustworthy and turning the tide of things one way or the other.  This
is why you can make a deterministic blockchain that operates like a
computer out of zcap-lds and you can't out of VCs, because different
judgement calls happen all over the place.

THIS DOES NOT MEAN THAT VCs ARE NOT USEFUL!  It means they handle a
different *purpose*.  A necessary one, because we are not time-lords
(and even if we were, we might only have so much space-time
computational power that we might resort to checkpoints... which are
themselves very VC-like).

I also agree with encoding consent and privacy and so on into systems.
I agree with that a *lot*!  But instead, it should be done in the
entrance/exit way I've described: we have agents which perform
judgements who are evaluating and contemplating VCs and choose to issue
and revoke capabilities at different places.  Violating privacy may be
good reason for a capability to be revoked.  But by separating these, we
can consider how to *wire in* the right things at the right places.


We are in a computing pandemic
==============================

At the time I am writing this, we are in the midst of two pandemics:
the COVID-19 pandemic, and the computing-safety pandemic.
I feel really depressed at how badly the world has done about the
former, but my capacity to make a difference there is fairly marginal.
However, as technologists defining the future, we can make an enormous
change in terms of the latter.

The understanding of the distinction between the realm of
identity/description/judgement and the realm of command/ocaps is the
equivalent of germ theory for the modern information age.

I can only plead that others listen, and work on my own systems in the
meanwhile, following these principles.

Save a life, follow good hygiene and wash your hands.
Save a user, follow good hygiene and separate identity and authority.

Maybe somewhere in there we'll end up with a lot fewer people on
respirators.

Thanks,
 - Chris


Dominic Wörner writes:

> This an interesting discussion!
>
>
> Authorization, Delegation and Provenance are important use cases in the
> ecosystem.
>
>
> From a marketing perspective I can understand the view to use VCs for
> everything. The terms DID and VC are quite widely known. Do we want to
> educate people about zCaps as well?
>
> Also, we have defined “issue credential” protocols and “credential
> manifests”.
>
>
> However, it still feels very odd to me that every piece of authentic,
> tamper-proof data is a VC.
>
> I think Manu is spot on here.  From a JSON-LD perspective, Linked Data
> Proofs are this universal layer/container for this, and VCs are a layer on
> top to express specific statements.
>
>
> From my perspective the example Daniel gave feels like an anti-pattern:
>
>
> {
>
>   "@context": [ "https://www.w3.org/2018/credentials/v1", "
> https://roboticlab.cam.ac.uk/grants" ],
>
>   "type": ["VerifiableCredential", "RobotPrivilege"],
>
>   "issuer": "https://facebook.com/Amy",
>
>   "issuanceDate": "2021-01-01T19:73:24Z",
>
>   "credentialSubject": { "id": "
> https://roboticlab.cam.ac.uk/~amyt/capstone-project", "allow": "operate"},
>
>   "proof": {...}
>
>  }
>
>
> Interpreting this from the view point of a VC I’d  always read this as the
> issuer provides the  credentialSubject with some rights and not that the
> credentialSubject is the target resource.
>
> Of course you can describe this at https://roboticlab.cam.ac.uk/grants
>
> But people will get this wrong.
>
>
>
> I recently stumbled across Orie’s proposal on Authorization Credentials:
> https://transmute-industries.github.io/authorization-credentials/
>
> This could be a sensible approach that keeps the brand but keeps the
> confusion to a minimum.
>
> At least for the Identity Credentials / AuthZ Capabilities divide. General
> data provenance is again another topic.
>
>
> Best,
>
> Dominic
>
> Am Sa., 5. Dez. 2020 um 04:37 Uhr schrieb Adrian Gropper <
> agropper@healthurl.com>:
>
>> This is an interesting thread and among the longest I recall without any
>> link to an actual use-case. I don't have a horse in this race but I do have
>> an application perspective in the form of zero-trust architecture (ZTA) and
>> the use-cases for authentication, authorization, and audit in healthcare
>> and other regulated industries.
>>
>> VCs are economically useful in 'write once, read many' situations. This
>> makes them economically well suited to identity-related uses where
>> stability and correlation avoidance are key.
>>
>> Authorizations are, from the ZTA perspective, 'write-once, read-once'
>> artifacts whose economic value derives from the separation of concerns
>> between data processors and data controllers. EDVs are an extreme example
>> of the value of this separation in that they have zero control by design.
>> An EDV is a
>> of this separation in that they have zero control by design. An EDV is a
>> data processor in only the most limited sense. More sophisticated data
>> processors are called confidential computing, inference engines, or simply
>> data sources.
>>
>> The introduction of an authorization server (AS) is the data controller
>> complement to a data processor. In the typical case, the AS does not see
>> the data being processed no matter how many different entities are involved
>> in the processing. To grossly oversimplify the economic incentives,
>> identity-related risks are mitigated at the AS and security-related risks
>> are mitigated at the processor. I hope we can agree that regardless of the
>> use-case, this separation of concerns is desirable and key to our success
>> with SSI.
>>
>> The problem with pitching SSI as part of the solution to ZTA comes from
>> revocation and audit. In general, persistent identity claims presented to a
>> controller (AS) have to be linked to a revocation service and ephemeral
>> authorization claims presented to a processor have to be audited in order
>> to mitigate the security risks.
>>
>> In the real world, both revocation and audit spoil the architectural (dare
>> I say religious?) purity of the issuer-holder-verifier SSI model because
>> they introduce additional parties to most transactions, including ZTA
>> applications.
>>
>> Yes, I know that we have privacy-preserving dead-drop solutions to
>> revocation and zero-knowledge ideas that will support audit and reputation
>> someday but my point is that SSI is incomplete if we don't consider
>> authentication, authorization, and audit as a bundle as we develop the
>> first generation of SSI standards.
>>
>> From this perspective, admittedly my personal perspective as an expert in
>> regulatory reality, the introduction of ZCap-LD is not helping and
>> replacing ZCap-LD with VCs that act like ZCaps will not help either. Our
>> problem, IMHO, is refusing to give authorization, revocation, and audit
>> sufficient respect. We have a lot to learn from the folks working on GNAP
>> and it's time we give them the respect they deserve.
>>
>> Adrian
>>
>>
>>
>>
>>
>> On Fri, Dec 4, 2020 at 7:54 PM Wayne Chang <wyc@fastmail.fm> wrote:
>>
>>> Fascinating discussion--thanks for the share. Also wanted to bring up the
>>> possible overlap and interoperability opportunities with GNAP (
>>> https://www.ietf.org/archive/id/draft-ietf-gnap-core-protocol-02.html).
>>> It's curious how the polymorphic JSON objects described in the draft could
>>> be a natural fit for VC-like objects.
>>>
>>> Furthermore, it would be interesting to figure out if there was a
>>> straightforward mapping between SAML 2.0 assertions and their
>>> representations as VCs, as this could provide a great upgrade path for
>>> enterprises already committed to SAML 2.0 for authn/z.
>>>
>>> Best,
>>> - Wayne
>>>
>>> On Fri, Dec 4, 2020, at 8:34 PM, Kaliya IDwoman wrote:
>>>
>>> About a week ago I sparked a discussion between Manu and Sam Smith about
>>> VCs and zCaps / oCaps.
>>>
>>> The conversation has pulled in a few more folks and it was agreed that
>>> the discussion should move over to this list.
>>>
>>> Below are the prior threads, cut and pasted. If you would rather read it
>>> in document form, it is attached. The active participants will continue
>>> the discussion here.
>>>
>>> - Kaliya
>>>
>>>
>>> From: *Kaliya Identity Woman*
>>>
>>> Date: Sun, Nov 22, 2020 at 10:52 AM
>>>
>>> Subject: VCs & OCap - please talk
>>>
>>> To: Manu Sporny, Samuel Smith
>>>
>>>
>>> Hi Sam and Manu,
>>>
>>> I'm opening up a thread because I have heard via Drummond that Sam is
>>> thinking about a new VC to do access management.
>>>
>>>  I know this is an area where ZCaps are being used and I think it is
>>> really critical to have discussions across the community amongst the deep
>>> experts before 'just starting new things'.
>>>
>>>  I hope you two can talk sooner rather than later.
>>>
>>>  Warm Regards,
>>>
>>> - Kaliya
>>>
>>>
>>>
>>>
>>> From: *ProSapien Sam Smith*
>>>
>>> Date: Sun, Nov 22, 2020 at 11:34 AM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Kaliya Identity Woman  Cc: Manu Sporny, Drummond Reed, Hardman Daniel
>>>
>>>
>>>
>>> Kaliya,
>>>
>>>
>>> The proposed task force is to look at VCs for authorizations in general,
>>> not merely access management. Access management (aka ZCaps) is an
>>> important use case of authorization, but a generic, broad definition of
>>> authorization includes much more than access management. Object
>>> capabilities are a very useful model for access management, and this is
>>> not an anti-zcap effort but a broader approach where an object-capability
>>> semantic may be one of many. These include but are not limited to
>>> custodial relationships, guardianship relationships, business process
>>> management relationships, supply chain relationships, etc. The important
>>> feature is that the semantics are embedded into existing VCs, i.e. VC
>>> Native, and not part of a separate conveyance as is the stated objective
>>> of https://w3c-ccg.github.io/zcap-ld/
>>>
>>>
>>> Indeed, the primary semantic we are looking at is not an authorization
>>> semantic per se but a chaining semantic for establishing chained VCs. A
>>> VC chaining semantic supports more than authorization/delegation; it also
>>> supports provenance of data transformations, data custody from source to
>>> sink, trust provenance (reputation), auditability, fine-grained issuance
>>> semantics, etc. In this regard authorization may be viewed as a
>>> sub-semantic of a chaining super-semantic. This is especially useful in
>>> open-loop systems (not closed-loop like access control).
>>>
>>>
>>> In summary, access control (aka z-caps) may best be implemented via a
>>> distinct process from VC chaining and hence benefit from a separate home
>>> and conveyance. In this regard, pure access control via Z-caps is not
>>> impinged upon by adding a general chaining semantic to VCs of which trust
>>> provenance and/or authorization are sub-classes.
>>>
>>>
>>> See this:
>>>
>>>
>>>
>>> https://github.com/hyperledger/aries-rfcs/blob/master/concepts/0104-chained-credentials/README.md
>>>
>>>
>>> https://github.com/evernym/sgl
>>>
>>>
>>>
>>> Sam
>>>
>>>
>>>
>>>
>>> From: *ProSapien Sam Smith*
>>>
>>> Date: Sun, Nov 22, 2020 at 1:33 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Kaliya Identity Woman
>>>
>>> Cc: Manu Sporny Drummond Reed, Hardman Daniel
>>>
>>>
>>> Kaliya,
>>>
>>>
>>> To give you a little more background.
>>>
>>>
>>> There are two primary approaches to privacy protection in online
>>> exchanges of information.
>>>
>>>
>>> 1) Manage the degree of disclosure
>>>
>>> 2) Manage the degree of exploitation given disclosure
>>>
>>>
>>> The first approach is inherently leaky. Over time any disclosure becomes
>>> more and more correlatable. It is best employed to provide temporary,
>>> ephemeral privacy protection.
>>>
>>>
>>> The second approach imposes liability or counter incentive to the
>>> un-permissioned exploitation of correlated data. It is a persistent and
>>> enforceable check on exploitation that removes the incentive to correlate.
>>> This may be accomplished with contracts, or with contracts in combination
>>> with regulations such as GDPR. A really good legal analysis of privacy
>>> protection using contracts, called "chain-link confidentiality", is given
>>> here:
>>> https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2045818
>>>
>>>
>>> Consent is a type of authorization. It is usually open-loop and
>>> persistent. It benefits from a chaining semantic. It is hard to shoehorn
>>> consent semantics into access-control semantics. They are both types of
>>> authorization in the broad sense of the word, but they do not work the
>>> same.
>>>
>>> Chain-link confidentiality (or the like) authorization semantics could
>>> be implemented with self-contained VCs that employ a native chaining
>>> authorization semantic.
>>>
>>>
>>> When sharing data via VCs it makes the most sense that any associated
>>> authorizations/consents/restrictions etc. be conveyed in-band:
>>> self-contained, attached, embedded, or entrained with that data as it is
>>> shared, not conveyed via a separate out-of-band mechanism.
>>>
>>>
>>> I am sure you are well acquainted with the Kantara Initiative's Consent
>>> Receipt spec.  https://kantarainitiative.org/download/7902/
>>>
>>>
>>> In addition to the links in my prior email, some other relevant links are
>>> as follows:
>>>
>>>
>>> https://www.iso27001security.com/html/27560.html
>>>
>>>
>>> https://wiki.idesg.org/wiki/index.php/Consent_to_Create_Binding
>>>
>>>
>>> https://www.rfc-editor.org/rfc/rfc2693.txt
>>>
>>>
>>> These all benefit from an embedded VC-native chaining semantic for which
>>> a separate out-of-band Z-cap conveyance is not well suited.
>>>
>>>
>>> A further example of a use for a chaining semantic: the IETF and TCG
>>> (Trusted Computing Group) are promulgating several standards based on
>>> verifiable remote attestations of the configuration of trusted execution
>>> environments.  https://datatracker.ietf.org/wg/rats/about/
>>> https://trustedcomputinggroup.org/wp-content/uploads/TCG-NetEq-Attestation-Workflow-Outline_v1r9b_pubrev.pdf
>>>
>>> The semantics of these verifiable attestations could be embedded in and
>>> conveyed by VCs with appropriate chaining and provenancing semantics.
>>>
>>>
>>>
>>> The goal here is to make VCs the universal locus of interoperability, so
>>> that implementers who want Verifiable Containers of Data need only use
>>> one infrastructure. The VC spec with VC schema is sufficiently expressive
>>> syntactically to encompass all these semantics; we just have to
>>> standardize the syntax for the semantics.
>>>
>>>
>>> I hope this helps clarify the technical motivations and distinctions.
>>>
>>>
>>> Regards
>>>
>>>
>>> Sam
>>>
>>>
>>>
>>>
>>> From: *Daniel Hardman*
>>>
>>> Date: Wed, Dec 2, 2020 at 8:54 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Kaliya Identity Woman
>>>
>>> Cc: ProSapien Sam Smith, Dmitri Zagidulin, Tobias Looker , Manu Sporny,
>>> Drummond Reed
>>>
>>>
>>> Kaliya, I think it's noble to look for a way to bring separate
>>> communities together, but I feel like there are some misunderstandings
>>> about the history and nature of our divergence. They may not doom such
>>> efforts to failure, but they do get in the way. What follows is my attempt
>>> to describe the problem from my perspective. I don't claim it to be
>>> objective truth -- only an accurate capture of the way I see the situation.
>>>
>>>
>>> VCs can be verified by anyone *without* going back to the issuer. I
>>> consider this one of their most important characteristics. It changes power
>>> dynamics, technical architecture, regulatory compliance, trust, and many
>>> human factors upon which SSI depends.
>>>
>>>
>>> OCaps are generally validated/redeemed/invoked in a way that circles back
>>> to their issuer. The OS gives a file handle; an application that wants to
>>> work with the file makes an OS call and presents that handle as proof that
>>> they have the right to use it. I believe this direct or indirect dependency
>>> on the issuer is what Sam meant when he described OCaps as a "closed loop"
>>> system.
>>>
>>>
>>> Because the use cases for VCs and OCaps sound a bit different when
>>> described as I have above, it is not crazy to implement them differently.
>>> But it is also not necessary. You can use a VC like a bearer token that
>>> should be presented to and verified by its issuer. You can add to such a VC
>>> various capabilities like attenuation chains, composite delegation, etc -- *with
>>> only schema choices, not changes to the VC spec*. In other words, VCs
>>> can prove delegated privilege just like OCaps can. (On the other hand, you
>>> can't address the breadth of VC use cases with the narrower featureset of
>>> OCaps.) Note how this claim I'm making is similar to ones I have heard Dave
>>> Longley and Manu make about other possible innovations proposed in the VC
>>> space: "We don't need serviceEndpoint; just emit a VC that makes your
>>> endpoint into verifiable data. We don't need type on a DID doc; use a VC
>>> to describe attributes of a DID subject."
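The "only schema choices" claim can be made concrete with a minimal sketch. All DIDs and field layouts below are hypothetical, and signature verification is deliberately elided; the sketch only shows the schema-level chain walk over VC-shaped data:

```python
# Sketch of delegation expressed purely as VC-shaped dicts: each credential
# embeds its parent, and a verifier checks that (a) the delegator of each
# link is the subject of its parent, and (b) privileges only ever attenuate.
# A real verifier would also verify each credential's proof.

root = {
    "issuer": "did:example:lab",
    "credentialSubject": {"id": "did:example:amy",
                          "privileges": {"read", "operate"}},
    "parent": None,
}

delegated = {
    "issuer": "did:example:amy",          # delegator = subject of parent
    "credentialSubject": {"id": "did:example:student",
                          "privileges": {"read"}},   # attenuated subset
    "parent": root,
}

def chain_valid(cred) -> bool:
    while cred["parent"] is not None:
        parent = cred["parent"]
        if cred["issuer"] != parent["credentialSubject"]["id"]:
            return False  # delegator was never granted anything
        if not cred["credentialSubject"]["privileges"] <= \
               parent["credentialSubject"]["privileges"]:
            return False  # attempted privilege amplification
        cred = parent
    return True

print(chain_valid(delegated))   # True
```

The attenuation and delegator checks here are ordinary schema-level rules, which is the substance of the claim that no mechanism outside the VC data model is strictly required.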
>>>
>>>
>>> When I have advocated for ideas like serviceEndpoint and type, Dave and
>>> Manu have sought to put the burden of proof on me: *Please demonstrate
>>> why you can't do what you want with what we already have. *Why have we
>>> not applied that same logic to the zCaps work item that CCG has undertaken?
>>> What can we not do with VCs that forces us to invent an entirely new spec
>>> for a kind of signed, verifiable, revocable data? (Note that I'm NOT asking
>>> why we want OCap-like behavior in a security context; I'm asking why there
>>> has to be a separate spec/impl for such a mechanism, when VCs -- the very
>>> work product this group is known for -- are capable of expressing all the
>>> same semantics.)
>>>
>>>
>>> OCaps were emphasized at an RWOT that many of us attended (Boston, I
>>> think). Some OCap experts were there, and I loved what they had to say.
>>> OCaps are useful. However, when Manu and Chris W wanted to explore at that
>>> same conference an OCap-centric vision of how the authorization section of
>>> a DID Doc ought to work, I felt a lot of dissonance. So did others. We
>>> eventually agreed that we wouldn't bog down other dimensions of progress
>>> with a tough battle to consensus on the authorization topic. zCaps
>>> (originally OCAP-LD, IIRC) were born soon thereafter as a method that DB
>>> wanted to use for authorization in the DB ecosystem. I was fine with
>>> that, as a feature of their DID method. I read an early draft of the spec
>>> with some care to stay informed. Meanwhile, I began imagining how OCaps
>>> could be done with a VC and any DID method, giving them selective
>>> disclosure, strong privacy, powerful revocation, governance, regulatory
>>> compliance etc -- without inventing a new mechanism for signing and
>>> verification that would need DID method support and its own story about all
>>> of these supporting topics. The outgrowth of my work was first a webinar
>>> about delegated credentials, and later Aries RFC 0104
>>> <https://github.com/hyperledger/aries-rfcs/blob/master/concepts/0104-chained-credentials/README.md>
>>> about chained credentials. I did not make an attempt to standardize my
>>> work, because it only depends on the VC standard which already exists.
>>> Everything else is just picking schemas cleverly. And my approach worked
>>> from day 1 with all variants of VCs -- JSON-LD, JWT, Indy's flawed impl,
>>> etc.
>>>
>>>
>>> When DB proposed zCaps as a CCG work item, I did not object. I figured
>>> that any DID method that wanted to use zCaps should be able to. However, I
>>> had no interest in collaborating on what I felt was an unnecessary new
>>> mechanism that was missing important features VCs already had. I also felt
>>> that chained credentials addressed another use case that zCaps can't handle
>>> and that the general VC ecosystem sorely needed, which is general data
>>> provenance. So I was getting 2 for 1. (Provenance of authorization is a
>>> subset of a larger problem. Think of how academics and journalists cite
>>> sources, and imagine that data provenance mechanism allowing someone to
>>> attribute a legalName field in an employer credential to the government
>>> ID it came from. But I digress...)
>>>
>>>
>>> Anyway, I have been consistent about my disinterest in zCaps for several
>>> years now. I spoke about my reasons (and about my alternate approach) at
>>> IIW (unfortunately not attended by zCaps proponents). I have pushed back on
>>> the way that capabilityInvocation enthrones an OCap worldview in the DID
>>> core. I've linked to the chained credential RFC in several CCG emails. When
>>> the CCG sponsored a special learning session about zCaps, I wrote the group
>>> to note that an alternate solution was possible, where OCaps were built
>>> from VCs. Joe engaged in a public debate with me about it on the mailing
>>> list
>>> <https://lists.w3.org/Archives/Public/public-credentials/2020Feb/0064.html>.
>>> His arguments were the only substantive engagement I got. He invited me to
>>> come present to the group about it, but when the date didn't work, and then
>>> COVID happened, we never circled back.
>>>
>>>
>>> My point in all this is not recrimination -- it's simply to acknowledge
>>> that Sam's task force is not a new, casual, or ill-considered divergence.
>>> It is an old one that stems from philosophical, architectural, and use case
>>> differences. These differences ARE NOT related to our other sore spot of
>>> divergence around ZKPs; they are entirely independent. The approach Sam is
>>> imagining is not a new effort to ignore a nascent zCaps standard; it is as
>>> old as OCAP-LD itself, and just as well documented. And if anything, it is
>>> Sam's approach that is standard; zCaps requires that a new standard be
>>> developed, instead of using the one we already matured and have worked so
>>> hard to implement.
>>>
>>>
>>> None of this needs to prevent us from converging -- but the precondition
>>> to convergence would be an agreement on the requirements we're trying to
>>> address. I don't think we have that. I have modest interest in seeing if we
>>> could find common ground, and would be willing to attend a meeting to
>>> discuss. I would also be willing to revisit the offer from CCG to explain
>>> this alternate approach, if there is interest. But achieving a convergence
>>> here doesn't feel like a rational top priority -- only a nice-to-have. So
>>> I'm waiting for someone else to take the bull by the horns. If no one does,
>>> I'll continue to support Sam's task force because it is more closely
>>> aligned with my priorities and my view of which standard I want to focus on
>>> (the VC standard). I don't view divergence like this as a terrible tragedy;
>>> often it's better to standardize after the market has weighed in on what it
>>> wants. I feel like many in the CCG want to standardize too early.
>>>
>>>
>>> --Daniel
>>>
>>>
>>>
>>> From: *ProSapien Sam Smith* <sam@prosapien.com>
>>>
>>> Date: Thu, Dec 3, 2020 at 5:55 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Tobias Looker
>>>
>>> Cc: Hardman Daniel, Kaliya Identity Woman, Dmitri Zagidulin, Manu Sporny,
>>> Drummond Reed
>>>
>>>
>>> Correcting typos and better wording.
>>>
>>>
>>> Should read:
>>>
>>>
>>> Not authorization statements in general. But even in the realm of
>>> authorization statements, they do not even begin to encompass all the
>>> types of authorization statements indigenous to the real world of data
>>> processing. Access control is only one thin slice of all authorizations.
>>>
>>>
>>>
>>>
>>> From: *Tobias Looker*
>>>
>>> Date: Thu, Dec 3, 2020 at 6:37 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: ProSapien Sam Smith
>>>
>>> Cc: Hardman Daniel, Kaliya Identity Woman, Dmitri Zagidulin , Manu
>>> Sporny, Drummond Reed
>>>
>>>
>>>
>>> Sam,
>>>
>>>
>>> > I appreciate the ongoing discussion. It is helpful to see how various
>>> viewpoints and definitional understandings differ. I want to comment on
>>> one statement because I believe it is one of those that are part of a
>>> core divergence in understanding.
>>>
>>>
>>> Appreciate the feedback and discussion too. I think I see the source of
>>> our disagreement quite well. Essentially you view the verifiable
>>> credentials spec through a more abstract lens than I do (which I totally
>>> understand), whereas I view verifiable credentials as a cryptographically
>>> secure identity claim assertion format, not one for describing
>>> authorization, because I see them as having divergent requirements,
>>> meaning they should not be conflated. Technologies like OAuth and OpenID
>>> Connect are examples that have maintained this separation (id_token vs
>>> access_token).
>>>
>>>
>>> > The verifiable in verifiable credential means cryptographically
>>> verifying signatures. A more generic way of expressing this is that a
>>> verifiable credential is verifying the authenticity of a statement.
>>>
>>> Really nothing more and nothing less. The meaning of that verified
>>> statement is open; it's free. The schema and semantics of a VC have
>>> nothing to do with its primary purpose, that of verifying the
>>> authenticity of a statement. The "identity" in this case is the holder
>>> of the private key from the public/private key pair. Nothing more,
>>> nothing less. It is identity at its simplest expression, that is, a
>>> cryptographic identifier.
>>>
>>>
>>> Again I see it differently: what I believe you are *indirectly* talking
>>> about is the concept of transferability of a capability. For instance,
>>> bearer tokens in OAuth (access_tokens) have virtually no protection
>>> against transfer; he who possesses the access_token can exercise the
>>> authority it permits. Employing a cryptographic binding layer on tokens
>>> of authorization imposes the requirement that the invoker of the
>>> capability (token) prove possession of the cryptographic key the token
>>> is bound to. However, this is NOT about the identity of the invoker;
>>> it's about ensuring the invocation of that capability is sound in the
>>> eyes of the authority who issued it (or the chain that delegated it).
>>> For example, a resource server in an OAuth architecture can readily
>>> validate requests from authorized parties by simply checking the
>>> validity of the access_token; it does not need to know (nor should it
>>> in many instances) the concrete identity of the authorized party.
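The possession-not-identity point sketches out in a few lines. This is a toy, not a real protocol: a stdlib HMAC stands in for the asymmetric signature a deployed system would use, and in practice the token would carry the invoker's public key rather than rely on a shared secret:

```python
import hmac, hashlib, secrets

# Toy sketch: a capability bound to a key, exercised by proving possession.
# HMAC (symmetric) stands in for an asymmetric signature purely to keep
# this stdlib-only; the identifiers are invented.
bound_secret = secrets.token_bytes(32)   # binding established at issuance
token = {"allow": "operate", "resource": "https://example.org/robot-1"}

def invoke(nonce: bytes) -> bytes:
    # Invoker side: prove control of the bound key over a fresh nonce.
    return hmac.new(bound_secret, nonce, hashlib.sha256).digest()

def verify(nonce: bytes, proof: bytes) -> bool:
    # Verifier side: checks the key binding; never asks who the invoker is.
    expected = hmac.new(bound_secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)

nonce = secrets.token_bytes(16)
print(verify(nonce, invoke(nonce)))   # True: possession proven
print(verify(nonce, b"\x00" * 32))    # False: the token alone conveys nothing
```

Note that nothing in `verify` identifies the invoker; it only establishes that whoever invoked controls the key the authority bound the token to.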
>>>
>>>
>>> Thanks,
>>>
>>> *Tobias Looker*
>>>
>>>
>>>
>>>
>>>
>>>
>>> From: *ProSapien Sam Smith*
>>>
>>> Date: Thu, Dec 3, 2020 at 7:23 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Tobias Looker
>>>
>>> Cc: Hardman Daniel, Kaliya Identity Woman, Dmitri Zagidulin Manu Sporny,
>>> Drummond Reed
>>>
>>>
>>> So to close the loop.
>>>
>>>
>>> Given a generic authentic data container, one need merely specify precise
>>> semantics for a given syntactical schema to unambiguously perform an
>>> associated task. There would be no confusion as to the task because the
>>> syntactical schema and semantics are precisely defined. A container can
>>> hold multiple sets of syntactical blocks, each with associated semantics,
>>> again completely unambiguously. One mechanism establishes authenticity of
>>> all the contents. A chained authentic data container would allow granular
>>> provenance of each set. Moreover, a chained container may be a node in
>>> multiple disjoint trees which are completely unrelated semantically. Each
>>> chained block contains references or proofs of the external provenance of
>>> the contents of that block in the container.
>>>
>>>
>>> Now it may be a bad idea to chain different types of syntactical blocks
>>> with different semantics. But that would be an easily detected invalid use
>>> of chaining. Chains can easily require coherent semantics. But the
>>> container itself is merely establishing the authenticity of its contents to
>>> the source of the container. A single source may make multiple statements
>>> that all share the same source but have no other relationship to each
>>> other.
>>>
>>>
>>> For example, suppose that there are three disjoint blocks in a container,
>>> namely A, B, and C. By virtue of those blocks being in the container, the
>>> authentic data container specification allows a verifier to ascertain the
>>> origin of the container, i.e., its authenticity relative to a key-pair or
>>> a set of key-pairs, and by implication to the holder of those key-pairs,
>>> whoever that may be. The verification happens only once. And all three
>>> blocks are now verified independent of the semantics and syntax of each
>>> block.
>>>
>>>
>>> Let's suppose that block A has a sub-block A.1 that comes from some other
>>> source, i.e., another authentic data container Z. By including a reference
>>> to that other container Z in A.1, a verifier is able to verify the
>>> provenance of A.1 as authentically sourced from Z. And so on up a chain of
>>> provenance.
>>>
>>>
>>> Likewise, blocks B and C in the container, merely by virtue of being in
>>> the container, are simultaneously verified as being authentic to the
>>> source of the container, independent of their semantics. They are not in
>>> any way necessarily dependent on each other. The container conveys
>>> authentic information from a source. The authentic data container is
>>> merely multiplexing one or more authentic statements, namely A, B, and C.
>>> And each of A, B, and C may belong to completely mutually disjoint
>>> provenance chains.
>>>
>>>
>>> Suppose that block A is an authorization statement. Its semantics do not
>>> in any way impede the semantics of block B or C. Nor do the semantics of
>>> block B or C impede or confuse A. They are not semantically related. They
>>> just all happen to come from the same authentic source.
>>>
>>>
>>> The hard part of security is establishing the authenticity of statements.
>>> Having one generic method for establishing the authenticity of
>>> multiplexed statements via one conveyance is in general far more secure
>>> than having separate methods of establishing authenticity, one for each
>>> type of statement, each via multiple non-multiplexed conveyances.
>>>
>>>
>>> Other than in narrow corner cases, it defies best-practice security
>>> principles not to multiplex.
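As an illustration of the one-mechanism argument, here is a toy container: HMAC stands in for a real signature, the serialization is naive JSON canonicalization, and every name is invented. One verification covers all blocks regardless of their semantics, and a sub-block can reference another container's digest for provenance:

```python
import json, hmac, hashlib, secrets

def seal(blocks: dict, key: bytes) -> dict:
    """One authenticity mechanism for the whole container: sign a canonical
    serialization once; every block inside is then covered."""
    payload = json.dumps(blocks, sort_keys=True).encode()
    return {"blocks": blocks,
            "proof": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify(container: dict, key: bytes) -> bool:
    payload = json.dumps(container["blocks"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(container["proof"], expected)

# Container Z from another source; sub-block A.1 references it by digest.
z_key = secrets.token_bytes(32)
Z = seal({"measurement": "42C"}, z_key)
z_digest = hashlib.sha256(json.dumps(Z, sort_keys=True).encode()).hexdigest()

key = secrets.token_bytes(32)
container = seal({
    "A": {"sub": {"from_Z": z_digest}},   # A.1: provenance link to Z
    "B": {"statement": "unrelated to A"},
    "C": {"allow": "operate"},            # an authorization block
}, key)

# One verification covers A, B, and C, whatever their semantics.
print(verify(container, key))   # True
```

The semantics of A, B, and C never enter `verify`; they are interpreted only after authenticity of the whole container is established once.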
>>>
>>>
>>> To put a fine point on it: suppose I want to access a web site. Should I
>>> establish a secure communication channel via TLS and then tunnel all my
>>> requests for any resource on that website through that secure channel, or
>>> should I create multiple simultaneous channels, one for each MIME type of
>>> resource, or even one per resource? Well, if there are some highly unique
>>> characteristics of different types of resources, like asynchronous video
>>> streams versus database queries, then maybe yes. But the authenticity of
>>> the website is the same in all cases. So if I have a secure way of
>>> establishing authenticity for any type of resource, it makes sense to
>>> reuse that one secure way whenever it's practical, and not to use a
>>> different authenticity mechanism per resource. The authentic source is
>>> the same, so establish authenticity the same way. In general one just
>>> establishes one multiplexed authentic channel and lets the MIME type of
>>> each multiplexed resource determine the semantic behavior.
>>>
>>>
>>>
>>> If we build a single standard for authentic data containers, with
>>> extensible and separable semantics for the data conveyed inside that
>>> container, then we have a very high degree of reuse of tooling. This
>>> fosters much higher interoperability and adoption. If instead, every time
>>> we have a different type of semantic for data conveyed in a container, we
>>> build a new non-interoperable type of authentic data container, then we
>>> will have a hot mess. We are just recreating at the VC layer the DID
>>> method proliferation mess that happened at the DID layer.
>>>
>>>
>>>
>>>
>>> From: *ProSapien Sam Smith*
>>>
>>> Date: Thu, Dec 3, 2020 at 7:33 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Tobias Looker
>>>
>>> Cc: Hardman Daniel, Kaliya Identity Woman , Dmitri Zagidulin , Manu
>>> Sporny, Drummond Reed
>>>
>>>
>>>
>>> Finally,
>>>
>>>
>>> It may never be that one wants to put all of A, B, and C in the same
>>> container. That is not the point. The point is that the verification of
>>> the container as authentic does not care about A, B, or C. If it does,
>>> then we have designed a bad container. We do not have separation of
>>> concerns; we have confusion of concerns. Arguing that VCs are somehow
>>> meant for identity purposes is fundamentally confusing what a VC truly
>>> is. The term Verifiable Credential may be part of the problem. The
>>> definition of Credential is a right or privilege, i.e., it is synonymous
>>> with authorization. It is indeed ironic to be in a discussion where one
>>> side is arguing that verifiable authorizations (credentials) must not be
>>> authorizations.
>>>
>>>
>>>
>>> From: *Drummond Reed* <drummond.reed@evernym.com>
>>>
>>> Date: Thu, Dec 3, 2020 at 7:39 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: ProSapien Sam Smith <sam@prosapien.com>
>>>
>>> Cc: Tobias Looker <tobias.looker@mattr.global>, Hardman Daniel <
>>> daniel.hardman@evernym.com>, Kaliya Identity Woman <
>>> kaliya@identitywoman.net>, Dmitri Zagidulin <dzagidulin@gmail.com>, Manu
>>> Sporny <msporny@digitalbazaar.com>
>>>
>>>
>>>
>>> I am not a real software architect like the rest of you. But I must admit
>>> that Sam's argument (that a verifiable container is required in all these
>>> scenarios, and that having one consistent way of handling verifiable
>>> containers independent of the semantics in the container is a serious
>>> advantage) seems compelling, especially considering all the
>>> infrastructure we are talking about building on top of this foundation.
>>>
>>>
>>> What am I missing?
>>>
>>>
>>> =Drummond
>>>
>>>
>>>
>>> From: *Tobias Looker*
>>>
>>> Date: Thu, Dec 3, 2020 at 7:46 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: =Drummond Reed
>>>
>>> Cc: ProSapien Sam Smith , Hardman Daniel, Kaliya Identity Woman, Dmitri
>>> Zagidulin, Manu Sporny
>>>
>>>
>>>
>>> > What am I missing?
>>>
>>>
>>> I understand; in essence I think Verifiable Credentials are more
>>> opinionated than the generalized verifiable data container vision that
>>> Sam is putting out. For instance, if you use the credentialSubject
>>> property in the VC you issue, you as the issuer are aiming to describe a
>>> subject in some capacity, which is fundamentally different from
>>> describing the authority to do something (a capability). It is that
>>> commingling that is problematic in my eyes, and it leads to a
>>> well-documented set of problems that often plague identity systems.
>>>
>>> *Tobias Looker*
>>>
>>>
>>>
>>>
>>> From: *Daniel Hardman*
>>>
>>> Date: Thu, Dec 3, 2020 at 8:41 PM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Tobias Looker
>>>
>>> Cc: ProSapien Sam Smith, Kaliya Identity Woman, Dmitri Zagidulin Manu
>>> Sporny, Drummond Reed
>>>
>>>
>>>
>>> > in essence I think Verifiable Credentials are more opinionated than the
>>> generalized verifiable data container vision that Sam is putting out.
>>>
>>>
>>> Okay, excellent. I believe we've zeroed in on one root of our divergence,
>>> which is *whether VCs are a general mechanism for verifiable statements,
>>> or are specially dedicated to establishing identity*. This is progress.
>>> Commentary on that subtopic is indented below.
>>>
>>>
>>> It is true that identity use cases predominate in the minds of VC
>>> proponents today. However, I don't believe it's correct to consider them
>>> inherently related to identity. Here are several quotes from the VC spec
>>> that might be relevant:
>>>
>>>
>>>
>>>    - "Verifiable credentials represent statements made by an issuer in a
>>>      tamper-evident and privacy-respecting manner." (section 1.3; note
>>>      that identity isn't mentioned)
>>>    - "Issuers can issue verifiable credentials about any subject."
>>>      (section 1.3)
>>>    - "credential: A set of one or more claims made by an issuer. A
>>>      verifiable credential is a tamper-evident credential that has
>>>      authorship that can be cryptographically verified." (official
>>>      definition from section 2)
>>>    - "It is possible to have a credential that does not contain any
>>>      claims about the entity to which the credential was issued." (note
>>>      in section 3.2)
>>>    - "When expressing statements about a specific thing, such as a
>>>      person, product, or organization, it is often useful to use some
>>>      kind of identifier so that others can express statements about the
>>>      same thing. This specification defines the optional id property for
>>>      such identifiers... If the id property is present..." (section 4.2)
>>>    - "A holder might transfer one or more of its verifiable credentials
>>>      to another holder." (section 5.1)
>>>    - "This section describes possible relationships between a subject
>>>      and a holder and how the Verifiable Credentials Data Model expresses
>>>      these relationships... [Diagram: Subject present? no->Bearer
>>>      Credential; yes->Subject = Holder? no->Credential Uniquely
>>>      Identifies Subject? no->Subject Passes VC to Holder? no->Issuer
>>>      Independently Authorises Holder?...]" (Appendix C)
>>>
>>>
>>>
>>>
>>>
>>>
>>> Key points: VCs are defined by their composition from verifiable
>>> assertions, which can be about anything and for any purpose; identity is
>>> only one of many uses. VCs can be bearer tokens. VCs are explicitly
>>> contemplated for authorization use cases, including ones where the
>>> credential (authorization instrument) does not uniquely identify the
>>> subject. VCs can be transferable. This is all explicitly in the spec. Do
>>> you agree?
>>>
>>>
>>>
>>> >For instance if you use the credentialSubject property in the VC you
>>> issue, you as the issuer are aiming to describe a subject in some capacity,
>>> which is fundamentally different than describing the authority to do
>>> something (a capability)
>>>
>>>
>>> This is a second point of divergence, also important. *Is it really true
>>> that a capability describes the authority to do something, that this
>>> doesn't describe a subject in some capacity, and that this makes a
>>> capability a different animal from a VC?*
>>>
>>>
>>> Here is the definition of a capability from wikipedia
>>> <https://en.wikipedia.org/wiki/Capability-based_security>: "A
>>> capability... is a communicable, unforgeable token of authority. It refers
>>> to a value that references an object along with an associated set of
>>> access rights."
>>>
>>>
>>> Note the second sentence. A capability is not a capability unless it
>>> asserts that privileges attach to an object; it never describes privileges
>>> in the abstract. The distinction with VCs collapses by this definition,
>>> does it not?
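That definition can be made concrete with a small sketch (illustrative only; the HMAC scheme, key, and field names below are assumptions for demonstration, not any real spec): a capability is a token that inseparably binds an object reference to a set of access rights, and is unforgeable because tampering invalidates its proof.

```python
# Illustrative sketch of the Wikipedia definition: a capability is an
# unforgeable token binding an object reference to a set of access rights.
# The HMAC-based "proof" is a stand-in for a real signature scheme.
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # hypothetical issuer key

def mint_capability(resource, rights):
    """Issue a token whose authority attaches to a specific object."""
    body = {"resource": resource, "rights": rights}
    payload = json.dumps(body, sort_keys=True).encode()
    proof = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {**body, "proof": proof}

def invoke(cap, action):
    """Honor an invocation only if the token is intact and permits the action."""
    body = {"resource": cap["resource"], "rights": cap["rights"]}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cap["proof"], expected) and action in cap["rights"]

cap = mint_capability("https://roboticlab.cam.ac.uk/~amyt/capstone-project",
                      ["operate"])
print(invoke(cap, "operate"))  # True: the granted right attaches to this object
print(invoke(cap, "repair"))   # False: action not granted
```

The point mirrored from the definition: the rights never exist in the abstract; they are meaningful only together with the object reference they are bound to.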
>>>
>>>
>>> Suppose a robotics student asserts that her course of study at Cambridge
>>> is very rigorous. I think we can all agree that this sort of assertion is
>>> not identity-centric. (It does reference a subject just like a capability
>>> references its resource or an RDF triple references its subject, but the
>>> student isn't asserting (and her listeners don't evaluate her statement) to
>>> establish the identity of Cambridge, or of herself. Right?)
>>>
>>>
>>> It seems indisputable to me that this assertion is easily represented as
>>> a verifiable credential, without any distortion or abuse of the standard:
>>>
>>>
>>> {
>>>
>>>   "@context": [ "https://www.w3.org/2018/credentials/v1", "
>>> https://studentu.org/campuslife" ],
>>>
>>>   "type": ["VerifiableCredential", "AcademicExperienceReport"],
>>>
>>>   "issuer": "https://facebook.com/AmyLovesRobots",
>>>
>>>   "issuanceDate": "2021-01-01T19:73:24Z",
>>>
>>>   "credentialSubject": { "id": "https://www.cam.ac.uk/",
>>> "opinionAboutRigor": "Super challenging."},
>>>
>>>   "proof": {...}
>>>
>>>  }
>>>
>>>
>>> So, now let's imagine that Amy wants to let a few of her friends play
>>> with the robot prototype she's been developing as her capstone project. She
>>> makes another verifiable statement framed in a way that satisfies the VC
>>> spec:
>>>
>>>
>>> {
>>>
>>>   "@context": [ "https://www.w3.org/2018/credentials/v1", "
>>> https://roboticlab.cam.ac.uk/grants" ],
>>>
>>>   "type": ["VerifiableCredential", "RobotPrivilege"],
>>>
>>>   "issuer": "https://facebook.com/Amy",
>>>
>>>   "issuanceDate": "2021-01-01T19:73:24Z",
>>>
>>>   "credentialSubject": { "id": "
>>> https://roboticlab.cam.ac.uk/~amyt/capstone-project", "allow":
>>> "operate"},
>>>
>>>   "proof": {...}
>>>
>>>  }
>>>
>>>
>>> This VC also happens to be an OCap, in that it contains an unforgeable
>>> object reference combined inseparably with a statement that authorizes or
>>> confers privileges on its holder. It's an assertion that the bearer is
>>> authorized.
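For contrast, and purely as an illustrative sketch: the same grant expressed in the vocabulary of the zcap-ld draft might look roughly like the following. The field names approximate that draft; every identifier value here is hypothetical.

```json
{
  "@context": "https://w3id.org/security/v2",
  "id": "urn:uuid:0b4e7f6a-hypothetical-capability-id",
  "invocationTarget": "https://roboticlab.cam.ac.uk/~amyt/capstone-project",
  "invoker": "did:key:z6MkHypotheticalFriendKey",
  "proof": {
    "type": "Ed25519Signature2018",
    "proofPurpose": "capabilityDelegation",
    "capabilityChain": ["urn:uuid:hypothetical-root-capability-id"]
  }
}
```

Note the difference in shape: there is no issuer/credentialSubject split; the token names a target and an invoker, and delegation history lives in the proof's capability chain rather than in claims about a subject.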
>>>
>>>
>>>
>>> >it is that commingling that is problematic in my eyes and leads to a
>>> well documented set of problems that often plague identity systems.
>>>
>>>
>>> Can you give some specific examples -- not examples of identity system
>>> problems solved by OCaps (which I think we all agree with), but of problems
>>> that would be introduced if we implemented OCaps with VCs instead of
>>> writing a new spec for them?
>>>
>>>
>>> I suggest that we explore whether the separation into identity tokens vs
>>> authorization tokens is advisable or not. That might be an interesting
>>> question. However:
>>>
>>>
>>> 1. I suggest we define the purpose of VCs by the language in the
>>> standard, not by tribal wisdom -- not assuming that VC != OCap.
>>>
>>> 2. If we decide that two different tokens are desirable, it does not free
>>> us from the responsibility of explaining why we need two different specs
>>> for them.
>>>
>>>
>>>
>>> From: *Manu Sporny*
>>>
>>> Date: Fri, Dec 4, 2020 at 7:28 AM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Tobias Looker, =Drummond Reed
>>>
>>> Cc: ProSapien Sam Smith, Hardman Daniel, Kaliya Identity Woman, Dmitri
>>> Zagidulin
>>>
>>>
>>> On 12/3/20 10:46 PM, Tobias Looker wrote:
>>>
>>> > I understand, in essence I think Verifiable Credentials are more
>>>
>>> > opinionated than the generalized verifiable data container vision
>>>
>>> > that Sam is putting out. For instance if you use the
>>>
>>> > credentialSubject property in the VC you issue, you as the issuer are
>>>
>>> > aiming to describe a subject in some capacity, which is fundamentally
>>>
>>> > different than describing the authority to do something (a
>>>
>>> > capability), it is that commingling that is problematic in my eyes
>>>
>>> > and leads to a well documented set of problems that often plague
>>>
>>> > identity systems.
>>>
>>>
>>> This is the crux of the philosophical divergence.
>>>
>>>
>>> Fundamentally, Verifiable Credentials were meant to be used to do
>>>
>>> authorization. You *can* use them to do authorization in the same way
>>>
>>> that you can use a hammer to drive in a screw.
>>>
>>>
>>> The abstraction that Sam is applying to VCs is really achieved one layer
>>>
>>> lower with Linked Data Proofs/Signatures, which allows any information
>>>
>>> to be digitally signed and verified as authentic.
>>>
>>>
>>> Just because we *can* do authorization using VCs doesn't mean we should.
>>>
>>> One reason we shouldn't is because a number of things that can be
>>>
>>> capabilities (such as a DID Document on a ledger) are definitely not VCs
>>>
>>> (square peg, round hole problem). Another reason is because we don't
>>>
>>> want people to confuse what a VC is used for and what a ZCAP is used for
>>>
>>> (confused developers problem).
>>>
>>>
>>> I certainly don't disagree with a number of the points that Daniel and
>>>
>>> Sam have made, but I don't think they're enough to get us over the two
>>>
>>> concerns I have above. That said, seeing a concrete proposal of the use
>>>
>>> cases Sam outlined might help analyse the benefits and drawbacks of the
>>>
>>> approach.
>>>
>>>
>>> I also want to point out that this is a very useful discussion and it's
>>>
>>> a shame that it's happening in a private channel. The entire community
>>>
>>> would benefit from the discussion. Can we move it to the CCG mailing list?
>>>
>>>
>>> -- manu
>>>
>>>
>>> --
>>>
>>> Manu Sporny - https://www.linkedin.com/in/manusporny/
>>>
>>> Founder/CEO - Digital Bazaar, Inc.
>>>
>>> blog: Veres One Decentralized Identifier Blockchain Launches
>>>
>>> https://tinyurl.com/veres-one-launches
>>>
>>>
>>>
>>> From: *Daniel Hardman*
>>>
>>> Date: Fri, Dec 4, 2020 at 8:02 AM
>>>
>>> Subject: Re: VCs & OCap - please talk
>>>
>>> To: Manu Sporny
>>>
>>> Cc: Tobias Looker, =Drummond Reed, ProSapien Sam Smith, Kaliya Identity
>>> Woman, Dmitri Zagidulin
>>>
>>>
>>>
>>> I don't mind moving the conversation to the CCG channel. How should we
>>> provide the background context? Would it be good to spend some conversation
>>> time in an interactive meeting doing that, and connecting the email thread
>>> to that interactive discussion either before or after?
>>>
>>>
>>> >Fundamentally, Verifiable Credentials were meant to be used to
>>> do authorization.
>>>
>>>
>>> Just checking. You meant to say "weren't" there, right?
>>>
>>>
>>> Regarding this statement from Manu:
>>>
>>>
>>> >Fundamentally, Verifiable Credentials were meant to be used to do
>>>
>>> authorization. You *can* use them to do authorization in the same way
>>>
>>> that you can use a hammer to drive in a screw.
>>>
>>>
>>> I strongly disagree with two aspects of this statement.
>>>
>>>
>>> First, Manu can absolutely speak with authority about his own intentions
>>> here -- and as an editor of the spec, he can even speak about his
>>> experience getting the spec matured, where that perception was reinforced.
>>> But Manu is not the only editor of the spec, and I know his
>>> characterization has never been true about a significant subset of the
>>> community who contributed.
>>>
>>>
>>> Second, I have yet to see ANY justification for the hammer/screw analogy.
>>> I gave an example of using a VC to express an ocap. It was simple and
>>> elegant, and 100% compatible with the VC spec. To assert that this is using
>>> a hammer to drive a screw implies a much stronger misfit than that. So if
>>> we're going to make an assertion like that, I want concrete, specific
>>> examples to justify it. I repeat the invitation I gave to Tobias:
>>>
>>>
>>> Can you give some specific examples -- not examples of identity system
>>> problems solved by OCaps (which I think we all agree with), but of problems
>>> that would be introduced if we implemented OCaps with VCs instead of
>>> writing a new spec for them?
>>>
>>>
>>>
>>>
>>> *Attachments:*
>>>
>>>    - OCap-VC Discussion.pdf
>>>
>>>
>>>

Received on Saturday, 5 December 2020 22:54:49 UTC