How VC API Really Works (was: Re: Publication of VC API as VCWG Draft Note)

On Mon, Nov 21, 2022 at 1:42 PM David Chadwick wrote:
> this is where I take issue with you (as I said during the plugfest).

Yes, I heard the commentary from the back of the JFF Plugfest room and
from across the Atlantic! :P

So let's talk about it… because some of us took issue with the way the
OID4 interop stuff was presented as well. Let's see if we can describe
the results from each group in a way we can both sign off on. :)

> You will note that all the OID4VCI implementations had holistic VC Issuers, which is why it was a lot more implementation work than that undertaken by the 17 cryptographic signers.

TL;DR: Your argument suggests that the interoperability achieved via
CHAPI and VC API doesn't count, based on your interpretation of what
an "Issuer" is and on what is, in my view, an oversimplification of
what the VC API issuing endpoint does. That argument has also been
used to excuse some of the interop difficulties found with OID4. In
other words, the argument goes: CHAPI + VC API only appeared to
achieve better interop than OID4, and it would have struggled just as
much had "real interop" been attempted. The OID4 struggles in
question were the findings that 1) OID4 interop split into two
non-interoperable end-to-end camps (which were combined into one camp
for reporting purposes, inflating the number of issuers beyond what
was actually achieved), and 2) almost every OID4 issuer was a
software vendor, not an institution that issues workforce
credentials.

This is a LOOONG post (apologies for the length), but these details
are important, so let's get into them. This is going to be fun! :)

> You might have 17 implementations of the signing VC-API, but these are not VC Issuers.

An Issuer is a role. The issuer may execute that role using a variety
of system components. The OID4 specification hides or is agnostic
about all those system components "behind the API". It doesn't speak
about them because "How an Issuer chooses to manage the process of
issuance is out of scope, the only thing that matters is how the
credential is delivered to the wallet". This puts all of the focus on
the simple delivery or hand-off of the credential to the wallet,
ignoring the rest of the process. That is just one possible design
choice – it does not mean other design choices are somehow invalid or
are not related to credential issuance. It also comes with its own
tradeoffs. For example, putting the process of
issuing/verifying/revoking/challenge management out of scope creates a
vendor lock-in concern. An alternative approach specifies individual
components, allows them to be swapped out and reduces the
implementation burden on the frontend delivery services. In other
words, the delivery mechanism becomes plug-and-play – and extremely
simple. This plug-and-play mechanism was demonstrated via the CHAPI
playground in the CHAPI + VC API interop work.

Now, the VC API group started out making the same mistake of confusing
the role of "Issuer" with a set of one or more software components, but
it became obvious (over the course of a year) that doing so was
causing the group to miscommunicate in the same way that we are
miscommunicating right now. It was better to talk about specific
functions – each of which is used to help the Issuer role accomplish
the issuance and delivery of one or more credentials to a wallet. It
also became clear that "issuance" and "delivery", as just stated, are
different functions – as a credential is issued when it is fully
assembled and has proofs attached to it, and then it is passed from
one holder to another until it reaches the wallet (delivery). This
approach also fits cleanly with the VCDM.
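
To make that distinction concrete, here's a minimal TypeScript sketch
of issuance and delivery as separate functions. The
/credentials/issue endpoint shape follows the VC API draft; the
service URL and the deliverToWallet() function are hypothetical
placeholders:

    // A sketch of the separation, with hypothetical names. The
    // issuer-service URL is configuration, which is what makes the
    // backend swappable.
    const ISSUER_SERVICE = 'https://issuer-service.example'; // any vendor

    // Issuance: the credential is fully assembled and a proof is attached.
    async function issueCredential(credential: object): Promise<object> {
      const response = await fetch(`${ISSUER_SERVICE}/credentials/issue`, {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({credential})
      });
      const {verifiableCredential} = await response.json();
      return verifiableCredential;
    }

    // Delivery: the already-issued VC is passed from holder to holder
    // until it reaches the wallet (via CHAPI, a plain HTTP endpoint,
    // OID4, etc.).
    async function deliverToWallet(verifiableCredential: object) {
      // protocol-specific hand-off goes here
    }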

There are at least three roles in the VC Data Model – Issuer, Holder,
and Verifier. Each one of those roles will utilize system components
to realize that role in the ecosystem. Some of the system components
that we have identified are: the Issuer Coordinator, the Issuer
Service, the Issuer Storage Service, the Issuer Status Service, and
the Issuer Admin Service. There will be others, with their own APIs,
as the ecosystem matures. In general, there are two classes of system
components that an issuer ROLE utilizes – Issuer Coordinators and
Issuer Services.

You can read more about this in the VC API Architecture Overview (but
be careful: the diagram hasn't been updated from "App" to
"Coordinator" yet, and it's a draft work in progress, so there are
errors and vague spots):

https://w3c-ccg.github.io/vc-api/#architecture-overview
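
Purely as illustration, the split between the two classes of
components might be sketched like this. These TypeScript interfaces
are hypothetical (the VC API defines HTTP endpoints, not types), but
they show where the seams are:

    type Credential = Record<string, unknown>;
    type VerifiableCredential = Record<string, unknown>;

    // An Issuer Service does the narrow, well-defined work of turning
    // a credential into a Verifiable Credential with a proof attached.
    interface IssuerService {
      issue(credential: Credential): Promise<VerifiableCredential>;
    }

    // An Issuer Coordinator runs the business rules (authentication,
    // authorization, workflow) and delegates to swappable services.
    interface IssuerCoordinator {
      issuerService: IssuerService;
      statusService?: {updateStatus(vcId: string): Promise<void>}; // e.g., revocation
      storageService?: {store(vc: VerifiableCredential): Promise<void>};
    }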

All that being said, I think we're making a mistake if we think that
the name we apply to the interop work performed matters more than what
was actually technically accomplished. Did we accomplish plug-and-play
or not – and how many different parties participated in providing
their plug-and-play components to the process?

> A VC Issuer talks to the wallet/holder (as per the W3C eco system model) and has much more functionality than simply signing a blob of JSON.

What you are referring to in your comment is the concept of the
"Issuer Coordinator" in VC API terminology. It is the entity that does
all of the business rule processing to determine if the entity that
has contacted it should receive a VC or not. In CHAPI + VC API, this
can be done via a simple username/password website login, multifactor
login, federated login, login + DIDAuth, or via the exchange of
multiple credentials in a multi-step process. The Issuer Coordinator
is capable of delegating these steps to multiple service backends.
OID4 does not define an API around those delegation steps (a vendor
lock-in risk); the VC API takes the position that they really do
matter and defines them (enabling choice in vendors).

Also, the Holder role referenced above refers to any party that
currently holds the credential. The same party that plays the Issuer
role always also plays the Holder role until delivery to a wallet. In
the VC API, the VC is issued via the issuing API and then will be
delivered through some delivery protocol. Delivery can be done with a
simple HTTP endpoint or using other protocols such as OID4. My
understanding is that there is a similar concept contemplated in OID4
(called "Batched issuance" or something? I couldn't find a reference)
where the VC will be held (by a Holder, of course) until the wallet
arrives to receive it. Just because OID4 hides issuance behind a
delivery protocol (intentionally being agnostic about how it happens)
does not mean that every protocol must work this way.
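
A hedged sketch of the "simple HTTP endpoint" style of delivery,
where the already-issued VC is held until the wallet arrives to claim
it. The URL shape and claim-token scheme are made up for
illustration; Express is used for brevity:

    import express from 'express';

    const app = express();
    const parked = new Map<string, object>(); // claim token -> issued VC

    app.get('/credentials/claim/:token', (request, response) => {
      const vc = parked.get(request.params.token);
      if (!vc) {
        response.status(404).end();
        return;
      }
      parked.delete(request.params.token); // single-use hand-off
      response.json({verifiableCredential: vc});
    });

    app.listen(8080);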

Now, for the JFF Plugfest #2, the CHAPI Playground was ONE of the
Issuer Coordinators, but others were demonstrated via Accredita, Fuix
Labs, Participate, RANDA, and Trusted Learner Network. So, even when
we use your definition of "issuance", which I expect would struggle to
achieve consensus, there were multiple parties doing "issuance". Not
only did the VC API cohort demonstrate that you could do standalone
Issuer Coordinator sites, we also demonstrated a massive Issuer
Coordinator that had 13 Issuer Services integrated into the backend.
We also had an additional four parties playing the Issuer role that
used their own Issuer Coordinators to put a VC in a wallet,
demonstrating not only choice in
protocols (Issuance over CHAPI and Issuance over VC API), but choice
in Issuer Service vendors as well.

> An issuer that simply signs any old JSON blob that is sent to it by the middleman (the CHAPI playground) is not a holistic issuer. It is simply a cryptographic signer.

No, that's not how the VC API works.

An Issuer Coordinator and an Issuer Service are within the same trust
boundary. If we are to only look at the CHAPI Playground as an Issuer
Coordinator, it had the ability to reach out to those 13 Issuer
Services because those services had given it an OAuth2 Client ID and
Client Secret to call their Issuer Service APIs. Those Issuer Services
(that implement the VC Issuer API), however, are run by completely
different organizations such as Arizona State University, Instructure,
Learning Economy Foundation, Digital Credentials Consortium, and
others. You are arguing that they are not real Issuers even though:
1) the API they implement is specific to issuing Verifiable
Credentials (they do not implement "generic data blob signing"); 2)
they handle their own key material; and 3) they implement Data
Integrity Proofs by adding their issuer information (including their
name, imagery, and public keys), processing the referenced JSON-LD
Contexts, performing RDF Dataset Canonicalization, and using their
private key material to digitally sign the Verifiable Credential,
which is then handed back to the Issuer Coordinator. Some issuers
internally used a separate cryptographic signer API, called WebKMS,
to perform the actual cryptographic signing, but *that* API was not
highlighted here and is the kind of API that more closely
approximates signing "any old JSON blob". More must be done in an
issuance API implementation than just performing cryptographic
operations. So your description of the issuance API is not accurate.
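
For the record, here's roughly what that exchange looks like. The
request and response shapes follow the VC API draft's
/credentials/issue endpoint; the URLs, token, and credential contents
are illustrative only:

    // Obtained out-of-band via the OAuth2 client_credentials grant
    // mentioned above; a literal here only to keep the sketch
    // self-contained.
    const accessToken = 'example-access-token';

    const response = await fetch('https://issuer.example/credentials/issue', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        credential: {
          '@context': ['https://www.w3.org/2018/credentials/v1'],
          type: ['VerifiableCredential'],
          issuer: 'did:example:issuer123',
          issuanceDate: '2022-11-22T00:00:00Z',
          credentialSubject: {id: 'did:example:subject456'}
        }
      })
    });

    // The Issuer Service returns the same credential with a Data
    // Integrity proof attached; context processing, canonicalization,
    // and signing all happened on the service side.
    const {verifiableCredential} = await response.json();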

The whole issuance process is proprietary in OID4 today; it is simply
not defined, by design. Only the delivery mechanism (the request
for a credential of a certain type and its receipt) is defined. As
mentioned, the VC API separates issuance and delivery to help prevent
vendor lock-in and to enable multiple delivery mechanisms without
conflating them with the issuance process.

> The middleman and the signer together constitute a holistic VC Issuer as it is the middleman that talks to the wallet, says which VCs  are available to the wallet, authenticates and authorises the user to access the VC(s) and then gets the VC(s) signed by its cryptographic signer.

You're basically describing an Issuer Coordinator in VC API parlance:
the entity that executes VC-specific business logic that determines if
a VC should be issued or not to a particular entity. The VC API has a
layered architecture such that the entity implementing the Issuer
Coordinator, the entity performing DIDAuth, and the entity in control
of the private keys and Issuer Service don't have to be the same
entity, component, or service in the system. It's also true that you
don't have to keep re-implementing the same logic over and over again
with CHAPI, VPR, and VC API and instead can re-use components and put
them together in ways that save weeks of developer time (as was
demonstrated during the plugfest by the companies that started from
scratch). This enabled people to participate by implementing the
pieces that they wanted to (and as reported in the CHAPI matrix)
without having to do everything as a monolithic application. This
meant even more interoperability and component-reuse. So it is true
that almost everyone had a fairly smooth experience achieving the
interoperability bar for VC API – but it was because of the layered
and component-based design of CHAPI + VPR + VC-API. I'm sure that OID4
implementers did benefit from reusing existing OAuth2 tools, but my
understanding is that everything behind the API is a full rewrite for
each participant instead of allowing for reuse or interoperability
between components. It could also be that the OID4 implementers
seemed to struggle more simply because the VC API was easier to
implement in a number of other ways.

Just taking a guess at some of the things, from my perspective, that
may have slowed down OID4 implementers...

The organizations implementing OID4 had to do the following extra
work to achieve interop (a hedged sketch of one item, the metadata
endpoint, follows the list):

* Publish which OIDC profile they were using
* Create a login-based, QR-code-based, and/or deep-link initiation page
* Decide whether to support VC-JWT or VC-DI, which reduced the pool
of interop partners
* Publish a Credential Issuer Metadata endpoint
* Create an OAuth token endpoint for the pre-authorized code flow
* Publish a Credential Resource Endpoint
* Publish a shared OAuth server/Issuer JWKS URL
* Publish an Issuer JWKS URL
* Depend on non-publicly-available wallet software to test their
interoperability status
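
For reference, here is a hedged sketch of the Credential Issuer
Metadata item above, expressed as a TypeScript object. Field names
changed across OID4VCI draft versions, so treat these as approximate,
and the URLs and credential type as placeholders; in the drafts I've
seen, this document is served from
/.well-known/openid-credential-issuer:

    // Approximate OID4VCI draft-era metadata; field names vary by
    // draft version and all values below are placeholders.
    const credentialIssuerMetadata = {
      credential_issuer: 'https://issuer.example',
      credential_endpoint: 'https://issuer.example/credential',
      credentials_supported: [{
        format: 'jwt_vc_json',
        types: ['VerifiableCredential', 'OpenBadgeCredential']
      }]
    };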

The folks that used CHAPI + VC API didn't have to do any of that,
which made things go faster. Does that mean that different parties did
not implement VC delivery? No – see the above comments on that. It
just seems it was easier for some VC API implementers to implement
delivery. Some of this may be due to the use of CHAPI, which they
perhaps found easier to implement than some items in the above list.
Future OID4 implementations could also avoid some of the items above
by relying on CHAPI instead.
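
As a point of comparison, the CHAPI side of delivery is a few lines
of front-end code. A minimal sketch based on the
credential-handler-polyfill documentation; the presentation contents
are placeholders:

    import * as polyfill from 'credential-handler-polyfill';

    // The polyfill exposes WebCredential as a browser global.
    declare const WebCredential: any;

    // Load CHAPI support into the page (a no-op where natively supported).
    await polyfill.loadOnce();

    const presentation = {
      '@context': ['https://www.w3.org/2018/credentials/v1'],
      type: ['VerifiablePresentation'],
      verifiableCredential: [/* the issued VC goes here */]
    };

    // The browser mediates: the user picks a wallet, and that wallet
    // receives the credential. No issuer-side wallet integration needed.
    const webCredential = new WebCredential('VerifiablePresentation', presentation);
    const result = await navigator.credentials.store(webCredential);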

Now, does that mean CHAPI + VPR + VC API doesn't have its challenges?
Of course not! At present, the native app CHAPI flow needs usability
improvements, and we're going to be working on that in 2023 Q1 by
integrating native apps into the CHAPI selector and using app-claimed
HTTPS URLs. The VC API and VPR specs need some serious TLC, and the
plugfest gave us an idea of where we can put that effort. VPR
currently supports only two protocols (browser-based VPR and VC
API-based VPR) and will be adding more early next year, since CHAPI
is protocol agnostic and it's clear at this point (at least, to me)
that we're looking at a multi-protocol future in the VC ecosystem.
I'm sure
I'm missing other places where CHAPI + VPR + VC API needs to improve,
and I'm sure that people on this mailing list won't be shy in
suggesting those limitations and improvement areas if they feel so
inclined. :)

Let me stop there and see if any of the above resonates, or if I'm
papering over some massive holes in the points being made above.

I'm stepping away for US Thanksgiving now, and am thankful to this
community (and DavidC) for these sorts of conversations throughout the
years. :)

-- manu

-- 
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
News: Digital Bazaar Announces New Case Studies (2021)
https://www.digitalbazaar.com/
