Re: Layering/Composability of Verifiable Credentials (was: Re: Market Adoption of VCs by Fortune 500s (was: Re: DHS Verifier Confidence/Assurance Level Expectation/Treatment of non-publicly defined Vocabulary/Terminology -- by using @vocab))

It's always hard when people in disagreements seem to be talking past one
another. But I think a lot of people are on this list because they like the
work that has been done by this community over the past 10 years and are
relatively happy with the specification artifacts that have come out of
that work.

For myself as an example, I don't think that the JSON-LD or RDF concepts in
the VC Data Model need fundamental rethinking. We're on a roll with the
design patterns as they are, and as a developer implementing a product
using VCDM, the concepts in the spec appear to me to be a good fit for the
task.

Statement #1
>
> Yes, there’s no doubt there is huge value represented by the use of
> JSON-LD (and RDF graphs) but is it necessary to have these as
> extensions/requirements in the basic Verifiable Credential data model
> specification? …why is it necessary to complicate a “data model”
> specification with JSON (and RDF) extensions?
>

It might have been possible at one point for this work to have started down
that path, but we didn't, because the people who negotiated the original
version examined the use case of "a developer who doesn't know much about
JSON-LD" and arrived at a set of compromises that make it fairly
straightforward for that persona to implement the requirements of the spec.
I am happy to see the forthcoming developments with @vocab to make it even
a little bit easier for some use cases to be served without thinking too
hard about what IRIs to map certain claims to, but I suspect that most
communities that make use of VCDM for interoperability will just do the
extra step to anchor their terms to appropriate IRIs intentionally. Apps
where the same entity is producing and consuming the credential may have
less use for the RDF features, and that is OK.
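For readers unfamiliar with the mechanism: @vocab gives JSON-LD a default IRI prefix for terms a context doesn't define, so otherwise-unmapped claims still expand to IRIs instead of being dropped. A minimal sketch (the vocabulary IRI and extra claim here are illustrative, not the actual VCDM 2.0 values):

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    { "@vocab": "https://example.org/my-app-vocab#" }
  ],
  "type": ["VerifiableCredential"],
  "issuer": "https://example.org/issuers/1",
  "issuanceDate": "2023-02-03T00:00:00Z",
  "credentialSubject": {
    "id": "did:example:123",
    "favoriteColor": "blue"
  }
}
```

Under JSON-LD expansion, the undefined term "favoriteColor" maps to https://example.org/my-app-vocab#favoriteColor. A community aiming for broad interoperability would instead do the extra step described above: define "favoriteColor" in a shared published context, anchored to an IRI the whole community agrees on.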


Restatement #2
>
> If the true primary goal was to drive wider, deeper, early adoption of
> VCs, the strategy should be to make the VCDM simpler and more compact; not
> more complicated, more niche, and less desirable to use.
>

I think we're at the point in the adoption curve where a large number of
organizations, large and small, are spinning up initiatives, putting VCDM
into production, and scaling their production deployments. Fundamentally
rethinking key requirements at this point would, I think, cause confusion
and market chaos; a strategy of stability and enhancement to expand the
supported use cases serves adoption better.

It's a statement of opinion that existing components of VCDM are "more
complicated", "niche", "less desirable" etc. That's a valid opinion for you
or some other developer to hold, but I'm sure you understand that when you
jump into a long-running community that has been building something
together (that they love) for nearly a decade to tell them that they're
doing it all wrong, you will find many people whose opinion is different
from yours.


Restatement #3
>
> I think we need a layered Internet Credential architecture reference model
>

I agree with Manu that the VC Data Model is already "layered". If you find
collaborators to work on some different conceptualization of layering with
you, that is also OK, but I would be surprised if you got much agreement in
this community that VCDM is not layered. In some cases the number of layers
is a little bit frustrating and causes risks for market fragmentation when
different implementers support incompatible options for a certain layer. In
my presentations last year, I was even using a metaphor of a "layer cake
<https://docs.google.com/presentation/d/1tfex0VCrro_Ph8_gf-dFzdl1E94x1qxJJTOHVCE2FYk/edit#slide=id.g14bf41bcebb_0_171>"
to describe the tech stack for how Open Badges layers into the VCDM
ecosystem, and what challenges this poses. (Open Badges 3.0 is a spec
layered on top of VCDM that's implemented at the "schema" layer for VCDM,
taking advantage of JSON-LD to make claims about learning.
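As an illustrative sketch of that schema-layer relationship (the context URL and term names here reflect my understanding of the Open Badges 3.0 draft; treat the specifics as assumptions, not normative values):

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://purl.imsglobal.org/spec/ob/v3p0/context.json"
  ],
  "type": ["VerifiableCredential", "OpenBadgeCredential"],
  "issuer": {
    "id": "https://example.edu/issuers/1",
    "type": "Profile",
    "name": "Example University"
  },
  "issuanceDate": "2023-02-03T00:00:00Z",
  "credentialSubject": {
    "type": "AchievementSubject",
    "achievement": {
      "id": "https://example.edu/achievements/intro-course",
      "type": "Achievement",
      "name": "Intro Course",
      "criteria": { "narrative": "Completed all course modules." }
    }
  }
}
```

Everything here below the second context entry is ordinary VCDM; the Open Badges layer only adds vocabulary (Profile, AchievementSubject, Achievement) on top, which is what I mean by a spec living at the "schema" layer.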


Does it make sense to read the responses to your posts as honest
disagreement with your opinions, rather than as a refusal to consider and
respond to them?

That's my angle. I see your opinion as valid. Sure, maybe JSON-LD is a
little complicated (it took me about 18 months working on things that were
JSON-LD-adjacent before I felt pretty competent at understanding the
concepts, and I still do not consider myself an expert after 8 years). But
I disagree that it is too complicated to be a fundamental building block of
the VC ecosystem that I'm collaborating with others here to create. Sure,
maybe the complexity will hamper some developers in implementing VCDM
(we'll see based on adoption; I found it took about 2 days to get the
mechanical bits of creating and signing the credential done, versus several
months on the workflows surrounding the experience of credentials.) But my
opinion is that the compromises already present in the spec and those that
are forthcoming in 2.0 are pretty good ones to achieve the benefits of
RDF-based semantic interoperability without putting too much of an onus on
a developer who isn't a JSON-LD expert.

Cheers! I hope you have a good time in the community people here have spent
the last decade building, and I hope you find great collaborators for the
pieces of credentials-related work you want to contribute to.

*Nate Otto*
nate@ottonomy.net
Founder, Skybridge Skills
he/him/his

Received on Friday, 3 February 2023 21:58:50 UTC