Re: Layering/Composability of Verifiable Credentials (was: Re: Market Adoption of VCs by Fortune 500s (was: Re: DHS Verifier Confidence/Assurance Level Expectation/Treatment of non-publicly defined Vocabulary/Terminology -- by using @vocab))

On Tue, Feb 7, 2023 at 9:21 PM Christopher Allen
<ChristopherA@lifewithalacrity.com> wrote:
> I do agree with him that there are unaddressed issues in layering and composability that have become barriers to entry for both engineering teams and companies, who've largely now left the CCG table.

I agree that there are barriers to entry that we continue to try to
address, and I believe we're making headway (at least, it seems that
way now; time will tell whether the decisions made in VC 2.0 made
things better or worse).

One of the questions I have is: have the people who have "left the
CCG table" for greener pastures... actually found those greener
pastures? What is being done elsewhere that's more successful than
what we're doing here? What approaches are DEMONSTRABLY on a faster
adoption curve than the work we're doing here?

> However, I believe the problem is that there are a lot of assumptions or unstated requirements that require greater skill & knowledge from developers.

Yes, and this is a focus of the VC 2.0 work... to try to reduce the
amount of skill/knowledge that developers need to work with VCs. Some
of us, including me, believe this is largely a tooling issue (as
evidenced by JFF Plugfest #2 scaling faster than any of us imagined it
would, due to the tooling provided to enable interop). Others believe
it's a fundamental problem in architectural layering. None of us seem
to have the data to demonstrate it one way or the other, so we're
debating it on mailing lists in the hopes that someone will present
something that addresses what they believe the issue to be.

Our focus at present is on improving the ecosystem/community tooling
so developers don't have to do as much "low-level coding" as they had
to do in the past and can focus more on the application layer...
templates, editors, linters, interop tooling, sensible defaults...
stuff like that.
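
To give a flavor of what that tooling looks like, here's a rough
sketch of issuing a credential with our open source @digitalbazaar/vc
JavaScript library. The did:example identifiers are placeholders, the
documentLoader handling is simplified, and the exact API surface may
have drifted, so treat this as illustrative rather than normative:

    import * as vc from '@digitalbazaar/vc';
    import {Ed25519Signature2020} from
      '@digitalbazaar/ed25519-signature-2020';
    import {Ed25519VerificationKey2020} from
      '@digitalbazaar/ed25519-verification-key-2020';

    // Generate a key pair and wrap it in a signature suite; the suite
    // fills in sensible defaults (proof type, created date, etc.).
    const keyPair = await Ed25519VerificationKey2020.generate({
      controller: 'did:example:issuer'
    });
    const suite = new Ed25519Signature2020({key: keyPair});

    const credential = {
      '@context': ['https://www.w3.org/2018/credentials/v1'],
      type: ['VerifiableCredential'],
      issuer: 'did:example:issuer',
      issuanceDate: '2023-02-08T00:00:00Z',
      credentialSubject: {id: 'did:example:alice'}
    };

    // The documentLoader resolves @context URLs and key documents; a
    // default loader ships with the library.
    const signedVC = await vc.issue({
      credential, suite,
      documentLoader: vc.defaultDocumentLoader
    });

None of that requires the developer to know what an RDF dataset is;
the library deals with it.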

> (For instance, you really need to use SPARQL if you are serious about using JSON-LD data in a large database given an open-world model, and using that requires you to have a deeper understanding of RDF that JSON-LD abstracts out.) Stop saying that RDF knowledge is not required — as far as I have found, RDF skills are needed for anything at production scale. Focus on helping them use RDF.

This was the only part of your email that I found myself disagreeing
with. We're at production scale and we don't need SPARQL... we've
never needed SPARQL or an RDF database for any of our use cases over
20+ years. Our experience might be unique, but one of the driving
design goals of JSON-LD is that you don't need SPARQL, RDF databases,
or deep knowledge of RDF just to get going. It's true that to do some of
the more advanced stuff, you might have to learn some concepts, like
the difference between a tree-based data structure and a graph-based
data structure.
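
To make the tree vs. graph distinction concrete, here's a tiny sketch
(placeholder identifiers; the examples context supplies the alumniOf
term):

    // A VC is just JSON; ordinary tree traversal works, with no SPARQL
    // or RDF database in sight.
    const credential = {
      '@context': [
        'https://www.w3.org/2018/credentials/v1',
        'https://www.w3.org/2018/credentials/examples/v1'
      ],
      type: ['VerifiableCredential'],
      issuer: {id: 'did:example:university'},
      credentialSubject: {
        id: 'did:example:alice',
        alumniOf: {id: 'did:example:university'}  // same id as issuer
      }
    };
    console.log(credential.credentialSubject.id);  // did:example:alice

    // Tree view: issuer and alumniOf are two separate objects. Graph
    // view: they share an id, so they denote a single node. You only
    // need the graph view once you start merging or querying data
    // across documents.
    console.log(credential.issuer.id ===
      credential.credentialSubject.alumniOf.id);  // true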

No doubt, there are rough spots here that we're trying to shave down
as the years roll on. It's hard to see some of the rough spots
because, as new use cases come into this ecosystem, we hit things that
we haven't had to deal with before... but that's true of any
technology.

In general, people overestimate how much breakthroughs and new
architectures advance a field and grossly underestimate how much
progressive iteration and refinement actually advances a field.
Everyone's excited about conversational AI these days, yet little is
written about the 40+ years and the thousands of refinements it took
to get here. :)

So, no, I reject the notion that you have to use SPARQL and RDF
databases, or have deep knowledge of the semantic web to do useful
stuff with VCs. If you do, we're failing in some way and that's a
rough edge that we need to sand down (as long as it doesn't destroy
the ecosystem we've built in the process). :)

> I also feel there are layer violations between layers that make them rather complex to implement. The separation between layers is not clean. But they are livable.

Perhaps we should start there... what are the layer violations that
you believe exist today?

If I had to guess, one might be "digitally signing requires RDF
Dataset Canonicalization"... but I don't want to guess, I'd like you
to be specific (perhaps we should start a new thread for each layer
violation you feel is occurring, in order to focus the conversation on
each one?).
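
For context, here's roughly where RDF Dataset Canonicalization sits
in a Data Integrity signing pipeline, sketched with the jsonld.js
library (key handling and proof assembly are elided, so treat the
details as illustrative):

    import * as jsonld from 'jsonld';
    import {createHash} from 'node:crypto';

    const credential = {
      '@context': ['https://www.w3.org/2018/credentials/v1'],
      type: ['VerifiableCredential'],
      issuer: 'did:example:issuer',
      issuanceDate: '2023-02-08T00:00:00Z',
      credentialSubject: {id: 'did:example:alice'}
    };

    // Canonicalization turns the JSON-LD into a deterministic N-Quads
    // string so that semantically equivalent documents hash the same.
    const canonical = await jsonld.canonize(credential, {
      algorithm: 'URDNA2015',
      format: 'application/n-quads'
    });

    // A Data Integrity suite hashes those bytes and signs the digest;
    // the result lands in the credential's `proof` property.
    const digest = createHash('sha256').update(canonical).digest();

Whether that step belongs in the signing layer, or violates it, is
exactly the kind of thing worth pinning down in its own thread.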

> That all being said, I still believe VCWG should focus on VC-LD and do amazing things with it and not get lost in trying to address JWT-CBOR-mDL-etc. focused concerns. Completing a VC 2.0 spec leveraging well-defined JSON-LD and testable interoperable tools will be of great utility to the community.

While I agree with your general notion, that's definitely not what
the VCWG is doing right now, and it's unlikely to be what it does. The
VCWG has grown to the point where there are at least two, possibly
more, "Securing the VC Data Model" groups... there is a critical mass
of people who want to use some variation of JWTs/JWS/SD-JWT/JWP/COSE
(hopefully the VCWG will be able to reduce the duplicate options
there)... and there are people who want to use Data Integrity.
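
For anyone following along, the concrete difference between those two
camps is where the signature lives. A rough sketch, with placeholder
values throughout:

    // Embedded proof (Data Integrity): the proof travels inside the
    // credential as a `proof` property.
    const embedded = {
      '@context': ['https://www.w3.org/2018/credentials/v1'],
      type: ['VerifiableCredential'],
      issuer: 'did:example:issuer',
      credentialSubject: {id: 'did:example:alice'},
      proof: {
        type: 'DataIntegrityProof',
        cryptosuite: 'eddsa-2022',
        verificationMethod: 'did:example:issuer#key-1',
        proofValue: 'zPlaceholderNotARealSignature'
      }
    };

    // Enveloping proof (the JWT/JWS/SD-JWT/COSE family): the credential
    // becomes the payload of a signed envelope, e.g. a compact JWS of
    // the form base64url(header).base64url(payload).base64url(signature)
    const enveloped = 'eyJhbGciOiJFZERTQSJ9.eyJ2YyI6e319.c2lnbmF0dXJl';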

-- manu

-- 
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
News: Digital Bazaar Announces New Case Studies (2021)
https://www.digitalbazaar.com/

Received on Wednesday, 8 February 2023 15:15:57 UTC