Re: Recommendations for Storing VC-JWT

Having thought about this a bit more in the context of the VC API, I think
the correct thing to do is to rely on the current type definitions the spec
defines, and accept that the "JSON storage / analytics" issue gets handled
at some other layer.

This means defining a type system that harmonizes compact JWTs with VCs
formatted as JSON-LD objects.

For example:

https://github.com/transmute-industries/api.did.actor/blob/main/public/spec/schemas/SerializedVerifiableCredential.yml

https://api.did.actor/docs#post-/api/credentials/issue (this is not
exactly correct yet, but it shows one way to ask for a credential in either
format).
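
A rough TypeScript sketch of what that harmonized type looks like to me
(the names below are mine, not the ones used in the linked schema):

  // A VC-JWT travels as a compact JWS: three base64url segments joined by dots.
  type CompactJwtVerifiableCredential = string;

  // An LD Proof VC travels as a JSON object with an embedded proof.
  interface JsonLdVerifiableCredential {
    '@context': string | (string | object)[];
    type: string | string[];
    issuer: string | { id: string };
    credentialSubject: object | object[];
    proof?: object | object[];
    [property: string]: unknown;
  }

  // The union the API would accept and return.
  type VerifiableCredential =
    | CompactJwtVerifiableCredential
    | JsonLdVerifiableCredential;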

The VC API should not care about the format of the VC, as long as it's spec
compliant.

For VC-JWT that means compact JWTs as strings.

For LD Proofs, that means objects encoded as JSON.
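
Concretely, a credential could show up in either of these two shapes
(both values below are placeholders, not real credentials, using the
VerifiableCredential type sketched above):

  // VC-JWT: a compact JWS string (placeholder, not a real token).
  const vcJwt: VerifiableCredential =
    'eyJhbGciOiJFUzI1NiJ9.eyJ2YyI6eyIuLi4iOiIuLi4ifX0.ZmFrZS1zaWduYXR1cmU';

  // LD Proof: a JSON object with an embedded proof.
  const vcLd: VerifiableCredential = {
    '@context': ['https://www.w3.org/2018/credentials/v1'],
    type: ['VerifiableCredential'],
    issuer: 'did:example:123',
    issuanceDate: '2022-02-22T00:00:00Z',
    credentialSubject: { id: 'did:example:456' },
    proof: { type: 'Ed25519Signature2020' /* ... */ },
  };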

Harmonizing issuers, subjects, or other JSON-related concerns should be
handled at a completely separate layer.
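
For example, a storage or analytics layer could pull the issuer out of
either shape itself, without the API having to care. A sketch (getIssuer
is a hypothetical helper, it does no verification, and it assumes the JWT
payload is not compressed; see the compression note below):

  // Hypothetical helper: pull the issuer out of either representation.
  // Decoding here is for indexing/analytics only; it does NOT verify anything,
  // and it assumes the payload is not compressed (no "zip": "DEF").
  function getIssuer(vc: VerifiableCredential): string {
    if (typeof vc === 'string') {
      // Compact VC-JWT: base64url-decode the payload (second segment).
      const payload = JSON.parse(
        Buffer.from(vc.split('.')[1], 'base64url').toString('utf8')
      );
      return payload.iss ?? payload.vc?.issuer;
    }
    // LD Proof object: issuer is a URI string or an object with an id.
    return typeof vc.issuer === 'string' ? vc.issuer : vc.issuer.id;
  }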

Another reason for making the input and output of VC-JWT use the compact
representation is the issue of supporting the "zip": "DEF" (DEFLATE
compression) header parameter.

If compression becomes more popular in JWTs, there won't be any other
choice, and that seems likely to happen given the approach taken by SMART
Health Cards.
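
To make the problem concrete: with "zip": "DEF" (raw DEFLATE, which is
what SMART Health Cards use), the payload segment is not JSON until it is
inflated, so anything that decodes and re-stores the payload has to
round-trip the compression exactly to verify later. A Node.js sketch of
just the read side (names are mine, no verification):

  import { inflateRawSync } from 'zlib';

  // Read the payload of a compact JWT that may use the "zip": "DEF" header.
  // Inspection only: this does not verify the signature.
  function decodePayload(compactJwt: string): unknown {
    const [encodedHeader, encodedPayload] = compactJwt.split('.');
    const header = JSON.parse(
      Buffer.from(encodedHeader, 'base64url').toString('utf8')
    );
    const payloadBytes = Buffer.from(encodedPayload, 'base64url');
    const json =
      header.zip === 'DEF'
        ? inflateRawSync(payloadBytes).toString('utf8') // DEFLATE-compressed
        : payloadBytes.toString('utf8');                // plain JSON payload
    return JSON.parse(json);
  }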

VC API implementers should probably assume only the compact JWT
representation of VC-JWT will be supported, until the spec says something
else.

OS


On Tue, Feb 22, 2022 at 11:31 AM Dave Longley <dlongley@digitalbazaar.com>
wrote:

>
> On 2/22/22 11:52 AM, Mike Prorock wrote:
> > Manu,
> > I think you are pointing out the right concerns that I share with a
> > decode, store, re-encode approach only.  For our side right now we are
> > going down a path that links the data via a relationship off a UUID, but
> > splits the encoded data and stores that separately from the un-encoded
> > "data" proper for analysis and other operations.  Don't see a good way
> > around that given the variety of things that can pop in seemingly out of
> > nowhere that would prevent proper verification of that data's
> > integrity.  Note, that this is for JWT stuff only.
>
> Yes. What we've found at DB is that some form of "normalization" is
> going to be necessary when you're dealing with complex, structured data
> -- as opposed to simple user IDs or similar. The only question is
> whether it gets pushed to the application developer such that they are
> dealing with it in some form or another at many layers ... or it's
> isolated around the crypto layer.
>
> We have found that a decision to avoid normalization/canonicalization
> around the crypto layer was a trade off -- that pushed it out of the low
> level data integrity-related tools and into everything else.
>
> My view is, for the complex, structured data use case, that
> such a trade off leads to a failure to separate concerns and a failure
> to properly prioritize constituencies. However, this kind of use case
> may not have been the major target for the designers of standards where
> this trade off was made. But an assumption that these systems should
> "just work" for that use case seems to be off the mark. No one wants to
> deal with normalization/canonicalization, but it's better to put it in
> the corner than all over the room.
>
>
> --
> Dave Longley
> CTO
> Digital Bazaar, Inc.
>


-- 
*ORIE STEELE*
Chief Technical Officer
www.transmute.industries


Received on Tuesday, 22 February 2022 23:10:40 UTC