Re: [Agenda] W3C CCG 3/7/23 - Lists of Verifiable Issuers and Verifiers

On Sun, Mar 12, 2023 at 10:53 PM Bob Wyman <bob@wyman.us> wrote:
>  Am I missing something?

Yes, though I'm struggling to figure out what it is... have you looked
at the data model section of the specification? It lays out how a list
of verifiable issuers or verifiable verifiers differs from a generic
"list of items":

https://w3c-ccg.github.io/verifiable-issuers-verifiers/#data-model

Namely, each list entry states an operation scheme (how the list is
managed), a set of accreditations (why you should believe that the
list is authoritative, if you believe in the authority), and the
specific credential types that each entry in the list is authorized to
issue. Those things go above and beyond a generic list of items.
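To make that concrete, here is a rough sketch of what one list entry could carry, expressed as plain JSON built in Python. The property names below are illustrative placeholders, not the exact terms from the specification; the URLs are made up.

```python
import json

# Illustrative sketch of a single verifiable-issuer list entry.
# Property names and URLs are placeholders, not the spec's own terms.
entry = {
    "id": "https://example.edu/issuers/42",
    # How the list is managed:
    "operationScheme": "https://example.gov/schemes/accredited-universities",
    # Why you should believe the entry, if you trust the authority:
    "accreditation": [
        "https://example.gov/accreditations/degree-granting"
    ],
    # The specific credential types this entry is authorized to issue:
    "authorizedCredentialTypes": [
        "UniversityDegreeCredential"
    ],
}

print(json.dumps(entry, indent=2))
```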

> If embedding lists in VCs is something that is to be done, would it make sense to define a general format for VC's which contain non-trivial lists?

We don't have to do that; the base data model upon which VCs are built
(JSON-LD) already provides for the expression of unordered sets and
ordered sets (i.e., lists).
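For reference, a minimal sketch of how a JSON-LD context makes that distinction: `"@container": "@set"` declares an unordered set, while `"@container": "@list"` declares an ordered list. The `ex` vocabulary here is made up for illustration.

```python
import json

# Minimal JSON-LD context sketch (the "ex" vocabulary is invented):
# "@container": "@set" marks an unordered set,
# "@container": "@list" marks an ordered list.
doc = {
    "@context": {
        "ex": "https://example.org/vocab#",
        "members": {"@id": "ex:members", "@container": "@set"},
        "rankings": {"@id": "ex:rankings", "@container": "@list"},
    },
    "members": ["issuer-a", "issuer-b"],       # no order implied
    "rankings": ["first", "second", "third"],  # order is preserved
}

print(json.dumps(doc, indent=2))
```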

>  And, if so, should a paging mechanism be defined to ease the handling of large lists?

That would start wandering into the protocol tar pits, which are out of scope.

> If paging is to be supported, would it be reasonable to adopt or use the ActivityStreams Collection format? Or, would Linked Data Containers with Linked Data Platform Paging or some other existing spec be preferable? If so, why?

I expect we would invite all sorts of conversations that would slow
the data model work down if we pulled this sort of thing into the
discussion. Things like: "I hate Linked Data,
don't push your Linked Data Platform crap on me!" or "ActivityStreams
is for social messaging, which is not what we're doing... why are you
forcing the Fediverse onto me!?"... and so on. Yes, there is usually
some overlap between a data model and a protocol... and in this case,
we might have to consider it when the list sizes get large.

I'll note that each entry in a verifiable issuer/verifier list, given
the example in the specification, weighs in at around 1KB. A list that
contains every state agency of a particular type in the US would be
~50KB in size. A similar one in the EU would be ~27KB. Every state
medical board in the US would be ~71KB. A list with every
university in the US would be 4MB. Every university in the EU would be
2.8MB. Those are not scary numbers... in fact, a single individual
watching a single streaming show consumes about that much bandwidth in
one second of showtime... and I haven't even factored compression into
the list sizes (they're highly compressible given that A LOT of the
information is repetitive... even more so if you use CBOR-LD).
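The arithmetic behind those figures is just entry count times entry size. Here is that back-of-envelope math as a sketch; the entry counts are the rough numbers implied by the sizes above, and everything is uncompressed.

```python
# Back-of-envelope size math: ~1KB per entry, per the spec's example.
ENTRY_BYTES = 1_000

# Rough entry counts implied by the estimates in the email:
list_sizes = {
    "US state agencies (one per state)": 50,
    "EU member-state agencies": 27,
    "US state medical boards": 71,
    "US universities": 4_000,
    "EU universities": 2_800,
}

for name, entries in list_sizes.items():
    kb = entries * ENTRY_BYTES / 1_000
    print(f"{name}: ~{kb:,.0f}KB uncompressed")
```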

The real question is around complexity... publish a single list and be
done with it... or modify the data model and create a protocol that
supports paging for use cases that probably represent less than 1% of
how these lists will be used? Unless someone can think of a
really good use case where we need to fit tens of thousands of entries
into one of these lists, my suggestion is that we don't need to design
for it. That use case, whatever it is, will require another solution.

-- manu

-- 
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
News: Digital Bazaar Announces New Case Studies (2021)
https://www.digitalbazaar.com/

Received on Monday, 13 March 2023 13:48:22 UTC