[MINUTES] Data Integrity 2025-04-25

W3C Data Integrity Community Group Meeting Summary - 2025/04/25

*Topics Covered:*

   1. *BBS Pseudonym Updates:* The group discussed solutions to maintain the
   unlinkability property of BBS pseudonyms in the face of cryptographically
   relevant quantum computers. Greg Bernstein presented a proposal to use a
   vector of secrets, allowing for up to 'n' unlinkable pseudonyms, with
   performance considerations discussed. The next steps involve refining the
   proposal based on implementer feedback, integrating it into the W3C spec,
   and adding explanatory text and updated test vectors. Hooks will be added
   to allow for swapping out pseudonym types in the future.
   2. *Post-Quantum Crypto Suites:* The group decided to support selective
   disclosure for post-quantum crypto suites, aiming to reuse existing
   selective disclosure functions from the ECDSA spec wherever possible. The
   plan is to include ML-DSA, SLH-DSA, Falcon, and a placeholder for
   isogeny-based signatures. The lowest security parameter sets will be used
   initially, with the option to increase them if vulnerabilities are
   discovered. Test vectors will be added.
   3. *Zero-Knowledge Proof (ZKP) Deep Dive (Postponed):* A planned deep dive
   into ZKP applications for data integrity was postponed to the following
   week due to the unavailability of Ying Tong. Preliminary discussions
   focused on using ZKPs to create anonymous credentials from non-anonymous
   ones, potentially starting with an ECDSA signature as a base proof. Other
   potential ZKP applications, such as proving revocation from a revocation
   list, were also identified.

*Key Points:*

   - The BBS pseudonym proposal will move forward with the
   vector-of-secrets approach to address quantum computing threats,
   prioritizing timely implementation.
   - Selective disclosure will be supported in the post-quantum crypto
   suites, leveraging existing functions and minimizing redundancy.
   - Work on integrating ZKPs into data integrity specifications will
   begin, focusing initially on creating anonymous credentials from
   non-anonymous ones. Revocation proof using ZKPs was also noted as a future
   area of interest.
   - Several verifiable credential working group specifications, including
   data integrity, have been approved for global standard release around May
   15th, 2025.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-04-25.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-04-25.mp4
*Data Integrity - 2025/04/25 09:55 EDT - Transcript*

*Attendees*

Dave Longley, Eddie Dennis, Greg Bernstein, Hiroyuki Sano, John's
Notetaker, Kayode Ezike, Manu Sporny, Parth Bhatt, Phillip Long, Sam
Schlesinger, Ted Thibodeau Jr, Ying Tong
*Transcript*

Manu Sporny: Okay, everyone. We'll get started in two

Manu Sporny: All right, let's go ahead and get started. Welcome everyone.
This is the data integrity call for April 25th, 2025. We do have on the
agenda today, let me go ahead and just get it on the screen, we are planning
on just getting an update on the pseudonyms work, if Greg needs anything
from us for that. We'll take a look at the post-quantum crypto suites and
the things that might need to be changed for Will to make some progress on
that.

Manu Sporny: And then, Ying Tong and Zoe said they might be able to make the
last half of the call. And if they're able to do that, we'll continue going
over their deep dive into the ZKP stuff that Ying Tong was speaking to last
week. And if they can't make it, then we'll just postpone that to next week
and end early. Are there any other updates or changes people would like to
make to the agenda? Anything else we should discuss today? All right. If
not, let's go ahead and jump into our first item,…

Manu Sporny: which is, the BBS stuff. Greg posted this in the channel.
please take us through it.

Greg Bernstein: So this is a solution.

Greg Bernstein: Let me remind folks and put the link to the issue in
discussion.

Greg Bernstein: Because the one I'm showing you is quite detailed. So the
issue here once again is: if we have a quantum computer that's
cryptographically relevant, then with our current pseudonym strategy on BBS,
you could lose the unlinkability property.
00:05:00

Greg Bernstein: Okay, that means they have this quantum computer. They want
to figure out where somebody's going and link different uses of different
pseudonyms across verifiers. Now I've said before that the bar is very high
for BBS, because without pseudonyms, BBS proofs have this property of
everlasting privacy, meaning that for two different BBS proofs, the
cryptographic information (not information that the holder might reveal to
the verifier)

Greg Bernstein: the cryptographic proof information for two different BBS
proofs has this everlasting-privacy unlinkability, because it's based on
fresh random numbers each time you generate one of these proofs. And so it
doesn't matter if you have a quantum computer and can break discrete logs
and things like that: each proof is statistically independent enough from
the others. And that's a very high bar. That would be called
information-theoretic security. What we can do, or what was suggested, is
use a vector: not a single secret but a vector of secrets. Okay.

Greg Bernstein: And that way, we've talked to cryptographers who've told us
they can give us a proof saying that, given n of these secrets, you combine
them into a vector, we do a slightly more complicated computation of a
pseudonym, and we can use that up to n times. Meaning: n different
pseudonyms are generated for n different verifiers who all choose to
collude, and even with a cryptographically relevant quantum computer, up to
n uses they still couldn't figure it out and be able to link those.
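
To make the shape of the idea concrete, here is a rough sketch of the
difference between the single-secret pseudonym and a vector-of-secrets
pseudonym. This is an illustration of the general approach described above,
not the exact construction in the draft; the generator derivation, the
security proof, and the precise use bound are what the CFRG document
defines.

```latex
% Single-secret pseudonym (current approach): an adversary that can take
% discrete logs can recover s from one pseudonym and link them all.
\mathrm{Pseudonym} = \mathrm{OP}^{\,s},
  \qquad \mathrm{OP} = \mathsf{hash\_to\_curve}(\mathrm{context\_id})

% Vector-of-secrets sketch: n secrets combined in one multi-exponentiation
% per verifier context. Each colluding verifier learns only one equation in
% the n unknowns, so up to the stated bound the secrets, and therefore the
% pseudonyms, stay statistically hidden even from a discrete-log-breaking
% (quantum) adversary.
\mathrm{Pseudonym} = \prod_{i=1}^{n} \mathrm{OP}_i^{\,s_i},
  \qquad \mathrm{OP}_i = \mathsf{hash\_to\_curve}(\mathrm{context\_id} \parallel i)
```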

Greg Bernstein: Manu. Yes,

Manu Sporny: It's n uses at different verifiers, isn't it?

Greg Bernstein: it's n different pseudonyms.

Manu Sporny: Okay. Yeah. Okay. Got it.

Greg Bernstein: Yeah. And that's the question: how do you explain this? How
do we use the right terminology? N different pseudonyms. And if it's tough
for us, it will be tough making this clear to somebody else. So in the
proposal that I put together, and this is being reviewed by the other
authors and people on the BBS group, I said let's get something in as
quickly as possible that doesn't require any other standards and no other
optimizations.
Greg Bernstein: And last time we talked about some performance numbers: if
you have a hundred of these, the calculations take less than a second and it
takes up about 3 KB. If you have a thousand of them, then it looks more like
a couple of seconds and takes up about 32 KB. So a hundred sounded okay, but
a thousand was getting a little tough. That's without those optimizations,
the things known as bulletproofs or compressed sigma protocols; the theory
exists, but they're not standardized in a spec someplace that we can just
point to.

Greg Bernstein: So the first thing we wanted to get in. So we have this as
part of BBS pseudonyms was an unoptimized version that once again I said
you get up to a hundred maybe couple hundred might be pushing it for a
thousand on a cell phone. and that's what this proposal gets at is we
implement it, how we do it within the pseudonym spec and how we try and
make it simple when it gets exposed all the way up to the VCDI BBS level.
Okay? So, we don't want to complicate things up at the higher level.
00:10:00

Greg Bernstein: We basically want to be able to just say the default is just
one use, because nobody has a cryptographically relevant quantum computer.
But if you want to use N, you're going to specify N secrets, and the only
thing you actually will be sharing with the prover, I mean sorry, the
issuer, is the number of secrets; you won't be sharing the secrets with
them. We do that in a mechanism known as a commitment, and we prove that we
know that information and stuff like that. So it actually ends up looking
like one additional parameter gets passed, and that would be the effect on
the VCDI spec: one additional parameter, and that ends up going even to the
verifier.

Greg Bernstein: So that will be the net complication. The size gets bigger
and the computations get bigger as the number of uncorrelatable pseudonyms
grows. But that's what this proposal is getting at, and this is the detail
that's going directly to the BBS CFRG people. So that's where we're at right
now. There are other alternatives, once again, that we don't have standards
for, that involve ZKPs. Okay. And that was discussed, and that's the one
that's post-quantum unlinkable; it's what we call computationally
unlinkable.

Greg Bernstein: It's not this everlasting or information theoretic, but
it's still good because it's against the quantum computer. It doesn't have
this N issue where they'd have to keep track of it. However, we need some
new mechanisms which we'll be hearing about one of them on Tuesday at the
CCG call. Questions?

Manu Sporny: I guess Greg, do you need anything from us to make progress on
this? It feels like I mean you've done a wonderful job explaining it in
these issues and…

Manu Sporny: on the calls and proposing multiple solutions. I think what
we're trying to do right now is just say, look, we're going to go with the
n-pseudonym approach because we don't want to delay BBS…

Greg Bernstein: Yes. …

Manu Sporny: because of this but we do want to have a reasonable tunable
mechanism if someone is concerned about cryptographically relevant quantum
computers.

Manu Sporny: and so you're going to go forward with this and make a
proposal and put it in the spec and all that kind of stuff. What's the next
step here?

Greg Bernstein: So we'll tweak this based on feedback from any implementers,
because I know we have at least one other one on this call. And then I roll
it into the W3C spec, right? So I have to get it into the BBS pseudonyms
spec, and then take those changes and put them into the W3C spec. I'll have
to do some explanatory text on privacy and its use and update test vectors.
So there's kind of a straightforward progression.

Greg Bernstein: So we get it worked through with the CFRG group. Then I roll
those changes into our spec and explain its use. So it's kind of
straightforward. And like I said, we can leave hooks for these other
approaches and such like that, but that would be the approach I would take.

Manu Sporny: Okay. Right.

Greg Bernstein: Dave. Yes.

Dave Longley: Yeah, I was just going to comment. We'll want a hook at the
IETF spec layer for swapping out pseudonyms. And then in the data integrity
spec, we'll probably just want to take an extra byte or flag or whatever
that says that's a pseudonym of type,…
00:15:00

Dave Longley: zero or type one or something. And so we can add different
types as we go.
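
As a rough illustration of the hook Dave is describing (hypothetical field
names and encoding; the real layout would be defined in the specs), the data
integrity layer could carry a one-byte pseudonym-type tag ahead of the
pseudonym material so that new pseudonym types can be added later:

```python
# Hypothetical sketch of a pseudonym-type tag, per the discussion above.
# Type 0: single-secret pseudonym; type 1: vector-of-secrets pseudonym.
PSEUDONYM_TYPES = {0: "single-secret", 1: "vector-of-secrets"}

def encode_pseudonym_field(pseudonym_type: int, pseudonym_bytes: bytes) -> bytes:
    """Prepend a one-byte type tag so verifiers can dispatch on pseudonym type."""
    if pseudonym_type not in PSEUDONYM_TYPES:
        raise ValueError("unknown pseudonym type")
    return bytes([pseudonym_type]) + pseudonym_bytes

def decode_pseudonym_field(field: bytes) -> tuple[str, bytes]:
    """Split the type tag from the pseudonym material."""
    return PSEUDONYM_TYPES[field[0]], field[1:]
```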

Greg Bernstein: Exact. Yeah.

Greg Bernstein: Yeah. that's exactly what we were thinking because the
folks over at BBS Pseudonyms, it's like, we got this great mechanism. We
also like the idea of the postquantum flavor if those other specs that we
could just point to come around.

Greg Bernstein: but we wanted to just nail this down because the demand for
pseudonyms as more people understand anonymous credentials or credentials
that have these nice properties, they go, " we need pseudonyms." And it's
like, "Okay, we want to make sure you have a flavor and extra protection if
you need it and we can go from there, but we don't want to delay any
longer." folks seem like they're understanding the application. I mean,
even I was telling the people that are doing ZKP things, I go, " that's
great. You've got a way to anonymize a credential, but you add pseudonyms
because you need that otherwise people don't want to accept them." Okay.

Manu Sporny: All that sounds great. Thank you very much, Greg, for
continuing to work on this. it is critical that we get it done before we
can move things forward. I guess the other point I'll make here is that so
it is now publicly known because we had a verifiable credential working
group call yesterday that all of the VCWG specifications including data
integrity have been approved for global standard release.

Manu Sporny: We are prepping the final versions of those specifications for
a publication date around May 15th. So I mean pretty soon from now. and
what that'll mean is that means that the foundation on which the BBS stuff
is built is solid meaning it's there.

Manu Sporny: It's a global standard. It's not going to be changed. So
that's good. but then the other thing it does is it's like okay we got to
get BBS done right.

Manu Sporny: And so I think let's try and push for getting the test vectors
done, at least two interoperable implementations done, and that sort of
thing and…

Greg Bernstein: Yeah.

Manu Sporny: that'll put us close enough so we're basically just waiting on
CFRG to do the final approvals on these extension things right okay all
right and…

Greg Bernstein: That sounds good.

Greg Bernstein: Sounds real good.

Manu Sporny: then it'll just be you

Manu Sporny: as quickly as those can get done and we can potentially maybe
ask for scheduling and when's it going to happen and maybe we can have Anna
and…

Greg Bernstein: Yes. Yes.

Manu Sporny: her colleagues also weigh in on the pseudonym stuff and the
other items. Okay, so still a decent bit of work to get done, but I mean I
think we don't have any blockers anymore. Anything else on this? Any other
questions from the group? All right. If not, let's go ahead and move on to
the next item, which is the post-quantum crypto suites.

Manu Sporny: So there is a crypto suite called data integrity quantum safe,
which will contain a number of post-quantum mechanisms. During a previous
call, when discussing the quantum-safe crypto suites for data integrity, I
think we made a decision

Manu Sporny: to include ML-DSA, the stateless hash-based signatures SLH-DSA,
which I always mispronounce, and then provide kind of a placeholder for
Falcon as well,

Manu Sporny: and maybe the isogeny stuff as well. So, kind of four crypto
suites. And I think we were just going to select the quote-unquote weakest
parameter set because it's not clear that these things are going to stand
the test of time, and it's not clear that cranking the parameter set all the
way up to the top will really help us all that much. If we find that there's
a vulnerability or something, at that point we will crank up the parameter
set. But until then, we'll just use the lowest security levels for this. I
don't remember if we discussed selective disclosure and putting that support
in as well.
00:20:00

Manu Sporny: So we might want to cover that today. If we do, we'll have to
figure out how to do that in a way that doesn't result in tens of pages of
text per crypto suite for selective disclosure. And then I think the other
thing we need to do is take a look at these algorithms: are they the
greatest thing we should be using, or what should we do here? Let's go ahead
and get started. Maybe the first decision we need to make is: do we support
selective disclosure with the post-quantum crypto suites?

Manu Sporny: I suggest we probably should, just because it is a useful thing
to have. Even though we don't have a BBS-style unlinkable version of it,
being able to selectively disclose in a post-quantum way is, I would
imagine, something that people are going to want. Any other thoughts on
that? Do other folks feel like we should support selective disclosure for
post-quantum suites? Any arguments against, any opinions that…

Greg Bernstein: It's just I was just …

Manu Sporny: go ahead.

Greg Bernstein: I was going to say Dave's going to chime in, but just the
way we do it with ECDSA, where we have an ephemeral key and we have
individual signatures per item, could get quite large. But Dave may have
some ideas on that.

Dave Longley: Yeah, I was going to say we've got a couple of options.
Whatever we end up doing, it seems to me like we should design it such that
each one of the different schemes is just a drop-in replacement for whatever
we do for the selective disclosure. The only caveat there is, if the SQIsign
stuff works, we can reuse, I would think, what we did for ECDSA-SD with
almost no changes at all. For these ones with much larger signatures, we
might want to explore alternatives, keeping in mind that what we did with
ECDSA-SD made sure that you did not leak information when you exposed an
element in a set. You did not leak the size of the set.

Dave Longley: If you do something that's hash-based, similar to what SD-JWT
does, then you leak the size of the set when you just share one element of
the set and…

Dave Longley: so that there are things like that we will need to consider.
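
To illustrate the trade-off Dave raises, here is a minimal sketch, not taken
from any spec, of a salted-hash disclosure scheme in the SD-JWT style: the
issuer signs a list of claim digests, and the holder reveals only the
disclosures it chooses. Because the signed digest list has one entry per
claim, a verifier that sees a single disclosed claim still learns how many
claims the credential contains, which is the set-size leakage the ECDSA-SD
design avoids.

```python
import hashlib
import json
import os

def make_disclosures(claims: dict) -> tuple[list[str], dict]:
    """Salted-hash selective disclosure sketch (SD-JWT-like, simplified)."""
    digests, disclosures = [], {}
    for name, value in claims.items():
        salt = os.urandom(16).hex()
        blob = json.dumps([salt, name, value])
        digests.append(hashlib.sha256(blob.encode()).hexdigest())
        disclosures[name] = blob
    # The issuer signs `digests`; its length equals the number of claims,
    # so the set size is visible even if only one disclosure is shared.
    return digests, disclosures

digests, disclosures = make_disclosures(
    {"givenName": "Jane", "birthDate": "1990-01-01", "licenseClass": "C"})
print(len(digests))  # 3, leaked even if only "licenseClass" is revealed
```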

Manu Sporny: Yep. I guess the other question is…

Manu Sporny: how much of the ECDSA spec can we just reuse the selective
disclosure functions in ECDSA or do we have to effectively copy and paste
this entire section over? And then the same thing with these algorithms,
the SD functions. Do we have to rewrite these or can we just reuse them? go
ahead Dave.

Dave Longley: I would think just reuse might work for almost all the
selective disclosure functions, or all of them. But for those other
sections, we might be able to reference them and say you have to use a
different header or something. I don't think there's anything else that's
specific to ECDSA other than identifying that it is ECDSA through a header
value. So all of that other stuff is already generic, and the only question
is if we end up doing something that's hash-based,…
00:25:00

Dave Longley: we would still use most of what's there. We would just add a
couple of different functions for the hash-based approach.

Greg Bernstein: Exactly.

Greg Bernstein: That's what we did with BBS. We were able to reuse almost
all the basic selective disclosure functions, and we even had to do
something slightly different to keep the anonymity and unlinkability. We had
to do something different… what's the right term, Dave, for the reordering?

Dave Longley: This is the blank node identifiers.

Greg Bernstein: Yeah, the blank node identifiers and the ordering thing. So,
tons of reuse. I think the only thing we didn't want to do is pull out those
functions into yet another spec; that would have been harder, more specs,
and they're abstract-ish, so they really belong someplace, but they are
reusable. I mean, even for the ZKP stuff we might do and such like that,
they are a good set of functions. And particularly also the way we use JSON
Pointers, which is a programmer-friendly way of deciding what to reveal.
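
For readers unfamiliar with the mechanism Greg mentions: the ECDSA-SD and
BBS crypto suites let an application name what to reveal with an array of
JSON Pointers (RFC 6901). A minimal sketch of resolving such pointers
against a credential (the document and pointers below are illustrative, not
test vectors):

```python
def resolve_pointer(doc, pointer: str):
    """Resolve an RFC 6901 JSON Pointer against a parsed JSON document."""
    value = doc
    if pointer == "":
        return value
    for token in pointer.lstrip("/").split("/"):
        # Unescape per RFC 6901: "~1" -> "/", "~0" -> "~".
        token = token.replace("~1", "/").replace("~0", "~")
        value = value[int(token)] if isinstance(value, list) else value[token]
    return value

credential = {
    "issuer": "did:example:issuer",
    "credentialSubject": {"givenName": "Jane", "birthDate": "1990-01-01"},
}
# What the holder chooses to reveal.
selective_pointers = ["/credentialSubject/birthDate"]
print([resolve_pointer(credential, p) for p in selective_pointers])
```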

Greg Bernstein: It's good stuff. Even if we have to do some other way
that's not as big because the signatures are big. So,

Manu Sporny: Got it.

Manu Sporny: So, that's good news. I guess we would still end up wondering
if we end up with Section 3.3 and 3.4 equivalents to that for every single
post-quantum scheme. Go ahead, Dave.

Dave Longley: I would expect we could write those sections once and
parameterize them for the specific scheme.

Manu Sporny: All right.

Manu Sporny: That's good. And if we do that, I'm thinking way forward, two
years from now, where I think we are probably going to want to pull things
out. So, this is the ECDSA spec I'm looking at: the selective disclosure
functions in 3.4 and 3.5. Definitely 3.4, maybe even 3.5: pull those out
into maybe the data integrity spec as just a core set of algorithms. Go
ahead, Dave.

Dave Longley: Yeah, I'm not sure. I mean, maybe that's the right spec for
them to land in. Maybe they should land in their own data integrity
selective disclosure spec, but that is definitely something we wanted to do
if we had more time, because all of these pieces are reusable.

Manu Sporny: Yep. Yeah.

Manu Sporny: The only thing is, let's see: in maintenance mode in the
verifiable credential working group, we're only allowed to make changes that
don't change implementations, which these wouldn't, right? I mean, we would
just be moving these from one spec to another. However, we are not cleared
to publish a new spec, "selective disclosure functions for data integrity";
that's the only reason I'd kind of move it out. And I'm trying to think… I
don't know if we really need a separate selective disclosure functions for
data integrity spec. I'm wondering if we can just put it in the base data
integrity spec and basically say any data integrity suite can use these
functions if they find them useful. I think that might be just easier, so
that developers don't have to jump across multiple different meta specs.

Manu Sporny: Some W3C members tend to be annoyed by meta-specs. So, in
theory, we could move sections 3.4 and 3.5 in the ECDSA spec, which contain
the generalized selective disclosure functions, and then maybe some
genericized functions, out into the data integrity spec. And that would
somewhat simplify BBS. It would greatly simplify ECDSA.

Manu Sporny: And then we could just reuse those functions in the
post-quantum suites, where we'd really want to, because for the post-quantum
suites we just don't want to copy and paste a bunch of algorithms and only
change one or two words in each one of them. All right. I think we have
clarity, so the decision there is: we are going to support selective
disclosure for the post-quantum suites, and we're going to try to use
generalized algorithms for it. So we may have some shuffling to do in ECDSA
to move the selective disclosure functions out to the data integrity spec in
time over the next year or so.
00:30:00

Manu Sporny: And then we'd reuse those in the post-quantum crypto suite
specs, which should make the algorithm sections nice and tight in the
post-quantum things. And then, just to rehash: we're going to support four,
ML-DSA, SLH-DSA, Falcon, and then we'll put in a placeholder for
isogeny-based signatures as experimental, and hope that it survives.

Manu Sporny: Because that would be awesome.

Manu Sporny: and I think what we would do we keep experimental on the front
or do we take it off at this point? any desires one way or the other? It
doesn't hurt to keep it right now, I don't think. until we get a little
further along to implementations. Go ahead, Greg.

Greg Bernstein: Yeah, I mean those are in this spec.

Greg Bernstein: So, whether or not we keep it and we have to figure out
where we put any parameters.

Manu Sporny: No.

Greg Bernstein: I was just wondering, did we put in any test vectors yet?
Because I did notice the same person that wrote up the test vectors I use
for EdDSA and ECDSA, they did add ML-DSA and I think maybe stateless hash.
So we should be able to just pop in the exact same test vectors and kind of
get started.

Greg Bernstein: And because that would hit the issue I was wondering I know
this is very detailed…

Greg Bernstein: but since they're longer signatures will that make the test
vectors be too big to show in the text and such like that. I know that's a
very detailed thing but they are longer right. okay. Yes.

Manu Sporny: Yeah, I think it's fine…

Manu Sporny: if they're longer. I mean, it's going to be in an appendix, and
people have asked for the exact test vectors they can test against. So I
think that's fine, even if it's 10 pages of base-encoded signature
information; that's what you get with some of these post-quantum suites. And
that's why we'd want to use the isogeny stuff over the stuff that got
standardized, eventually, hopefully, if the isogeny approach ends up being
secure.

Greg Bernstein: I guess it was Will that was doing that Schnorr thing.

Manu Sporny: Okay.

Greg Bernstein: That's where he already started adding the test vectors,
but nobody did that for this yet. Okay.

Manu Sporny: I think that is good. and as far as these algorithms, I think
to be safe, we just go back in and just replace these with whatever gets
published as the global standard stuff for ECDSA. and then of course,
change the algorithms and everything back. but I think that ensures that we
do the right thing here, right?

Manu Sporny: And then there are small changes here. This can't be base58btc
encoded because of the length of the signature; it's got to be base64url, so
little changes like that need to be made to this. And then once we get the
base thing down, I think that's effectively copied and pasted four times for
each algorithm: ML-DSA, SLH-DSA, Falcon, and the isogeny one. And then once
we get that in there, we can put the selective disclosure versions of them
in as well. Yeah, I'm pretty hesitant about four crypto suites. Feels like a
lot.
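
As a concrete note on the encoding change: data integrity proof values are
multibase strings, where a leading "z" means base58btc and a leading "u"
means base64url with no padding. A minimal sketch of the base64url form,
with an illustrative signature size:

```python
import base64

def multibase_base64url_nopad(data: bytes) -> str:
    """Multibase base64url-no-pad encoding: 'u' prefix, suited to large values."""
    return "u" + base64.urlsafe_b64encode(data).decode().rstrip("=")

# Post-quantum signatures are kilobytes long (an ML-DSA-44 signature is about
# 2420 bytes), so the compact base58btc ('z'-prefixed) encoding used by the
# ECDSA suites becomes impractical for these proof values.
fake_signature = bytes(2420)
print(multibase_base64url_nopad(fake_signature)[:16] + "...")
```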
00:35:00

Manu Sporny: but of course I guess we're doing that because it's just too
early to tell if these things are going to survive over the next five years.
So we're just saying, here they are, use whichever one you feel works, and
more than likely folks are going to pick the module-lattice-based one,
because that's the one that's kind of out there for now. Okay. All right.

Manu Sporny: All right, I think we have a game plan here. Is there anything
else we need to consider for the post-quantum crypto suites? Then we have a
game plan there. The only other thing we had on the agenda today was to deep
dive into some of the ZKP stuff that Ying Tong brought up last week in her
ZKP talk, but she also mentioned that she's probably not going to be able to
make this call…

Manu Sporny: because it's very late in Singapore. so we'll just invite her
back next week to kind of talk about that stuff, get some updates on where
things are.

Manu Sporny: I think the other thing maybe of relevance, go ahead, Greg.
I'll get that next

Greg Bernstein: I just wanted to say I did have a good talk with Ying Tong
and…

Greg Bernstein: Zoe, and they had really done a deep dive into the data
integrity specs and had a lot of really good questions. And the one thing we
kind of started doing was narrowing it down to a first use of ZKP: to take a
non-anonymous credential and be able to create something that has anonymity
properties like BBS.

Greg Bernstein: But that was only a first use, because I also told her about
some of the other uses we've heard about. There's a lot of power with ZKP,
but what do you want to use it for first? Because I know everybody talks
about more predicate proofs and things like that, but I don't know what we
need yet that way. But the idea of creating an anonymous credential from a
non-anonymous base seemed like a good first target. I was kind of suggesting
that to her; any other opinions on that?

Greg Bernstein: I wasn't trying to point her in the wrong direction, but
that sounded like a good app. Yeah,…

Manu Sporny: which…

Manu Sporny: which one was the app that you were talking about? Meaning
being able to create an anonymous credential with a pseudonym from an ECDSA
signature? Okay.

Greg Bernstein: Exactly. A non-selective-disclosure ECDSA one. Because we
were talking about the structure of the document, and I don't know if she
got that figured out, but we're saying what you would be doing is: you would
start, if you look at both ECDSA-SD and…

Greg Bernstein: BBS they both start with what we call a base proof,…

Manu Sporny: Mhm.

Greg Bernstein: the thing that goes from the issuer to the holder. And we
said, we wouldn't expect that. We would just start with one that's already
standardized, and then we would come up with a way of doing the derived
proof that would turn it anonymous and add in some form of holder binding
and pseudonym and such like that." and we said we could even parameterize
that by the different ZKP techniques and such.

Manu Sporny: So let's say we just take the California driver's license use
case. So we've got a verifiable credential there, and we have an ECDSA
signature on it.

Manu Sporny: We could make it an ECDSA-SD signature and…

Manu Sporny: then as a result of that do the ZKP thing that Ying Tong would
be working on, that would get us efficient selective disclosure for one or
more fields, or I guess one or more properties, in that credential. Yeah,
it's within the CCG, yeah.
00:40:00

Greg Bernstein: Get you.

Greg Bernstein: Yeah, and because they seem pretty ready to start working on
a document, I wasn't sure if she contacted you about where we do that,
within the CCG I assume, and where we get a repo. I mean, like I said, they
were ready to start putting things down on paper and even doing some
examples. I said, can we start with an example, can we take one of our
driver's license examples based on VC contexts, and…

Greg Bernstein: can we run that with one of our existing standardized things
and then go at it? This is experimental, so we don't have to worry about
which of the various ZKP techniques is most optimal in one shape or another,
but getting some proofs of concept to show people would be nice.

Manu Sporny: Mhm. Yep.

Manu Sporny: Yeah, absolutely. the other use case I think that's of
interest is set membership like proving revocation.

Manu Sporny: I think yeah, because again, if we look at things like
Bitstring Status List, it's "is the bit flipped or not", which lends itself
well to, I think, an efficient cryptographic-circuit-based approach, because
all you're trying to do is prove whether or not a bit is flipped in an array
of bits. So it's largely just a bunch of shifting, potentially, that needs
to happen in the calculation. But again, I'm saying that with next to no
understanding about how the base-level circuits are built.
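
For context on the bit-flip check being described: the core of a Bitstring
Status List lookup is just indexing one bit in a (typically gzip-compressed,
base64url-encoded) bitstring. The sketch below shows the classical check
that a ZK circuit would need to prove knowledge of; it assumes
most-significant-bit-first ordering within each byte for illustration, and a
real implementation must follow the Bitstring Status List spec exactly.

```python
def status_bit(bitstring: bytes, index: int) -> int:
    """Return the status bit at `index` in an uncompressed status bitstring."""
    byte = bitstring[index // 8]
    return (byte >> (7 - (index % 8))) & 1

# Example: a 16-byte (128-entry) list with entry 42 marked revoked.
status_list = bytearray(16)
status_list[42 // 8] |= 1 << (7 - (42 % 8))
print(status_bit(bytes(status_list), 42))  # 1 (revoked)
print(status_bit(bytes(status_list), 43))  # 0 (not revoked)
```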

Manu Sporny: So that's another thing that you might ask Ying Tong: okay,
what if we have a revocation list and we need to prove revocation status
without… because the thing with the revocation list is that both the
credential itself and the revocation list itself are massively identifiable.
And the thing with pseudonyms is we get rate limiting, but we don't
necessarily get revocation status, and if we in theory want these long-lived
unlinkable credentials, you need revocation status information along with
those.

Manu Sporny: So, I think that's another area to point out to Ying Tong,
hey, we're really looking for a good solution, in that area. Mhm.

Greg Bernstein: Okay. Yeah,…

Greg Bernstein: the only issue is, when she's talking to somebody who's been
working on BBS and unlinkability, she's going to get a biased application
slant from me. But it was convenient…

Manu Sporny: Mhm. Right.

Greg Bernstein: because I was able to talk to her at 7:00 p.m. California
time rather than 7:00 a.m. California time.

Manu Sporny: Right. Yeah. Yeah. And I mean, I don't know how… again, I know
that Abhi and their team are pretty hyper-focused on mDL and finely tuned
cryptographic circuits for the mdoc format.

Manu Sporny: But I don't think we're as driven by that use case,…

Manu Sporny: Meaning we are more driven by efficiency and putting everything
together in the right way, rather than trying to hack something based on the
way things are deployed right now. Go ahead, Greg.

Greg Bernstein: That'll be a great thing to hit Abhi with on Tuesday…

Greg Bernstein: because Ying Tong is here. Let's see if we can grab her. But
what Abhi and Matteo put into the CFRG is actually a more general proving
technique, not specific to ECDSA.

Greg Bernstein: It's an optimized version of sumcheck and Ligero for
circuits, but not necessarily just for ECDSA circuits, and that's where I
was optimistic, because they had some very good performance numbers in their
paper for just SHA-256 preimages, which could be helpful for my pseudonym.

Manu Sporny: Mhm. That's right.
00:45:00

Manu Sporny: Hi Ying Tong, welcome to the call. We were just talking about
some of your work and what the next steps would be. We were going to try to
give you some time today to deep dive on some of the stuff that you weren't
able to get to last week, but we probably don't have enough time for that,
which is fine.

Manu Sporny: I mean, you can, if next week works for you, that works for
us. we were wondering kind of what we would need to kind of help you get
started. Is there anything that you need from the group? I mean, even
things like you need a source code repository setup, you need to understand
what the process is to start a work item, you need template, that kind of
stuff. So, do you have any questions along those lines?

Ying Tong: Hi. Yeah, I had this call with Greg a day or two ago. We went
through the VC-DI-BBS specification. So, my approach so far has been to fork
VC-DI-BBS and, as Greg said, kind of gut a bunch of it, and keep the parts
that are relevant to a generic zk-SNARK approach such as Abhi's approach
from Google.

Ying Tong: So I can quickly share. Would I be able to screen share?

Greg Bernstein: Ying Tong, the only thing I don't know if I got too specific
about is: my interest obviously was going from a non-anonymous credential to
something anonymous. And I know your ZKP techniques can do other things,…

Ying Tong: Absolutely like the first most basic thing we would want to
achieve is unlinkable presentations.

Greg Bernstein: but I thought we were talking about this being a good first
application of the ZKP techniques. Is that a correct interpretation? Okay,
good.

Ying Tong: And we can think of the generic zk-SNARK as a privacy-hardening
layer between the ECDSA signature and the presentation. If we could even
achieve that, that would be a huge win. So I think I was deep diving with
Greg

Manu Sporny: What? One second. Ying Tong, you're sharing just the sharing.

Manu Sporny: Stop sharing. Not the screen itself. You're still sharing only
what you were sharing.

Ying Tong: That's really funny.

Ying Tong: Can you see this mermaid diagram?

Manu Sporny: There. We can see your full screen now, I think. Or at least
the browser window. Yep. Yes. Yeah. VC DI ZK-SNARK. Yep.

Ying Tong: My approach was to fork VC-DI-BBS and adapt it to a generic
zk-SNARK. Where it fits with VC-DI-BBS is in the derived proof part of it.
So the generic zk-SNARK approach is agnostic to the base proof; it could be
an ECDSA signature, EdDSA, RSA, any signature. The base proof would be input
as a private witness into an arithmetic circuit. So this derived proof here
is an arithmetic circuit for the proof relation, so for example verifying an
ECDSA signature.

Ying Tong: So this circuit would take as a private input the ECDSA signature
and generate a zk-SNARK proving that we know a valid ECDSA signature for
some agreed-upon public key. So yeah, from my deep dive into the data
integrity specs, I concluded that we need a VC data integrity proof
derivation stack. I think I discussed this with Greg as well.
00:50:00

Ying Tong: Right now the derived proof procedure is only specified for BBS
signatures, where the base proof is the signature and the derived proof is a
BBS proof. So yeah, I do see a path forward to adapt this derived proof
mechanism for generic zk-SNARKs, where instead of a BBS proof we produce a
zk-SNARK proof. Yeah, I'm happy to take questions, comments.
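
A rough sketch of the relation being described, with hypothetical helper
names; the circuit construction, canonicalization, and selective disclosure
details are exactly what the proposed work item would have to specify. The
ECDSA base proof and the full document are private witnesses, and the
statement is that the signature verifies under the agreed-upon issuer key
while only the disclosed claims are public.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class PublicInputs:
    issuer_public_key: bytes  # agreed-upon issuer key
    disclosed_claims: dict    # the claims the holder chooses to reveal

@dataclass
class PrivateWitness:
    ecdsa_signature: bytes    # the base proof, never revealed
    full_document: dict       # the complete credential

def canonical_hash(document: dict) -> bytes:
    """Stand-in for the canonicalize-and-hash step (RDFC or JCS in the real specs)."""
    return hashlib.sha256(json.dumps(document, sort_keys=True).encode()).digest()

def ecdsa_verify(public_key: bytes, digest: bytes, signature: bytes) -> bool:
    """Placeholder: a real circuit would encode ECDSA (e.g. P-256) verification."""
    raise NotImplementedError("use a real ECDSA verification gadget here")

def relation(pub: PublicInputs, wit: PrivateWitness) -> bool:
    """The statement the arithmetic circuit encodes (a sketch, not a circuit)."""
    # 1. The hidden ECDSA signature verifies over the hidden document.
    signature_ok = ecdsa_verify(
        pub.issuer_public_key, canonical_hash(wit.full_document), wit.ecdsa_signature)
    # 2. Every disclosed claim really appears in the hidden document.
    claims_ok = all(
        wit.full_document.get(k) == v for k, v in pub.disclosed_claims.items())
    return signature_ok and claims_ok

# A zk-SNARK prover would output a proof that relation(pub, wit) is True
# without revealing `wit`; the verifier checks the proof against `pub` only.
```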

Manu Sporny: Yeah, that sounds great, and it sounds right to me. I don't
know if Greg was able to go over kind of the process here to get this stuff
into the CCG, but all you'd have to do is fork the BBS spec, and it just
needs kind of a title and an abstract and introduction, and then maybe some
of the stuff that you've written up here. It doesn't need to be complete
anyway; it just needs to be a start, and then we have to kind of propose it
as a work item for the credentials community group. We can take care of that
for you, or show you kind of what the process is, but you just raise a
GitHub issue and you're like, "Hey, I would like to start working on this
thing. Here are the other people that are working on it with me.

Manu Sporny: It's required that you work with people that are not just
within your organization. You have to pull in people from other
organizations. I think there are multiple people here that would support the
work. And then it's kind of floated out to the mailing list. It's given two
weeks for people to provide input on it, to see if anyone would object to
the work. I don't think anyone's ever objected to starting work items, ever.
So it would just be like, we'd hear some feedback from people, look for
support, and then within two weeks to a month it gets officially adopted as
a work item, and then we can just keep incubating it. That's all procedural
stuff that I think can happen in the background.

Manu Sporny: And you should just let us know when you're ready to kind of
kickstart that process.

Manu Sporny: I'd suggest starting it immediately just so it's out of the
way, And we don't have to worry about doing that later. do you have any
questions on that particular part of it?

Ying Tong: That's very helpful.

Ying Tong: Yeah, I agree. It should run in the background.

Ying Tong: So, I'll look into that and I'll reach out if I run into any
questions.

Manu Sporny: Okay, sounds good.

Manu Sporny: All right.

Manu Sporny: We're out of time for the call today, but Ying Tong, would you
want some time on the agenda next week to deep dive into some of the items
that you were not able to during the presentation last week? That's okay.
Great.

Ying Tong: Yes, I have this whole presentation prepared already.

Ying Tong: So that would be great. Yeah.

Manu Sporny: All right. So, we'll put you on the agenda to do that next
week. You can have as much time as you need to go through that stuff, and
that'll be, I think, our main agenda topic for next week. All right. Thank
you everyone for the call today. Thanks for all the great work on the BBS
pseudonym and the vector approach, Greg, and for continuing to push that
forward at CFRG. Thanks for the input on the post-quantum stuff, and thanks,
Ying Tong, for continuing to do the work that you're doing and being willing
to present your stuff next week. We will meet again next week at the same
time. Thank you everyone for the call today, and have a wonderful weekend.
See you all next week. Take care. Bye.
Meeting ended after 00:54:52 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 25 April 2025 22:04:55 UTC