[MINUTES] Data Integrity 2025-04-18

W3C Data Integrity Community Group Meeting Summary - 2025/04/18

*Topics Covered:*

   - *W3C Votes on Data Integrity Specifications:* All seven Verifiable
   Credential v2.0 specifications (including data integrity, EdDSA, selective
   disclosure, etc.) passed the W3C vote with high turnout and no formal
   objections (pending review of one secret ballot). This signifies the
   specifications are very close to becoming global standards.
   - *Internet Identity Workshop (IIW) Updates:* Discussions included ZKP
   performance improvements (Google), a review of mDL and mdoc (deemed
   fundamentally traceable), and presentations on AI and MPC.
   - *Ying Tong's Presentation: ZK-SNARKs for Data Integrity:* Ying Tong
   presented her work on implementing a zero-knowledge proof enabled wallet
   compliant with EU digital ID wallet standards. The presentation focused on
   the need for a standard for programmable zero-knowledge proofs due to
   increasing adoption in various applications, including digital identity.
   Key discussion points included:
      - The use of ZKPs as a privacy-enhancing layer on top of existing
      signature schemes (like ECDSA).
      - The desire to avoid trusted setups in favor of transparent setups.
      - Prioritization of non-interactive proofs, post-quantum security, and
      efficiency (especially prover-to-verifier signature size and compute
      time).
      - Alignment with the W3C's rapid iteration approach to standards
      development. Action items were established to collaborate on creating a
      specification for a cryptographic suite, developing a ZK-SNARK optimized
      canonicalization transformation, and providing a pseudo-code example.
   - *BBS Pseudonym Updates (Greg Bernstein):* The group discussed the
   vulnerability of current BBS pseudonym mechanisms to quantum computers. A
   proposed solution using a vector of secrets was presented, offering
   everlasting unlinkability up to a certain number of pseudonym uses (n).
   Further work is needed to optimize the solution's size and efficiency.
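The vector-of-secrets approach in the BBS item can be illustrated with a toy numeric sketch; plain modular arithmetic stands in for the group operations of the real scheme, and every name, parameter, and modulus below is illustrative rather than taken from the proposal:

```python
import hashlib

P = 2**127 - 1  # toy prime modulus; the real scheme works in a pairing-friendly group

def ctx_scalars(context: str, n: int) -> list[int]:
    # Derive n public scalars from the verifier context
    # (a stand-in for hashing to group elements).
    return [
        int.from_bytes(hashlib.sha256(f"{context}|{i}".encode()).digest(), "big") % P
        for i in range(n)
    ]

def pseudonym(secrets: list[int], context: str) -> int:
    # Inner product of the holder's secret vector with context-derived
    # scalars: deterministic per context, different across contexts.
    xs = ctx_scalars(context, len(secrets))
    return sum(s * x for s, x in zip(secrets, xs)) % P

holder_secrets = [1234567, 7654321, 1111111]  # n = 3 secrets
nym_a = pseudonym(holder_secrets, "verifier-A")
nym_b = pseudonym(holder_secrets, "verifier-B")
```

Each pseudonym use leaks one linear equation in the n secrets, so even an unbounded (quantum) observer cannot pin down the vector until roughly n uses have been observed, which is the intuition behind unlinkability holding only up to a certain number of uses.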

*Key Points:*

   - The W3C data integrity specifications are on the verge of becoming
   global standards.
   - There's a strong interest in developing standards for programmable
   ZKPs to enhance privacy in digital identity and other applications.
   - The community prioritizes non-interactive, post-quantum secure, and
   efficient ZK solutions, particularly focusing on minimizing signature size
   between the prover and verifier.
   - The W3C's approach to standards allows for rapid iteration and
   adaptation to technological advancements.
   - A solution for quantum-resistant BBS pseudonyms is under development,
   focusing on maintaining everlasting unlinkability.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-04-18.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-04-18.mp4
*Data Integrity - 2025/04/18 09:56 EDT - Transcript* *Attendees*

Geun-Hyung Kim, Greg Bernstein, Hiroyuki Sano, John's Notetaker, Manu
Sporny, Marcus Engvall, Parth Bhatt, Phillip Long, Ted Thibodeau Jr, Ying
Tong, Zoey 0x
*Transcript*

Manu Sporny: Hi Good morning, good evening. we will get started in about
four minutes once we get a couple other folks joining.

Manu Sporny: Ying Tong, we will probably start with your presentation.
We'll ask you to do a quick intro of yourself, that sort of stuff, just as
a heads up.

Ying Tong: Okay, that sounds great.

Manu Sporny: All right, let's go ahead and get started. I think we have a
couple of folks on vacation today because of the US, I guess, Easter
holiday coming up. so it might be a little light. the agenda for today:
first off, welcome everyone. This is the W3C data integrity meeting for the
credentials community group. this is April 18th, 2025. our agenda today
will be getting an introduction to work that Ying Tong is doing on
ZK-SNARKs in data integrity.

Manu Sporny: so we'll get an introduction to her and that work, and Zoe,
who's also here supporting that work. we'll cover any updates to BBS
pseudonym stuff. So we needed to solve that anti-correlation, or we needed
an anti-correlation mechanism for pseudonyms if cryptographically relevant
quantum computers become a thing. we wanted everlasting unlinkability for
pseudonyms. so we'll get an update on that from Greg.

Manu Sporny: and then we'll do an update on the postquantum signatures PRs,
but I don't see Will here, so we might skip that for now. we'll start off
the meeting by covering where we are on the W3C votes on data integrity.
anything of relevance that came out of IIW. and then of course open floor,
anything else that folks want to cover today. are there any updates or
changes to the agenda? Is there anything in addition to this stuff that
folks would want to cover? then our agenda is set. let's take kind of
introductions and announcements. Let's do introductions first. Ying Tong,
would you mind just giving a quick introduction to yourself? and after you,
Zoe, please.

Ying Tong: Hi, my name is Ying Tong and I'm an applied cryptographer and a
grantee of the Ethereum Foundation. So I'm working with Zoe to implement a
zero knowledge proof enabled wallet unit that would comply with the EU
digital ID wallet standards.

Ying Tong: So yeah, thank you all for the chance to share our work at this

Manu Sporny: wonderful and…

Manu Sporny: welcome to the call, Ying Tong. Zoe, would you mind giving a
quick introduction to yourself as well?

Zoey 0x: Yeah, I'm a product owner at PSE,…

Zoey 0x: which is Privacy & Scaling Explorations at the Ethereum
Foundation. I'm the team lead for zkID, primarily being the product role on
the team. Thanks for having us here today.

Manu Sporny: wonderful. Welcome to the call,…
00:05:00

Manu Sporny: Zoe. let's go ahead and jump into kind of any community
updates. So this could be anything that folks saw or experienced at
Internet Identity Workshop last week. I can also start with kind of where
the W3C votes are. So as folks might know, over the past three years we've
been working on data integrity, the global standard, in the Verifiable
Credential Working Group at the W3C. We finished our work up in February of
this year. The specifications, there are seven of them, seven Verifiable
Credential version 2.0 specifications, which include data integrity, EdDSA,
the selective disclosure stuff for ECDSA, all that stuff, along with
VC-JOSE-COSE and Bitstring Status List. All those specs went into a vote a
month ago at the global standards level at W3C. So W3C has 348 member
organizations.
and then it goes up to a vote for those member organizations and that's the
final vote to determine whether or not the specifications become a global
standard. The vote closed yesterday. So it was open for a month and it
closed yesterday. and the great news is that every single one of the
specifications passed the voting process.

Manu Sporny: we had 45 member organizations voting, which is a high
turnout. There's usually only about 15 to 20 that show, so we had 45
organizations show up. that's in the top five ever in the last 25 years for
W3C voting. big turnout. so that's good. the most important thing of course
is that there were no formal objections on any of the standards. So formal
objections are where a company says absolutely not. We cannot allow this
specification to proceed to a global standard. There were zero formal
objections to any of the specifications. So that's great because that means
that we can now immediately proceed to global standards.

Manu Sporny: There was one secret ballot that was submitted. secret ballots
are rare and we don't know what's in that secret ballot, and that secret
ballot might hold a formal objection. the W3C team will take a look at that
and determine why one of the organizations decided to do that. Usually,
secret ballots are some big company that decided that they wanted to
formally object, but they didn't want anyone to know about it. So there's
still a chance for that happening through a secret ballot, and there's one
of them. but other than that, if that is not a formal objection in the
secret ballot, we are in incredibly good shape. so after 3 years, finally,
we've got a global standard. If there is a formal objection, it will go to
a formal objection council.

Manu Sporny: there is overwhelming support for these standards. I would be
surprised if a formal objection was upheld at this point. There has been
enough deployments into production and all that kind of stuff for any
formal objection to probably not stick at this point. okay, so that's where
we are. That's great. data integrity is now very close to a global
standard. The next step is going to be taking the specifications that were
voted on, responding to any criticism or critiques that any of the voters
had. So some of them, when they read the spec, they said, section 3 is a
little confusing. If you add this language, I would be okay with it. but
most of those were suggestions.

Manu Sporny: so yeah, I think that's where we are. Are there any questions
on any of that? All right, if not, we'll go ahead and go forward. What that
means for this group is that the work that this group has been doing around
data integrity has finally hit global standard status. so we can now very
securely and safely build off of that foundation. So, with data integrity,
selective disclosure in ECDSA, and EdDSA, we can now build BBS on a much
more solid foundation, any ZK-SNARK or ZK-STARK approach on a much more
solid foundation. all the postquantum stuff is on a much more solid
foundation. So, all that's good news.
00:10:00

Manu Sporny: for the work that we're doing here. Did anything else happen
at IIW that folks want to comment on? Anything cryptography or ZK related,
anything of that nature? Parth, did anything else come up at IIW relevant
to this group?

Parth Bhatt: There were a few sessions but I was not able to attend all. I
mainly attended the AI and MPC related sessions. but yeah, I will double
check the existing, I would say, session list out there on the IIW notes and…

Parth Bhatt: then get back and report.

Manu Sporny: Okay, sounds good.

Manu Sporny: Thanks, anything else from IIW that folks wanted to cover? I
know some of the EU ECDSA ZKP stuff was covered, but I don't know at what
depth; haven't heard much out of that. I know that there was a session that
covered mDL and the cryptography used in mdocs, and the ACLU, the American
Civil Liberties Union, and a number of other privacy advocates and
cryptographers were part of that review.

Manu Sporny: from what I heard, it did not go well for mDL and mdoc,
primarily because they're fundamentally traceable documents. They're kind
of perma-cookies. It's one of the reasons we're working on BBS in this
group: to make sure that there are unlinkable credentials that are
possible, that are deployable on a near timeline. Parth, go ahead.

Parth Bhatt: And there was one more presentation, about zero-knowledge
proof performance, from Abhi Shelat from Google, and it was a continuation
of the presentation from the last IIW, IIW 39. so it was good in terms of
they were making progress based on the research and reducing the proof and
verification time on the device itself. So they were working on the
specific algorithms and creating the support for them, but I think they are
going to release the updates or make it open source down the line.

Parth Bhatt: That was the conclusion at the end of the session.

Manu Sporny: Excellent. Yes.

Manu Sporny: Thank you, Parth. Yeah, and Abhi's one of the main people
that's working on the ZKP for ECDSA. they're using a variety of, I don't
know, Greg, sigma protocols, ZK circuit stuff, to do those calculations.

Greg Bernstein: sumcheck, Ligero, and…

Greg Bernstein: an optimized combination. Yeah, and they put in a draft
over at the CFRG too, talking about standardizing it. So, they're doing the
right things.

Manu Sporny: Okay, good.

Manu Sporny: Anything else from IIW? any other community news we should be
aware of before we jump into Ying Tong's presentation? then with that, I'll
go ahead and stop sharing. Ying Tong, over to you. let's time box this to,
let's say, 30 minutes if that works for you. We can always go over, and if
we don't have enough time to cover everything,…

Manu Sporny: we can always invite you back to present the rest of the
stuff. over to you

Ying Tong: Thank you.

Ying Tong: Yeah. I think that was a really good segue because yeah,
basically we're thinking about this anonymous credentials problem in very
much the same context as Abhi and his team are. I would say on the high
level where we differ is our approach towards standardizing the ZK proof
system and I wanted to discuss with this group how we could collaborate
with the data integrity working group.
00:15:00

Ying Tong: yeah, and also share, I guess, some of our work so far, working
towards a standard for generic or programmable zero-knowledge proofs. also
please interrupt me at any time if you have questions. So yeah, I think
just broadly setting the motivation for a standard: programmable ZK proofs
are seeing increasingly wide adoption. you saw in the discussions at IIW
it's being seriously considered as one of the candidate implementations for
anonymous credentials in the EUDI wallet, and in the so-called blockchain
world.

Ying Tong: programmable ZK proofs actually secure close to $2 billion worth
of capital, and all this is to say that there's a lot of existing adoption,
there will be more future adoption, and having a standard to specify the
secure instantiations of generic ZK proofs will be highly valuable. so
zooming in on the use case for digital identity, I think we mentioned just
now the team from Google has a Ligero-based proof system for anonymous
credentials from ECDSA; the team from Microsoft has a similar system,
Crescent credentials, that

Ying Tong: under the hood uses quite different cryptographic primitives,
but the high level architecture is the same, in that spirit. and then
besides that, there's been a few in-production deployments. So this Freedom
Tool is basically ZK-authenticating into an anonymous voting app, based on
proof of knowledge of a passport signed by a government, and it was
deployed in Georgia, Russia and Iran.

Ying Tong: besides that, I've compiled this list of implementations. yeah,
there's a bunch of existing ZK identity implementations that integrate with
legacy signatures, so ECDSA and RSA signatures. and I'm sure this group is
familiar. High level, the architecture is to really not change anything on
the issuer side. So this is very different from the BBS approach: we're
telling the issuer, okay, you can stick to ECDSA or RSA, you can stick to
your mdoc and SD-JWTs, and we're going to solve the linkability problem for
you by putting in a ZK wrapper in the middle.

Ying Tong: So yeah, the holder conceals the static signatures and hashes
but discloses some predicate on the signed attributes. And then the RP only
sees this zero-knowledge proof; it does not see the static signature, the
hashes, or anything else besides whether the predicate is fulfilled. So
yeah, I wanted to quickly demonstrate that: we've been working on a proof
of concept for verifying an ES256 SD-JWT token.

Ying Tong: So basically here the verifier is asking for disclosure of these
hashed claims.

Ying Tong: Okay.

Zoey 0x: You lost your screen by the way,…

Zoey 0x: your presentation.

Ying Tong: Let me reshare. Do you see my screen now?

Manu Sporny: It's only sharing the screen sharing.
00:20:00

Ying Tong: Yeah, that is bizarre. I think I know: when they said share this
window, they selected the screen sharing button as the window, which is
funny. Okay. Do you see now this JWT validator?

Manu Sporny: Yes.

Ying Tong: So what you input is the signed credential, base64 encoded. the
prover inputs this as a private input, and then the verifier requests
disclosure of some salted hashes of claims. and the idea here is you can
prove the inclusion of these hashes in the SD field of the SD-JWT
credential without having to reveal the static signature or hashes
themselves. So we have a proof of concept here. it's just implemented using
Groth16.
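The salted-hash check being proven here follows the SD-JWT pattern; for reference, a minimal sketch of the plain, non-ZK version of that check, with illustrative claim names and values:

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A disclosure is (salt, claim name, claim value), serialized and
# base64url-encoded; the issuer signs only its digest.
salt = b64url(secrets.token_bytes(16))
disclosure = b64url(json.dumps([salt, "given_name", "Alice"]).encode())
digest = b64url(hashlib.sha256(disclosure.encode()).digest())

# The signed payload carries the digests in its "_sd" array.
signed_claims = {"_sd": [digest]}

# Plain verification: recompute the digest from the revealed disclosure
# and check membership in "_sd".
assert b64url(hashlib.sha256(disclosure.encode()).digest()) in signed_claims["_sd"]
```

In the ZK version described above, this membership check (along with the issuer's ES256 signature verification) happens inside the circuit, so the static signature and the digests themselves are never shown to the verifier.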

Ying Tong: It's not feature complete and there's a bunch of optimizations
performance-wise we have to do. But yeah, this JWT we took it from I think
one of the Taiwan digital ID wallet it was output from there. So all this
is to set the context the motivation for why a generic CK proof standard is
interesting. Do you see my slideshow?

Manu Sporny: Yes.

Greg Bernstein: Yes.

Ying Tong: So yeah, I've worked on ZK proof standards for a little while
now. I think this identity use case is definitely very compelling, very
energizing. so besides the identity use case, there's other motivations for
generic ZK proof standards. So you could really approach it from specifying
a cipher suite and specifying secure composition of primitives in the
cipher suite. it has happened in the wild, in deployments, that people
instantiate these primitives in an insecure way. So for example there's
this primitive known as the Fiat-Shamir transform that compiles an
interactive protocol into a non-interactive one.

Ying Tong: But there are multiple implementations that forgot to include
the public inputs in the Fiat-Shamir transcript and exposed a
vulnerability, a soundness vulnerability. and yeah, there's many examples
like this of just people rolling their own and being unaware of how to
securely instantiate and compose these primitives, both in terms of
soundness and zero knowledge. And this is what a standard can address. And
I think the last motivation here is really about upgradability. So not sure
if you guys follow the ZK space; recently it's been just rapidly growing.
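The Fiat-Shamir pitfall mentioned here, deriving the challenge without binding the public inputs, can be sketched with a toy transcript hash (function names are illustrative):

```python
import hashlib

def challenge_weak(commitment: bytes) -> int:
    # Buggy pattern: the challenge depends only on the prover's commitment,
    # so it is the same for every statement; a cheating prover can choose
    # the public inputs after seeing the challenge.
    return int.from_bytes(hashlib.sha256(commitment).digest(), "big")

def challenge_sound(public_inputs: bytes, commitment: bytes) -> int:
    # Correct pattern: hash the full transcript, public inputs included,
    # with length-prefixed framing so fields cannot be reinterpreted.
    h = hashlib.sha256()
    for field in (public_inputs, commitment):
        h.update(len(field).to_bytes(8, "big") + field)
    return int.from_bytes(h.digest(), "big")

c = b"prover-commitment"
# challenge_weak ignores the statement entirely, so two different
# statements get the same challenge; challenge_sound separates them.
assert challenge_weak(c) == challenge_weak(c)
assert challenge_sound(b"statement-1", c) != challenge_sound(b"statement-2", c)
```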

Ying Tong: every other week there's this improvement in performance, and
all this is to say it is too early right now to enshrine any specific proof
system. that being said, there's a lot of proof systems already deployed,
already in production, and I think it is the right time now to specify at a
higher level.

Ying Tong: the other benefit of specifying at a higher level, besides
upgradability, is interoperability. So it is very often the case that
different instantiations are appropriate for different contexts. if you
just take the Google system versus the Microsoft Crescent credentials: a
very, very different choice of primitives, with different performance
characteristics. So for instance Microsoft uses a trusted setup, whereas
Google gets rid of that; they use a transparent proof system, depending on
the relation you're proving.

Ying Tong: So we all need to do ES256, and SHA-256 is a lot of bit
operations, shifts and XORs, which are efficient in a small field, but
ECDSA works over P-256, which is efficient in a large field. So there's
lots and lots of tradeoffs, choices, these degrees of freedom, and
developers do exercise these choices. They need flexibility for this. They
should not be tied down by an overly specific standard. at the same time
they should absolutely be doing this safely, with an eye to soundness and
zero knowledge. Okay, I heard a hand. Yeah.
00:25:00

Manu Sporny: Yeah, just I have a question on this slide, or maybe more of a
comment. so this is all great stuff, this is fantastic, and is very much of
interest to this group; it's stuff that we've been looking at for a while.
There are two general use case categories that we care about. one of them
is identity credentials. Anything where you have a bunch of statements
about an entity, a person or an organization, a product, things like that.

Manu Sporny: and you've got a number of attributes and you want to reveal
those attributes, ideally in zero knowledge, and ideally in a way that
makes sure that you're unlinkable, you're non-correlatable, meaning the
group size is large enough for someone to not be able to understand that,
for example, you have a passport but not necessarily which country you come
from, right? stuff like that. so that's one set of use cases that we care
about. the other set of use cases has to do with revocation and proving
that the credential that you're using hasn't been revoked in some capacity.

Manu Sporny: that's also an area that we care about a lot, because when it
comes to zero knowledge and unlinkable credentials, you can have a
credential that is so unlinkable that it's easy to abuse usage of the
credential, meaning if you don't have rate limiting, if you don't have
revocation proofs, then all of a sudden people can kind of share a
credential behind the scenes and you can't tell that that's happening. So,
we have these other requirements that kind of stack on top of the normal
kind of credential use case. We do care about identity credentials. We care
about everlasting unlinkability, meaning is this system postquantum secure?

Manu Sporny: And we also care about status changes on the credential
itself, whether or…

Greg Bernstein: Yes.

Manu Sporny: the credential can be revoked or you can set statuses on it.
So all of those things matter to us, and this goes to what you're saying in
the bottom four. So on the trust assumptions, I think, and please anyone
correct me if I'm wrong, but I think we want to avoid trusted setups unless
there is a very good reason to do that. we want Fiat-Shamir type
approaches, non-interactive, so I think our base assumption here is just
that everything we're doing here needs to be non-interactive.

Manu Sporny: we don't have an interest in trusted setup, I don't think. we
also care a lot about efficiency. so we understand that there are some
organizations in the space that are like, hey, it would be really good to
continue to use ECDSA and…

Greg Bernstein: Just kidding.

Manu Sporny: I don't think many of us do. we understand why there are big
upgrade problems and issues and things like that. But if cryptographically
relevant quantum computers come along, which maybe they're 30 years away,
maybe they never happen, but if they come along, we would like to have a
solution that can survive that kind of attack. and so ECDSA is fine and
good, but we think that we need to start

Manu Sporny: working on postquantum, cryptographically unlinkable, like
everlasting privacy. and we care about the performance characteristics, one
of them being we want to see what the best we can do is, right? and without
worrying about needing to be compatible with ECDSA, because there's a whole
bunch of stuff there that just makes the circuit stuff very complicated,
and we want to see how optimized we can get both in signature size and
compute time.
00:30:00

Manu Sporny: we tend to not care that much about the initial proof that's
created the initial digital signature that goes from the issuer to the
holder of the credential. So that can be a very compute intensive task like
we are very much willing to burn memory and compute and all that kind of
stuff that the issuer uses to calculate the proof to make it efficient for
the holder to derive a new proof and for the verifier to verify.

Manu Sporny: So we care a lot more about the efficiency between the holder,
or prover, and the verifier than we care about the efficiency at the
issuer. so the types of things that we care about: I think at the top of
the list is the signature size, the prover-to-verifier signature size; we
would like to optimize for that as much as possible, primarily signature
size, and then after that compute and memory, which we're fine for the
holder to use a lot of, and then time, ideally under 3 seconds; 2 seconds
would be great; if we get above five seconds it starts to become painful.
but a lot of the stuff we're talking about can, I think, hit those targets.

Manu Sporny: so hopefully that helps. So we want non-interactive, we want
to make sure that it is an architecturally and mathematically clean design.
We want to see how good we can get there without any care about ECDSA or
any of that stuff. we may care about the ECDSA stuff in the future, but we
want to see what's the best we can do. We care about postquantum security,
if there's a good argument there, or at least understanding how we put
together postquantum primitives into the same kind of mathematical
framework or architecture.

Manu Sporny: And then we care a lot about signature size, and less about
compute. And we really care about the signature size between the prover and
the verifier. So hopefully that gives you kind of some of the,…

Manu Sporny: requirements that we've kind of talked about in this community
for many years now. go ahead, Greg.

Greg Bernstein: On the technical detail of trusted versus transparent,…

Greg Bernstein: the verifiable credential model really leans towards the
transparent setup. We don't want to trust some global single party to
establish those powers-of-tau things and such like that.

Greg Bernstein: So the STARK-type stuff, the things that don't require
trusted setup, the transparent setups, even though they can result in a bit
longer proofs than a Groth16 or something like that that uses a trusted
setup. it just fits better. So, I don't think we want any trusted setups.

Ying Tong: Yeah, that makes a lot of sense to me. we were talking with the
Microsoft team just earlier today. as you know, they have a trusted setup.
and we were suggesting basically options to get rid of that. The one very
nice feature of the Microsoft solution is the ability to rerandomize and
reuse the issuer proofs.

Ying Tong: So as I think Manu was saying, the issuer proof should be sort
of a one-time expensive thing, and then subsequently the presentation
proofs really have to be cheap, and the Microsoft solution has that kind of
shape. but yeah, thanks very much for this context on your priorities.
yeah, I will say the nice thing about the ZK-SNARK is I see it as a privacy
hardening layer on top of the underlying signature scheme. So even if we
swap out ECDSA for Dilithium or something, this architecture wouldn't have
to change.
00:35:00

Ying Tong: but yeah, it would be interesting to benchmark Dilithium as
well. So yeah, all of this motivates us to aim for a spec. and I think,
yeah, if you guys saw Anja Lehmann's talk at Real World Crypto, she was
saying, I heard she was saying, even the pairing curve standard expired in
2022. And so she was asking, is it time to rethink how we standardize
things?

Ying Tong: she put forth the suggestion to have a standard for some minimal
core primitive, and then for use case and application specific variants to
have a standards-light process that's faster. I really liked that approach.
I think, as I mentioned just now, the ZK proof space is moving so fast. I
think this is the only approach that would make sense at this stage,
because on the one hand there's deployed, mature proof systems, and on the
other hand there's constantly a stream of newer and better proof systems.
Yeah, I heard a hand.

Manu Sporny: And on that point, Ying Tong, that is what we have set up
here. So we saw the need for this seven years ago, and so we set up a
development process, and you're sitting in the very front part of that
development process that allows us to rapidly iterate in this group, and
then we have a path to a global standard at W3C. So this rapid iteration
thing is what we do here.

Manu Sporny: and when it's ready, ready enough to go on the standards
track, not get finally standardized but ready enough to go on the standards
track, we already have a pipeline set up to achieve kind of what Anja is
talking about here. I think unfortunately a lot of people don't know that,
and so that's where we need help kind of getting the word out to the
cryptographers in the space. but our big issue with the way cryptography
has been done for the past 20 plus, 30 plus years is that there are these
incredibly long 7-year standardization cycles at the national
standardization bodies, and they just do not keep up with the type of
innovation that you're talking about. And so we created data integrity, and
that's why the crypto suites are dated, right? we have a date on the crypto
suites because we expect rapid iteration.

Manu Sporny: We want to release new crypto suites every two years, maybe up
to four, and we can mark some of these crypto suites as experimental,
saying, hey, it's standardized, it exists, right, we can get to
interoperability, but it is still experimental. and that helps the national
standards bodies get some experience on how this stuff is deployed in more
experimental scenarios, to speed up the process there.

Manu Sporny: So again, we very much agree with this. This is why this group
is set up in the way that it is.

Ying Tong: Yeah, that's really encouraging to hear.

Ying Tong: I think, reading through you guys' specs, I also saw it's very
forward-looking. I think I had a slide much later on where I was saying
that the proof chains and proof graphs that I found in your specs are super
compatible with their analogous concepts in zero knowledge proofs, like
incrementally verifiable computation and proof carrying data.

Ying Tong: So I definitely see already the benefit of your approach, just
by reading through your specs so far. Yeah, that's good. That's very
encouraging to hear. so there's this effort called ZK Proof Standards that
has been working on something like this: high-level standards for generic
ZK proofs. so at a high level, this is the pipeline for a modern ZK
construction.
00:40:00

Ying Tong: you start with some relation and some satisfying witness, and
then you end up with a non-interactive argument to prove knowledge of the
satisfying witness. and so the idea is to keep the standard general: you
can see at each of the levels there's any number of instantiations. The
idea is to not enshrine any one of these, but instead to specify the
interface and how to securely compose these, how to make sure that you're
composing compatibly. So this is really more like a cipher suite approach.
So I had this part of the presentation just going through existing efforts
at each of these levels.

Ying Tong: yeah, ZK Proof Standards had this working group that was
standardizing a particular arithmetization called PLONKish. They have
gotten pretty far, and this working group is still active. so on the next
level, the polynomial IOP level, there are multiple implementations that
modularize the kinds of interactive oracle proofs that are used in proof
systems and show how they compose.

Ying Tong: I would say compared to the arithmetizations working group this
IOP level is less specified. Yeah. Did someone have a comment?

Manu Sporny: No, I think we're good. I'm just doing a quick time check.
which is fine.

Manu Sporny: We can go another five minutes, Ying Tong, but what we might
want to do is deep dive into each one of these things during the next call,
if you're available for that. up to you.

Ying Tong: Yeah, that would be great.

Manu Sporny: So let's say another five minutes because we still need to get
to the BBS stuff during this call. and then anything that's left over we
can do on the next

Ying Tong: Yeah, that sounds good. Thank you.

Ying Tong: then the next level is introducing basically cryptographic
hardness assumptions to efficiently realize these polynomial IOPs. and this
is also actually pretty well developed. It is very common to find in ZK
proof libraries these generic traits that are implemented for a bunch of
polynomial commitment schemes. and then, I don't know if people here know
Miklly; he's been working on standardizing the Fiat-Shamir transform that
converts an interactive protocol into a non-interactive one, so he's gotten
very far on that as well. so those are like

Ying Tong: Yeah. I think there were a few interesting questions that came
up about the scope of the standard, about what types of features we should
prioritize. I think, though, what I want to do with the remaining time is
ask some specific questions about this working group. So I'm sure people
have thoughts on this. okay, like we saw in the architecture just now, ZK
proofs can be thought of as this privacy hardening layer over some
underlying issuance flow, such as a signature. so in the case of ECDSA it
makes the signature unlinkable, because you can reveal the public key
without revealing the static signature.

Ying Tong: So I was just reading through the Data Integrity spec and
wondering how we would express this sort of composition of proofs as a proof
mechanism, because it is a ZK proof, but it's a proof of some underlying data
integrity proof. So I was just not clear how I would express this in the
language of a proof mechanism. Another thing that came up was that the
verifier side doesn't have to transform or hash the data; it just needs to
verify a ZK proof. I heard a hand go up.
00:45:00

Manu Sporny: Yeah, I mean, I think the closest analog that we have right now
is the way we do the BBS signature. Greg here is the primary person working
on that, both at the IETF and on the BBS spec here. Dave Longley is also
working on that, but he's not here today. I think the short answer is that
Data Integrity gives you an enormous amount of flexibility in how you do
that. Right? The only thing that Data Integrity really cares about is the
final serialization of the signature or the proof. You can do that in any
way, and the specifications can provide the algorithms for how you get from
one thing to the other.

Manu Sporny: So, I think we call these the base proofs, which are generated
by the issuer, and then there are the derived proofs, which are generated by
the prover. And again, just because Data Integrity has this kind of
transform-hash-sign pipeline, you don't have to do that for every single base
signature or derived signature. You can choose to skip it. You can choose to
say that there is no transform step required here, or you could state that
there's no hashing step required here, or you can inject other things into
the pipeline.
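
The transform-hash-sign pipeline with skippable stages that Manu describes
can be sketched as follows. All names here (`make_suite`, the stand-in
"signature") are illustrative assumptions, not from the Data Integrity spec;
the stand-in uses HMAC purely as a placeholder for a real signature.

```python
# Minimal sketch of a Data Integrity cryptosuite as a configurable
# transform -> hash -> sign pipeline, where any stage set to None is
# skipped. Illustrative only; names are not from the specification.
import hashlib
import hmac
import json

def make_suite(transform=None, digest=None, sign=None):
    """Build a proof pipeline; a stage that is None is skipped."""
    def create_proof(document, key):
        data = transform(document) if transform else document
        data = digest(data) if digest else data
        return sign(data, key)
    return create_proof

# A conventional suite: canonicalize (here: sorted-key JSON), SHA-256,
# then a stand-in "signature" (HMAC, purely a placeholder).
conventional = make_suite(
    transform=lambda doc: json.dumps(doc, sort_keys=True).encode(),
    digest=lambda b: hashlib.sha256(b).digest(),
    sign=lambda b, k: hmac.new(k, b, hashlib.sha256).hexdigest(),
)

# A suite that skips the hashing step, as a ZK-based suite might if the
# proof system consumes the serialized document directly.
no_hash = make_suite(
    transform=lambda doc: json.dumps(doc, sort_keys=True).encode(),
    sign=lambda b, k: hmac.new(k, b, hashlib.sha256).hexdigest(),
)
```

Both suites produce a final serialized proof value, which is the only thing
the pipeline shape constrains in this sketch.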

Manu Sporny: So all that to say I think and this is not an exact answer to
your question …

Manu Sporny: but you might want to chat with Greg here to figure out how to
put this into something that's going to work for you. But yeah, the BBS
spec's sections, especially the test vectors on the base proof and the
derived proof, might be helpful to you.

Ying Tong: Okay, that's exactly…

Ying Tong: what I mean. And then I guess my last thing was just to list some
action items we could have for how we could interact with the data integrity
working group. So we could specify a cryptographic suite, writing it up the
way it's been done for BBS, and provide a pseudo-code example.

Ying Tong: And then another thing that could be interesting, which I think we
were discussing with Greg and Dan and some people: if we could pick any
transformation in the transform-data step, what's the most ZK-friendly
transformation? So yeah, these were some action items I thought could be
interesting. Yeah, we could also…

Manu Sporny: And…

Ying Tong: …discuss what modes of collaboration would be useful. And with
that I've come to the end of my presentation. Yeah.

Manu Sporny: Yeah, that looks good, especially the action items. Specifically,
I think your first and last action items are good, and they could be done in
parallel. So the first thing is, let's try to just create the base
specification. It can be completely empty, but at least creating it and
setting it up means you've got control over that document.

Greg Bernstein: Yeah. Yeah.

Manu Sporny: I can help you with that, Greg can help you with that; we've got
multiple people that can help you set up the base specification the way it
works in the Credentials Community Group. I just want to make a pedantic but
important point: we are not an official working group of the W3C. We are a
community group, and the difference is that we incubate things here, which is
where you are right now. So you're in the right spot. We incubate things
here, and once they're ready to go onto the global standards track, we will
move them over to the official working group.

Manu Sporny: The term working group means that international law applies.
There are patent releases that are required in this group, too. Copyright has
to be assigned. There's a whole bunch of just kind of
administrative stuff that's important. So we can create the specification
here, and Greg and I can help you set that up, and then we'll just work
through different parts of the specification. As you have questions about how
to do each part, we can probably help there, and we'd like to have you come
in and give us updates on how you're progressing, along with any questions
for us, as a regular part of our check-ins. We meet every single week, so
whenever you need it, you can get on the agenda and we can work through any
issues you have.
00:50:00

Manu Sporny: So, I think let's get that started as soon as we can. The other
thing that I think we're very interested in is a ZK-SNARK optimized
canonicalization transformation step. I think it would be a very beneficial
thing, not just for this ecosystem, but for all the ecosystems that are
working on ZK-based approaches.

Manu Sporny: And then I'm forgetting what your middle item was, but I felt
that we could also probably do that in parallel, though it might come after
the first thing. Ah yes, the middle one was the pseudo-code example. We can
just put that in the specification from the first item. Does that sound like
a good way to proceed for you, Ying Tong? Would that be helpful to you if we
did those items?

Ying Tong: That would be great. yeah, I think maybe pointing me to a
starter template or something. I was just planning to copy basically what
was done for BBS.

Manu Sporny: That would be a great way to start.

Ying Tong: Okay.

Manu Sporny: Yeah, if you just take the BBS spec, copy it, and then delete
the sections that don't apply to what you're doing, that would be fine, I
think. And Greg is the primary author there, so you've got his contact
information as well.

Ying Tong: That sounds good. Yeah.

Manu Sporny: All right, thank you, that was wonderful. Really appreciate you
spending the time; I know it's late for you. But we really appreciate you and
Zoe coming to present the work here, and we are very excited about it and can
integrate it with our regular work cycle. Okay, in the remaining couple of
minutes, Greg, if you want to give us kind of a high level on…

Manu Sporny: where we are with the BBS pseudonym stuff, any feedback you
need, that kind of stuff. Let's give it maybe five minutes and then we can
end.

Greg Bernstein: So let me just remind people of how pseudonyms came about…

Greg Bernstein: because with BBS, when you have absolute unlinkability, that
presents problems: you can't be remembered when you want to be remembered.
So we'd like to be remembered, but how much? That's what pseudonyms help
solve, and they solve it both

Greg Bernstein: for the holder if they want to assert a synonymous identity
and for the verifier if they want to go how many times have I seen this
person did somebody just give away their subscription information and so
now I have 10,000 people all using the same subscription for one okay
pseudonyms and the pseudonyms we use are based on a one-way function that
uses exponentiation. So that means we're vulnerable to a cryptographically
relevant quantum computer. Okay.
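
A toy sketch of the kind of exponentiation-based pseudonym Greg describes
(illustrative parameters and function names, not the BBS draft's exact
construction): the pseudonym is stable within a context but linking it
across contexts requires solving a discrete log, which is exactly what a
cryptographically relevant quantum computer could do.

```python
# Discrete-log pseudonym sketch: pseudonym = base(context)^secret, where
# base(context) is a context-derived group element. One-way under the
# discrete-log assumption; broken by a quantum computer. Toy parameters.
import hashlib

P = 10007  # toy safe prime, P = 2*Q + 1
Q = 5003   # order of the subgroup generated by 4 mod P

def context_base(context):
    """Hash a context string to a non-identity subgroup element
    (illustrative stand-in for a real hash-to-group function)."""
    e = int.from_bytes(hashlib.sha256(context.encode()).digest(), "big")
    return pow(4, e % (Q - 1) + 1, P)  # exponent in [1, Q-1], never identity

def pseudonym(secret, context):
    return pow(context_base(context), secret, P)
```

A verifier in one context sees a stable value it can recognize on return
visits, while verifiers in different contexts see unrelated-looking values.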

Greg Bernstein: So if that comes, people would be able to correlate
pseudonyms: by being able to take that discrete log, they could figure out
that a pseudonym used in one place relates to a pseudonym used in another
context. And that's bad, because the whole point of pseudonyms is that in two
different contexts you shouldn't be able to correlate them. BBS sets a very
high bar because it uses ZKPs, in the sense that every time the holder sends
a new BBS proof to a verifier, those proofs are absolutely everlastingly
unlinkable. Each proof uses new randomness.
00:55:00

Greg Bernstein: They're ZKPs: unless you use the exact same proof with the
same randomness in two places, they're going to look different. And somebody
said, hey, even if they break discrete logs, you don't break the everlasting
unlinkability property of BBS proofs, and we want that for pseudonyms. So in
that sense we're looking for solutions, and any solution requires some
compromise. Using a vector of secrets is the one that we've worked on with
some cryptographers.
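
The "everlasting" unlinkability Greg contrasts with discrete-log-based
pseudonyms comes from information-theoretic hiding. A Pedersen-style
commitment is a standard way to illustrate it (this sketch is illustrative
only; in a real setup the discrete log of H with respect to G must be
unknown, which the toy choice below deliberately violates for simplicity):

```python
# Pedersen-style commitment: C = G^m * H^r with fresh randomness r is
# perfectly (information-theoretically) hiding, so even an attacker who can
# later compute discrete logs cannot tell whether two commitments hide the
# same value. Toy parameters; H here has a known exponent, which sacrifices
# binding but still demonstrates hiding.
import secrets

P = 10007            # toy safe prime, P = 2*Q + 1
Q = 5003
G = 4                # generator of the order-Q subgroup
H = pow(G, 1234, P)  # stand-in second generator (exponent known: toy only)

def commit(m):
    r = secrets.randbelow(Q)  # fresh randomness every time
    return (pow(G, m, P) * pow(H, r, P)) % P, r

def open_commitment(c, m, r):
    return c == (pow(G, m, P) * pow(H, r, P)) % P
```

Repeated commitments to the same value look different because each uses new
randomness, which mirrors why fresh-randomness BBS proofs stay unlinkable
even against a later discrete-log break.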

Greg Bernstein: And that would allow you to create different pseudonyms
within different contexts. Until you pass that value of n, you wouldn't be at
risk of losing this everlasting, information-theoretic unlinkability property
amongst those pseudonyms. Okay, that's a very high bar. And when we talked to
some of our ZK proof friends, they said that bar is a little high: what if we
just used some other one-way function, like a collision-resistant hash
function? But those don't quite fit in with the BBS proof technique, so we'd
need other proof techniques.

Greg Bernstein: So for right now, I gave the links: one is the issue that was
raised and some of the discussion about it, and the other is the proposed
n-pseudonym mechanism. That means you'd use a vector of random secrets known
only to the holder. Until you used that n times, or you had n colluding
verifiers that shared these separate pseudonym values with each other plus a
cryptographically relevant quantum computer, you would not be able to
correlate. So that's where we're at as far as solutions. This is not
optimized for size yet.
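
One plausible shape of the vector-of-secrets idea (a sketch for intuition,
not necessarily the draft's exact construction): treat the n random secrets
as coefficients of a polynomial f over a finite field, and let the pseudonym
for a context be f evaluated at a context-derived point. A uniformly random
polynomial with n coefficients has the property that its values at any n
distinct points are uniform and independent, so up to n pseudonyms reveal
nothing about one another, an information-theoretic (everlasting) guarantee
that no discrete-log break can undo.

```python
# Vector-of-secrets pseudonym sketch: n secrets = coefficients of f;
# pseudonym(context) = f(x) at a context-derived field point x. Up to n
# evaluations at distinct points are jointly uniform (Vandermonde
# invertibility), hence information-theoretically unlinkable. Toy field.
import hashlib
import secrets

Q = 2**61 - 1  # Mersenne prime; the toy field is the integers mod Q

def keygen(n):
    """The holder's vector of n random secrets (polynomial coefficients)."""
    return [secrets.randbelow(Q) for _ in range(n)]

def context_point(context):
    return int.from_bytes(hashlib.sha256(context.encode()).digest(), "big") % Q

def pseudonym(secret_vec, context):
    """Evaluate f(x) = sum_i s_i * x^i at the context point (Horner's rule)."""
    x = context_point(context)
    acc = 0
    for s in reversed(secret_vec):
        acc = (acc * x + s) % Q
    return acc
```

Past n uses (or n colluding verifiers), the evaluations determine the
polynomial, which matches the bounded-use compromise Greg describes.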

Greg Bernstein: It's not super bad, but it's not super efficient. For
example, a thousand uses of pseudonyms would give you something like 40K
bytes worth of proof; usually our proofs are much smaller. Okay, so that's
where we're at. The details in the second link are what we would do to update
the spec so that we'd have this additional capability. Okay, and that doesn't
mean we're not looking at other techniques to reduce the size, such as
applying some kind of ZKP technique to a hash function to give us our
pseudonyms. Any questions?

Greg Bernstein: Note that those links go to the repositories for the CFRG
drafts. This is where pseudonyms, blind BBS, and BBS itself are all being
standardized as cryptographic protocols at the Crypto Forum Research Group.
Questions? We're out of time.

Manu Sporny: I have questions, but we are out of time, so we will start the
next call with a bit more of a deep dive into this and into next steps.
Okay, that's it; we're only going over by a minute today. Really appreciate
the presentation, Ying Tong, and thank you for the update, Greg. Greg will
start with BBS next call, and then, Ying Tong, if we make some progress on
getting you set up with the spec before the call next week, we can cover that
as well as a deeper dive into the various SNARK approaches. All right, thanks
everyone. Have a wonderful rest of your day and a wonderful weekend, and we
will meet again next week. Take care. Bye.
01:00:00

Parth Bhatt: Thank you.

Greg Bernstein: All right,…

Greg Bernstein: contact me. If you want to avoid getting too deep a dive into
spec details, we can do it at a high level first. Yes, there's a lot of
detail that goes into that stuff and…

Ying Tong: Okay, I think I will actually.

Ying Tong: Thanks very much. Okay,…

Greg Bernstein: we can get the high level first. Okay, bye.

Ying Tong: thanks very much.
Meeting ended after 01:00:30 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 18 April 2025 22:03:55 UTC