[MINUTES] Data Integrity 2025-05-09

W3C Data Integrity Community Group Meeting Summary - 2025/05/09

*Topics Covered:*

   1. *BBS Updates:* Discussion on improvements to BBS (Boneh-Boyen-Shacham)
   signatures, including pseudonym generation for quantum-resistant
   unlinkability. Three different pseudonym computation methods were
   presented, with performance evaluations planned. A query regarding
   credential reissuance with the same pseudonym was raised and a potential
   solution was discussed. A presentation request from the US AI Safety
   Institute regarding pseudonyms and personhood credentials was announced,
   highlighting the challenge of Sybil attacks in multi-issuer systems.
   2. *Quantum-Safe Data Integrity Crypto Suites:* Review of a pull request
   to add quantum-safe algorithms (ML-DSA, Falcon, and SPHINCS+) to the
   crypto suites. Minor editorial changes were noted and will be addressed
   after merging the pull request. Discussion of algorithm naming and reuse
   of code.
   3. *Standardization of Polynomial Commitment Schemes for ZK-SNARKs:*
   Presentation of a draft specification for a generic interface for
   polynomial commitment schemes, a key building block for ZK-SNARKs. The
   discussion included the suitability of the specification for the CFRG, the
   need to clarify its position within the overall data integrity process, and
   the importance of including concrete test vectors with various credential
   sizes (small, large). Further discussion focused on the need to document
   specific algorithms, address potential optimization "cheats," and consider
   scenarios involving attribute selection for improved efficiency. The need
   to present concrete algorithms rather than a purely meta-specification for
   IETF acceptance was emphasized.

*Key Points:*

   - BBS improvements focus on enhancing pseudonym generation for quantum
   resistance, addressing reissuance scenarios, and exploring the complexities
   of Sybil resistance in multi-issuer settings.
   - The quantum-safe crypto suite pull request will be merged after minor
   editorial updates.
   - The polynomial commitment scheme specification draft requires further
   refinement, including clearer placement within the data integrity stack,
   concrete test vectors with realistic credential sizes, and detailed
   algorithm descriptions for IETF compatibility. The approach should include
   at least one concrete implementation alongside a generic interface.
   - There's a cultural difference between W3C and IETF regarding the
   approach to standardization of cryptographic primitives. IETF tends to
   favor concrete specifications over meta-specifications.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-05-09.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-05-09.mp4
*Data Integrity - 2025/05/09 09:58 EDT - Transcript*

*Attendees*

Dave Longley, Eddie Dennis, Greg Bernstein, Hiroyuki Sano, Janabel Xia,
John's Notetaker, Manu Sporny, Parth Bhatt, Pierre-Antoine Champin, Ted
Thibodeau Jr, Ying Tong
*Transcript*

Manu Sporny: All right, let's go ahead and get started. Welcome everyone to
the Data Integrity call; this is May 9th, 2025. On the agenda today, we'll
talk about BBS: any updates there, Greg, that you want to share, any
feedback you want to get on that. Then we will kind of review where we got
to on the postquantum stuff, although I do not see Will on the call today,
so it might be a quick call to make sure everyone's okay to merge,
following that postquantum algorithms discussion that we had last time.

Manu Sporny: and then I think again we'll spend a good chunk of the time
talking through Ying Tong your work and anything specific you wanted to
discuss today. any questions you had? we'll cover any other updates or
changes to the agenda? Anything else folks would like to discuss today? All
right. If there is nothing else, let's jump into the BBS stuff first.
actually, sorry, let me see if there are any introductions or
reintroductions that folks would like to do. Anyone new to the call that
would like to talk about joining, why they're interested in the work,
anything like that.

Ying Tong: I wanted to introduce my colleague here Janabel.

Manu Sporny: Wonderful.

Janabel Xia: Yes. Yeah. Hi, I'm working with Ying Tong. My name is Janabel.
I graduated from MIT last spring, and then I'll be starting my PhD at
Harvard this fall in math. And I'm curious about standards, specifically
moving towards ZKP adoption, and ZK-SNARKs in particular, and thinking
about that in the context of digital identity and allowing for privacy,
and also in a postquantum context when thinking about hash-based systems.

Janabel Xia: And yeah, I've been working with Ying Tong on kind of these
standardization efforts and…

Manu Sporny: Wonderful to meet you.

Janabel Xia: also interested in what's going on here and how the work
overlaps. So yeah, it's nice to meet everyone.

Manu Sporny: How would you spell Janabel?

Janabel Xia: Janabel.

Manu Sporny: Wonderful to have you join the call. Janabel, you have a very
interesting background. hopefully we are doing things that are of interest
to you here. welcome to the group. All right.

Janabel Xia: Thank you.

Manu Sporny: Let's jump into the main agenda then. sorry I'm getting ahead
of myself again. Any community updates? Anything happening that we should
be aware of?

Manu Sporny: anything of that nature. I know that I can speak to one thing,
which is that Abhi Shelat from Google presented on the snark approaches
that they're using. They're trying to figure out ways to use snarks to
reuse a base, kind of ECDSA-based credential that's been issued, but be
able to do a snark-based proof that is more privacy preserving,
particularly kind of focused around the mdoc/mDL stuff.
00:05:00

Manu Sporny: His and Matteo's presentation two weeks ago to the Credentials
Community Group was wondering how they could integrate the W3C Verifiable
Credential standards into the work that they're doing. So we've got some
ideas on how to do that. I'm working with them to figure out a time that
they can come and talk here a bit more. I think we have a general approach
that might work, and again, aligned with Ying Tong's work as well. So all
that to say that we are slowly figuring out the best way to construct the
cryptographic primitives to do an efficient version of a snark on a W3C
verifiable credential, at least.

Manu Sporny: And we'll be talking more about that today. Any other kind of
community updates? Anything else we should be talking about?

Manu Sporny: Okay, with that, then let's go ahead and jump into the BBS
stuff. we've been trying to figure out Yes,…

Greg Bernstein: Can you Okay.

Manu Sporny: we can see your screen. yeah. Why don't you give us a little
bit of background, Greg, and then we can

Greg Bernstein: So BBS is traditional cryptography, maybe a little bit
beyond traditional since it uses what are known as pairings with elliptic
curves, and it provides a great set of features, including unlinkable
proofs. So we have a three-party system of issuer, holder, and verifier.

Greg Bernstein: So the issuer issues a credential to the holder. The holder
can selectively disclose as little as they want and presents that proof to
the verifier. On the cryptographic aspect of the proof, the cryptographic
pieces are unlinkable, and they have what we'd call everlasting or
information-theoretic privacy. So that means even if a quantum computer
comes around, because they're created with new randomness each time, you
can't link them even if you have a quantum computer.

Greg Bernstein: We added pseudonyms to BBS, and we were doing a very simple
calculation where we were taking basically an exponentiation and producing
a pseudonym. So this is a very nice way to do this if people don't have a
quantum computer, which they don't yet, and that would give us
unlinkability; but if a quantum computer comes around, then you would be
able to link people. So a basic solution is not to use a single piece of
random data. Sorry for scrolling; I know it makes me ill when people scroll
their screens.

Greg Bernstein: But you use a vector of secrets, and then you're unlinkable
until you've used the pseudonym with n different contexts, not n
presentations, but n different contexts. You use it to create n
context-specific pseudonyms in n different contexts. And if all those
verifiers collude, then, if you have a quantum computer, you would be able
to figure this out. And so this seemed like a reasonable thing to do. Okay.

Greg Bernstein: Where we're progressing is, one, how to implement this in
the standard, the best way to do this. And then we started thinking about,
wait, for the initial way I put down to do it, other people said you could
do this better, and so I summarized it in an email and I posted this

Greg Bernstein: in one form to the issue list, but not in the nice
mathematics, showing three different ways that we could compute pseudonyms:
where we hash to the curve the context plus an index, times the secrets, or
we can make it look like a polynomial evaluation at the public parameter,
the context.
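
To make the shape of this concrete, here is a toy Python sketch of the
polynomial-evaluation variant Greg mentions. It is an illustrative stand-in
only: the actual draft works over BLS12-381 with hash-to-curve, and every
parameter and name below is hypothetical.

    import hashlib

    # Toy parameters (NOT secure, hypothetical): a multiplicative group mod
    # a prime stands in for the BLS12-381 group the real draft uses.
    P = 2**255 - 19          # field prime for the toy group
    Q = P - 1                # exponents live mod the group order
    G = 5                    # toy generator

    def h_to_scalar(data: bytes) -> int:
        """Hash bytes to a scalar (stand-in for hash-to-curve/scalar)."""
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    def pseudonym(secrets: list[int], context: bytes) -> int:
        """Polynomial-evaluation variant: nym = G^f(c), where c = H(context)
        and f(x) = s_0 + s_1*x + ... Each context reveals one evaluation of
        f, so a future quantum adversary needs colluding verifiers from
        len(secrets) distinct contexts before it can solve for the secrets."""
        c = h_to_scalar(context)
        exponent = 0
        for s in reversed(secrets):      # Horner's rule, mod Q
            exponent = (exponent * c + s) % Q
        return pow(G, exponent, P)

    secrets = [h_to_scalar(b"seed" + bytes([i])) for i in range(4)]  # n = 4
    assert pseudonym(secrets, b"ctx-A") == pseudonym(secrets, b"ctx-A")
    assert pseudonym(secrets, b"ctx-A") != pseudonym(secrets, b"ctx-B")
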
00:10:00

Greg Bernstein: So the goal here is first to have an easy, straightforward
way that's easy for developers. But this gets very close to a lot of the
stuff the ZKP people are doing: they deal with polynomial commitments, and
they deal with polynomial commitments for very large vectors, very large
polynomials, because those have to encode circuits or transcripts and
things like that. And so we want to try and see, for the simple calculation
we do, what enhancements we can look forward to, to keep the size down.
Okay?

Greg Bernstein: Because even with a thousand-context pseudonym vector,
we're only adding 32 to 40 kilobytes to the proof. And since we're using
what are known as Pedersen commitments, the thing that gets signed over by
the issuer is just a single group element, and such like that. So it's
fairly efficient, except the proofs get longer; but they don't look very
long compared to some of the transparent ZKPs. So that's where we're at. We
are going to do some performance evaluations of these different things.

Greg Bernstein: And we did share this with other cryptographers to make
sure that I didn't make any mistakes with these things, so that I didn't
compromise that n-different-contexts property. So that's kind of where
we're going. And then we also see this comparison with these polynomial
commitments. And one thing to take into account, especially since we have
some ZKP folks on the line, is that one approach for producing a pseudonym
that's computationally secure is to just use a hash function to compute the
pseudonym as a combination of the secret and the context,

Greg Bernstein: and then provide a proof, and that would be separate from
our BBS proofs for selective disclosure; we kind of append it to it and
such like that. So we're always interested in feedback from folks like
that, and other people who are interested. For those who are doing stuff
with ZKPs: BBS uses pairings, but a lot of its proof stuff is based on what
are known as sigma protocols, which you're probably familiar with. Any
questions? This was a very quick deep dive. Okay.

Manu Sporny: Ying Tong, you shared a link which I'm reading through that
looks like it has bearing on what Greg is talking about. Would you
mind kind of taking us through your thoughts? Okay.

Ying Tong: Yeah, this was the main document I wanted to present today. So
yeah, I could wait until my agenda item, but in short it's a generic
standard for polynomial commitment schemes and…

Greg Bernstein: Cool.

Ying Tong: is applicable to commitment schemes such as the Pedersen vector
commitments that are used in BBS.

Manu Sporny: Got it. That's great. So yes, let's definitely go through the
rest of this towards the back part of the meeting; that sounds good. Greg,
I've got one comment and one question. I think the path we're still on is
to standardize the vector-of-secrets thing first…

Greg Bernstein: Yes. Yeah,…

Manu Sporny: And even if something better comes along, we have to figure
out if it's going to be accepted by the Crypto Forum Research Group at the
IETF.

Greg Bernstein: those three things I just showed are essentially
equivalent. So Vasilis and I are going to run some performance checks, and
we want to keep an eye out for what's happening with polynomial
commitments and…
00:15:00

Manu Sporny: Mhm.

Greg Bernstein: things like that, and to see that most of this is very
localized to certain parts of the code and procedures. So it shouldn't be
too hard if you want to do an optimization.

Greg Bernstein: So we're just trying to do the homework, make sure the
simplest scheme that we start with is relatively performant. I did have one
more thing to float by any cryptographers or application people. I got a
query from somebody on the more application side about reissuing
credentials that keep the same pseudonym, and I'm going to be bringing that
up. But basically, it's kind of like the holder would prove, when they come
back to the issuer, that they are using the same set of secrets.

Greg Bernstein: So these would be two different randomized commitments, and
they would be able to prove that the secrets underlying them are the same,
and such like that. And I think that would solve that, rather than more
complicated constructions and things like that.

Manu Sporny: Yeah, the only danger to watch out for there, Greg, I think,
is the whole link secret problem.

Greg Bernstein: Yeah, we don't.

Greg Bernstein: That's exactly what we're trying to avoid. It's like:
handle that outside. Don't put link secret stuff in. Don't open another can
of worms. That's a whole different thing,…

Manu Sporny: Good.

Greg Bernstein: Want to try and avoid that. I got that message.

Manu Sporny: All right. And then one kind of heads up: the US Artificial
Intelligence Safety Institute has asked a couple of us to present. This
came out of the work that we did with OpenAI and…

Greg Bernstein: Yes. Yes.

Manu Sporny: their researchers on pseudonyms for individuals, for kind of a
proof-of-personhood credentials use case. Right? So how do you prove that
you're a human online when an AI can do a fairly great job of mimicking
one, and how do we do that in a privacy-preserving way online: that's the
core use case. This work was led by Steven Adler when he was at OpenAI, and
now the US AI Safety Institute, which is a part of NIST, would like us to
come and present on what the latest is on pseudonyms…

Manu Sporny: because they're saying we're the ones that have probably the
best grip on the pseudonym stuff. at this point.

Greg Bernstein: Great. Steven Adler: is he the Tools for Humanity guy we
just heard from, or…

Manu Sporny: So just a heads up, Greg, I might be pulling you into that
discussion. anyone else is, welcome to join. I think I still don't have the
details on it. It was just a fresh kind of request. but even with I don't
know…

Greg Bernstein: somebody different. How does he relate to those guys?

Manu Sporny: who he's with. I know he used to be at OpenAI.

Greg Bernstein: Okay, because…

Manu Sporny: I have no idea. he left and I don't know who he's with now.
yeah,…

Greg Bernstein: because the Tools for Humanity folks were hitting that
uniqueness thing very much, and they brought up Sybil attacks and things
like that. So I thought they hit some of the same things.

Greg Bernstein: They're coming from a biometrics point of view, but it was
very interesting.

Manu Sporny: I didn't think that person was Steven, because Gabe's with
Tools for Humanity now.

Greg Bernstein: No, I couldn't. Sorry. Okay.

Manu Sporny: So Gabe, who we know was one of the chairs of the
Decentralized Identifier Working Group, is now working for TFH. But I don't
think they are concerned about the same problem; they're approaching it in
a slightly different way. TFH is taking a biometrics-based approach,
whereas I think…

Manu Sporny: what we're trying to... at least what the OpenAI paper kind of
warned about was overuse of biometrics when it comes to that kind of thing.
There is also a problem around Sybils and…

Greg Bernstein: There you go.

Manu Sporny: pseudonyms when you have multiple issuers of credentials that
can be used as pseudonyms. Meaning, we know that the problem can be solved
if you centralize everything and have one issuer that issues
proof-of-personhood credentials.
00:20:00

Manu Sporny: Which is a really bad idea; we do not want one centralized
entity issuing whether or…

Greg Bernstein: Not now.

Manu Sporny: not you're a human or not. So when we have multiple different
issuers of personhood credentials, we know that probably the best way to
use them is to do it in a pseudonym-based manner. But if you have multiple
issuers, then you can introduce Sybils into the system based on the number
of issuers there are. So, Greg, I don't know if we have a strong solution
to that problem yet. That's always been the problem. Meaning, okay, how do
you ensure that Sybils don't get into the system at an unacceptable level?
There's always going to be Sybils in a system that has pseudonyms like
that.

Greg Bernstein: Sounds like a good research topic for some of the we do
have them. Okay.

Manu Sporny: But how do you prevent it from getting to an unacceptable
level? I don't think anyone has a good handle on that problem right now.
For NIST, they think we are the ones that are working on it. So that's the
problem: we are kind of working on it. So if anyone has any bright ideas on
that, it'd be nice to report that out to NIST when we talk with them.

Manu Sporny: So that's just a heads up that we'd be presenting, and I think
I'm going to go into that presentation, Greg, with: this is an open
problem. We don't know how to solve the multi-issuer thing. We think we
have a good enough handle on pseudonyms; what we don't have a handle on is
when you have multiple different issuers that are doing personhood
credentials: how do you prevent Sybils in that system? Yeah, and Dave, I
don't know if anyone knows what an acceptable level of Sybils is. I mean,
it changes from use case to use case, right?

Dave Longley: Yeah, I was just saying, since Greg was mentioning research
papers, it would be an interesting question for someone to make some
suggestions around what acceptable levels are, on the basis of how you fill
those gaps when you have them, which traditionally in any other system you
fill with some kind of insurance.

Dave Longley: And so I'm sure some analysis could be performed that would
suggest: if you have this level, this is what it's going to cost you to
fill the gap with insurance, or the like. All right.

Manu Sporny: Yeah. Yep.

Manu Sporny: Yep. Yeah, that's a great idea. So, proposing to NIST that
there are some research things that need to be done, especially around the
Sybil stuff. And I think we could also say there are research gaps here
around snarks and pseudonyms, and postquantum-secure pseudonyms, and things
of that nature. Okay,…

Manu Sporny: Thanks, that was helpful. I'll try to work with Steven to put
together a deck and present it to NIST.

Greg Bernstein: and check in with Anna.

Greg Bernstein: Anna because she wrote the paper back in 2000.

Manu Sporny: Yes, we'll also check in with Anna to make sure. Janabel, I
don't think the meeting is public. It might be; they go back and forth on
that. Some of those are public,…

Manu Sporny: I think some of them are invite only or whatever. …

Greg Bernstein: Yeah, some of the crypto club meetings are invite only,…

Greg Bernstein: but then they publish, because they wanted to keep it small
enough, and once you present, then you get invited to the club. But that
hasn't been happening as much this year because of issues.

Manu Sporny: Yep. Yeah, if it is public, I'll share it and we'll go from
there. Certainly everything that we're talking about here is public, and
we'll eventually be trying to solve the problem in this group, so that
discussion will be public. Anything else on BBS, Greg, that you want to
cover before we move on?
00:25:00

Greg Bernstein: Only thing is, for anybody who is doing anything with Rust:
I've been trying to get my Rust chops up so I can use some of these
libraries. There's one called arkworks, and it's academic code, but it's
pretty good with BLS12-381, and they have a bunch of polynomial commitment
schemes. So, I'm trying it. Thank you, Ying Tong; this is the place I
should be looking. Good. I'm working on it. It takes a while to get a new
language going,…

Greg Bernstein: but I noticed a lot of cryptographers like it, and so I'm
working on it. Anybody who wants to check in with me as I go through this
learning curve is welcome. Thanks.

Manu Sporny: All right,…

Manu Sporny: Thanks, Greg. Moving over to the next agenda item, which is
the quantum-safe data integrity crypto suites. Let me go ahead and share my
screen here. Will, who I don't think is here today, raised a pull request
to add the new quantum-safe algorithms that we decided upon a couple of
weeks ago. I think we're still trying to figure out how to name these
things, the key types.

Manu Sporny: I think Will still needs to add a couple more things to this
before we merge it, so I'll ping Will on that. I haven't seen any other
reviews since the last time we discussed it. So once Will gets those things
in, we'll go ahead and... go ahead, Dave. The latter, I think. Let's signal
that, hey, we are working on a number of postquantum crypto suites: so
we're definitely doing ML-DSA, SLH (this should be SLH-DSA),

Dave Longley: I did take a quick look at it yesterday or something. Before
we merge it, did we want to try and make it a little more reusable than it
is now? There's still some duplication and so on in it. Or did we want to
merge and then editorially change that later? Okay. Yeah.

Manu Sporny: Falcon, and SQIsign. No,…

Dave Longley: The other comment I was going to make is: the names used on
those lines, 512 and 513, don't necessarily match what's in the algorithms
down below. Again, I didn't know if we were just going to go ahead and
merge, or if we're going to try and fix all that.

Manu Sporny: I mean, if there are changes that you see need to be made,…

Manu Sporny: suggest them as changes. But we're not trying to get this
perfect before we merge, is…

Dave Longley: Yeah. Yeah.

Dave Longley: I didn't want to hold it up.

Manu Sporny: what I'm trying to say. Yeah. Yep.

Dave Longley: And if it's going to be deduplicated, it might not be worth
making all those little changes.

Manu Sporny: Yeah. Yeah. Correct. So any kind of algorithm stuff I think
you can skip, but certainly the algorithm identifiers we probably want to
get right; ideally we pick these prefixes now, though we don't have to.
Anyway, I think what we're trying to do is get something in there, and then
we'll iterate on it after it's in there. Okay, I'll ping Will to make sure
he is able to make some progress on that. And then let's move on to our
next topic. Any questions on the postquantum stuff before we move on? All
right.

Manu Sporny: Then it's going to be over to you, Ying Tong. Do you want me to
screen share or are you going to screen share? What would you prefer?

Ying Tong: I'll try and screen share. But yeah… that doesn't work. Sorry.

Manu Sporny: Sorry, Ying Tong, you've got a fair bit of background noise; I
didn't quite make that out. Let me go ahead and share. I think Ying Tong's
on mute.

Ying Tong: Sorry about that.

Manu Sporny: Yes, I think so.

Ying Tong: Yeah, Manu, you can go ahead. Wait.

Manu Sporny: Okay, let me stop.

Ying Tong: Hold on. I think
00:30:00

Manu Sporny: Let's see. Yeah, we can see that.

Ying Tong: So just to conceptualize …

Ying Tong: what I'll be presenting today: in the past three meetings we
have discussed a possible integration of ZK-SNARKs into data integrity as
an instantiation of derived proofs, like the Google people are doing,
because they can be used to derive a verifiable presentation given a
verifiable credential as an input. So without having to modify the
underlying ECDSA signature, we can generate a proof over it for arbitrary
predicates such as selective disclosure or

Ying Tong: age verification. So our approach towards standardizing
ZK-SNARKs is modular: we're breaking it down into separate components,
namely arithmetization, the interactive oracle proof, the polynomial
commitment scheme, and the Fiat-Shamir transform. The most manageable
component out of these is the polynomial commitment scheme, and today I'll
be presenting a draft spec I've been working on and asking for feedback and
comments. Also, just stop me whenever you have questions.

Ying Tong: So Greg mentioned just now Pedersen vector commitments; those
are slightly different from polynomial commitments. Any polynomial
commitment can be used as a vector commitment by encoding the vector
elements as polynomial coefficients, but it is not the case that any vector
commitment can be used as a polynomial commitment, because a polynomial
commitment has to preserve the underlying structure of the polynomial: its
variables, its powers. And so my plan is to quickly walk through this draft
and to highlight some questions I have about the best way to include
additional information such as cipher suites and descriptions of efficiency
and security. At a high level, a polynomial commitment scheme allows us to
commit to a polynomial and then later to prove that we've evaluated the
polynomial correctly at a given challenge point, and this is a building
block in most modern ZK-SNARKs.
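
As a quick illustration of that vector-to-polynomial encoding, here is a
toy sketch; the field prime and values below are hypothetical, and real
systems often use the evaluation (Lagrange) encoding instead, so that
position i of the vector is simply f(i).

    def eval_poly(coeffs: list[int], x: int, q: int) -> int:
        """Evaluate f(x) = coeffs[0] + coeffs[1]*x + ... over Z_q (Horner)."""
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % q
        return acc

    q = 2**61 - 1                  # toy field prime (hypothetical choice)
    vector = [12, 34, 56]          # the vector becomes the coefficient list
    assert eval_poly(vector, 0, q) == vector[0]   # f(0) recovers v_0
    # A bare vector commitment cannot prove evaluations of f at arbitrary
    # challenge points, which is exactly what a PCS must support.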

Ying Tong: This is how we get succinctness in the snark, and this is also
where we introduce cryptographic hardness assumptions. So Pedersen
commitments assume discrete log, and there are also hash-based schemes that
only assume hash functions. What I've done is go through a bunch of popular
implementations of polynomial commitment schemes.

Ying Tong: I picked libraries such as arkworks, which already provide a
natural abstraction and interface over several different PCSs, and I also
went through the academic literature and converted it into a form that's
more practical for developers and more useful for standards. So the bulk of
this document describes the generic interface that a polynomial commitment
scheme needs to expose. You can see here that we start with a setup phase,
whereby we can specify the parameters that we need for this polynomial
commitment scheme.
00:35:00

Ying Tong: Usually we target 128 bits of security for most protocols in use
today. And the setup phase has been written here to be general over both
trusted setups and transparent setups. So for example the KZG polynomial
commitment uses a trusted setup, but, say, FRI, when used as a commitment
scheme, does not require a trusted setup.

Ying Tong: All right. This also generalizes over both univariate and
multilinear, and in theory multivariate, polynomials. Most schemes in use
today use either univariate or multilinear polynomials. For univariate
polynomials we would just set the number of variables to one; for
multilinear polynomials we would set the degree bound for each variable to
be one. There are some schemes that make use of bivariate polynomials, so
it's good that we have this be very general; we don't know what other
multivariate schemes might emerge.
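
For reference, a rough Python rendering of the kind of generic interface
being described; the type and method names here are guesses for
illustration, not the draft's actual definitions.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class PCSParams:
        security_bits: int        # e.g. 128, the common target today
        num_vars: int             # 1: univariate; >1: multilinear/multivariate
        degree_bounds: list[int]  # per variable; all ones means multilinear

    class PolynomialCommitmentScheme(ABC):
        @abstractmethod
        def setup(self, params: PCSParams) -> None:
            """Trusted (e.g. KZG) or transparent (e.g. FRI) setup."""

        @abstractmethod
        def commit(self, poly) -> bytes:
            """Commit to a polynomial; returns an opaque commitment."""

        @abstractmethod
        def open(self, poly, point) -> tuple[int, bytes]:
            """Return (evaluation, opening proof) at a challenge point."""

        @abstractmethod
        def verify(self, commitment: bytes, point, value: int,
                   proof: bytes) -> bool:
            """Check an opening proof against a commitment."""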

Ying Tong: I'm going to stop here for a question.

Manu Sporny: I've got a couple. Go ahead, Greg.

Greg Bernstein: I had two questions, or one point to make, but first a
question. These are kind of powerful techniques, and I know they're kind of
lower level, but I was wondering: given our flexibility with how we can
process credentials, might we be able to get closer to using these things
with our credentials, with less additional stuff?

Ying Tong: It's not Yeah.

Ying Tong: Yeah.

Greg Bernstein: I mean, this kind of came to me when I started looking
at these for my pseudonym issue. So that's one thing to keep in mind. The
other one is while we may not be the place to standardize this, this could
be a CFRG type item. So you can contact me and some other folks about
bringing stuff in to the CFRG which is a different procedure.

Ying Tong: So if we could get the issuer to canonicalize its credential in
a way that's friendly to polynomial commitment schemes, we could even use
polynomial commitment schemes in the canonicalization step. So for example,
have the issuer sign a Merkle commitment to the messages; more correctly,
that would be a vector commitment scheme. But yeah,…

Ying Tong: definitely there are formats that are friendlier and
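
A minimal sketch of that idea, assuming a SHA-256 Merkle tree over an
ordered list of canonicalized attribute strings (all names and values below
are hypothetical; the issuer would sign the resulting root).

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        """Merkle root over an ordered, canonicalized message list."""
        level = [h(b"\x00" + leaf) for leaf in leaves]  # domain-separate leaves
        while len(level) > 1:
            if len(level) % 2:                # duplicate the last node if odd
                level.append(level[-1])
            level = [h(b"\x01" + level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    messages = [b"name=Jane Doe", b"height=170cm", b"hairColor=brown"]
    root = merkle_root(messages)   # the issuer would sign this root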

Greg Bernstein: So we might be able to cut out some of the middleman and…

Greg Bernstein: use some of these techniques more directly because a lot of
times I notice that these come at the end of a lot of IOPs or various
things that I don't fully have my head around yet…

Greg Bernstein: so just curious

Ying Tong: Yeah. …

Ying Tong: if we could get a nicer data format as input to the snark, we
would be able to take out a lot of parsing that is very unnatural to the
arithmetic circuit. So I was also targeting IETF or CFRG, and I was at the
same time wanting to get feedback from the community group,

Ying Tong: because the Google folks said it would be very helpful to
integrate with data integrity and existing standards. I heard another hand
go up.
00:40:00

Manu Sporny: Yeah, that was mine. Ying Tong, a comment and a question. So,
the initial comment is, agreeing with Greg here. This feels like something
that belongs in the CFRG,…

Ying Tong: I'll give you two It's okay.

Manu Sporny: meaning the mechanism seems fine; it's good. It's a
cryptographic primitive, and because it's a cryptographic primitive we
probably want to send it through the CFRG. But as you mentioned, we want to
be able to use it for the verifiable credential stuff, and I think we are
highly confident that we can do a bunch of canonicalization and processing
to make the polynomial commitment more efficient,

Manu Sporny: or just make sure that the input is friendlier to a polynomial
commitment. So I'm fairly confident we can do all that. Two different
comments. The first one: it is going to help you a lot, and it's going to
help all of us a lot, if the introduction talks about what stage in the
process we expect this polynomial commitment to happen at.

Manu Sporny: So the introduction really needs to talk about …

Ying Tong: Yeah. I think it's

Manu Sporny: where this spec fits in, right? So you know how we have this
diagram in the data integrity specification where we talk about the stages
of processing: you have your input document, then you put it through
canonicalization, then you do some hashing operations, and then you do the
digital signature, and out pops the digital signature, the proof. Exactly
what you have on screen. I think we're going to have to put that in each
one of these cryptographic primitives; you're probably going to want this
type of explanation block in there to say: this specification covers the
third block, the polynomial commitment scheme. Right?
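
For concreteness, a minimal Python sketch of those stages; the
canonicalization and signature below are placeholders, not any real data
integrity suite, and the polynomial commitment spec would slot into the
final proof step.

    import hashlib
    import json

    def canonicalize(document: dict) -> bytes:
        # Stand-in for a real canonicalization (e.g. RDF Dataset
        # Canonicalization); sorted-key JSON is just for illustration.
        return json.dumps(document, sort_keys=True,
                          separators=(",", ":")).encode()

    def hash_data(canonical: bytes) -> bytes:
        return hashlib.sha256(canonical).digest()

    def sign(digest: bytes, private_key: bytes) -> bytes:
        # Placeholder "signature": a real suite would apply ECDSA, BBS,
        # or a snark here.
        return hashlib.sha256(private_key + digest).digest()

    doc = {"name": "Jane Doe", "height": "170cm"}
    proof_value = sign(hash_data(canonicalize(doc)), b"demo-key")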

Manu Sporny: So, make sure that's in your introduction. Otherwise, people
are going to be lost when they're reading it and you're going to get a
whole bunch of commentary back that's probably going to be unhelpful. So,
that's the first comment is make sure the introduction talks about where in
the stack this specification fits.

Ying Tong: You have to keep it.

Manu Sporny: Ideally, you also want to point to the other specifications
that fill out the other blocks, so that people know that this is a full
solution. You may even want to point up at the data integrity specification
that uses this, to help people understand that there's a concrete
realization of the specification. Okay, that's the first comment.

Manu Sporny: The second comment is that I think we are going to need some
concrete test vectors at the verifiable credential layer.

Ying Tong: Yeah,

Manu Sporny: So I think we need to... let me kind of give some background
on the thinking here. When Abhi and Google were doing their presentation on
their snark-based approach, they had some presumptions about the input that
were really important. For example, the credential assumption is that it's
a 2.4 kilobyte credential. That's a bad assumption to make for verifiable
credentials, because you can have input credentials that are 5 megabytes in
size.

Manu Sporny: So what we're doing here needs to take that into
consideration. Using canonicalization and specific types of credentials, I
think we can get down to 400 to 800 bytes for the credential itself, but we
could also have inputs that are five megabytes in size. So what does that
do to the derived proof and the snark output? We need a small credential as
an example and a very large credential as an example, to kind of understand
how performance changes between those sorts of inputs.

Ying Tong: Let's see.

Manu Sporny: The other thing that we probably should consider, which is
what Greg said, is this is probably the wrong word, but there are ways to
cheat and make the solution look really good, right? So, one of the ways to
cheat is …
00:45:00

Manu Sporny: we're going to do a credential that only has three attributes
in it, only has one attribute in it. and look how powerful and small and
tiny and all that kind of stuff That's not completely unrealistic when it
comes to the verifiable credentials which are usually like a driver's
license or something of that nature.

Ying Tong: I don't think

Manu Sporny: And so there is a mechanism that we could use in this group.
Sorry, let me go back. For a verifiable credential that's using data
integrity, when we issue it we can issue the base credential, which is a
set of attributes: the person's name, their height, hair color, that kind
of stuff. We encode all those attributes into the credential, and there are
about 35 in a driver's license, and there can be hundreds to thousands in
some of these other credentials, and…
Ying Tong: Can I have

Manu Sporny: so we need to also take that into account. Those are realistic
credentials, right? We don't typically have credentials that only have one
or two attributes; we have credentials that have tens to hundreds of
attributes.

Manu Sporny: We have a mechanism. When we issue that credential, if we do
like an ECDSA and a postquantum signature and a snark-based signature,
we'll issue the same set of attributes, 30 of them, but each proof on the
credential will have a totally different kind of cryptographic mechanism
attached to it. And I think one of the things that we need to think about
in this group is, if we're going to do a snark-based approach, we might
need to down-select the attributes that we're actually securing in the
snark. So one of the tricks that we can do with the data integrity work is
we can take a credential that has 30 attributes, and potentially we can
encode the specific attributes that we're actually allowing to be presented
in zero knowledge, which allows us to then highly optimize the
cryptographic circuit for those three attributes.
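
As a sketch of that down-selection (the attribute names and counts below
are hypothetical):

    # Hypothetical 30-attribute credential; the issuer marks a small subset
    # as presentable in zero knowledge, and only that subset feeds the
    # snark circuit. The rest stay bound via the other proofs (e.g. ECDSA,
    # postquantum) carried on the same credential.
    credential = {f"attr{i}": f"value{i}" for i in range(30)}
    zk_presentable = {"attr0", "attr3", "attr7"}   # issuer's down-selection

    circuit_inputs = {k: v for k, v in credential.items()
                      if k in zk_presentable}
    assert len(circuit_inputs) == 3    # circuit optimized for just these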

Manu Sporny: So I think that needs to be a part of at least the high-level
data integrity specification. We need to make a decision around whether
we're allowing down-selection of attributes: do we allow issuers to
down-select attributes in order to optimize the circuit

Manu Sporny: if we have no other choice, or do we allow, more flexibly, a
snark to be created over 30 attributes or 100 attributes or something like
that. I know that was a lot, Ying Tong, but I think it basically boils down
to: we need a diagram like this in the work that you're doing, so that
people understand where each spec fits in. And…

Ying Tong: Thank you.

Ying Tong: Yeah. Hi.

Manu Sporny: then the second thing is we need concrete examples of
realistic verifiable credentials: a driver's license and whatever. We have
to think of a couple, a small one and a large one: a small credential,
three attributes maybe, and one that has 100 attributes and is five
megabytes in size. I think we need to see how the algorithms do against
those inputs.

Manu Sporny: And finally, I think we need to figure out if we're allowing
down-selection of attributes. So if you have a 50-attribute credential, the
snark approach allows you to say: I'm only protecting three of these
attributes as unlinkable things. Let me stop there. Ying Tong, did that
make sense? Did you have any questions on any of that?

Ying Tong: Yeah, that makes a lot of sense. Thank you. My first question
here is: where in the draft do you think we should include these test
vectors and benchmarks? Absolutely.

Manu Sporny: Typically, those are at the end of the document, at the very
bottom. So you'll have an appendix and it'll say "test vectors". I can't
think of one off the top of my head, but if you look at most IETF specs,
and the W3C specs are the same way, they have all the test vectors down at
the bottom of the spec as an appendix. Same thing for,…

Manu Sporny: size analyses, things of that nature.

Ying Tong: Okay, thank you.

Ying Tong: Very helpful. So I think I have seven minutes left. What I'm
going to do is just skim the rest of the generic interface. And yeah, I'm
hoping to get comments and questions from folks here. So for example: is
there anything else this specification should include? Are there any edge
cases or instances? Are there any reference implementations I should look
at? Any other comments, high level or detailed, would be welcome. So, after
the setup phase,
00:50:00

Ying Tong: we can instantiate a specific pair of keys for the commitment
prover and the commitment verifier, and these keys are used in creating
commitments and opening proofs, and in verifying opening proofs,
respectively. So, I heard a

Manu Sporny: Yeah, just feedback on this: you will need algorithms in here,
and you will need to spell out exactly what the algorithms do, beyond what
you have here. So this is great: you're defining the interface, you're
defining the input and the output. But each function also has to
painstakingly document

Manu Sporny: what the algorithm is: how would someone read through the
specification and implement it in code? It doesn't have to be bit-exact
pseudocode, but you will need the algorithms for each one of these
functions.

Ying Tong: So yeah,…

Ying Tong: that's the thing: this is a generic interface, and each
instantiation of it will use a different underlying algorithm. So for
example, a KZG commitment versus one that just uses Merkle hashes;
depending on the underlying cryptographic primitive, the algorithm is going
to look completely different.

Manu Sporny: Yeah, plus one to that; understood. I guess the question is:
are you going to add some of the specific algorithms into the
specification? The reason I ask is that the IETF does not like doing
meta-specifications; they want to know exactly
Ying Tong: And it's Yeah,…

Manu Sporny: what the binary inputs and outputs are going to be for a very
specific algorithm. So it is good that you have defined it in this way and
you have a generic interface, but you will also need to get, towards the
bottom, to the exact concrete algorithm and what the steps are there. Did
that make sense?

Ying Tong: I mentioned here that it's difficult for us to pick a cipher
suite, because lots of the underlying primitives are not specified. So,
let's say I wanted to do KZG commitments: I don't have a pairing-friendly
curve standard that I can use. And yeah, I need to think more about how to
structure this. You're right in that it's a meta-spec, and we do intend to
prepare reference implementations and examples for specific instances of
it. I think the question is where in the draft we should mention these.

Ying Tong: So it could be the case that we include them in the appendix,
but I think my point here is that this is a meta spec and it's not going to
prescribe any particular instantiation. I think the most we could do is
list a few concrete examples.

Manu Sporny: Yeah. and…

Manu Sporny: just so you know, I don't agree with the way the IETF operates
in this capacity, but you will be hard-bounced out of CFRG if you bring in
a meta-spec.

Ying Tong: I say I want to Welcome.

Manu Sporny: They'll just reject it outright. They'll just basically be
like, "Come back when you've got something that we can implement," and they
can unfortunately be quite rude about it. So I think what you're calling
examples is the thing that they have to see, right? And I completely get
it: there isn't a pairing-friendly curve spec for the KZG commitment stuff.
00:55:00

Manu Sporny: What that means is that you will have to write that for it to
really go anywhere at CFRG, or you will have to find alternate mechanisms
where you can spec a good chunk of it in this specification. So it's fine
to have a meta-spec, but that meta-spec must also include at least one very
concrete algorithm and…

Manu Sporny: and set of things or it has to point to something else like
another draft that is very concrete about the implementation of that. Did
that make sense?

Ying Tong: Yeah. Yeah.

Ying Tong: Thank you for the advice. Yeah, we have no shortage of concrete
implementations, and I will think more about how to structure it. Yeah, I
agree this is a new approach with meta-specs…

Ying Tong: but I feel like spiritually it's more compatible with W3C; for
example, with data integrity there's like a BBS version and…

Manu Sporny: Yes. Yeah.

Ying Tong: So structurally it's more compatible with that, but maybe the
actual subject of it is more under the remit of

Manu Sporny: And the reason for that is, as strange as it sounds, there is
a cultural difference in the way the IETF and CFRG do cryptographic
standards and the way W3C puts them together. That's the difference, right?

Manu Sporny: So I mean, the reason we did the data integrity work at the
W3C is because it couldn't have been done at the IETF, because of the
cultural differences there. And so when you create these low-level
cryptographic primitives at W3C, I won't say they won't standardize it…

Manu Sporny: but everyone points to the CFRG and says you've got to go over
there. In doing that, you've got to change the way you write the
specification into a format that they accept culturally,…

Ying Tong: Not yet.

Ying Tong: Okay.

Manu Sporny: So there's nothing wrong with meta-specs and having other
specs out there.

Manu Sporny: I'm just saying, culturally, at IETF they'll hard-bounce
meta-specs out unless you also, in the meta-spec, specifically implement a
concrete realization of the interface. Or you can come in with two
different specifications, one of them kind of meta and one of them
concrete, but they're more likely to bounce that out. So I'd say just start
off with a concrete realization of the interfaces, and that will probably
be more likely to be accepted.

Dave Longley: Yeah, and I was going to say: you can pick the simplest one,
whatever is easiest to write pseudocode for. All you need is at least one
concrete thing that could be implemented, and people can see where you
could slot in other choices.

Ying Tong: Okay, thank you.

Ying Tong: Okay, got it.

Manu Sporny: All right.

Manu Sporny: With that, we're over time, but this was great. Ying Tong,
wonderful work as always; really excited to see the next iteration on that.
We will meet again next week. Ying Tong, if there's anything else that you
want to cover next week, please let us know, and then Greg will also check
in with BBS and hopefully have some updates on the postquantum stuff. And
clearly, if there's anything else anyone wants to cover, we can do that as
well. All right, thank you everyone. Have a wonderful weekend and we will
meet again next week. Take care. Bye.

Janabel Xia: Thank you.
Meeting ended after 00:59:45 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 9 May 2025 22:05:26 UTC