[MINUTES] Data Integrity 2025-05-16

W3C Data Integrity Community Group Meeting Summary - 2025/05/16

*Topics Covered:*

   1. *W3C Verifiable Credential 2.0 Specifications:* Announcement that
   several specifications, including Data Integrity 1.0, became global
   standards. The working group will take a short break before focusing on
   point releases and addressing potential vulnerabilities.
   2. *Post-Quantum Pseudonyms with Everlasting Unlinkability:* Greg Bernstein
   presented performance results of three implementations aiming for
   unlinkability even with quantum computers. The polynomial evaluation method
   showed the best performance (around 4-8 seconds depending on the task,
   using JavaScript on a non-optimized laptop), though performance improves
   significantly with fewer secrets (e.g., 100 instead of 1000). The group
   discussed the trade-off between the number of secrets and the level of
   protection needed. The time required for issuer verification was identified
   as a key area for potential optimization, though techniques requiring new
   cryptographic standards weren't considered for this immediate iteration.
   Pre-computation of proofs by the holder was also discussed as a potential
   optimization.
   3. *ZKP Approaches for Data Integrity:* Ying Tong presented an updated
   draft specification for polynomial commitment schemes, seeking feedback on
   its structure and level of detail. The group suggested structuring it
   similarly to W3C specifications, with a generic interface and specific
   examples (commitment suites). The possibility of integrating this work with
   the BBS signature scheme and exploring its potential for creating more
   efficient post-quantum unlinkable mechanisms was discussed. A focus on
   experimentation and benchmarking various polynomial commitment schemes to
   achieve significant speed improvements was suggested as a valuable next
   step. The implications for canonicalization were also discussed.
   4. *Post-Quantum Crypto Suites Specifications:* The group agreed to
   continue the discussion on naming conventions for public key identifiers in
   a future meeting.

*Key Points:*

   - Several W3C Verifiable Credential specifications, including crucial
   data integrity work, achieved global standard status.
   - A performant implementation for post-quantum pseudonyms with
   everlasting unlinkability was demonstrated, though further optimization is
   possible.
   - The number of secrets used in the pseudonym scheme needs to be
   carefully considered based on security requirements and performance
   constraints.
   - The specification for polynomial commitment schemes needs further
   refinement, potentially with a generic interface and concrete examples.
   - Focus on the practical implications and implementation of polynomial
   commitment schemes in the context of verifiable credentials (specifically
   improving efficiency by potentially several orders of magnitude) was
   prioritized.
   - Integrating polynomial commitment schemes into the BBS specification
   and experimenting with ZKP-based approaches for data integrity are
   potential next steps.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-05-16.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-05-16.mp4
*Data Integrity - 2025/05/16 09:58 EDT - Transcript* *Attendees*

Dave Longley, Eddie Dennis, Geun-Hyung Kim, Greg Bernstein, Hiroyuki Sano,
John's Notetaker, Manu Sporny, Parth Bhatt, Tom Jones, Tom's Notetaker,
Will Abramson, Ying Tong
*Transcript*

Manu Sporny: All right, let's go ahead and get started. Welcome everyone to
the Data Integrity call. This is May 16th, 2025. We have a fairly
straightforward agenda today. Greg is going to take us through the
post-quantum pseudonyms with everlasting unlinkability protections work that
he's been doing. He had to implement that in a variety of different ways
and has some performance numbers for us.

Manu Sporny: Good news is that we've found at least one mechanism that's
fairly performant and fairly straightforward to implement, and doesn't require
any new advanced fancy crypto beyond what BBS already provides. So he will
hopefully be here today and take us through that. We will also continue
with Ying Tong's work on the ZKP approaches for data integrity, as well as
seeing if you've got any updates to the post-quantum crypto suite
specifications.

Manu Sporny: I think one of the things there that we need to figure out is
we need to pick some names for these public key identifiers. And the
challenge here is that no names have been picked yet. For some of them they
have them; for others, like the isogeny stuff, they don't. So we'll need to
have a discussion about that. That is our agenda for today. Are there any
other additions or updates to the agenda? Anything else that we need to
cover today? All right.

Manu Sporny: If not, we can review some community updates and then after
that get into the agenda — with Greg, the BBS performance work that you've
done, first up, then Ying Tong's work, and then the post-quantum
crypto suite specs. We will probably have some questions on the PR that
you raised, and if you need any clarifications from the group, just a
heads up that we'll talk about that. General community announcements: I
think folks saw, but the W3C Verifiable Credential 2.0 specifications —
seven of them.

Manu Sporny: So there's a family of specifications — some of them including
the data integrity work — that became global standards yesterday. So W3C put out
this press release about all the specifications that were published.
There's a nice write-up here; the family of recommendations is also written
up on the W3C news page. So this includes global standards for VC 2.0, Data
Integrity 1.0, the ECDSA crypto suites, the EdDSA crypto suites, the JOSE/COSE
securing mechanisms, Controlled Identifiers, and Bitstring Status List. So
quite a large number of specifications pushed through. What that means is
that the work that we're doing in this group has a very solid base now.
They are global standards.
00:05:00

Manu Sporny: We are building on top of data integrity. We are doing better
unlinkable crypto suites through BBS, and we are also doing post-quantum
crypto suites through the quantum-safe crypto suites. So that's good news.
Next steps for this work: the working group is going to take a little bit of
a breather for a couple of weeks, and then we will turn the crank on all of
the 1.1 and 2.1 specifications. So, we'll move on to the point releases.
We'll start making any editorial updates.

Manu Sporny: and that'll open us up to being able to address any bara or if
somebody finds severe vulnerability we'll be able to move very quickly to
fix that as well. okay I think that's it for at least announce good work to
all of you that were involved in that work. that was at least three years
of active working group time and there were two years before that we were
also working on it. So that's five years of our collective lives put into
that work. and we already know that this stuff is going out in production
deployment soon.

Manu Sporny: So, it's not like we're going to have to wait very long. There
have been people that have been integrating this stuff the entire time, and
we will see some production deployment announcements probably in the next
couple of months. Okay, that's it for that work. Going back to our agenda,
this first one's over to you,…

Manu Sporny: Greg, to give us kind of an update on how the implementations
for the everlasting unlinkability for pseudonyms work are going.

Greg Bernstein: Okay, let's see…

Greg Bernstein: if I can present and let's see if it's gonna let me select
window or screen.

Greg Bernstein: So to remind everyone, we are trying to make sure, in case of
a quantum computer, when it happens, that the use of pseudonyms is similar
to the use of BBS in general, in the sense that you can't link even if
there's a quantum computer.

Greg Bernstein: The quantum computer could allow somebody to forge a BBS
signature, they could forge an ECDSA signature, but they won't be able to
link the pseudonym's uses across contexts. And to enable that, we need the
simplest solution. And this is what we call information-theoretic or
everlasting: it doesn't matter about computation. We use a vector of
secrets. The advantage of this is it doesn't need any new cryptography.
Okay.

Greg Bernstein: To remind folks of the technical theory behind this: it uses,
for its zero-knowledge proofs, a technique called sigma protocols; in
addition, for some of it, it uses some stuff called pairing cryptography. But
the zero-knowledge proof part is mostly based on something known as the
sigma protocol. There's a theory about how you can prove
arbitrary linear relations in the group. This is an excerpt from the book
written by Boneh and Shoup that's available freely online. Okay.
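
For readers following along, a minimal sketch of the kind of statement a sigma
protocol proves here, in the additive notation Greg switches to below; this is
the generic linear-relation proof from the Boneh-Shoup text, not the exact
relation in the pseudonym draft (that relation is in the issue Greg links
later):

    C = x_1 G_1 + x_2 G_2 + ... + x_n G_n        (public C, G_i; secret x_i)
    T = r_1 G_1 + ... + r_n G_n                  (commitment, random r_i)
    c = H(context, C, T)                         (Fiat-Shamir challenge)
    z_i = r_i + c x_i                            (responses, one per secret)
    check: z_1 G_1 + ... + z_n G_n = T + c C     (verifier's equation)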

Greg Bernstein: And what we said we were going to do last time was compare the
initial way of coming up with this vector of pseudonyms to two other proposals
that are equivalent-ish, in the sense that they use — or can be cast as — one
of these arbitrary linear relations, even if they have different performance
computationally. So my initial thing that I came up with looks just like this.
Okay.
00:10:00

Greg Bernstein: So here we have a product of a bunch of things, and the only
difference here is whether we're using sum or product notation, depending on
how people work with the elliptic curves and groups. So just translate the
product to a sum; hash-to-curve generates these different g things, okay, and
then the vector of secrets would be these x_i corresponding to that. So that's
a very direct interpretation that I did initially. Another way of looking at
this that could be more optimal is we look at it more like evaluating a
polynomial.

Greg Bernstein: So I wrote up these things and put them on the issue page,
so I won't go through how this can be derived. The main thing to know is it's
still a linear relation, meaning it's linear in terms of the coefficients or
the values that we're using here. And so even though this polynomial
evaluation thing looks a little bit more complicated — look, there's these
powers of this thing — it's the secrets that it's linear in. And then there
was one more case that we did where we said let's come up with a bunch of
different values via hashing to a scalar rather than hashing to curve.
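
A rough sketch of the three variants being compared, using placeholder
notation that is only one reading of the descriptions here (the exact
constructions are in the issue Greg mentions); x_1..x_n are the secrets, ctx
is the verifier context, and all three are linear in the secrets:

    1) Hash-to-curve (indexed generator) variant:
         G_i = HashToCurve(ctx, i)
         Nym = x_1 G_1 + x_2 G_2 + ... + x_n G_n
    2) Polynomial-evaluation variant (single fixed generator G):
         a   = HashToScalar(ctx)
         Nym = (x_1 + x_2 a + x_3 a^2 + ... + x_n a^(n-1)) G
    3) Inner-product variant:
         a_i = HashToScalar(ctx, i)
         Nym = (x_1 a_1 + x_2 a_2 + ... + x_n a_n) G

The polynomial and inner-product forms replace the n hash-to-curve operations
and the large multi-scalar multiplication with scalar arithmetic plus a single
group operation, which appears to be where the speedup Greg reports below
comes from.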

Greg Bernstein: We call that the inner product case. Now, we performance-tested
these three cases. What they don't change is the computation used by the person
committing. So the holder who commits to this vector of secrets does not reveal
them, but they commit to the vector of secrets and they provide a proof of the
vector of secrets. So what we're seeing right here is the commit, okay, to the
vector of secrets, and we're seeing three different runs. These are all in
milliseconds. This is a thousand secrets. This is JavaScript — not purposely
written to take a long time, but not optimized, because it's JavaScript.

Greg Bernstein: And we see that we broke out some of the pieces. There's a
piece where we prepare generators — that took a second and a half or so.
There's the commit time and there's the proof time. So, when we add those
together, we're looking at five, six seconds. Okay. This can be prepared
ahead of time; this does not need to happen quickly. Then there's the signing
by the issuer: once again they have to prepare the generators, and we say they
all need the same amount of generators, and that's cachable. We can take that
off the table; they don't need to do quite so much work. Okay, so instead of
six seconds, we're seeing about four seconds, right?

Greg Bernstein: So okay, they don't have any choice in the matter, because
this is just a matter of how big that vector is. Our method does not have
anything that simplifies that; they are doing a verification of the proof
that the holder really has this set of secrets and knows them. Then it comes
back to the holder, who does a verification of the signature. They have to do
some of this processing because they're getting back some information.

Greg Bernstein: We could reduce this out, because in verifiable credentials,
in a lot of our suites, we don't have the holder verify the signature back
from the issuer. They can if they want to. So this one step takes some time.
Now we get to the different variations. So these are the cases where the
holder is going to prepare a presentation, and they're preparing a
presentation within a new context for a new verifier. They don't have anything
cached — well, they can cache these generator things.
00:15:00

Greg Bernstein: They have their secrets that they got with their signature
back, that they used. What we saw with the indexed-generator case is there's
a basic piece that took about five seconds and another piece that took
about six seconds. And in here we are calculating the pseudonym and
providing some other stuff, and the proof of the nym. So we were seeing six
extra seconds. This is a case. Okay.

Greg Bernstein: With the polynomial evaluation case, that six extra seconds
turned into seven to eight milliseconds. So that hash-to-curve plus the
exponentiations in the curve — that takes a lot of time. So the method that
one of our colleagues, Vasilis, suggested basically turns that part of the
computation into almost a non-entity. We're still stuck with the 5
seconds. But polynomial evaluation — we also checked the inner product
approach. Was that a question?

Manu Sporny: Yeah, the five seconds.

Manu Sporny: So, I'm getting a bit lost in what the issuer is having to
do and what the holder is having to do. In the polynomial evaluation case, are
we still looking at — who's doing the proof in it, I guess, is the question.

Greg Bernstein: proof generation.

Greg Bernstein: All the proof generation is being done by the holder. When
we look at what the issuer has to do, this blind sign with nym — sorry,
everything is jumping around; I am on an old laptop — what they're spending
their time doing is really validating the proof that the holder knows those
secrets, because it's very long. Right.

Greg Bernstein: Okay, they've got a thousand secrets. So they have an
extra — now, of course, they are in the cloud, they're on a good server, they
should not be using JavaScript — but it comes out to an extra 3 seconds or
so. Advanced techniques could maybe shorten this, things like Bulletproofs and
such like that, but those are not standardized. So the approach I'm showing
you here is no new cryptography, using techniques that are well known, so
we don't have to write a new standard for an advanced technique. I'm not
saying that we can't do that, but I'm just saying if we're trying to get a
version of pseudonyms out there that offers this extra protection. Okay.

Greg Bernstein: So the burden for the issuer is that they have to check
that proof. Okay, that's what's happening with them. They have to
deserialize the proof and…

Greg Bernstein: check it. That's where this extra time comes from. They
don't Okay.

Manu Sporny: Right. Got it.

Manu Sporny: Yeah. Okay. Yeah. that's right.

Greg Bernstein: and we have nothing fancy that's simple to reduce it.
That doesn't mean there aren't techniques.

Manu Sporny: Yeah. Yeah. The thing I'm going toward is kind of the use
cases that we have deployed in production. And so, if we look at things
like DMVs: they are currently issuing their driver's licenses and things like
that to expire after 30 days. That's kind of the MDL approach — every 30
days you get a new digital driver's license that auto-refreshes,…
00:20:00

Manu Sporny: which is highly problematic. I mean, I think everyone knows
that.

Manu Sporny: And what we're trying to get is for longer-lived documents. So
I'm trying to think of how many nym secrets are enough for a month — I mean,
a thousand seems way more than you would need. Maybe it would be 100.
And if we look at a hundred, then yeah, that…

Greg Bernstein: seems a Yeah.

Greg Bernstein: Then we're back down into the noise level. this after…

Manu Sporny: then that's fine. That's the main thing I was concerned about:
okay, so if we're issuing these things once a month, then we're down to
100, and then these are rounding errors on the server side; we can scale
horizontally and issue millions of these things and not really worry too
much about it.

Manu Sporny: No.

Greg Bernstein: What we have here is the problem of linear scaling, which is
not terrible…

Manu Sporny: Right. Yeah. Mhm.

Greg Bernstein: But for the ZKP people, their equivalent of the size of this
stuff can get very big, and that's why they want it square-root or they want
it logarithmic. So that's why, for some of these commitment and proof things
like we're doing here, they are working on all sorts of ways to do that more
optimally. It could be post-quantum, but it might be computational. So
there's a lot of advanced techniques; they're not ready yet.

Greg Bernstein: And so if you're telling me 30 days and 100, I'm just
running this on my laptop with JavaScript. and I have not tried it on my
cell phone in JavaScript, but it's reasonable and it's a lot of protection.
I mean, because we don't have a quantum computer yet and…

Greg Bernstein:

Manu Sporny: Right. Yeah.

Manu Sporny: Yeah, of course. Yeah. The other thing that I think we still
need to understand is we need some way of reasoning about the length of this
vector — like, what is acceptable

Greg Bernstein: it's acceptable.

Manu Sporny: because what we are counting on, the way this is written, is
that we are presuming a hundred or…

Greg Bernstein: Yes. That's the level of protection. Yeah.

Manu Sporny: in this case we're presuming a thousand verifiers, all 1,000 of
them colluding. That's the level of protection that we're trying
to provide here, which is way over the top, right?

Manu Sporny: And so I'm wondering if at some point we're going to have to
write about how issuers reason about what appropriate is, right? Because
let's say that you've got a subset of the population — of those thousand
verifiers, only really two of them are likely to collude and…

Greg Bernstein: Yeah. Yeah.

Manu Sporny: therefore we can get to some kind of number, right — a percentage
chance that you will be discovered if two parties collude out of a thousand
and they both happen to have access to a cryptographically relevant quantum
computer, which again is just such a massively high bar. Okay, that's
helpful. I think we do need to spend some time thinking about, okay, so we
have something that's tunable.

Manu Sporny: What should people be tuning it to when these things launch?

Greg Bernstein: Yeah. and…

Greg Bernstein: And this is a straightforward way to do it. I can write up
something that maps — I know it looks more complicated — I can write up
something that maps this exactly to the textbook basis. We just have to then
tweak some of the details about exactly where we get some of these constants
and such that we use for the standard purposes. But it's that write-up of,
okay, how do you use this and how do you reason about it — I think that's the
next case: cleaning up the details and such like that. Because I know there
are folks that I've talked to that have been wanting to make sure that we
have the
00:25:00

Greg Bernstein: n equals one case, where we're not worrying about it, and the
n equals whatever-you-want case. We kind of wanted that all to be programmed
kind of the same; we can have a default. As we present — I'm just talking
about how we're going to standardize this with the BBS community, then how do
we bring it into the credential thing — we've kind of done that, I mean I did
it initially. But that write-up about how to reason about it. The one thing
I'll probably say is, if the number of uses, the vector size, gets bigger and
bigger, at a certain point you hit the complexity of a ZKP proving something
that you've put into a hash function.

Greg Bernstein: And then we can shift to something more computational, because
right now we're at a high bar. This is: it doesn't matter if you don't have a
quantum computer — if you haven't colluded with these n people, there is no
way, information-theoretically, that you can link, regardless of how much
non-quantum computing power you have. While if you go to something like a hash
function, using a Ligero technique just to prove the two items that go into
the hash function, that's computational, but that's post-quantum
computational, and so we may intersect again there and just combine that with
the other nice properties of BBS.

Greg Bernstein: So there's reasons not to worry about making N too huge
because we might want to go to other techniques that may be getting
standardized. that's why we're watching the ZKP people if that made any
sense. Sorry, it's early here.

Manu Sporny: We've got questions in the chat — a reminder to everyone that the
chat is not saved or recorded anywhere and…

Manu Sporny: so no no no I mean it's fine…

Greg Bernstein: Should I stop sharing?

Greg Bernstein: Or

Manu Sporny: but Ying Tong, you might want to ask your question; Longley, you
might want to make your points; otherwise we're just not going to have a
record of it.

Ying Tong: My question is regarding how much we can precompute these proofs.

Ying Tong: Yeah, I think Dave made the comment that some part of the proof
is dependent on what we are disclosing, and that's only known at
verification time. But I thought that the proof is just for your pseudonym,
just for… Yeah.

Greg Bernstein: That's a good point though.

Greg Bernstein: That's a good point. You got me. That is a really good
point. Sorry — just as we were having this discussion I was thinking about
that, as far as we precompute a lot of stuff, and it seems like the holder,
before they talk to a verifier, can precompute everything. But the pseudonym
computation — now they have to do a different proof for each one, but they
could precompute it. So we may be able to really help out the holder,
because they can just precompute things.

Greg Bernstein: But sorry, I should have thought more about that before this,
because once again, they have to generate these new proofs each time, but
they're still doing it against the same vector of secrets. So that means they
can prepare ahead of time, and we all know that helps a lot. Sorry.
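
A minimal JavaScript sketch of the holder-side precomputation idea being
discussed; generatePseudonymProof is a hypothetical placeholder (stubbed out
below), not the actual BBS API — the real proof generation would come from the
BBS pseudonym implementation:

    // Stand-in for the real BBS pseudonym proof generation (hypothetical name).
    async function generatePseudonymProof(secrets, ctx) {
      // In a real implementation this would run the sigma-protocol proof over
      // the secrets for the given context; here it is just a placeholder.
      return { ctx, proof: 'placeholder' };
    }

    // Hypothetical holder-side cache: the vector of secrets is fixed once the
    // credential is issued, so a pseudonym proof for an anticipated verifier
    // context can be computed ahead of time and looked up at presentation time.
    const proofCache = new Map();

    async function precomputeForContexts(secrets, contexts) {
      for (const ctx of contexts) {
        proofCache.set(ctx, await generatePseudonymProof(secrets, ctx));
      }
    }

    async function getPseudonymProof(secrets, ctx) {
      // Fall back to computing on demand if the context was not anticipated.
      if (!proofCache.has(ctx)) {
        proofCache.set(ctx, await generatePseudonymProof(secrets, ctx));
      }
      return proofCache.get(ctx);
    }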

Manu Sporny: Go ahead, Dave. …

Dave Longley: Yeah, I think

Manu Sporny: actually, hold on one second. Ying Tong, were you able to ask
all your questions or make all your points?

Ying Tong: I think I'm lacking context on what the relation being proven is.

Ying Tong: So if I could find this documentation anywhere, that would be
helpful.

Greg Bernstein: Yep, I can get Yeah,…

Greg Bernstein: I can get you the link, because I put it all into an issue
on the pseudonym stuff. What we're proving is really just that we know the
vector of secrets. So it's like a vector commitment or a polynomial
commitment with a moderate but not super huge polynomial. That's where your
ZKP techniques come in, right? You guys deal with polynomials that are like a
million terms sometimes, and you get logarithmic in that, right?
00:30:00

Ying Tong: Filter it. Yeah.

Greg Bernstein: Yeah, that's why — let me get you the link, which has
more background on this, so you can see and go, Greg, once you get to this
point we may be able to help you, as long as we can glue it properly with
the BBS. So once again, part of this is we want something sooner rather than
later. We don't want to wait for a different crypto standard, a ZKP standard,
but the stuff that you're proposing, Ying Tong, may help out with a more
sophisticated variant of this.

Manu Sporny: All right. and Greg, we're going to have to wrap up and move
on to the next agenda item shortly. Dave, go ahead.

Manu Sporny: And then, Greg, you can wrap up, and then on to Ying Tong. Go
ahead, Dave.

Dave Longley: All right,…

Dave Longley: I'll try to make these points quickly. I think it's
important, Greg, for us to say a little bit more about the totality of
what's being proved. We are focusing on the pseudonym piece, but other
things that are being proved are that you're disclosing some statements,
and the ones that aren't disclosed are in the credential, and there's a
signature over it all — that stuff that's always there — and some of the
proof generation touches that, and that's where the larger number still is.
And so there's the question around whether or not we can precompute that and
what would be required to do it. And I think the other piece that's
interesting

Dave Longley: and important and this relates to Tom Jones's question is if
we were to map out the entire experience of connecting with a verifier and
they're going to make some kind of request for particular statements in
your credential the whole UX where that request is going to come in you're
going to prepare some kind of proof you're going to transmit that proof
over. If we map that whole thing out, we can kind of look at how long that
typically takes and where we could slot in these calls, where we could do
something that was precomputed, where we could potentially rerandomize
something that was older so that we could get an understanding of how
easily we can push if we still have significant times, which I think 6
seconds is significant.

Dave Longley: where we could push that around. And if we don't think we
need a thousand secrets, that's a separate analysis problem, but we need to
talk about how many we do need and what that time is. And so getting a
mapping of the whole process and…

Dave Longley: and how long that whole process usually takes from a
user experience perspective tells us if we have too much time in what we're
computing, so that we can map to that user experience.

Greg Bernstein: I agree.

Greg Bernstein: Agree completely. Now, that makes a lot of sense, and I
don't want to take up too much more time here, but the breaking up of those
pieces: the holder is proving that they know the signature, they know
their secrets, they know the quote-unquote statements —

Greg Bernstein: — statements from the issuer. All those come together. It's
just this thing being a thousand items long sticks out, right? Because we take
their credential, we canonicalize it, we get a set of statements, they get
hashed into scalars. And so yeah, if we have a very, very long credential
and we're not revealing much of it — if we had a very long learner
record — we could hit things kind of like this if it was a thousand
statements long. Yeah.

Greg Bernstein: Lots of details about courses and such like that.
00:35:00
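
A tiny JavaScript sketch of the scaling point Greg is making; canonicalize and
hashToScalar are hypothetical stand-ins for the real data integrity
primitives — the per-proof cost grows with the number of statement scalars,
just as it does with the length of the secrets vector:

    // Hypothetical illustration: each canonicalized statement becomes one
    // scalar among the signed messages, so a 1,000-statement credential costs
    // roughly as much per proof as a 1,000-entry vector of pseudonym secrets.
    function credentialToScalars(credential, canonicalize, hashToScalar) {
      const statements = canonicalize(credential);   // e.g. one statement per N-Quad
      return statements.map((s) => hashToScalar(s)); // one scalar per statement
    }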

Manu Sporny: All right. we've got to move on. thank you very much, Greg,
for the update. I know we weren't able to get through all of it, but we can
cover it next meeting.

Greg Bernstein: I think that's good enough for now.

Manu Sporny: All right, sounds good. Ying Tong, if you've got anything you'd
like to share with the group, any questions for the group…

Ying Tong: Yeah, it is encouraging to see that there might be some overlap
between the polynomial commitment standard and this BBS pseudonym work. So I
dropped the link to the draft for the polynomial commitment standard. It's
the same draft I was working on in the last meeting, but I've edited it
according to some comments from you guys. Let me also share my screen. Yeah,
I think that we can go through it again.

Ying Tong: One of the really helpful comments was just the need for a
diagram showing where polynomial commitment schemes fit into the overall
generic construction, and also a short paragraph, some exposition, about it.
Yeah, I would still appreciate any further comments — for example, is this
too theoretical? But for me this schematic is useful just to place why
we're even discussing polynomial commitment schemes.

Ying Tong: Another helpful comment from last time was the need for test
vectors and reference implementations. So I made an appendix for it.
There's nothing in it right now, but yeah, I had a few useful comments from
my other colleagues that our reference vectors should cover enough of the
common modes of polynomial commitment schemes.

Ying Tong: So I think in Greg's case just now it was a univariate
polynomial, and the other increasingly popular mode is multilinear. So yeah,
a lot of this is trying to address the problem of the meta-spec; this is
still very much structured like a meta-spec.

Ying Tong: Yeah, in the sense that the bulk of it is a generic interface.
But I think one way we could still say something useful about common
implementations is to just list a few common cipher suites. So yeah, I
think I noted in the last meeting, I would very much prefer to structure it
like how the W3C does it. So there's one generic data integrity spec and
then separate specs for specific implementations like ECDSA, EdDSA,
BBS. But I guess I wanted some feedback from people in this group.

Ying Tong: What would you think of having this generic interface along with
a few examples of the more popular cipher suites?

Greg Bernstein: Instead of cipher suite, do you mean polynomial commitment
method, or do you mean ciphers? Cipher suite?

Ying Tong: So yeah, I'm not sure what to call it, but it would be, for
example, one that uses elliptic curve pairings. So this would be an example
of a cipher suite.
00:40:00

Ying Tong: yeah, five basic it just uses this one.

Greg Bernstein: Oops. Manu has his hand up. But yeah, we could call those
cipher suites. We could call those commitment suites. Your choice.

Manu Sporny: We should probably call it a commitment suite. The cipher
suites — we need to be careful about picking the language here, because
there's a good chance we're going to just confuse the ecosystem if we say
cipher suite, which we've typically meant to be the top-level thing that
wraps absolutely everything when you're generating a proof of some kind.
This is certainly a part — this is something that would go into a higher-
level cipher suite. But yeah, I think this is largely — if we wanted a long
name for it, it's a polynomial commitment suite or…

Ying Tong: Let's check.

Manu Sporny: polynomial commitment scheme. And we may want to call it the
thing that you've got in the diagram — you say polynomial commitment scheme —
and so you might want to just label it:

Manu Sporny: here are some concrete polynomial commitment schemes, and then
underneath that, do the KZG one or the FRI one. Yeah, I think that that's
probably the suggestion…

Manu Sporny: because I think that is most likely to be accepted by IETF
people.

Ying Tong: Thank you.

Ying Tong: That's very helpful. So the current approach is not to enshrine
one of these concrete instantiations, but rather to list a few popular ones.
And in my mind, this could be a sort of compromise between the meta-spec and
just specifying one of them. So I'm not sure. Yeah, what do people think?

Manu Sporny: What you could do is — you will have to write down one of these
concretely. Let's just take KZG — which one's the easiest one to write down?
You feel Ligero? Okay, all right. So let's say you will have to create a
concrete specification of the Ligero polynomial commitment scheme that looks
like this kind of spec you've put together…

Manu Sporny: but a separate one and then you can refer to that one from
this one. right.

Ying Tong: Okay, that makes sense.

Ying Tong: The nice thing is that the Google team already has libZK as an
Internet-Draft, and within that they say something about Ligero. So
one way forward could just be adapting that to fit this generic interface.
I had another hand

Manu Sporny: Go ahead, Dave.

Dave Longley: Yeah,…

Dave Longley: I put my hand up. I think that's a good idea — if there's a
separate spec, you can just reference it and map it to the interface you're
defining. I also put a link in chat: if this is going to be RFC-style,
there's the hybrid public key encryption RFC, which defines a bunch of
primitives and interfaces for working with key encapsulation. And then it
goes on to provide at least one concrete example. But those are a bunch of
functions where it says,…

Dave Longley: if you want to use hybrid public key encryption, you need a
key encapsulation mechanism that has these properties or has this
interface, you need these other bits — and it defines what those are. And
then it gives you a concrete example.
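
A minimal JavaScript sketch of the HPKE-style layering Dave describes, applied
to what the group is calling commitment suites; the interface shape and
identifiers here are hypothetical, not taken from Ying Tong's draft or the
HPKE RFC:

    // Hypothetical generic interface: every polynomial commitment suite
    // exposes the same operations, and higher-level specs are written against
    // this shape. A concrete suite (KZG-, FRI-, or Ligero-based) supplies the
    // actual mathematics.
    function makeCommitmentSuite({ id, setup, commit, open, verify }) {
      return { id, setup, commit, open, verify };
    }

    // A concrete suite plugs in; the bodies here are stubs, not a real scheme.
    const exampleLigeroSuite = makeCommitmentSuite({
      id: 'example-ligero-commitment-suite',            // hypothetical identifier
      setup: (params) => ({ params }),                  // derive public parameters
      commit: (coefficients) => ({ coefficients }),     // commitment to a polynomial
      open: (coefficients, point) => ({ point }),       // (evaluation, proof) at point
      verify: (commitment, point, value, proof) => true // accept or reject
    });

    // Higher-level code never needs to know which concrete scheme it is using.
    function proveEvaluation(suite, polynomial, point) {
      return suite.open(polynomial, point);
    }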

Ying Tong: I see. This is very helpful. Let's see. So, yeah, besides that,
I think the way forward is becoming clearer.

Ying Tong:

Ying Tong: So yeah, I guess my last kind of top-level question is how we
can be useful to the W3C, cuz yeah, I understood from the last meeting that
this level of primitive is very low level, but we do see it being used in,
say, Greg's work for pseudonyms. So yeah, is there some way — I think in the
last meeting it was also raised that we could even point up to the VC data
integrity spec.
00:45:00

Ying Tong: so I'm curious to get some ideas from folks here: how do you
think we can integrate or contribute to W3C?

Manu Sporny: I'm interested to hear what Greg thinks about this as well. I
think at least with the BBS stuff we can just embed the polynomial commitment
scheme that Greg has identified as most performant directly into the BBS
specification. So I don't think we're going to need to refer to anything
outside for the current specification, because that would be the fastest
way to get a global standard. I think the stuff that you're working on is
much more important as a foundational underpinning of something beyond BBS —
like some kind of… I think we've got a couple of options here, right?

Manu Sporny: One of them is to lay the groundwork for a post-quantum
unlinkable mechanism. That doesn't exist right now; we know we need it. And in
order for that to exist, there have to be base specifications put through the
Crypto Forum Research Group (CFRG) at the Internet Engineering Task Force, and
that could take years to do, but the work needs to be started. And the sooner
we start the work the better — the cryptographic community needs time to get
used to something existing and to provide input on it and do cryptographic
review and all that kind of stuff.

Manu Sporny: So that's one option, and that would be very useful to the
higher-level W3C specification. And any kind of new — and I say quote-unquote
new — cryptography, anything that's ZKP-based, basically, would need to go
through IETF before we could use it at a higher level. So that's one
possibility, but that's a very long-term view of how to do this work.

Manu Sporny: The other, more short-term thing is to pick a set of these that
we feel are pertinent, that would create a full solution — like we could get
something semi-equivalent to BBS using some of these new ZKP techniques — and
we work on that in an experimental capacity and just shove all of that into
a W3C data integrity crypto suite, right? Just like Greg ran these three
tests and there's one very clear approach that looks good and
we're going to go with that.

Manu Sporny: In the same way, we could do that with the work you're working
on, Ying Tong, and create a new data integrity crypto suite for some style of
ZKP that we want to standardize, and just dump absolutely everything into
that spec and treat it as an experimental spec and just try to get
implementations so that we can demonstrate how this actually applies at a
very high level — meaning like a verifiable credential, like a driver's
license or a birth certificate or a vehicle title or something like that.
And that would help us demonstrate how this stuff works kind of at the
application layer; that has a lot of value as well.

Manu Sporny: But at some point we will have to take the primitives in that
specification and we will have to put them into IETF form if we wanted it to
become a global standard, right? It would just have to go through that. So
it's really dependent on you, Ying Tong — if you feel like we need some level
of experimentation to see how useful these things are at the application
level, where we're not quite sure about, for example, which polynomial
commitment scheme we want to use.
00:50:00

Manu Sporny: then we can just work on it in this group, figure out what we
want, what looks like it's got good performance characteristics, and then
once we're sure about that, we can take it through. So both ways are
legitimate ways to go about this. I think one of them takes three to five
years at IETF, and the other way lets us do experiments within three to six
months so that we can figure out the one we're going to focus on, the set of
options we're going to focus on.

Ying Tong: Yeah, maybe — I like the experimentation approach, and one use
case we could work on is this BBS pseudonym. Yeah, I understand that to get
it in the spec we don't want to use any new cryptography, but it would be a
sort of self-contained problem to experiment on that doesn't require the
whole spec; it just requires a polynomial commitment scheme. Why is the castle?

Manu Sporny: Okay, thumbs up from Greg. That feels like something focused
that we could make progress on, that's pluggable. So yeah, I mean, if
that's the direction you want to go, Ying Tong, it looks like Greg
would be happy with that.

Greg Bernstein: In general with the ZKP techniques we have in general we
can prove additional things about statements we already have.

Manu Sporny: I think it would provide some value in that we could, in a
focused way work on that primitive.

Ying Tong: Yeah, I think last meeting Greg was saying you were trying out
their work. So I'm very down to — they have a bunch of polynomial commitment
schemes already; it would not be that hard to just benchmark each of them.

Greg Bernstein: The other use is allowing us to prove things using old
cryptography — turning ECDSA into something that can be selectively
disclosed; we have a way of doing that — but also turning it into something
that provides anonymity. And so we have a better way to prove something like
pseudonyms: we're proving that we know these things and that we calculated
the pseudonym correctly. So that's one example, and tacking that on to BBS,
where we may have some limits or computational issues. We could have other
kinds of things we want to prove, which is more wide open.

Greg Bernstein: But then we have this issue that the Google people kind of
attacked, but very much from the mdoc/SD-JWT kind of thing, and we have better
starting points. I think we said last time, it's like, wait, don't their
things grow as the circuit size grows, as the size of the stuff they're
hashing grows, the way they are taking it? And we can preprocess things
better. So that's another way, because of the way we set up credentials — we
have a much better canonicalization approach with JSON-LD and such.

Manu Sporny: All right. So, we're coming up on the top of the hour. I want to
try and spend — I don't think we're going to be able to get to — sorry, we're
probably not going to be able to get to the post-quantum thing, but we do need
to pick some names. So, next call we'll start with the post-quantum scheme,
just to try to help with picking some names so that we can get that PR in
there. Ying Tong, after we do the post-quantum thing during the next call,
let's spend some time figuring out what we want to focus on, right?

Manu Sporny: I mean, I think the thing that would be most valuable to the
group is figuring out how much we can maximize — so what you suggested, the
polynomial commitment scheme, those are useful, but I think the bigger thing
that would really benefit the group is understanding how much more efficient
we can get with the polynomial commitment scheme. And I have no idea where
the POP factors into this, but I think we're really interested in
understanding how efficient we can get with the schemes, both the size of the
signature and the proving time.
00:55:00

Manu Sporny: if we start with very compact initial ECDSA proofs or
something like that. So that's the really interesting research topic that
would turn into an immediate implementation thing if we found out that we
can get 10x or 100x more efficient — I don't know, just throwing random
numbers out there — but that would be a huge breakthrough if we can kind of
demonstrate that.

Manu Sporny: Yeah.

Ying Tong: You mean if I can do whatever I want for the canonicalization?
Yeah. I think the 10x to 100x is completely reasonable. Yeah.

Manu Sporny: Mhm. That's right.

Ying Tong: I see — it's more speculative, but you're saying if we can
demonstrate a compelling speedup like that, it would motivate an
implementation and a standard.

Manu Sporny: Yeah. If we can demonstrate a much more compelling kind of
proposal than what we currently have with the Google proposal, that would
have a lot of traction, because all of a sudden that becomes something that
we want to deploy in the European Union and the United States and
everywhere else in the world.

Manu Sporny: if it's just so much more efficient than what's being proposed
for MDOC or MDL — that's right, and…

Ying Tong: Yeah. Yeah. I feel like the Google team had started with the
premise that they cannot change the canonicalization.

Manu Sporny: We don't have that same limitation — and I say that: in
production in the United States we do not have that same limitation — so I
think it would be worth seeing how much more efficient we can get. And if we
can't get more efficient, then that's fine, the Google solution stands. But
if we can get way more efficient, because we can really shrink the size of
the initial input down, then that would be a big win.

Manu Sporny: Okay. We're actually over — apologies for going over. Thank you
everyone very much for the great discussion this week. Next week when we
meet we will pick up with the post-quantum spec work, and then we will
discuss this particular item about how we might most effectively use
everyone's time for the ZK stuff, and then anything else that we want to
cover we can cover as well. All right, thanks everyone. Have a great
weekend. Chat with all of you next week. Take care. Bye.
Meeting ended after 00:58:58 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 16 May 2025 22:08:45 UTC