[MINUTES] Data Integrity 2025-04-04

W3C Community Group Meeting Summary: dph-vtuc-iof (2025-04-04)

*Date:* 2025-04-04

*Attendees:* Dave Longley, Eddie Dennis, Eric Phillips, Geun-Hyung Kim,
Greg Bernstein, Hiroyuki Sano, John's Notetaker, Kayode Ezike, Manu Sporny,
Parth Bhatt, Phillip Long, Will Abramson

*Topics Covered and Key Points:*

   1. *Community Updates:*
      - Voting period is open for the base data integrity specs (closes in
        13 days). Encouragement to vote.
      - Engagement with security researchers and zero-knowledge
        cryptography communities is planned.
   2. *CCG Post-Quantum Data Integrity Specification:*
      - Goal: Include post-quantum crypto suites in the next Verifiable
        Credential Working Group charter.
      - Proposed crypto suites: ML-DSA, stateless hash-based signatures
        (SLH-DSA), and Falcon, with a placeholder for isogeny-based
        signatures (SQIsign). Only the lowest security parameter level will
        be supported for each algorithm, to improve interoperability and
        reduce attack surface. ML-DSA may be dropped later if Falcon becomes
        FIPS-approved and widely adopted.
      - Will Abramson volunteered to update the specification. Parth Bhatt
        offered assistance.
   3. *Everlasting Unlinkability for Pseudonyms:*
      - Three options explored: traditional unlinkability (vulnerable to
        quantum computers), information-theoretic unlinkability (limited by
        the number of pseudonyms, n), and post-quantum unlinkability (not
        yet standardized).
      - Information-theoretic unlinkability: a user can create n pseudonyms
        before unlinkability is compromised if n verifiers collude. A value
        of n=100 was suggested as a practical balance. The impact of reusing
        pseudonyms with the same verifier across different contexts was
        discussed, highlighting the need for wallet-level controls to limit
        the number of presentations per credential. (See the sketch after
        this list.)
      - Post-quantum unlinkability using a hash function was considered as
        an alternative, lacking everlasting unlinkability but offering
        simpler implementation.
      - The group agreed that the specification needs to clearly define how
        to manage pseudonym usage to prevent exploitation of the n limit.
        Wallet implementations must track contexts and prevent excessive
        pseudonym creation. The lifespan of credentials and issuer-side
        rate limiting were also highlighted as mitigating factors.
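
To make the n-pseudonym option above concrete, here is a minimal toy sketch
(an illustration only, not the CFRG draft's actual construction, which
operates in the pairing-friendly group used by BBS; the field, generator,
and function names below are our assumptions). The pseudonym is a
deterministic function of a vector of n secret scalars and a context string,
so it is stable within a context but unlinkable across contexts:

```python
# Toy model of the n-secret pseudonym (illustrative; not the CFRG construction).
import hashlib
import secrets

P = 2**255 - 19   # illustrative prime modulus
G = 5             # illustrative generator of the multiplicative group mod P
N = 100           # "n", the number of pseudonym secrets

def hash_to_scalar(context: str, i: int) -> int:
    """Derive the i-th public context scalar c_i from the verifier context."""
    digest = hashlib.sha256(f"{context}|{i}".encode()).digest()
    return int.from_bytes(digest, "big") % (P - 1)

def make_nym_secrets(n: int = N) -> list[int]:
    """The holder's vector of n random pseudonym secrets s_1..s_n."""
    return [secrets.randbelow(P - 1) for _ in range(n)]

def pseudonym(nym_secrets: list[int], context: str) -> int:
    """Pseudonym = G^(sum_i s_i * c_i) mod P: stable for a given context."""
    exponent = sum(s * hash_to_scalar(context, i)
                   for i, s in enumerate(nym_secrets)) % (P - 1)
    return pow(G, exponent, P)

s = make_nym_secrets()
assert pseudonym(s, "verifier-A") == pseudonym(s, "verifier-A")
assert pseudonym(s, "verifier-A") != pseudonym(s, "verifier-B")
```

The n limit then falls out of linear algebra: to an adversary who can take
discrete logs (i.e., one with a cryptographically relevant quantum computer),
each observed pseudonym is one linear equation sum(s_i * c_i) in the n
unknown secrets, so n colluding contexts determine the secret vector and link
every pseudonym, while n-1 or fewer leave it information-theoretically
undetermined.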

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-04-04.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-04-04.mp4
*dph-vtuc-iof (2025-04-04 10:00 GMT-4) - Transcript*

*Attendees*

Dave Longley, Eddie Dennis, Eric Phillips, Geun-Hyung Kim, Greg Bernstein,
Hiroyuki Sano, John's Notetaker, Kayode Ezike, Manu Sporny, Parth Bhatt,
Phillip Long, Will Abramson
*Transcript*

Manu Sporny: All right, welcome folks. Let's go ahead and get started. We
do have an agenda for today. On the agenda today is mostly spending a good
chunk of the time thinking about what we want to be in the post-quantum data
integrity specification, and then spending a little time getting an update
from Greg on pseudonyms and the everlasting unlinkability stuff. I think it
might be a quick call, but we'll see how it goes. Let's go ahead and get
started with introductions really quickly. Is there anyone new to the call?

Manu Sporny: I think Eric, you're new. What we usually do is just give a
quick one or two sentence introduction to yourself: what you're interested
in about the work, or what kind of work you're interested in around data
integrity. And then we kind of go on from there. So, Eric, do you mind
giving an intro? You don't have to, but we'd love to hear from you. And
you're on mute if you're talking. Okay, we can skip the intro.

Manu Sporny: Then let's go ahead and hear about any community updates,
anything in general we should be aware of when it comes to the data
integrity stuff. I'll mention that the voting period is open for the base
data integrity specs. It will close in about 13 days. So, if you have not
voted, or if you know of companies that have not voted, please urge them to
do so. I will note that we've got at least 14 companies that have voted so
far, but that's short of the companies that are in the actual working group.
So, I'm going to try and reach out to a couple of them and remind their AC
reps that the voting period's open for a working group that they're actively
engaged in.

Manu Sporny: But if everyone else can kind of reach out to their Advisory
Committee representatives and get them to vote on the global standard for
data integrity, and the crypto suites of course, that would be much
appreciated. Any other kind of community updates? Anything else we should
discuss or be aware of? I guess the other thing that I'll note is that I am
talking with quite a number of security researchers and people that are
involved in zero-knowledge cryptography, in communities that we haven't
quite engaged with recently, and I'm going to try to bring them into this
call as well.
00:05:00

Manu Sporny: There's a lot of interest in using zero-knowledge technologies
for existing technologies, meaning the zero-knowledge passport stuff, the
zero-knowledge ECDSA zk-SNARK stuff, and that sort of thing. So I'm going to
be inviting some of those folks into this call in the coming weeks. Okay, if
that's it, let's go ahead and jump into our main agenda item, which is the
CCG post-quantum, what is it, quantum-safe specification.

Manu Sporny: We were able to chat with Jaromil from Dyne a bit. Will,
you've got your name as an editor on here as well, and so does Digital
Bazaar. We want to get this specification into the next Verifiable
Credential Working Group charter. So, we're going to ask for a rechartering
sometime in the summer, and it would be nice to get a set of post-quantum
suites in there. And in order to do that, the spec has to be in better shape
than it is today. So, we have to improve the specification from where it is
today.

Manu Sporny: So we're going to spend at least 20 to 30 minutes today trying
to highlight the things that we would like to show up in the specification,
and then ideally get some editors to move that forward over the next couple
of weeks. So we're at the point now where we definitely need to put pen to
paper to get some of these other crypto suites in there. Okay, so that's
kind of the opening. I will also note, in parallel, we are getting an active
security review from the W3C security interest group on data integrity, and
that's going fairly well. Yeah, all those notes are public. I don't have
the link on me.

Manu Sporny: Hopefully maybe somebody else can take a look to see if they
can find the minutes. But the security group has done a high-level review
of data integrity, ECDSA, EdDSA. It went fairly well. They had a number of
questions around why we did not allow for all the different elliptic curves:
why didn't we support P-521, why didn't we support Ed448? And the response
was the typical response we've been talking about over the past couple of
years: P-256 requires all the energy output of the sun for 6 years to be
able to even remotely come close to potentially breaking it.

Manu Sporny: P-384 and P-521 don't really provide anything beyond that
unless some basic mathematical breakthroughs are made. HSMs don't support
some of the higher numbers. Even the work in Europe is basically saying
P-256 should be fine for everything, so it seems like they're backing off
of curves like Brainpool and things of that nature, and so on and so forth.
So the first response was that it's not really seeming like, by the time
elliptic curves are broken, P-384 or P-521 is really going to help anyone.
And they seem to be accepting of that. They're like, okay, sure, I guess
that makes sense.

Manu Sporny: They want to go through and do a much more rigorous
mathematical and energy analysis on whether it's really true what the limits
of P-256 are with where computers are today, GPUs and things of that nature.
But again, I think that's held for a decade plus, 15-plus years at this
point. Okay, so that input kind of goes into the post-quantum discussion.
There are other questions around set signatures versus chain signatures,
and whether, with hybrid signatures of elliptic curves plus post-quantum,
the difference matters between a set signature and a chain signature. We've
had that discussion in this group over the past couple of weeks.
00:10:00

Manu Sporny: I think the conclusion we came to is it doesn't matter as long
as there's a pre-quantum signature and a post-quantum signature on it. It
really doesn't matter if it's a chain; the same things apply. So that was
the whole review. They're going to be writing up some of their findings.
There were some other things around how it would be nice to instruct
implementations to make sure that they do certain things, and the security
section could be restructured using their new model, but nothing came out of
the security review that was a massive design flaw or a real change in
direction.

Manu Sporny: So, that applies to kind of the post-quantum crypto suite that
we're going to do here. I think during the last discussion we had here, we
agreed that it would be good to have at least the stateless hash-based
SLH-DSA signatures, and we wanted to put room in here for Falcon when that's
a FIPS-approved algorithm, and those would be the three that we would end up
supporting.

Manu Sporny: We support ML-DSA because that's the first one that got
approved; the next one is the stateless hash-based stuff. So we have a
lattice-based scheme, a hash-based scheme, and then Falcon provides some
pretty nice optimizations around key size and signature size and things of
that nature. Let me stop there. Is there anyone that feels like we shouldn't
have one of those, or that we should have another one as part of this first
crypto suite we're putting through the group?

Manu Sporny: Go ahead, Dave. Mhm.

Dave Longley: The only other one I could think of would be…

Dave Longley: if something happens with SQIsign in the meantime. I don't
know if we can leave some room to include that, but that work would provide
signature sizes and speeds and so on that are really similar to elliptic
curve; they're just a little bit bigger. Whereas the others all range from
considerably bigger to much bigger. But that work is not ready yet. It's not
NIST-approved. But we don't know what might happen while we're working on
this. So I don't know if we can leave any room for it or if we just want to
say that it'll have to be a follow-on. Yes.

Manu Sporny: So that's the isogenies work, I guess, the SQIsign stuff. Yeah.

Manu Sporny: I mean, one concern I have is that even at three it's a lot.
The unfortunate thing is, the way that this stuff is typically done at the
lower levels is the cryptographers say, here are all the options, and they
have a ton of options, and it tends to be a really terrible interoperability
story, right? I mean, the more parameters and variability and all that kind
of stuff that we have, the worse the interoperability story is, and that
leads to a larger attack surface and, inevitably, security failures when
downgrade attacks start happening. Because the attacker is always going to
pick the weakest thing, or the thing that has the bug, and the servers tend
to basically say, we support all of these things.

Manu Sporny: And so you end up in a situation where downgrade attacks are
just the typical way to defeat those sorts of systems. Go ahead, Dave.

Dave Longley: So I agree with the first part of that, and we should try to
aggressively reduce the parameters or choices within one of these different
types. But we really don't know how well these things are going to hold up,
and each one of these is a different approach to the problem. As for the
latter part about downgrade attacks, at least with respect to the
verifiable credential space, I think there's less of a concern there,
because verifiers can say this is the only type I will accept, and they
should be fairly flexible and it should be fairly easy for them to do so.
00:15:00

Dave Longley: So even if a VC has a proof set on it that has three
different proof types,…

Dave Longley: verifiers can almost immediately say, this is the only one I
will accept. And that would be the one that you would have to present with.
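
As a concrete illustration of the filtering Dave describes (a sketch under
our own assumptions: "ecdsa-rdfc-2019" is a standardized cryptosuite name
today, but the post-quantum identifiers below are placeholders), a verifier
receiving a credential with a proof set can simply discard every proof whose
cryptosuite is not on its allow-list, which is what takes downgrade attacks
off the table in this model:

```python
# Hypothetical verifier-side proof-set filtering (post-quantum suite names
# are placeholders; those identifiers are not standardized yet).

ACCEPTED_SUITES = {"ml-dsa-2025"}  # this verifier's policy: one suite only

def acceptable_proofs(credential: dict) -> list[dict]:
    """Return only the proofs this verifier is willing to accept."""
    proofs = credential.get("proof", [])
    if isinstance(proofs, dict):  # a single proof rather than a proof set
        proofs = [proofs]
    return [p for p in proofs if p.get("cryptosuite") in ACCEPTED_SUITES]

vc = {"proof": [{"cryptosuite": "ecdsa-rdfc-2019"},  # pre-quantum
                {"cryptosuite": "ml-dsa-2025"},      # placeholder name
                {"cryptosuite": "falcon-2025"}]}     # placeholder name
assert [p["cryptosuite"] for p in acceptable_proofs(vc)] == ["ml-dsa-2025"]
```

An attacker gains nothing by stripping the stronger proofs, because the
verifier never falls back to a suite outside its allow-list.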

Manu Sporny: Mhm. …

Manu Sporny: I'm ignorant of this: how different is ML-DSA from Falcon? I'm
wondering if that is a… Okay.

Dave Longley: They use different approaches. Falcon uses some floating-point
stuff that's different from the lattice stuff that ML-DSA is using.

Manu Sporny: So they're significantly different enough? And the fundamental
mathematical problem, the hardness problem, is…

Dave Longley: Both… I don't know enough about it. I do think Falcon still
uses lattices in some way. I don't know if they have the same hardness
dependency, but if they do have the same hardness dependency, Falcon has
smaller sizes and is more attractive.

Dave Longley: It's just that ML-DSA has come out first. But if they are
different, then once again, that's an argument to support both of them,
because we don't know how this is going to pan out.

Greg Bernstein: Falcon is based on lattices…

Greg Bernstein: but uses this trap-door-type technique, which is also being
looked at for some of the privacy-preserving versions of signatures. So it
is significantly different from ML-DSA, in addition to having some
optimizations, but it seems like it's on the way towards something more
like a post-quantum BBS, as far as things go. I mean, at least the follow-on
work…

Manu Sporny: …for Falcon looks like.

Greg Bernstein: Okay.

Manu Sporny: All of that was just an attempt to see if we could reduce it,
to remove ML-DSA or Falcon, but it sounds like the answer is no. So that
probably…

Manu Sporny: go ahead Dave.

Dave Longley: I would say we should go in with all of them,…

Dave Longley: but we might find out by the end of it that we don't
necessarily need one of them. And I would think that if Falcon has NIST
approval and FIPS 206 and everything is out at that time, and we decide to
cut one, I would think it would be ML-DSA, unless we find out that
everyone's implementing that one. So there's a variety of different reasons
to support some of the different options.

Dave Longley: So I would want us to go in with the options, but the working
group could later decide, it looks like people are going in this direction;
we can drop one of them.

Manu Sporny: All right.

Manu Sporny: So then that probably means that what we need to do currently
is… let's talk about parameters. And let me just propose that I don't think
we should support anything but the lowest parameter set for all of these
things. It's just, we don't know how secure they are.

Manu Sporny: The likelihood that one of them is just fundamentally broken
is a real possibility. And on top of that, a cryptographically relevant
quantum computer does not exist today, and the timelines for them are not
in the next five years unless there's an amazing breakthrough. So I don't
see any reason we need to support the level two or level three or level
five versions of the algorithms. Thoughts?

Manu Sporny: Go ahead Dave.

Dave Longley: Yeah, I think we should go in with that as the default to
increase interoperability…

Dave Longley: unless something comes in that requires us to do something
different, or strongly indicates that some country that wants to use this
has mandated that some other security level is required.

Dave Longley: I don't think we should try to support more than the fastest
thing that is considered secure.
00:20:00

Manu Sporny: Yeah, agreed.

Manu Sporny: That does open a more general question, which is that there
are people that come in and insist that they need P-521 or the highest
level of security. Usually the reasoning tends to be fairly misguided.

Manu Sporny: But there are, for good reason, very paranoid security
agencies that just crank the levels all the way to the top, because the
secrets are so important that they want to use the highest level. So there
is a question around that. Having something in the middle doesn't really
seem to make a whole bunch of sense for the post-quantum stuff. It's like,
you either use the basic thing, which is either going to stand the test of
time or not, like P-256 has to date, or you're going to be very, very, very
paranoid and you're just going to crank the settings all the way up to the
top. And the question is, do we want to cater to that?

Manu Sporny: I will note that in the past 10 years that this work has been
going on, no one has said that they absolutely need data integrity to have
the highest setting. Go ahead, Dave.

Dave Longley: This sort of thing came up with P-521, which browsers removed
because people were not using it. I think it's less of a concern here with
digital signatures, because there can be secrets with selective disclosure,
but so far, with what we're using with BBS, those secrets are statistically
hidden; they're perfectly hidden, and they're not even susceptible to
quantum attacks. But for this particular document, we're talking about
generic digital signatures, and so the data that you're signing is revealed.
So the threat is around forgery. And so I don't think that sort of thing
changes the model.

Dave Longley: We're not worried about secrets getting out there because
they've been encrypted. Rather, the question is whether or not we continue
to trust something that has a stamp on it, because it might have been
forged. And so I think that consideration is a little bit different. And if
we need to pivot and increase the security or do further crypto suites in
the future because something's been broken, then we can pivot and do that.
We can do it during the working group if it happens. I would think more
likely we would just drop whatever it is. It seems unlikely that, if one of
these schemes at this point in history is broken, the stronger versions
won't also be broken.

Dave Longley: That is possible, but just given how new they are, it seems
like it's more likely there's a critical flaw rather than that we just
didn't quite bump the security up enough.

Manu Sporny: Yep. Okay.

Manu Sporny: Then all of that is probably pointing towards us having four
different crypto suites at the lowest security level, quote-unquote low
security level: ML-DSA, SLH-DSA, Falcon, and then we'll put a placeholder
in for the isogenies stuff, noting that it is highly likely that it will
not make it into the specification.

Manu Sporny: It would be wonderful if it did, because we would have
post-quantum signatures that are equivalent in speed and size to elliptic
curve signatures. Okay, that's the concrete proposal. Is there any
opposition to that? Anyone object to that path? All right, not hearing any
objections. Are there any modifications that folks would like to see in the
quantum-safe crypto suites document? Any other things we need to make sure
are in there? Then that gives us exactly the sorts of things that we need
to do. The next question is, editorially, who's going to do that work?
00:25:00

Manu Sporny: I can certainly put it on my queue, but my queue is way
delayed. I mean, it's a pretty deep backlog on editorial stuff. Do we have
any other volunteers that could add these sections into the spec? It
shouldn't be difficult work. It's mostly copy and paste of what's…

Manu Sporny: already in there. This section, 3.2: you're basically going to
copy and paste that to the other crypto suites and just change some
references and key formats.

Will Abramson: Yeah, I can probably do that.

Will Abramson: It's fine.

Manu Sporny: Great.

Will Abramson: Do you mind just creating an issue or I'll try?

Manu Sporny: I think just raise PRs is fine.

Will Abramson: Cool.

Manu Sporny: We've talked about it. It'll be in the minutes. All that kind
of stuff.

Manu Sporny: Okay.

Will Abramson: Yeah, it is just a copy-and-paste job, isn't it? I think.

Manu Sporny: Yep. Yep. Yep. Yeah. You'll want to make a pass through to
make sure that what we have here lines up well with the current specs that
are in Proposed Rec, like ECDSA.

Will Abramson: Yeah. Yeah.

Manu Sporny: But once you've verified that, then it should be just copy and
paste for all the other schemes.

Will Abramson: Yeah, I can do that.

Manu Sporny: Thank you, Will. Much appreciated. Any other volunteers that
would like to help out on the spec? All right. Go ahead. Okay. Then thank
you; I will look forward to the PRs.

Manu Sporny: Do you have an ETA on when you'd be able to get the first one
in there?

Will Abramson: Yeah. I guess I'm going to… so, maybe…

Manu Sporny: Okay, that's fine. Let's say… Okay.

Will Abramson: I'm traveling, but I promise.

Manu Sporny: Yeah, no problem. Totally, totally understand. Try to do… I
don't know, I might be able to get to it as well, at least doing a review
of what's in there currently and updating it so that it's easier for you to
copy and paste. Okay, I think we have a firm plan there. Thanks, and we'll
review PRs as they come in. All right. The next item up is… Parth, I didn't
see your text in the chat channel.

Manu Sporny: Parth has noted that he can help. So yeah, very much
appreciate that; if you two could collaborate, that'd be great. All right,
next up is the everlasting unlinkability for pseudonyms discussion.

Manu Sporny: Greg, I don't know if you want to take over the screen or you
want me to show stuff. How would you like to… Okay.

Greg Bernstein: Let me see…

Greg Bernstein: if I can… let's select… Now share. Do we see that?

Manu Sporny: Yes.

Greg Bernstein: Okay, I'm going to go over to that window. W3C, this is
CFRG, this is the repo for the pseudonym draft. A while back, some unknown
person raised the issue about no everlasting unlinkability. So we've been
having conversations with cryptographers and such, and emails and such
like that.
Greg Bernstein: I took those various things and started summarizing them in
this issue so there'd be an easy place for everybody to get to them, review
them, and add comments. So I'm not sure, did we go over these basics on
pseudonyms last time with this group?

Manu Sporny: We did. Yeah. Yeah.

Greg Bernstein: Okay.

Manu Sporny: I don't think we need to go over them again. We just need an
update on where we are.

Greg Bernstein: So, basically, I'm scrolling, and I know it makes people
motion sick. We can look at this as three options. We've got our
traditional unlinkability of pseudonyms under the discrete log assumption,
which doesn't get us everlasting privacy under a quantum computer.
00:30:00

Greg Bernstein: We have a notion of information-theoretic unlinkability,
which means everlasting unlinkability, but it's limited in the sense that
after you've created n pseudonyms, and you've had n colluding verifiers
compare all the pseudonyms that various provers have used with them, and
you have a cryptographically relevant quantum computer, you lose the
unlinkability after n pseudonyms are created by any given holder. Okay.

Greg Bernstein: But the cryptographers feel that this can be proven. And
this comes from using n random numbers to form a vector of what we call
pseudonym secrets, and we construct a pseudonym appropriately from those
cryptographic secrets. The third option would be some kind of post-quantum
unlinkability. Once again, this is not information-theoretic; this would
not be the same as what we have with unlinkability of proofs, but something
that can't be broken currently…

Greg Bernstein: can't be broken with a quantum computer. So, something like
just creating a pseudonym based on a conventional hash of the secret and
the context. With each of these, we've got issues. So, for the n-pseudonym
everlasting privacy, I ran some tests, because that kind of uses
computations similar to what we already have. So we were able to get an
idea of how long it might take for the holder to generate this information.

Greg Bernstein: This doesn't impact the signer as much, but we see the
numbers here. If you want to be able to generate a hundred different
pseudonyms, you can do it in less than a second. If you want to do a
thousand, then we start getting into territory which takes longer. The size
scales linearly. So we see the added information is about 3 kilobytes for
100 and 32 kilobytes for a thousand. Any questions?
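
[A rough consistency check on those figures, assuming each of the n
pseudonym secrets contributes one 32-byte scalar to the proof (a breakdown
not stated on the call): the overhead is about n × 32 bytes, i.e.
100 × 32 B ≈ 3.2 KB and 1000 × 32 B = 32 KB, which matches both the
reported sizes and the linear scaling.]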

Manu Sporny: That's in the derived proof, right? 3K in the derived, or in
the initial?

Greg Bernstein: Yes. Yes.

Manu Sporny: So every presentation is going to be three kilobytes in size
with an…

Greg Bernstein: What is… okay?

Manu Sporny: …n of 100.

Greg Bernstein: What is the proof? There's also a proof sent from the
holder to the signer saying, this is my nym secret. And it's a proof that
says, here is a commitment to my nym secret and I'm proving that I actually
know what's in this commitment. That commitment thing is fixed size. It's
the proof that grows. What you get back is a signature from the issuer, and
the additional work for the issuer is negligible. It's always between the
holder and the signer, or the holder and the verifier, that you get this
extra size and computation.

Greg Bernstein: Is that clear?

Manu Sporny: It is. And this is jumping to a conclusion, but I think what
we're looking for is, we probably need some level of protection against
this. And the n of 100 looks to be performant enough to mitigate some of
the greatest potential concerns here. Right?
00:35:00

Manu Sporny: So we still haven't done the analysis of what n of 100 means.
Meaning, when you go in, you present one of the n, and I don't think we've
picked… do you always pick the same one for a verifier, or do you pick a
different one every single time? And what are the implications? Intuitively,
it feels like the implications are worse if you randomly select versus
selecting the same one each time. And then, if you pick the same one for
two different verifiers, that's the collusion concern that we have, if both
of those… and these are a number of big ifs that need to be chained
together.

Manu Sporny: If there's a cryptographically relevant quantum computer, and
if you pick the same one for two verifiers, and…

Manu Sporny: if those two verifiers choose to collude, then they can link
you across two different security domains.

Manu Sporny: I think… is that correct? Is that the right way to interpret
the n of 100?

Greg Bernstein: No, the n of 100 means… Go ahead, Dave.

Dave Longley: Can I take a guess at this to make sure that I've got it
right before you?

Dave Longley: Because, Greg, and you can correct me if I'm wrong, my
understanding is that it doesn't work like that.

Greg Bernstein: Yeah. Yes.

Dave Longley: You don't choose a given nym secret. Instead, your pseudonym
is constructed from all of them every single time. And there is a
statistical attack: if you present this to 100 different verifiers, and all
100 of them or more all decide to collude together…

Greg Bernstein: Yes. They cannot.

Dave Longley: then they can deanonymize you; otherwise they can't. If they
only have 99, they can't pull it off.

Greg Bernstein: Yes. Okay.

Manu Sporny: That's much better than I thought it was.

Dave Longley: So yes, it's correct.

Greg Bernstein: And remember, this is not per presentation but per separate
pseudonym created for a different verifier. You come back to the same
verifier and you give them a new proof; your pseudonym stays the same. But
you can come back to them as many times as you want. It's how many
different pseudonyms you create from this vector of secrets.

Dave Longley: So, I want to speak to that, because we've got to be a little
careful with that. So, it's…

Greg Bernstein: Yes. Different.

Dave Longley: what you just said: it's every different pseudonym. And
another way of thinking about it is it's different contexts in which you
present.

Greg Bernstein: Yes. As we talked context X.

Dave Longley: But you could have the same verifier have many different
contexts, and then that can be dangerous if that is one of these verifiers
that's willing to collude with others. So if you had a website run by a
verifier that's willing to collude, and they ask you to present a different
pseudonym every day, that is a reasonable use case where you're effectively
presenting yourself as a different persona every day. So you get to sort of
erase your history every day.
Greg Bernstein: Yes, I know.

Dave Longley: It's a good feature.

Greg Bernstein: It was a good feature and…

Dave Longley: Yes. Yeah. Yeah.

Greg Bernstein: now we're saying we don't know if that's such a great
feature under this. Yeah.

Dave Longley: After 30 days of doing that, that verifier would have taken
30 out of 100, if n is equal to 100 here. And if you do that at three other
sites, now you can be deanonymized across all of those sites.

Manu Sporny: So that means that the security considerations section starts
getting pretty complicated, if that's the type of thing. And we'd have to
have very strict guidance to wallets on if you see sites doing this. I mean,
it's almost like the wallet has to keep track of how many different contexts
and how many different verifiers have seen this. And it erodes the security
over time, right?

Dave Longley: Yeah, I think the only sensible way for a wallet to implement
this is to keep track of the number of contexts that have been used with a
particular credential, and say you can't go over n, whatever n is set to.
And I'm wondering if there's a way that we can make that easier for wallets
to implement in some way, if there's some clever mapping we can do with
days or times, or, I don't know, something that would make it sort of
automatic to prevent over-presenting.
00:40:00

Dave Longley: The other thing to keep in mind is, for any one of these
credentials, we expect most of the time for them to not be that long-lived.
30 to 60 days is something sensible, especially if you want to make it so
that you don't ever have to present revocation information when you're
presenting one of these credentials. So you want the expiry to be
reasonably short, to still allow for issuers to have made mistakes or need
to revoke something. Or, I guess I won't say need to revoke, because they
would be relying on the expiration; they just wouldn't reissue after 30 to
60 days. And so there's a sort of back pressure here where it makes sense
that you're only going to present one of these credentials a certain number
of times over its lifetime. And when you get a new credential, you're going
to be generating new pseudonyms, which is also something we want to
consider and think about.

Dave Longley: I think presently the main reason for using a pseudonym is
not specifically to identify a party and persist an account. You could use
it that way, but it's primarily for ensuring that verifiers get protection
when proofs are presented to them. Because if you don't have pseudonyms,
then anyone making an unlinkable proof can sell their credential to other
people and create APIs and interfaces around them to present as many of
those as they want, and the verifier would never know.

Dave Longley: But when you have pseudonyms, it gives you rate limiting and
prevents that kind of fraud. And so all of those are sort of pieces that
can go into the design here for what makes the most sense for how we solve
this problem.
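
A minimal sketch of the wallet-side bookkeeping described above (the class
and method names are hypothetical, not part of any specification): the
wallet records each context a credential has been presented under, allows
unlimited re-presentations to already-known contexts, and refuses the
(n+1)-th new context:

```python
# Hypothetical wallet-side guard for the n-context pseudonym limit
# (a sketch of the behavior discussed on the call, not a specified API).

class PseudonymLimitExceeded(Exception):
    pass

class CredentialPseudonymTracker:
    """Tracks the contexts each credential's pseudonym has been used in."""

    def __init__(self, n_limit: int = 100):
        self.n_limit = n_limit
        # credential id -> set of contexts already used with that credential
        self.used_contexts: dict[str, set[str]] = {}

    def authorize_presentation(self, credential_id: str, context: str) -> None:
        contexts = self.used_contexts.setdefault(credential_id, set())
        if context in contexts:
            return  # same context, same pseudonym: no new information leaks
        if len(contexts) >= self.n_limit:
            # Refusing the (n+1)-th context is what preserves everlasting
            # unlinkability: n colluding contexts (plus a quantum computer)
            # would determine the holder's secret vector.
            raise PseudonymLimitExceeded(
                f"credential {credential_id!r} already used in "
                f"{self.n_limit} contexts")
        contexts.add(context)

tracker = CredentialPseudonymTracker(n_limit=100)
tracker.authorize_presentation("cred-1", "https://site.example/login")
tracker.authorize_presentation("cred-1", "https://site.example/login")  # re-use is free
```

Short credential lifetimes and issuer-side rate limiting, as Dave notes,
reduce how often this counter can realistically be exhausted.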

Manu Sporny: Yeah, my concern here is that those are all complexities. If
we do not very clearly spell out for people what a rational, valid design
looks like for a credential, and have examples of it, it could very quickly
lead to people turning against the entire solution. Because, look, in this
scenario someone ends up deploying something wildly popular and it's
clearly broken and we know it, but it gets to scale, and then it's broken,
and then a whole bunch of negative things come back against the core
technology.

Manu Sporny: That's my concern, is that it seems multivariate in what you
have to pay attention to. We've got…

Dave Longley: So just to clarify,…

Dave Longley: I'm not suggesting that we hand all of these sorts of levers
to designers of credentials. I'm saying we move forward with either one or
two pseudonym choices in the spec. One of them has everlasting privacy, and
wallets have to implement this: you can only present it X times, the end,
and you have to keep track of the contexts. That's how it works. It's not
more complicated than that. But I'm interested in making it easier for
wallet implementers to pull that off, if there's something clever we can do
there. But there's only one way to use it. And then the other option would
be post-quantum, which is we're going to use a SHA-256 hash, or some other
hash that we really believe is not ever really going to be broken.

Dave Longley: Even if you did break it, maybe the collision you find
doesn't even match up with someone's nym secret. And if we go in that
direction, then you don't get everlasting unlinkability, but it might
practically be that, or it might be sufficiently seeming to be that, and
you don't have any limits on the number of times you can present. And so we
might end up having to say this is a pluggable pseudonym: you can have this
kind or that kind, and just do both of them. And obviously it would be
better to have one that has everlasting unlinkability and lets you do as
many as you want, but we might not be able to achieve that.
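
For contrast, the hash-based option is almost trivial to sketch (the
domain-separation tag and byte encoding below are our assumptions, not spec
text): the pseudonym is a conventional hash of the nym secret and the
context, so it is stable per context and carries no n limit, but its
unlinkability is computational rather than information-theoretic:

```python
# Sketch of the hash-based (post-quantum, but not everlasting) pseudonym:
# stable per context, no presentation limit; security rests on SHA-256.
import hashlib

def hash_pseudonym(nym_secret: bytes, context: str) -> bytes:
    """Domain-separated hash of secret and context (encoding illustrative)."""
    return hashlib.sha256(
        b"nym-v1|" + nym_secret + b"|" + context.encode()).digest()
```

The trade-off matches the discussion: there is no vector of secrets to
exhaust, so wallets need no context counting, but an adversary who could
ever invert the hash would link every pseudonym retroactively.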

Manu Sporny: All right.

Manu Sporny: And I think that's a great segue into the everlasting
unlinkability approach. So go ahead, Greg.

Greg Bernstein: So, as Dave mentioned, when we say everlasting, that's the
highest bar, right? That's information-theoretic unlinkability. It's so
high that there is no encryption besides a one-time pad…
00:45:00

Greg Bernstein: where your key length has to be equal to your message
length, to achieve that with encryption. Okay, we get this with ZKPs and
the BBS proofs because each time we generate one, we use new randomness to
make it. So the bar below everlasting unlinkability in that sense, a lower
bar but still a very good, high bar, is something that's post-quantum
resistant, like a hash function.

Greg Bernstein: Because one of the papers that started this whole pseudonym
thing back in 2000 just said, use some kind of one-way function based on
the secret and the context. And right now there's a lot of work going on
trying to do ZKPs. And one of the ways they try to show how good they are
at coming up with good prover time and good proof length is they all try to
say how much it takes to do a SHA-256 hash verification.
Greg Bernstein: So there is a recent paper of some folks this was more an
MDOC application they're doing taking an anonymous credential based on
ECDSA or taking a credential based on ECDSA and how do you make that
anonymous can we use these big new modern stark ZKP techniques but even
within those there's always Shaw 256 hashes. So when I was looking at this
paper which has some of the most modern techniques, we were seeing
performance under 100 milliseconds. they didn't break out the proof size
separately. They gave proof size for a kind of a complicated MDOC case with
a full ECDSA around 300 K bytes. Okay.

Greg Bernstein: So we'd have to talk to people that are doing more in this
area. Now, you can do more optimizations with this everlasting privacy
thing by using a more advanced technique known as Bulletproofs, or
compressed sigma protocols. But both of these things have a similar problem
right this second: these Bulletproofs, these sigma protocols, these ZKP
techniques, none of them have been standardized yet. Okay, so that just
gives us… we have a bit of a gap. Okay, so BBS…

Greg Bernstein: BBS is pushing along nicely. But for some of these
optimizations we'd like to do, or attaching a post-quantum proof of the
pseudonym to it, we don't have standards yet that we can just drop in. So
that's the practical issue with getting a standard together for some of
these things. So they're valid approaches. The n-pseudonym approach without
optimization just works based on what we're doing now; the proofs grow.
That's why I was able to do these tests. Anybody that's done a blind BBS or
the existing pseudonym, we can generalize it, and that's how we can get
these performance numbers.

Greg Bernstein: But from the point of view of this, the post-quantum
approach sounds really nice, and folks are working on getting this stuff
good. I mean, to attach one of those proofs to it would be good. It's just,
we're not there with standards. Some folks did put in an individual draft;
they didn't have a chance to present on doing some stuff like this. It's
not complete. I can't find any nice open-source libraries yet to try and
prototype what this would take, the true sizes, and running it. But those
are our options. I think they're reasonable options. It's just that
nothing's off the shelf right now besides the n-use one, where we keep n
down around 100.
00:50:00

Greg Bernstein: So that's kind of where we're at. Yeah.

Manu Sporny: All right, thanks Greg. And I mean, that's not a bad place to
be, right? I mean, again, we are trying to solve for a theoretical attack
right now: a cryptographically relevant quantum computer, plus some pretty
nasty collusion behind the scenes, to try and unmask people that are using
the post-quantum, or sorry, the pseudonyms. So again, I think the current
approach with n around 100 seems workable.

Manu Sporny: And it feels like we can certainly write some spec text and
lock down the parameters, and basically tell wallets that they just need to
keep track of the contexts, all the different contexts that have been used
with a particular credential, and once they burn through 99, do not allow
presentations.

Manu Sporny: And even if wallets misimplement that, the security is not
destroyed. It just means that people could correlate you after the fact, if
they have a cryptographically relevant quantum computer and they've got all
those contexts. Yeah…

Greg Bernstein: Yes, they've got all those contexts and…

Greg Bernstein: they're willing to use their cryptographically relevant
quantum computer on them.

Manu Sporny: that's right.

Manu Sporny: Yeah. And in thinking about the rate-limiting thing, I'm
wondering about the use case around generating a new context every single
day and…

Greg Bernstein: Okay.

Manu Sporny: asking you for a new context every single day; wondering if
that makes sense. I mean, once you establish a session with a site, you
might not need to do that every day. I don't know if there are other ways
around that. Go ahead, Dave.

Dave Longley: I expect there are a lot of use cases where you don't need
to, and cases where you do. I'm sure there are cases where people want to
come back the next day and not be tracked as being the previous person on
that site the day before. So I think there are both cases that need to be
considered. And another separate bit of back pressure: it's important that
issuers don't hand out too many new credentials all the time. If they don't
have their own rate limit on a credential like this, then people can set up
these sorts of proxies to cheat again.

Dave Longley: And so, in every one of the dimensions, we kind of have to
thread the appropriate needle to make sure we get people the unlinkability
they want, and verifiers the trust in the system that they're only
interacting with one person at a time.

Manu Sporny: All right, with that, we are at the end of our time today.
Thank you everyone for the great discussions around the post-quantum crypto
suites, and Greg, thank you for the presentation on updates for the
everlasting unlinkability stuff. Okay.

Phillip Long: Just a quick summary.

Manu Sporny: Go ahead, Phil, and then we'll have to shut the call down.
Yeah, isogenies.

Phillip Long: You mentioned the four that you said you included. I missed
one of them. I had ML-DSA, the lattice-based schemes, and Falcon, and there
was one other, if you wouldn't mind…

Manu Sporny: No,…

Phillip Long: but I thought you ruled that out at the end.

Manu Sporny: no, we need it. That's our ideal case.

Manu Sporny: If isogenies work,…

Phillip Long: Okay,…

Manu Sporny: then we'd want to use that over just about anything else as
long as it holds.

Phillip Long: Thanks. That's all I needed.

Manu Sporny: Yeah, and Dave put in a link to sqisign.org, which has details
on that.

Phillip Long: Great. Thanks, Dave.

Manu Sporny: All right, that's our call for this week. Thanks everyone for
the great discussion. As always… we will probably cancel the call next
week; in fact, we'll probably cancel all of our CCG calls next week because
of IIW, and then meet again the following week. All right, that's it. Have
a wonderful weekend and we'll chat again in two weeks. Take care. Bye.
Meeting ended after 00:55:26 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 4 April 2025 22:07:32 UTC