[MINUTES] Data Integrity 2025-03-28

W3C Data Integrity Community Group Meeting Summary - 2025-03-28

*Topics Covered:*

   1. *Hybrid Signature Scheme:* Discussion on a proposed hybrid signature
   scheme combining elliptic curve and post-quantum cryptography. The
   consensus was that existing mechanisms in Data Integrity (set signatures
   and chain signatures) already address the concerns raised by the hybrid
   approach, provided the verifier enforces its policy on which signatures
   are acceptable. Stripping attacks are a known part of the attack model,
   and the verifier's policy controls which signatures are considered
   valid. The issuer cannot dictate verifier policy. Exceptions exist, such
   as the EU digital identity wallet, where issuer policies are enforced,
   but this is considered a less desirable model. The group agreed to focus
   efforts on developing a robust post-quantum suite instead of a hybrid.
   2. *Post-Quantum Cryptography Scheme for VCWG Rechartering:* The group
   discussed the necessity of including at least one post-quantum scheme in
   the next rechartering of the Verifiable Credentials Working Group
   (VCWG). Options include using NIST-approved algorithms (FIPS 204 and
   205) and potentially the PS signature scheme after further evaluation
   and CFRG review. The need for careful consideration of parameters and
   the avoidance of excessive parameterization was emphasized to maintain
   crypto suite stability.
   3. *Everlasting Unlinkability for Pseudonyms:* Concerns were raised
   regarding the lack of everlasting unlinkability for pseudonyms in the
   current system, particularly in the context of verifier collusion and
   the potential for breaking the underlying DDH assumption with a quantum
   computer. Two potential solutions were discussed: 1) using hash
   functions (computationally secure) instead of exponentiation, requiring
   more complex zero-knowledge proofs (ZKPs), and 2) using a vector of nym
   secrets providing 'n' uses before linkability becomes possible.
   Additional research is needed to determine statistically acceptable
   levels of verifier collusion and the impact on performance for large 'n'
   values. The size of proofs (but not issuer signatures) increases with
   'n'. The exploration of hash-based schemes like Poseidon and their
   post-quantum security was suggested.

*Key Points:*

   - The verifier's policy, not the issuer's, ultimately determines which
   signatures are accepted.
   - Existing Data Integrity mechanisms handle the functionality of the
   proposed hybrid scheme.
   - A post-quantum cryptographic scheme is crucial for the next VCWG
   rechartering.
   - Everlasting unlinkability for pseudonyms needs addressing, with two
   potential approaches identified requiring further investigation.
   - The number of colluding verifiers needed to break the system and the
   performance implications of different approaches require further analysis.
   - The focus will shift towards developing a robust post-quantum
   cryptographic suite and addressing everlasting unlinkability.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-03-28.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-03-28.mp4
*Data Integrity - 2025/03/28 09:58 EDT - Transcript*

*Attendees*

Andrea Vesco, Dave Longley, David C, Greg Bernstein, Hiroyuki Sano, John's
Notetaker, Manu Sporny, Manu Sporny's Presentation, Patrick St-Louis, Sam
Schlesinger, Ted Thibodeau Jr
*Transcript*

Greg Bernstein: Hey, Manu.

Sam Schlesinger: Hey, Greg. Hey, morning.

Manu Sporny: Hey Sam, morning. Hey Greg. We're going to hold for about
three minutes and then we'll get started.

Sam Schlesinger: While I have just you two here, I have a question. Have
you guys or…

Manu Sporny: Sam, just real quick, remember that it's being recorded and
transcribed, just as a heads up.

Sam Schlesinger: Yes. Thank you so much. This is actually okay to be
transcribed. I'm curious…

Manu Sporny: Defined privately and…

Sam Schlesinger: if anyone's ever worked on privately verifiable
credentials in and around these working groups like Okay.

Manu Sporny: in general, yes. But you might have a different definition
than us.

Sam Schlesinger: the same entity is issuing and verifying. Answer it.

Manu Sporny: I mean, yes. But I think you also mean in an unlinkable
fashion.

Manu Sporny: That's a little more… not really. We believe the current
mechanism that we have for three-party might work for two-party, like that
or whatever party definition you have, but not at depth, right? There's
just a presumption that we think that problem is addressed with some of
the BBS stuff that we're doing, but at the same time not directly. Did
that answer your question?

Sam Schlesinger: Yeah, thank you so

Manu Sporny: Okay. Yeah.

Manu Sporny: Yeah, I mean if there is a desire from you that we spend some
time on these calls like diving deep or just contemplating some particular
concern or question that you have on it let's definitely do that because
there's quite a bit of knowledge in this community around the core
cryptography itself and then maybe we can help in some way if that's
desired.

Sam Schlesinger: Yeah, I'll definitely let you know. I think for now I was
just sort of curious if there had been designs that had crossed your
desks.

Manu Sporny: Yep, no problem.

Manu Sporny: Yeah, and happy to chat about that in the future if you want
to focus on anything in particular. Let me go ahead and fire up the screen
sharing. I think people are showing up in this call, which is good because
it's a new URL. Let me share my screen here, and let's go ahead and get
started. Welcome to the Data Integrity call for this week; it's March
28th, 2025.

Manu Sporny: On the agenda today, we're going to continue the discussion
around everlasting unlinkability, provide kind of some updates on thoughts
there, plan forward. We do need to decide whether or not we're including
at least one post-quantum scheme in the next rechartering of the VCWG. I'd
suggest that we really should, because if we miss that window, it's going
to be another year to two years before the rechartering happens. So we've
got to put a little more effort into the post-quantum scheme. And then we
are also going to continue our discussion on the hybrid scheme that Andrea
proposed last week.

Manu Sporny: Maybe kind of follow up on things we didn't get to, and that
sort of thing. We gave preliminary feedback last week, but didn't quite
get through the discussion. We'll go ahead and start off with that one,
since we ended the call last week with that, put a bow on that, get the
conversation completed on that, and then we'll move on to the post-quantum
scheme and then finally the everlasting unlinkability. I expect that to
take most of the call. Are there any other updates to the agenda or
anything else that folks would like to cover today? Okay. Hearing no
additions.

Manu Sporny: Are there any community updates or anything relevant to just
data integrity in general that we should know about, that we're not
talking about? With that, then, let's just go ahead and jump into the
agenda. Andrea, I think you had a number of questions that you sent via
email, so let's kind of dive into those. Last week, just as a reminder to
everyone, there was a hybrid scheme that Andrea and his colleagues
proposed.
00:05:00

Manu Sporny: I think the general feedback from the group is that we
believe data integrity has the primitives necessary to do some variation
of a hybrid, like elliptic curve plus post-quantum, mechanism to hybridize
those signatures already. I think the general feedback was we didn't quite
see the need for a hybrid scheme because you can already mix signatures
either through set signatures or through chained signatures in data
integrity. Andrea, maybe we can pick up from that.

Manu Sporny: You had some thoughts and feedback on that concept. if you
don't mind kind of highlighting your thoughts on that and then we can
continue the discussion there.

Andrea Vesco: Yeah, thank you.

Andrea Vesco: No, I shared some ideas and comments after your feedback,
which I found very, very useful. Essentially, if I'm not wrong, I see
essentially two problems when using, for example, a proof chain to
implement a hybrid scheme. The first one, let me recap a bit: imagine that
you have a proof chain with proof one, post-quantum, and proof two,
traditional. In case the cryptographically relevant quantum computer is
ready,

Andrea Vesco: in my opinion the adversary can remove proof one, the
post-quantum, and forge a new single traditional proof on the document.
And the other way around: in case a flaw comes up in the post-quantum
cryptographic assumption, an adversary can remove proof two, the
traditional, and forge a new single post-quantum proof on the document.
And here is where I see possibly a value for the composite JWK, which,
let's say, provides evidence to the prover

Andrea Vesco: that the holder, for example, or the issuer wants to use the
composite post-quantum plus traditional. I'm not sure these cases are,
let's say, wrong, so that's…

Greg Bernstein: Excuse me.

Andrea Vesco: why I'm sharing this with you, just to continue the
discussion. So essentially the idea is that with proof chains I see the
possibility of a stripping attack. That is what we want to avoid with a
post-quantum plus traditional hybrid scheme.

Manu Sporny: Yes. So stripping attacks are a part of the data integrity
attack model, meaning we have a presumption that people are going to try
to do stripping attacks on set signatures and chain signatures. It is true
that an attacker, a prover in this case, can strip the signatures.
However, the verifier is the one that sets up what's acceptable and what
is not. And if the verifier does not see a post-quantum signature and a
pre-quantum signature, either in a set signature or in a chain signature,
they'll just reject it, right?

Manu Sporny: Meaning that the attack's not possible, because the verifier
is the one that determines the type of signatures that they need to see.
Because they require seeing both a post-quantum signature and a
pre-quantum signature, the attack is thwarted. So I think that's the
argument for why we don't need hybrid signatures: we already have a
mechanism to effectively do the equivalent of a hybrid signature, and one
that is not susceptible to stripping attacks. Does that make sense,
Andrea? Do you see any issue with kind of that defense?
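
[Not from the call: a minimal sketch of the verifier-policy enforcement
described above. "ecdsa-rdfc-2019" is a real Data Integrity cryptosuite
name; the post-quantum name is a hypothetical placeholder, and
cryptographic verification of each proof is assumed to happen elsewhere.]

```python
# Verifier-side policy over a Data Integrity proof set: if either
# required signature family is missing, the document is rejected, so a
# stripping attack just yields a document that fails the policy.

REQUIRED_SUITES = {
    "ecdsa-rdfc-2019",   # pre-quantum elliptic-curve suite
    "mldsa-rdfc-2025",   # post-quantum suite (hypothetical name)
}

def passes_policy(document: dict) -> bool:
    proofs = document.get("proof", [])
    if isinstance(proofs, dict):
        proofs = [proofs]  # normalize a lone proof into a set of one
    present = {p.get("cryptosuite") for p in proofs}
    return REQUIRED_SUITES <= present

# A proof set stripped down to the elliptic-curve proof is rejected:
assert not passes_policy({"proof": [{"cryptosuite": "ecdsa-rdfc-2019"}]})
```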
00:10:00

Andrea Vesco: No, that's fine. If you assume that a verifier accepts only
post-quantum and…

Andrea Vesco: traditional signatures together, of course you solve the
problem. This is not solved in case you want to, let's say, enable the
case in…

Manu Sporny: Mhm.

Andrea Vesco: which the verifier can also accept other signatures not
hybrid for example.

Manu Sporny: Mhm. Yeah.

Andrea Vesco: So I understand, you want to force the decision onto the
verifier, and in that case the verifier can only accept the hybrid. In my
view it is the issuer or the holder that knows what the verifier supports
and…

Andrea Vesco: proposes its own decision.

Manu Sporny: Got it.

Manu Sporny: Okay, we have a queue. go ahead, Dave.

Dave Longley: Yeah, I just want to make a slightly stronger statement
about how the three-party model works, which is that it's always the
verifier's policy to decide which proofs are required. And so there are a
lot of these things that come up sort of in the VC space where people
start talking about the issuer making certain assertions this way or that
around how they would prefer their data to be consumed or where they would
prefer it to be used. But in the three-party model, all of those decisions
ultimately are up to the verifier. A verifier could decide to completely
ignore a signature; that's their call. And that can have a number of
different consequences. And an issuer could make a statement: if you
accepted my credential without checking these proofs, then if there's any
sort of legal issue around that, then they can step away from that. But
there's liability.

Dave Longley: But in the three-party model fundamentally it's the
verifier's policy to decide which proofs they'll accept, all of that.

Manu Sporny: All right. Thanks, Dave.

Patrick St-Louis: Yeah, I just wanted to add that this concept that the
verifier doesn't know what they're going to verify, I think, doesn't make
sense when you look at production software. It's good theoretically, and
if the verifier's use case requires a certain level of assurance and they
want to have a post-quantum signature on any kind of credential they
verify, they will definitely need to secure their software appropriately
and make a valid sort of presentation request to prevent these kinds of
things.

Manu Sporny: Andrea, historically I think with a lot of cryptographic
systems, not all of them, but a lot, people think in two-party modes where
the issuer and the verifier are fairly tightly coupled, or the verifier
can insist on a certain policy, or the issuer can insist on a certain
policy on the verifier. With the three-party

Manu Sporny: model, that's just not the case. Meaning we designed these
systems for open-world verification, not for closed-world two-party
verification. So there's a fundamental kind of change in philosophy and
design of these systems, where the issuer cannot make a presumption about
what the verifier is and is not going to accept. It's the verifier's
security posture and policy; that's the thing that drives these
ecosystems. Yeah, I think that's largely where we are. I do think that
there's somewhat of a weaker argument that, yeah, but the issuer really
wants to force a post-quantum hybridized scheme on the verifier.

Manu Sporny: But honestly, if they're doing that, they should just be
insisting on a post-quantum scheme. Meaning, if they are concerned about
post-quantum breaks of cryptography, all the FIPS stuff exists today; they
should be insisting on kind of a post-quantum scheme, not necessarily
without a switchover. And if they want to switch over, they can just do a
set signature. So I don't know if we've really seen use cases, or at
least, specifically talking about our customers.
00:15:00

Manu Sporny: I don't think we've seen any use cases in the state sector
where they are trying to force the verifiers into a particular mode of
operation. They tend to fall into: we're going to allow elliptic curve and
post-quantum beside each other and allow the verifiers to check, and we're
going to keep it that way, because the verifiers are ultimately the ones
that have the security problem on their hands if they start accepting
pre-quantum signatures after a cryptographically relevant quantum computer
is in existence. There will be a market function to kind of force people
to go to post-quantum. David, you're on the queue. Go ahead.

David C: Yeah, I just wanted to point out that the case you're saying that
the verifier is in charge is not actually the case with a European digital
identity wallet because the issuer is more in charge and the issuer can set
policies and the issuer can force the wallet to enforce those policies. So
the European community is going to have a list of trusted verifiers with a
rule that you cannot send your credentials to a verifier that's not
trusted. So it isn't the verifier that's in charge, it's the issuer that's
in charge, because it will only issue credentials to wallets that are
trusted, and the wallets do what they want. So I think it's a slightly
different model they've got. I'm not saying I agree with the model, but
I'm saying that is my understanding of their

Manu Sporny: For higher assurance things, I think that's correct, right,
David? And again, I think there are a number of us that think that's a
catastrophically bad way to set up an ecosystem. I think fundamentally
it's going to end up failing to scale, right? But yes, your point is
taken: there are some ecosystems that are trying to force the ecosystem
into a particular way of operating, though I would argue that that's not
an issuer thing, that's an ecosystem thing, right? It's the European Union
digital wallet ecosystem that's kind of enforcing

Manu Sporny: that behavior. But ultimately still it is the verifier that
configures their software to take a certain thing. It just so happens here
that there's going to be regulation that's going to force the verifier
into a particular state, and it's going to force the issuer into a
particular state. But with that, given that, I think the set signatures
and/or chain signatures work even better in that ecosystem, right?
Because, if data integrity is adopted in the European Union, they're going
to force issuers to issue set signatures and they're going to force
verifiers to accept

Manu Sporny: set signatures without stripping attacks, and there'll be
further regulation in Europe to enforce the protections that don't allow a
stripping attack. So I think even going through kind of the European Union
view of this, you have even stronger protections to protect against a
stripping attack. Andrea, I mean, at this point, I think that the feedback
to me feels pretty clear. I mean, I don't think we need a hybrid scheme.
But that said, the way data integrity is designed, it is something where
innovation can happen in a decentralized way.

Manu Sporny: And so if you feel very strongly after having heard at least
the people on this call's feedback, if you still feel very strongly that
there is a benefit to this data integrity crypto suite, you can continue to
develop it and create a spec and…
00:20:00

Manu Sporny: in the list of available crypto suites and so on so forth.
does that make sense?

Andrea Vesco: Yeah. No,…

Andrea Vesco: Crystal clear. No, no, no. I understand your point. It is
slightly different from, let's say, our starting point, but, let's say, I
cannot… No. Yes, I have to agree with your approach…

Andrea Vesco: because it's totally fine. So you want the verifier to
enforce its own policy, and that's fine. So for sure we need to work,
let's say, only on the post-quantum crypto suite, because this is what you
want essentially.

Manu Sporny: Mhm.

Manu Sporny: I think so. I think it addresses the use cases you're
concerned about and…

Manu Sporny: that some of the rest in this group are concerned about. If we
have a postquantum suite then I believe we achieve the use cases that you
want to achieve and we get a postquantum crypto suite out of it. Okay.

Andrea Vesco: Yeah, it's safe only…

Andrea Vesco: if the verifier wants to, let's say, enforce the policy of
having both the signatures.

Andrea Vesco: If this is not true, of course, there is a possible attack.

Manu Sporny: Yes. Yes,…

Manu Sporny: that's correct.

Andrea Vesco: Yeah. Yeah.

Manu Sporny: And I think we need to document that somewhere. That's a good
point, Andrea. We should document that in probably the post-quantum suite.

Andrea Vesco: Clarify this. Yeah.

Manu Sporny: Yeah, we should put it as a security concern that says: if
you're going to do a hybridized signature, or a set signature with
pre-quantum and post-quantum, it is absolutely the verifier's
responsibility to make sure that they enforce that. If they don't enforce
that, they can have a massive security vulnerability on their hands. That
sounds like a good concrete outcome of the discussion. Patrick, go ahead.

Patrick St-Louis: Sorry, what's a use case for having both of these
signatures? Let's assume we are in the post-quantum era: what's the use
case, if we have this technology available, to have this hybrid signature?

Manu Sporny: Go ahead, Dave.

Dave Longley: So I think the use case is actually for being in the
pre-quantum era, and it's with concern over people not knowing enough
about whether the post-quantum signatures are going to fail. There's lots
of hand-wringing in that space around the whole competition that happened
with the post-quantum things. Lots of things failed in the competition as
things went along.

Dave Longley: People might be worried that whatever has actually made it
through the process might fail. And by having something pre-quantum there
as well, then you get a guarantee, at least while we're still in the
pre-quantum era, that you have some viable signature even if there's some
critical flaw in the post-quantum one.

Patrick St-Louis: What do you mean?

Patrick St-Louis: It fails like

Dave Longley: So I think Rainbow was one of the ones that failed, on a
laptop over a weekend at some point. That one didn't make it; there were a
number of them where people found out there was a critical flaw that was
discovered. That hasn't been true of the lattice ones that have made it
all the way through the process so far. But at any moment somebody could
say these are actually broken, and…

Dave Longley: the existing pre-quantum things that we have, while we're
still in the pre-quantum era, have not been broken for 30 years or…

Andrea Vesco: Okay.

Patrick St-Louis: …

Dave Longley: whatever. Yes.

Patrick St-Louis: It's about the current things we have. They've been sort
of field-tested long enough that we can be relatively safe that there
won't be a new sort of thing that will break them, while post-quantum is
still so new that, for the early period of adoption, you want to have a
sort of safeguard trusted signature.

Dave Longley: That's right.

Patrick St-Louis: I don't

Andrea Vesco: No no I fully agree with Dave.

Andrea Vesco: So my proposal was for covering this period until, let's
say, the PQC stands the test of time. I mean, it's something that, if
useful, is useful now, for a period of time until we have a super strong
PQC algorithm for signatures.
00:25:00

Manu Sporny: Absolutely. There are a few other kinds of, I don't want to
say tinfoil hat, things, but there's a possibility that a nation state
already has an operational, cryptographically relevant quantum computer,
or it's going to appear a number of years before it's publicly known that
it exists. And there's another concern that people want to cover. This is
arguing for: we have a post-quantum signature that actually ends up
working over the next 20 years and is not broken. But the sooner we apply
it… if we, for example, are exchanging a verifiable credential 2 years
before everyone finds out that elliptic curve cryptography was broken.

Manu Sporny: At that point there are many other horrible things that would
end up happening in society, right, if we don't have these dual
signatures. But we would be able to look back at a post-quantum signature
that was applied multiple years before a cryptographically relevant
quantum computer existed, and those would still be secure. So, if we're
looking at a verifiable credential, it would still have been a secure
statement at the time, because it had both an elliptic curve, pre-quantum,
and a post-quantum signature on it. Whereas something that only had a
pre-quantum signature on it couldn't be trusted at all. Right?

Manu Sporny: So there's the argument for: what happens if these new
post-quantum schemes are broken? How do we continue to protect stuff in a
pre-quantum world? So that's "what happens if the post-quantum security
breaks," and then the other side of it is "what happens if a nation state
actually gets access to a cryptographically relevant quantum computer and
it is not announced to the public for multiple years." So it's about
protecting from both kinds of scenarios.

Manu Sporny: So that kind of dovetails into our discussion about whether
or not we're going to do a post-quantum scheme in the next rechartering of
the verifiable credential working group. Before we go there, Andrea, did
we cover everything that you wanted to cover, at least on the proposal
that you had?

Manu Sporny: Was there anything else that you wanted to cover before we
move on? Okay. Okay.

Andrea Vesco: Yeah. No,…

Andrea Vesco: thank you very much. That's it.

Manu Sporny: and thank you very much for the proposal and the write up. it
was great to have that analysis and discussion in the community. So we
really appreciate that. So the next question is are we going to put forward
a postquantum scheme for the next rechartering of the verifiable credential
working group?

Manu Sporny: I hope we will, because the alternative is to not put
anything forward, and that would be bad. I think we've got a spec that is
in good enough shape, though we do need to do some work on the
post-quantum spec. We want to make some decisions about: are we going to
allow selective disclosure? What kinds of schemes are we going to allow in
the post-quantum suite? What's missing? Can we get the algorithms into
shape? All that kind of stuff. Greg, you're on the queue.

Greg Bernstein: I posted a link to the library that we've been using for
EdDSA and the fundamental curves for BBS. They've got FIPS 204 and 205, so
two of the post-quantum signature things available, and they are
signatures. My only concern would be figuring out which of the options in
the FIPS spec we would do, and things like that.
00:30:00

Greg Bernstein: But all these things slot nicely into Data Integrity as
standardized, the way ECDSA and all the other guys work with it. And so we
should be able to get things done with a little bit of work. Do we need
all the different parameters? I mean, each one has two or three parameter
sets for different levels of security and such. But with the crypto suite
approach that we've been doing, the text that we have, I mean, it's pretty
close to dropping in, and same with the test vectors.

Greg Bernstein: We just had an example of that: Will took my test vectors
and applied them to a different signature, as long as he had the
JavaScript library to help him do it, that Schnorr for the other special
curve thing. So that's my opinion, and we should do them.

Manu Sporny: All right. Thanks, Greg.

Dave Longley: Yeah, plus one to that. One of the things I think we should
also say, in whatever spec gets pulled into the group, is that we can pull
in any NIST-approved crypto. I want to make sure that we leave space for
when FIPS 206 comes out for Falcon, so that it can be included. I think
that's important.
Manu Sporny: Plus one to that.

Patrick St-Louis: So I have very limited knowledge about this, but I do
know that the PS signature for AnonCreds v2 had a possible post-quantum
use case. So I was wondering if it seems relevant to the discussion. I
don't know much about how it works. I'll post a link here about it.

Manu Sporny: Great. Thank you, Patrick. Let me go ahead and pull that up.
That didn't work. One of the things that we need to be careful of is
trying to do too much in the post-quantum scheme. So I don't know enough
to have a strong opinion about the PS signature stuff, other than it looks
really promising. The challenge is taking it through the Crypto Forum
Research Group: if there's no FIPS publication for it, and CFRG doesn't
have enough experience with it, or we can't bring enough cryptographers to
CFRG to work on it,

Manu Sporny: then it slows down the entire spec. So if we include the PS
stuff, the post-quantum stuff ends up potentially getting slowed down. We
also have to be careful about how much parameterization we allow in the
crypto suites. We've tried to minimize the amount of parameterization as
much as possible, just for the year the crypto suite is deployed. So just
as a reminder to those of you that don't quite know about all the details
of data integrity: what we try to do is we date-stamp the crypto suites.

Manu Sporny: So when we release a 2025 crypto suite, we release
cryptography that we believe is stable for that year and has a good
parameter set for that year and maybe the next 5 years. But we don't go
beyond that; we don't ratchet up to the complete, going beyond the heat
death of the universe, trying to protect against those kinds of things.
So, for example, for the higher orders of the elliptic curve stuff, we
support up to 384 and that's kind of it for the crypto suites released
during the past couple of years.
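
[Not from the call: a sketch of what date-stamped parameter pinning
amounts to in practice; the 2025 post-quantum entry and its levels are
hypothetical placeholders.]

```python
# Each dated cryptosuite pins the parameters it will ever accept, so
# verifiers never face open-ended parameter negotiation.
PINNED = {
    "ecdsa-rdfc-2019": {"P-256", "P-384"},          # real suite: tops out at 384
    "mldsa-rdfc-2025": {"ML-DSA-44", "ML-DSA-65"},  # hypothetical suite and levels
}

def parameter_allowed(cryptosuite: str, parameter: str) -> bool:
    # Anything outside the pinned set for that dated suite is rejected.
    return parameter in PINNED.get(cryptosuite, set())
```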

Manu Sporny: And even there, there was an argument that we really didn't
need to go beyond 256. So, we need to have the same kind of discussion
over the post-quantum schemes. Go ahead, Greg.

Greg Bernstein: I was just going to say Patrick I very much like that
scheme.

Greg Bernstein: I got in contact with the authors, and Olivier Sanders
responded. He wasn't sure if it was a bit too early, because they were
still coming up with optimizations. I shot back to him that these things
take some time. It's a particular lattice type of scheme, kind of more
similar to Falcon than to ML-DSA. And I thought it would be good to take
to the CFRG.
00:35:00

Greg Bernstein: and hopefully I'll continue the conversation with them,
because it's the kind of thing that I don't think NIST is going to do a
competition on, but, given the support we've had at CFRG for BBS, I think
it would be a good place to take it to get something eventually
standardized there.

Manu Sporny: Plus one to that.

Patrick St-Louis: Yeah, thanks all for this information. I'm just pointing
this out because this is the only sort of post-quantum-related stuff that
is somewhat adjacent to some things I'm working on. Otherwise, I'm just
kind of hearing about it. So, just pointing it out there, but I wouldn't
have an opinion whether it should be part of this or not. I just wanted to
bring it up for discussion.

Patrick St-Louis: Not particularly attached to this in any shape or form.

Manu Sporny: Got it.

Manu Sporny: Thanks, Patrick.

Sam Schlesinger: Yeah, I'm wondering, when we're talking about adding this
to the charter, are we talking about adding schemes that do things like
this, blind signatures and all the other tricks that this has? Or are we
talking about adding this specific scheme to the charter?

Manu Sporny: So let me go back to the specific thing that we can add. So
when we add things to a charter, they need to be very specific. They don't
like charters where we're handwavy: "we're going to work in the area of
post-quantum unlinkable signatures." They don't like that; that would get
shot down almost immediately. We have to say, hey, we have a crypto suite,
and it's a post-quantum crypto suite, and it supports FIPS 20-whatever,
right? And the more specific we are, the higher the likelihood that it
gets accepted into the charter. The PS signature stuff: we could try and
be specific about it and we could try and put it into the charter, but
almost certainly someone with a cryptography background would come out and
say, "Hey, too early. I know about this thing,

Manu Sporny: it hasn't had enough vetting; at least start putting it
through CFRG," and then we can talk about adding it to the charter at W3C.
So PS would probably, I don't know, have to be in CFRG for a year for us
to even consider proposing it for the next Verifiable Credentials Working
Group, whereas the post-quantum schemes, since they have FIPS publications
and all that kind of stuff, would almost certainly be accepted into the
charter; it's concrete enough. The one question I had with the PS stuff,
and I believe this is true: there's a variation of them that's unlinkable,
right? Meaning we think we could potentially add some of

Manu Sporny: maybe some of the pseudonym and blind stuff to it.

Manu Sporny: I'm seeing a thumbs up from… Yeah.

Greg Bernstein: it includes some of the blind and…

Greg Bernstein: unlinkability, and the paper mentioned pseudonyms, that
they could bring that in. So like I said, they know the application space.
So that's why I like that paper, and it's closer, once again, to the thing
based on Falcon. So yeah, you got it. Okay.

Manu Sporny: Yeah. Awesome. Thanks, Greg. Andrea?

Andrea Vesco: Yeah, I also think that it's very early to start discussing
this stuff on this paper. I only want to mention, I put this in the chat:
this is another paper, very, very promising, on lattices; of course a lot
of Falcon assumptions are also there. And what we want to do in the next
months, since we have now an implementation of this paper that is in the
chat, and the code of the paper that you are showing is also available, is
to do a comparison analysis between the two.

Andrea Vesco: Just to say that there are also other possible options, and
since it's very, very early to, let's say, select, I think there can be
value in comparing the approaches.

Manu Sporny: Absolutely. Yeah, that would be great. And the way we
typically approach that is we just have experimental crypto suites that
allow us to do a full implementation and see what it looks like: key and
signature sizes, speeds, compute, memory usage, and of course the safety
characteristics, what kind of unlinkability do we get. Okay, so I think
what we're hearing is that it's probably too early for PS signatures to go
onto any sort of standards track. We want to do a little more
experimentation with them, maybe over the next year. But for the
post-quantum schemes that have FIPS publications, we probably want to
include those.
00:40:00

Manu Sporny: We definitely want to include that in the next charter, which
means that we will want to spend time on these calls over the next 3
months or so trying to get the post-quantum spec into a shape where we
think it can be adopted. We do have one. Let me see if I can't… what was…
I'm totally blanking on the name, one second. It had the name… it's the
Quantum Safe Crypto Suite.

Manu Sporny: So we have this thing, and we've got kind of key formats that
we're talking about, and then we've got the algorithms for at least
ML-DSA. So theoretically, practically, we could suggest that they adopt
this, but we probably want to do a bit more work on it and answer
questions: should we include selective disclosure in here? What other FIPS
specs? What strength levels do we want to include? Things of that nature.
Okay.

Manu Sporny: So, I'll cue that up for the next couple of calls just to get
that in shape. Let's jump to our last topic on the agenda, which is
continuing the discussion on everlasting unlinkability. Last week we
introduced the concern around us not having everlasting unlinkability for
BBS pseudonyms. We proposed some ways that we might be able to achieve
that. Greg, if you don't mind just kind of giving us an update on…

Manu Sporny: where those conversations are and the typical shape of the
solution we're looking for.

Greg Bernstein: No problem.

Greg Bernstein: My turtles are causing some splashing in the background,
so I will have to speak up. So the pseudonyms like we use are based on
what's known as the DDH, the decisional Diffie-Hellman assumption, which
is basically: if you can break discrete logs with a quantum computer, you
can break decisional Diffie-Hellman.

Greg Bernstein: So that means if there's a cryptographically relevant
quantum computer, you can, with verifier collusion, correlate and link the
pseudonyms used at different verifiers. And so we always have to remember
this is verifier collusion, okay? This is not intercepting messages
between the holder and the verifier; you had encryption on that
communication. This is the verifiers revealing the raw pseudonym value
with other verifiers, and they collude to be able to do this.
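
[Not from the call: one way to write down the group-based pseudonym being
described; the notation is illustrative, not quoted from the BBS pseudonym
draft.]

```latex
% Illustrative sketch: the pseudonym shown to verifier V is the
% verifier's context identifier hashed into the group, raised to the
% holder's nym secret s.
\[
  \mathrm{nym}_V \;=\; H_{\mathbb{G}}(\mathrm{ctx}_V)^{\,s}
\]
% Under DDH, pseudonyms at different verifiers look unrelated. A
% discrete-log-capable quantum computer recovers s from any single
% revealed pseudonym, letting colluding verifiers test
\[
  \log_{H_{\mathbb{G}}(\mathrm{ctx}_{V_1})} \mathrm{nym}_{V_1}
  \;\stackrel{?}{=}\;
  \log_{H_{\mathbb{G}}(\mathrm{ctx}_{V_2})} \mathrm{nym}_{V_2}
\]
% and link the two pseudonyms, hence no everlasting unlinkability.
```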

Greg Bernstein: So you need a relevant quantum computer to do this. BBS
proofs, if you don't have pseudonyms, and you come up with a BBS proof and
you go to a verifier coming up with a new proof each time, do not have
this problem. BBS signatures are forgeable if we have a cryptographically
relevant quantum computer. But the proofs are zero-knowledge proofs
created with new randomness each time you create one, and hence have this
wonderful everlasting privacy property.
00:45:00

Greg Bernstein: So, we've given folks using BBS this wonderful property,
and then we're saying, "with pseudonyms, you don't quite get it." And so
that's where the issue comes up. So, we've had some emails and such like
that, and thrown things out with some cryptographers, and gotten some
suggestions on ways to fix that: two approaches. One other little piece of
information: the original pseudonym system paper back in 2000 by Anna
Lysyanskaya proved that you can get pseudonyms if you have the existence
of things called one-way functions.

Greg Bernstein: We're using one of those, which is based on exponentiation
in a group. That's where we have this discrete log problem. You could use
something like a hash function. However, if you use something like SHA-256
as a hash function, or some other hash function, then you have a more
difficult time coming up with a zero-knowledge proof that you know the nym
secret, and that you used that and the context ID to compute the
pseudonym. That's where we get more into some of these general ZKP
techniques.
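
[Not from the call: the hash-based alternative in miniature; SHA-256
stands in for whatever one-way function would actually be chosen.]

```python
# A minimal sketch of a hash-based pseudonym: computationally (not
# information-theoretically) hiding. Computing it is trivial; the hard
# part described above is proving in zero knowledge that you know
# `secret` and combined it with this context ID, which is what pulls
# in the general ZKP machinery mentioned next.
import hashlib

def pseudonym(secret: bytes, context_id: bytes) -> bytes:
    # Length-prefix the secret so distinct (secret, context) pairs
    # cannot collide via concatenation tricks.
    return hashlib.sha256(
        len(secret).to_bytes(4, "big") + secret + context_id
    ).digest()
```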

Greg Bernstein: Ligero and these other STARK-type techniques. So that's
one approach. Now, that's computational security, because that's what you
get with hash functions, right? That's not information-theoretic security.
Based on a quick email with Jonathan Katz: Jonathan came back and said,
"Why don't you use a vector of nym secrets?" And I poked around a little
bit and I go, "if I do this the wrong way, I allow users to have multiple
personalities. If I do this the wrong way, it's no better."

Greg Bernstein: And then I finally hit on one flavor where I use not just
a single item. I've got this written up, and you can get n uses, meaning
you can use it n times and you're guaranteed everlasting privacy. And what
I mean by guaranteed is my cryptographer friends, including Anna, said,
"Yeah, that should work, and I think I know where there's a proof for
that. I'll get back to you once I have some time." And Jonathan's also
been away, too.

Greg Bernstein: But you get n uses before somebody could have enough
information to figure out all those cryptographic secrets, under, once
again, the cryptographically relevant quantum computer that can break
discrete logs. So we have these two different options, ways of doing it:
based on a hash function, but then we have a more complicated ZKP. The
problem with that is that none of those techniques are through any
standards bodies yet. So that's a little harder.
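
[Not from the call: one plausible shape for the vector-of-secrets idea,
written as an assumption rather than as the exact construction in Greg's
write-up.]

```latex
% Assumed form: n nym secrets s_1,...,s_n and n independent group
% elements derived from the verifier's context.
\[
  \mathrm{nym}_V \;=\; \prod_{i=1}^{n} H_i(\mathrm{ctx}_V)^{\,s_i}
\]
% Even after discrete logs fall, each revealed pseudonym yields only
% one linear equation in the n unknown exponents:
\[
  \log_g \mathrm{nym}_V \;=\; \sum_{i=1}^{n} s_i \cdot \log_g H_i(\mathrm{ctx}_V)
\]
% Fewer than n such equations leave the system underdetermined, which
% is the information-theoretic sense in which the holder gets n uses
% before linkage could become possible.
```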

Greg Bernstein: We could add n as a reasonable number like 10 or 20; that
makes no difference, because that's exactly like what we do with
verifiable credentials with BBS. We produce a bunch of statements and
things like that. The scaling grows with the number of non-revealed
things. If we wanted n much bigger, like a thousand, 4,000, or things like
that, then there's a technique called Bulletproofs that can be used to
scale things logarithmically. So we've got these options. We also have to
evaluate how much computational time it takes.
much computational time it is.
00:50:00

Greg Bernstein: One thing I want to be clear about is that this stuff does
not make the signature from the issuer longer. It's the prover: when they
commit to their nym secret, they send something to the issuer. The proof
of what they send to the issuer gets longer. And the proof that they would
send the verifier about the pseudonym gets longer, but the signature that
goes from the issuer to the holder does not get longer. So that's still a
very nice property. So it doesn't impact the issuer as much.

Greg Bernstein: Manu, other folks?

Greg Bernstein: That was my briefing. I have a write-up; I've sent it
around a bit. If folks want to see it, just ask; I put it in a PDF.

Manu Sporny: Okay, awesome.

Manu Sporny: Okay, awesome. Thank you, Greg, for the update on all of
that. So there are a couple of things we could put some thinking into
during the next call. I totally understand that the signature sent from
the issuer to the holder, the prover, remains the same size, and that's
good. But I think the one that we really care about is the size of the
proof that goes from the prover to the verifier. Meaning, if that thing's
gigantic, which is what we're saying it could potentially be if we wanted
thousands to tens of thousands to hundreds of thousands of uses,

Manu Sporny: then all of a sudden that proof gets big, and then we can't
do certain use cases, like NFC tap to transmit the signature, things like
that. It's not a total showstopper, certainly not, but it's one of those
things, that compute, that we need to think about. The other thing that I
don't think we have a good handle on yet is: statistically, what is a safe
level of verifier collusion? Like, we're saying you do a thousand
presentations before verifiers could collude, or 20 or so, so we're
looking at a set size of colluding verifiers.

Manu Sporny: What do we think a safe value is there? I mean, it's
certainly not zero, and it's not 100%; it's somewhere in the middle,
somewhere in that range. And so we probably need to think about how many
dishonest colluding verifiers we think there could exist in an ecosystem.
Could we provide any guidance to people there? Ideally we don't have to
provide any guidance at all, because this is a purely randomness-based
mechanism where you don't have to track what presentations you make to
which verifiers.

Manu Sporny: But there's maybe a number there that we need to figure out,
and it may be way lower than what we think. Maybe a thousand presentations
to colluding verifiers is a statistically acceptable level, and we don't
need to reach for 10,000 or 100,000. Or we have a discussion where, if we
have no other choice but to make this a counter-based, state-tracking-
based mechanism, then we have to have 100,000; that's the floor of what
we're comfortable with. So, I think we do need to have that discussion
over the next couple of weeks. Sam, go ahead.

Sam Schlesinger: Yeah, I think my personal perspective on how many
colluding verifiers we should sort of be thinking about in our threat
model is: I think it's sort of unbounded, in the sense that relying
parties can just pop into existence and convince a user to sign up for
their site with a pseudonym, and I would guess that they can sort of
fabricate infinite reasons, roughly speaking, to have a user sign up. I
mean, maybe it won't work for every user, but yeah, I'm definitely
concerned about the idea of just saying, okay, we'll just accept that if a
certain number of verifiers collude, then we'll be having problems, but
that won't happen for a lot of users. At least, that's a very difficult
sort of threat model to describe, given the way the ecosystem of data
brokers works as it currently stands.

Sam Schlesinger: On another note, I feel like the information-theoretic
security here, like the sort of statistical blinding property that Greg is
pursuing, is very admirable. It makes a lot of sense, especially given the
novelty of post-quantum schemes and everything, and how it's not exactly
one-to-one with the security we'd get from one of those; it's even better
in a sense. But I do have a lot of interest in benchmarking hash-based
schemes for computational blinding. So there was a discussion in the email
thread about SHA-256. But there are also things like Poseidon.
00:55:00

Sam Schlesinger: I just have a question: is Poseidon secure in the context
of a quantum computer? I don't know myself.

Greg Bernstein: Yeah, I'm not sure either.

Greg Bernstein: These are all the questions, too, because there are these
more ZKP-friendly hashes, but are they good post-quantum? These are the
same issues. And some of these folks are coming out with some
great-sounding ZKPs for things like hashes, but it's like they don't
reveal enough, and I haven't gotten my hands on them to see, just for this
use. But yeah, we definitely want to benchmark.

Sam Schlesinger: Yeah, that would be my thought, especially if what we're
doing is moving to this much heavier credential and then using
Bulletproofs. My thought is: why not just move to benchmarking both things
simultaneously, this much heavier credential with Bulletproofs
potentially, and also the computation of a hash function, with some
benchmarks, and see where it sort of lands? Because it's very nice to be
able to say you get this one secret and you can use it to generate
infinite credentials. Once you have more than one secret, you're sort of
putting a little bit of a burden on password managers and other entities,
and I think all that state has to be managed by both password managers as
well as issuers. It starts to look a little bit grim from an initial
deployment perspective. I understand, if this ecosystem were very valuable
and we wanted to make it better, maybe this would be a nice thing to
change. But getting people in the door comes first for me.

Manu Sporny: Yeah, plus one to all of that. It's never good when you say
that your cryptographic mechanism needs to track state; it's almost a deal
killer. We're out of time for today. Thank you everyone for the wonderful
discussion. As always, we will pick up this discussion next week, along
with probably doing a pass through the post-quantum scheme to raise issues
on things we definitely want to address before we hand the post-quantum
scheme over to the verifiable credential working group. With that, have a
wonderful rest of your day, have a wonderful weekend, and we will chat
again next week. Thanks all. Bye.

Andrea Vesco: Thank you. Bye.

Sam Schlesinger: Thanks.
Meeting ended after 00:58:15 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 28 March 2025 22:04:08 UTC