[MINUTES] CCG Atlantic Weekly 2025-10-28

Meeting Summary: CCG Atlantic Weekly - 2025/10/28

*Attendees:* Alex Higuera, Benjamin Young, Dave Longley, Dmitri Zagidulin,
Erica Connell, Greg Bernstein, Gregory Natran, Harrison Tang, Hiroyuki
Sano, James Chartrand, JeffO - HumanOS, Jennie Meier, Joe Andrieu, Kaliya
Identity Woman, Kayode Ezike, Lucy (Qixue) Yang, Manu Sporny, Phillip Long,
Rob Padula, Ted Thibodeau Jr, Vanessa Xu, wendy seltzer, Will Abramson

*Summary:*

This CCG meeting focused on privacy considerations related to Verifiable
Credentials (VCs), particularly emphasizing "everlasting privacy" in the
context of potential quantum computing threats. Greg Bernstein presented a
detailed overview of the topic.

*Topics Covered:*

   - *Privacy Requirements in VCs:* Data minimization, tracking and
   linkages, and avoiding oversharing of data.
   - *Tracking and Anonymity:* Discussed how unique identifiers,
   cryptographic artifacts, and even issuance details can reduce anonymity and
   potentially track users.
   - *Mitigating Attacks on Credentials:* Addressing forgery prevention,
   replay attacks, and credential theft, with a focus on how different
   mechanisms affect privacy.
   - *BBS Signatures and Pseudonyms:* The role of BBS signatures (named for
   Boneh, Boyen, and Shacham) in providing unlinkable proofs and replay
   protection, and how pseudonyms enhance privacy but require careful handling.
   - *Cryptographic Strength and Quantum Computing:* The importance of
   computational security, forward secrecy, and how BBS and its components
   (commitments, pseudonyms) provide "everlasting privacy" even against
   quantum computer attacks.
   - *Data Integrity and Multiple Crypto Suites:* Demonstrated how data
   integrity allows for multiple cryptographic suites within a credential,
   enabling both traditional (e.g., ECDSA) and privacy-preserving (BBS)
   signatures.

*Key Points:*

   - *Data Minimization:* Keep the least amount of data in a VC and
   consider issuing multiple smaller VCs over separate data.
   - *Cryptographic Artifacts as Trackers:* Cryptographic artifacts can act
   as tracking cookies.
   - *BBS for Privacy:* BBS proofs are unlinkable, providing strong
   privacy. The presentation header is for replay protection.
   - *Anonymous Holder Binding:* Prevents credential theft.
   - *Pseudonyms for Credential Abuse Mitigation:* Pseudonyms in BBS offer
   a way to assert identity and reduce credential abuse.
   - *Everlasting Privacy:* BBS signatures offer everlasting privacy
   against cryptographic attacks from both classical and quantum computers.
   - *Pseudonyms Constraints:* For perfect privacy, limit the number of
   different contexts used for pseudonyms.
   - *Data Integrity Support:* Data Integrity allows use of multiple crypto
   suites for issuers to use the most appropriate signature schemes, including
   BBS.

Text:
https://meet.w3c-ccg.org/archives/w3c-ccg-ccg-atlantic-weekly-2025-10-28.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-ccg-atlantic-weekly-2025-10-28.mp4
*CCG Atlantic Weekly - 2025/10/28 11:58 EDT - Transcript*

*Attendees*

Alex Higuera, Benjamin Young, Dave Longley, Dmitri Zagidulin, Erica
Connell, Greg Bernstein, Gregory Natran, Harrison Tang, Hiroyuki Sano,
James Chartrand, JeffO - HumanOS, Jennie Meier, Joe Andrieu, Kaliya
Identity Woman, Kayode Ezike, Lucy (Qixue) Yang, Manu Sporny, Phillip Long,
Rob Padula, Ted Thibodeau Jr, Vanessa Xu, wendy seltzer, Will Abramson
*Transcript*

Harrison Tang: Hey, Greg. All right.

Greg Bernstein: Hey Harrison, how's it going?

Harrison Tang: Busy day because I'm actually heading to a conference in a
moment. A local one.

Greg Bernstein: Local or do you have to get on a plane?

Harrison Tang: It happens to be in Anaheim about an hour and a half away.
So yeah,…

Greg Bernstein: Wait, you're in LA area?

Harrison Tang: I'm in LA. Yeah. Mhm.

Greg Bernstein: One of my kids is in LA,…

Greg Bernstein: West LA. She works in Santa Monica.

Harrison Tang: Yeah.

Harrison Tang: Yeah. The funny thing is that it's not that far away from
here, but it takes longer,…

Greg Bernstein: Yeah, it's true.

Harrison Tang: right? Yeah.

Greg Bernstein: The commutes in the Bay Area, at the wrong time, can be
quite elaborate. Okay, let me put the link

Greg Bernstein: to the talk slides. Let me get that in the chat. So, as
usual, they're all ready to go on my website. So, when we get close to
time, I can repost.

Harrison Tang: Great. Love it.

Harrison Tang: Did you go to IIW last week?

Greg Bernstein: No, I did not.

Greg Bernstein: I haven't been doing as many conference things. So I know
it's close, but

Harrison Tang: Yeah, I didn't get to go this time either. So, yeah, I will
start in about a

Harrison Tang: Hey, …

Will Abramson: Hey, how you doing?

Harrison Tang: pretty good. Yeah, I gotta head to a conference after this
call, but pretty busy. All right. So, welcome everyone to this week's W3C
CCG meeting. Today we're very excited to have Greg back to actually lead a
discussion on privacy considerations on VCs, anonymity, and everlasting
unlinkability. So before we start, just want to go through the
administrative stuff. So first of all, just a quick reminder on the code of
ethics and professional conduct. I just want to make sure we hold the
constructive and respectful conversations that we always have.

Harrison Tang: Second, I just want to do a quick note on the intellectual
property. Anyone can participate in these calls. However, all substantive
contributions to any CCG work items must be made by members of the CCG
with full IPR agreement, so if you have any questions in regards to
signing up for the W3C account or the community contributor license
agreement, please feel free to ping me or reach out to any of the
co-chairs. Now, these calls are automatically recorded and transcribed,
and the system will send out the transcriptions, audio recording, and
video recording in the next few hours. All right, just want to take a
quick moment for introductions and reintroductions. So if you're new to
the community, please feel free to just unmute.

Harrison Tang: Any announcements or reminders? Anyone want to share great
presentations or takeaways that they learned from IIW last week?

Phillip Long: Hey Harrison, this is Phil. Just for the things at IIW, I
wanted to note that I thought the SEDI presentations, state-endorsed
digital identity, aka bring your own ID, were really interesting, and
their demonstration of delegation, a credential that actually allowed a
parent to delegate privileges and restrictions to a child's use of
different services, was
00:05:00

Phillip Long: Another one that was really very well done.

Harrison Tang: Great.

Phillip Long: Wayne Chang named that one SEDI, state-endorsed digital
identity.

Harrison Tang: Cool. Wait, how do you spell that? Got it. Okay.

Phillip Long: This is a Utah project. State of Utah.

Harrison Tang: All right. How do I reach out?

Phillip Long: You should reach out to Wayne for a demo to this group of
that delegation thing. It's pretty impressive. Also, …

Harrison Tang: I'm making a note right now. I'll reach out to Wayne. Yeah.

Phillip Long: one last thing: MyTerms, which is a project Doc Searls has
been working on since 2004, I think, is an equivalent to terms-of-use
criteria for your data. In this case, however, it's you as a customer to a
business that has a website. And it basically is a negotiated process
between you and that website around a contract for how your data will be
used. And they have gotten the IEEE to endorse the proposed process
they're taking. So they now have an IEEE spec associated with it and

Erica Connell: Oops.

Phillip Long: they're now working on a process very much like Creative
Commons…

Phillip Long: which has a statement about how the creative work's IP is to
be used, similarly applied to the contracts they would have covering how
your data is treated when you're engaging with an online service.

Harrison Tang: Yep.

Harrison Tang: We had Ian Henderson here two weeks ago, on October 7th, to
talk about MyTerms.

Phillip Long: He talked about Okay,…

Harrison Tang: It's very interesting. Yep. Cool.

Phillip Long: Joyce Searls did a really nice job there at IIW.

Harrison Tang: Great. …

Phillip Long: Thank you.

Harrison Tang: thanks for the share, Phil. Anyone else want to share great
talks or takeaways from IIW? I wasn't able to go, but I'm assuming there's
a lot of agentic identity talks, right?

Harrison Tang: That would be my bet, but I'm not so sure. Okay, they'll
say yes. Yeah, everyone is talking about agents.

Phillip Long: There was a whole extra day as a separate meeting following
the unconference structure for agentic communications.

Harrison Tang: Got it. Yep. I Saw that email. Thanks. All right. Anyone
else wants to share their insights or takeaways? yeah, please.

Kaliya Identity Woman: We had a Zoom presentation from James Felton Keith,
who's running for Congress in New York's 13th district, on a data as labor
platform. Folks quite enjoyed him and it was a good discussion. You can
learn more about his run for Congress at his website jamespfelton.com or
votejfk.org.

Kaliya Identity Woman: And yeah, he's unique in that, I think he's before
his time. He actually ran in the 2020 congressional race, but the pandemic
happened and, yeah, it didn't work out. But he was running on this
platform at that time, too.

Harrison Tang: Thanks, Kaliya. Anyone else? By the way, Phil, is there a
good talk on agentic AI identity that you think we can invite to actually
present here at the W3C CCG?

Phillip Long: I will go through the list, though Kaliya probably would be
better even than I in terms of potentially identifying a candidate.

Phillip Long: But go ahead, Kaliya.

Kaliya Identity Woman: Yeah, I think one thought is Andor has an overview
presentation that's 60 or…

Kaliya Identity Woman: 100 slides. It's crazy. He's posted the slide deck
on LinkedIn, but I think it's like a good overview, which is a starting
point to dive into more specifics, and I think the specifics are still
emerging.
00:10:00

Kaliya Identity Woman: But yeah, I'm sure there are two or three other
folks to invite Bourbon S maybe. but we're still kind of processing what we
learned from that agentic internet workshop on Friday.

Harrison Tang: Got it.

Harrison Tang: You mean Andor was presenting on the MIT NANDA project? Is
that what it is, or different? Yeah. Got it.

Kaliya Identity Woman: There was maybe one person from NANDA there. There
was not a NANDA event.

Harrison Tang: All right, I'll bother you guys later. Anything else on the
announcements, reminders, IIW takeaways?

Harrison Tang: All right. Any questions or updates on the work items? Last
calls for the introductions, announcements, reminders, and work items.

Harrison Tang: All right, let's jump right into it. Greg, …

Kaliya Identity Woman: Sorry.

Kaliya Identity Woman: I will announce that the Africa regional event for
IIW is happening.

Harrison Tang: sorry. Yeah.

Kaliya Identity Woman: I want to say it's the third week of February,
maybe. It's an unconference, uncomf.africa. I'll put the URL in, but if
you want to go to Cape Town and participate, the regional community
gathering there is happening.
Harrison Tang: Yeah, Cape Town sounds pretty sweet. All right, last call
for announcements and reminders. All right. So let's get to the main
agenda. Great.

Harrison Tang: Very happy to have you back, and as always, you're very
prepared with the presentation, and the link is in the chat. But yeah,
please take it away. Yes.

Greg Bernstein: Can you see my screen?

Greg Bernstein: Okay, let's see if we can make it full screen. These
slides are made as usual from markdown using reveal.js, so there are lots
of nice features as far as being able to skip around as you want, if
you're reviewing these later, because I will not have time to go through
all the slides. Okay.

Greg Bernstein: So the theme here is everlasting privacy, because this was
one of the last pieces of the puzzle: making sure we had everlasting
privacy, which means, what happens when there's a cryptographically
relevant quantum computer to the stuff we've been doing? But I wanted to
review what we've learned about putting together a privacy enhancing
signature suite, plus the stuff that you need to go around with it. And
we're going to really hit the everlasting thing, too. So, who am I? I've
got my website, Grotto Networking.
website, Grotto Networking.

Greg Bernstein: I have been helping out on the crypto suites, the data
integrity work, and been doing some stuff with the IETF on
standardization. Previous slide decks have more background; I don't have
time to do all the details on advanced features. Okay, I'm not going to
redo some of those things as far as showing the proofs and things like
that. So, they're there. Everlasting privacy is down the road. Mitigating
attacks on the privacy of credentials: what do we have to look out for?

Greg Bernstein: Mitigating the attacks on the security of credentials, and
extra attacks that come up when you have enhanced privacy. What happens
when we have the quantum computer, a cryptographically relevant one? And
how can we get these things out early? And we've got the nice mechanisms
in data integrity. So, privacy requirements: data minimization. Okay? I
mean, I know in the old days we used to go, "I got nothing to hide."
Nowadays, we really have to say it's none of your business. They don't
need to know that, because everybody wants your data, because they can
sell it. It's something they can use and things like that. So, what can
you do? Put the least amount of data into a VC.
00:15:00

Greg Bernstein: You could even issue multiple smaller VCs over separate
data. We talked about this when we did the data integrity implementation
talk. I was going, I've got this club credential: the club does sailing,
wind surfing, blah. Why can't they do smaller credentials? It's not going
to take up room in your physical wallet; it's a digital thing. Or even
nicer is to let the user, the holder, decide what to disclose via
selective disclosure mechanisms, of which we've got one standardized and
one almost there: BBS is just about there. Okay.

Greg Bernstein: So we've got data minimization. Then we have tracking,
linkages, things like that. Every breath you take, every move you make,
every leaf you rake, every cake you bake, we're watching you. Basically,
there's a whole lot of data sharing going on. And I know they're watching
me every time I bake a cake, because they go to the same sites to get the
recipes. Some governments are better than others about helping you
understand this. California has got a privacy protection agency and has
released two things about helping you understand mobile app tracking,
which is pervasive, and website tracking. People like the EFF have got
good information about Cover Your Tracks and things like that.

Greg Bernstein: Not all tracking is bad. If you're outside the Golden Gate
and there's a tide that's flowing out away from the city and the wind goes
down, you might have trouble getting back. So, it might be good to let
people know where you are. So, that's tracking for safety. What about
tracking for security? One of the first things the various attack courses
teach you about is lateral movement, right? As soon as the adversary can
get into your environment, they start moving around to see what's there.

Greg Bernstein: People say that the adversaries have a better network
inventory of all your systems than you will once they get in. So if
you've got something like OpenID, where you have a user, the relying
party, which means the resource, the server, whatever, that the user wants
to use, and the identity provider, the identity provider has a history of
all your logins to all those sites, and hence can detect unusual patterns
characterizing lateral movement. So that's very good from a security point
of view. Except you've got to keep those records a long time.

Greg Bernstein: I was listening to a security podcast, and they were
talking about dwell times of a year, of how slowly some of the adversaries
move around to avoid being detected. Okay? So, we're not talking about
smash and grab, easy to detect. We're talking about slow things. And so,
people have to keep those records, audit logs, all the places visited, for
quite a long time. However, using a variant of this single sign-on, OpenID
or something like that, on the public internet means handing over your
entire site visit history to the identity provider, which means giving up
your privacy. And so I don't use those things out there.

Greg Bernstein: And when I was teaching cyber security, I discouraged my
students too, because it's a privacy thing. What happens with verifiable
credentials? Credentials are important, and we take our credential as the
holder and show it to different verifiers. Those verifiers, they could
collude, meaning in a nice way; people say, "We're just going to share
data for your benefit." Right? So, the verifiers collude and share data.
Another way of doing it: the verifiers could share data back with the
issuer, right? The verifiers and the issuer could all share information
with a third party.
00:20:00

Greg Bernstein: All to see where you've gone and what you've been doing.
And it's our job, in the appropriate setting, in a privacy enhancing
setting, to prevent this. Okay, this is what we don't want in all these
different situations. A verifiable credential can contain data that could
track us. And in the privacy considerations section of the BBS draft, we
have a pretty good writeup about all this stuff in more depth. If you put
any kind of unique identifier in a VC, that can track you. Okay.

Greg Bernstein: And even if it's not unique, it can reduce what
technically people call the anonymity set. How different is your data from
everybody else's? And so how much can we tell it might be you versus
another set of people? Okay, this is what keeps upsetting people. What
about anonymization of data sets? The big data people keep getting better
and better at figuring out, we can correlate this and that. The thing that
we've got to be concerned with from the cryptographic point of view,
because people might not know, is some cryptographic artifacts act just
like tracking cookies. Okay, I'm using tracking cookies just the same as
unique identifiers here.
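The anonymity-set point can be made concrete with a toy count over invented population data: every extra attribute you disclose can only shrink the set of people who look like you.

```python
# Toy anonymity-set calculation over invented data: the anonymity set is
# everyone whose disclosed attributes match yours exactly.

population = [
    {"state": "CA", "born": 1990},
    {"state": "CA", "born": 1990},
    {"state": "CA", "born": 1991},
    {"state": "NY", "born": 1990},
    {"state": "CA", "born": 1990},
    {"state": "NY", "born": 1991},
]

def anonymity_set(disclosed, people):
    """People indistinguishable from you given only the disclosed fields."""
    return [p for p in people if all(p[k] == v for k, v in disclosed.items())]

coarse = anonymity_set({"state": "CA"}, population)              # 4 people
fine = anonymity_set({"state": "CA", "born": 1990}, population)  # 3 people
# Disclose enough non-unique attributes and the set shrinks toward 1.
```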

Greg Bernstein: But if I say a cryptographic artifact can act as a unique
ID, it doesn't get people concerned. But if you say it's a tracking
cookie, okay. Some folks shared that other things people could do is try
and fingerprint VCs, analogous to the device fingerprinting with cell
phones, browser fingerprinting, and so on. But our job with the crypto
suite is to make sure there's nothing hidden, that people understand, if
we have a cryptographic artifact, what its properties are and whether it
can be used or not as a tracking cookie. Okay.

Greg Bernstein: The people issuing the VC, they've got to keep the unique
identifiers out if it's a privacy critical situation, or you've got to use
selective disclosure so that they can be removed. And this reduction of
the anonymity set means even issuance time: if you choose to put down when
the credential was issued, or the expiration date, and you tend to get too
detailed on that, that can reduce the number of matching people, so it can
help identify people. Okay. So what can happen here?

Greg Bernstein: With digital signatures, and I'm sure Harrison and others
will alert me if there are any hands raised, there are a couple of things
that can act as tracking cookies. If the holder somehow has to reveal a
public key, and we'll see why that might happen: public key data is
unique. If it wasn't, that would be a problem. So, public key data is just
essentially unique. That's as good as a social security number. EdDSA,
ECDSA, and post-quantum signatures, however not BBS proofs, are all based
on algorithms that are very sensitive to input data and hence will amplify
the uniqueness of any input data.
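A hash can stand in for the input sensitivity just described: a deterministic algorithm amplifies any unique bit of input into a stable, globally unique output, which is exactly what makes the artifact usable as a tracking cookie. SHA-256 below only illustrates the property; it is not a signature scheme.

```python
import hashlib

# The artifact is stable for identical input (trackable across verifiers)
# yet completely different for nearly identical input: the uniqueness of
# any input data is amplified into the output.

def artifact(credential_bytes):
    return hashlib.sha256(credential_bytes).hexdigest()

a = artifact(b"holder-id:12345 over18:true")
b = artifact(b"holder-id:12346 over18:true")  # a single digit differs

same_again = artifact(b"holder-id:12345 over18:true")
```

BBS proofs avoid this because each proof is freshly randomized, so two proofs over the same credential do not match.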

Greg Bernstein: So the signatures themselves, okay, not BBS proofs, we'll
get to that, can act as a unique identifier, as a tracking cookie. Okay,
so that's a worry, and we have to take care of that; we have to make sure
that doesn't happen. Security requirements. Okay, so there are signatures.
You've got to prevent forgery, and we've got to worry about some common
things, and we'll get into those: replay, credential theft, and abuse. So
what do we mean by forgery prevention? We've got some technical
cryptographic ways of talking about being able to create a forgery.
00:25:00

Greg Bernstein: And we do this under a pretty tough criterion. We say that
the would-be forger can get signatures on any messages that it would like,
up to n times. And a good cryptographic signature means that, seeing n
signatures on messages of their choice, the adversaries can't create a new
one except with a very, very small probability. And sometimes people ask
why we have to throw in small probability rather than zero probability.

Greg Bernstein: I said we can't have zero probability. Somebody could just
have a one in a bazillion chance of guessing a signature. You can't say
it's zero probability. Okay, we call these existential forgeries. Okay,
it's the job of a cryptographic signature scheme to prevent this. EdDSA
and BBS satisfy both EUF and strong EUF. EdDSA has binding signatures and
strongly binding signatures too. ECDSA does not. So, you present your
credential. Why can't somebody else just take that and present it again?
If it's done electronically and there's nobody seeing you, that's known as
a replay attack. Okay?

Greg Bernstein: You just get that credential somehow and just replay it.
You don't do anything else. Okay, there's a good website that talks all
about replay attacks. The issue is some mitigation approaches require
unique identifiers or public keys, and we'll see an example of that coming
up. So, whatever you do to prevent replay, if you're trying to be privacy
preserving, not preventing, preserving, you've got to make sure that your
mitigation strategy doesn't use any unique identifiers or public keys.

Greg Bernstein: So when I was first learning about BBS, I go, what is this
presentation header that goes between a holder and a verifier? This is a
mechanism for guaranteeing the integrity of additional data between a
holder and a verifier, and one of the main uses of this is replay
protection. Because we're not going from the issuer to the holder; we're
going from the holder to the verifier. And it works like this: in BBS, the
issuer signs a verifiable credential. It goes to the holder.

Greg Bernstein: The verifier is going to send some kind of test message, a
piece of data, a timestamp, whatever, that the holder will incorporate
into this presentation header. The holder then generates a BBS proof from
that information and sends that back, along with the presentation header.
That's public information. Okay, BBS proofs are completely unlinkable.
They are completely privacy preserving. Okay, the proof part; if the
holder chooses to disclose information that's uniquely identifying, then
that's gone.

Greg Bernstein: But the cryptographic artifacts sent between the holder
and the verifier are completely unlinkable. If the holder generates
another one for another verifier, okay, it's going to be completely
unlinkable. So, the replay protection provided by this thing called the
BBS presentation header, which is only shared between holder and verifier,
gets us that replay protection. Another mechanism that we have to worry
about, and that we've heard a lot about with hardware bindings, is
mitigating stolen credentials. And the same method can also be used for
replay protection. This is where a holder has a public key for creating
digital signatures.
00:30:00
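The presentation-header replay protection can be sketched as a single-use nonce challenge. The hash-based "proof" below is only a stand-in for a real BBS proof bound to the header; the point is the flow, in which a nonce-free unique identifier never appears.

```python
import secrets
import hashlib

# Sketch of nonce-based replay protection in the spirit of the BBS
# presentation header. The "proof" is a plain hash stand-in, not a real
# BBS proof; only the single-use challenge flow is illustrated.

class Verifier:
    def __init__(self):
        self.outstanding = set()

    def challenge(self):
        nonce = secrets.token_hex(16)    # fresh test data per presentation
        self.outstanding.add(nonce)
        return nonce

    def check(self, header, proof):
        if header not in self.outstanding:
            return False                 # unknown nonce, or a replay
        self.outstanding.remove(header)  # each challenge is single use
        return proof == hashlib.sha256(header.encode()).hexdigest()

def holder_prove(header):
    # A real holder would generate a BBS proof incorporating this header.
    return hashlib.sha256(header.encode()).hexdigest()

v = Verifier()
nonce = v.challenge()
proof = holder_prove(nonce)
accepted_first = v.check(nonce, proof)   # fresh presentation is accepted
accepted_again = v.check(nonce, proof)   # replaying the same proof fails
```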

Greg Bernstein: The public key is sent to the issuer for inclusion in the
credential that's sent to the holder. So the holder has a public key sent
to the issuer. The issuer signs the credential along with public key
information from the holder. So they have a credential that contains all
that stuff. The verifier can send a test message, and this can also act as
anti-replay, but it's also to prove that the holder really is the holder
and knows the holder's secret key. Then the secured VC is sent along with
a signed test message. But Greg, the holder public key, a unique
identifier, is now included with the VC.

Greg Bernstein: Yeah, this is how key binding works in certain mechanisms
like SD-JWT. Okay, it binds and prevents credential theft, but you lose
your tracking privacy. BBS doesn't do it this way, and this is important
for our story coming up about what happens with a quantum computer. Okay,
BBS does something that sounds similar, but we do something called
anonymous holder binding. The holder generates and securely stores a
secret, and sends a commitment to it. A commitment is not the same as
sending the secret. Commitments have to be hiding.

Greg Bernstein: The issuer can never figure out the secret from seeing the
commitment. The holder, though, can furnish a proof that the commitment
actually is associated with that secret. Commitments also are strongly
binding, meaning you can't change the secret after the commitment's been
made and have the information still verify. The issuer does a signing over
that commitment. We call this blind signing, because they never actually
see the secret; they only see the commitment. The holder generates a
proof, and they can only generate that proof because they knew the secret.

Greg Bernstein: If they don't know the secret, if the holder doesn't have
all the information that's in the credential plus their secret, they can't
generate these proofs. Then they generate the proof and send it to the
verifier. The verifier can be assured now that this came from the holder.
The holder used anonymous holder binding, and these proofs are once again
unlinkable. But if everything's really anonymous, what about this
situation for credential abuse?

Greg Bernstein: Prior to this, we've been saying the issuer might collude
and take information from verifiers. Verifiers might work with each other.
Verifiers might work with third parties. What if we have an evil holder?
Didn't you say that these proofs from BBS are unlinkable? What if we have
a credential abuser out there that says, "Hey holder, can you give me
access? I'll make it worth your while." Imagine doing that with some kind
of scary sounding accent. The holder, even though it's got that nice
mechanism we just talked about for preventing credential theft, anonymous
holder binding, says, "Okay, I'll do that for you," and generates this
proof with the secret.

Greg Bernstein: Goes, "Here you go, buddy." This credential abuser can now
visit the verifier. And this process can be repeated as many times as
there are credential abusers talking to the holder. They can turn this
into a business. This is the problem with purely anonymous credentials. We
need some way to identify the holder, but not too much. So, here's the
rub: the credential is strong, we've got good privacy, it's strongly bound
to the holder, but we've got a complicit evil holder who is allowing or
profiting from the use of its credential.
00:35:00

Greg Bernstein: This is sometimes called a civil attack. Although when I
went and saw that at Wikipedia, it was more subverting reputation systems,
but the flavor is fairly similar. You, allowing multiple people to use your
stuff or kind of. So, we used to call that sometimes a civil attack. How do
we mitigate this? the original version of U BBS pseudonyms and I think the
title still is known as perverifier linkability. Okay, because this was the
original use besides allowing somebody to assert a synonymous identity
which is one of the reasons for pseudonym but this cryptographic pseudonym
is computed along with the credential.

Greg Bernstein: So this is a new piece of data, okay, that goes with the
credential. We include a proof that the pseudonym was properly computed
based on the verifier context. That means we're going to make this unique
for a verifier. And technically we need an ordered, fixed-length set of
holder secrets. Okay, for privacy, these pseudonyms across different
verifiers must be unlinkable, and they are. Okay, the complicated looking
code that we have in the standard just does some nice stuff with these
groups of secrets, doing stuff in the elliptic curve, and we'll get to
that in a second.
Greg Bernstein: So what does our privacy preserving solution look like?
we've got essentially an unlinkable zero knowledge proof from the holder to
verifier that asserts the signature from the issuer. that means the
verifier is going to use the issuer's public key. So they're going to know
this comes from the issuer but it's going to be zero knowledge in the sense
particularly for us that these things aren't linkable. you generate a new
proof for a new verifier or even if you're returning to the same verifier
those proofs are unlinkable.

Greg Bernstein: The cryptographic artifacts won't link you, but anything
else you reveal will be known; if it says I am over 18, that information
they'll know. Okay, we've got anonymous holder binding to prevent
credential theft. So what if somebody got your credential straight from an
issued credential? They broke into the issuer, didn't get the issuer's
private key or anything, couldn't forge credentials, but they were able to
get the credential. They can't create those proofs. Okay, pseudonyms allow
the holder to assert identity and reduce the threat of credential abuse.
Okay, so when we were first working on getting these credentials together
based on BBS, things kept coming up, and I think this happened to other
folks working on alternate systems. The problem with the key binding
approach was it was completely linkable. So it was relatively easy for us
to put in anonymous holder binding based on blind BBS signatures.
Pseudonyms we added in too. Okay. And we found that this kind of gives us
a fairly complete solution.

Greg Bernstein: Then the question comes up: will it last? Okay. To answer
that question, especially faced with this potential cryptographically
relevant quantum computer, we kind of have to get into notions of
cryptographic strength. And you hear people talk about information
theoretic or perfect secrecy, or perfect indistinguishability. You hear
people talk about computational security. Then you hear people get down to
actual numbers: this has 110 bits of security, or this has 98 bits of
security, this has 128 bits of security. What does that mean? Okay.
00:40:00

Greg Bernstein: So, an example of perfect secrecy: if you ever took a
course that had even a little bit about cryptography, there's a thing
called a one-time pad. Imagine you have a whole long string of random
bits, and you mix those bits bit by bit along with your message. We call
that a one-time pad. That's actually our key. Okay, that becomes our key.
And it turns out a one-time pad scheme is perfectly secret under a number
of different definitions of perfect secrecy. The one that I like best is
indistinguishability, because that carries over to unlinkability.
Greg Bernstein: A good book, Katz and Lindell, for modern cryptography
that's understandable, covers a lot of good stuff. Okay? They'll tell you
that for an encryption scheme to be perfectly secret, the encryption keys
must be as long as or longer than the messages: a one-time pad. Once you
hear that, you go, "Okay, forget perfect secrecy, because it's not very
practical." Wrong. There are other cases where we're going to see we do
have perfect indistinguishability or unlinkability. But for encryption
schemes, we need notions of computational security. Okay?

Greg Bernstein: And in this case, we give people a finite amount of time
and don't allow them infinite resources. That means we're constraining our
adversaries not to be infinite in resources and memory and things like
that. And they have to run for a feasible amount of time. They're not
given unlimited time, because anybody can crack it given multiple ages of
the universe; it's just not useful. And this is why cryptography is
interested in computationally hard problems. And if a hard problem gets
cracked, or could get cracked, you go look for a different hard problem.
The other aspect of it is adversaries can potentially succeed with some
very small probability.

Greg Bernstein: Okay, as I said before, achieving zero probability in many
contexts is impossible. Why? Because they can guess. You've got a key
that's 128 bits. I'll just guess a key, and by a random guess, I could be
correct. So I've got a probability of 2 to the minus 128 of being correct.
So that's why we can't achieve zero probabilities. So those are the two
different things you'll see come up. So let's look at an example. There's
something called forward secrecy. You have these key agreement protocols
that give assurances that, okay, your session keys won't be compromised
even if long-term secrets used in the session key exchanges are
compromised. What? Okay.

Greg Bernstein: So everything like SSH, TLS, Signal, okay, they have this
notion of sessions, right? You visit a website, you're using a longer-term
key, the public key of the website. If the website's private key got
compromised, could somebody see all the communications you had with that
website? No. Why? Because they use, as part of the exchange, keys just for
that session. They call them ephemeral keys. They are only used for a
session. This is all done with a key agreement protocol, which is based on
the hardness of the discrete log problem. You see what's going on here? So
they call this forward secrecy. Okay.
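
The ephemeral-key idea can be sketched with a toy Diffie-Hellman exchange
(the modulus and generator here are illustrative only; real protocols use
large, vetted groups or elliptic curves):

```python
import secrets

# Toy Diffie-Hellman key agreement with ephemeral (per-session) keys.
p = 2**127 - 1   # a prime modulus (toy choice, not a real TLS group)
g = 3            # generator (toy choice)

def ephemeral_keypair():
    x = secrets.randbelow(p - 2) + 1   # fresh secret for this session only
    return x, pow(g, x, p)

# Each session, both sides generate fresh ephemeral keys, then agree.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
shared_a = pow(b_pub, a_priv, p)   # g^(ab) mod p
shared_b = pow(a_pub, b_priv, p)   # same value, computed by the peer
# After the session, a_priv and b_priv are discarded, so compromising a
# long-term key later cannot recover this session's shared secret.
```

The long-term key only authenticates the exchange; the session secret
depends on ephemeral values that no longer exist after the session ends.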
00:45:00

Greg Bernstein: Sometimes it's mistakenly called perfect forward secrecy,
but it's not perfect; it's based on a hard problem. Okay, here's where
things got interesting, and where ZKPs and all our commitment stuff come
back in. Commitment schemes are a bit of both. So when we do that key
binding thing for our anonymous holder, our secret binding thing, we
commit to a value. We've got to keep it hidden from others and have it
binding. Okay.

Greg Bernstein: You can have different flavors, but basically you get two
possible combinations: perfectly binding and computationally hiding, or
computationally binding and perfectly hiding. Okay, the BBS blind
signatures and pseudonyms make use of what are known as Pedersen
commitments. And I showed what those look like in more detail in those
previous slides when we were first talking about BBS advanced features.
Okay.

Greg Bernstein: and to let you know from a paper back in 2004, the
Patterson commitment scheme is information theoretically hiding and is
binding under the discrete long log assumption. That means we have perfect
secrecy in the hiding of that commitment in a perfect indistinguishability.
They can't figure it out even if they have a quantum computer. if they
could forge one and we'll get to that in a sec. but that's a interesting
case where you've got bothformational and computational notions of security
together.
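
A toy sketch of a Pedersen commitment in a small multiplicative group
(illustrative only; real BBS uses elliptic-curve groups, and h must be
generated so that nobody knows log_g(h)):

```python
import secrets

# Toy Pedersen commitment: C = g^m * h^r mod p.
p = 2**127 - 1   # toy prime modulus
g = 3
h = 7            # in practice, chosen so log_g(h) is unknown to everyone

def commit(m: int):
    r = secrets.randbelow(p - 1)   # random blinding factor
    return (pow(g, m, p) * pow(h, r, p)) % p, r

def open_commitment(c: int, m: int, r: int) -> bool:
    return c == (pow(g, m, p) * pow(h, r, p)) % p

c, r = commit(42)
```

The random blinding factor r makes the commitment statistically
independent of m (perfectly hiding, even against a quantum computer),
while changing the committed value without being caught would require
solving a discrete log (computationally binding).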

Greg Bernstein: Okay, so let's get down to why we're worried: quantum
mechanics. Okay, one of the most fundamental, interesting things in the
world; see Wikipedia, or The Theoretical Minimum by Leonard Susskind. The
notion of a qubit, however you want to spell it, has been around since the
Stern-Gerlach experiment in 1922, the first demonstration of a quantum
bit. The most recent Nobel Prize in physics was awarded to my quantum
physics professor from Berkeley; I had him in the 80s. And those circuits
were forerunners of the qubits used in many quantum computers. So it's a
wonderful field. It's growing nicely.

Greg Bernstein: When I'm working on cryptography, sometimes I don't feel
it's so wonderful. But let's be serious. Quantum attacks right now on
symmetric ciphers provide a square-root speedup, and they are not
considered broken. AES-256 would provide 128 bits of quantum security,
okay, which is considered a ton. Okay, the problem comes with Shor's
algorithm. It promises a massive speedup in solving factoring, like RSA,
and the discrete log problem. Okay, and that includes discrete logs over
elliptic curves, which we use, so long as a sufficiently large quantum
computer, on the order of millions of qubits, is available.

Greg Bernstein: This would spell the end of ECDSA as well as EdDSA. Okay,
so this is the meaning of cryptographically relevant: you get up to 100
qubits, that's not enough. Signatures could be forged, but
information-theoretic perfect security is unaffected. The information
hiding in our Pedersen commitment is not affected. Okay, so people are
coming up with new algorithms for key exchanges. They actually don't call
them key exchange protocols; they call them key encapsulation mechanisms.
00:50:00

Greg Bernstein: So a couple of post-quantum signatures have come out.
Combining key exchange mechanisms is a little bit more complicated, but if
we want to combine digital signatures to get the basic security level of
unforgeability, we can just sign a document with two signatures, and we're
ready to go. As for the privacy of BBS, even with a cryptographically
relevant quantum computer, BBS proofs have everlasting unlinkability. But
wait, there's more. We'll get to it.

Greg Bernstein: The commitments we use between the holder and issuer are
perfectly hiding. So no leakage of secrets to a cryptographically relevant
quantum computer. Pseudonyms, which are an extra piece of information, can
be perfectly unlinkable under constrained use. And we're going to get to
that in a second. But first, how extreme is the unlinkability, or the data
confidentiality, notion?

Greg Bernstein: Okay, this is straight from the BBS document at the IETF:
data confidentiality, meaning that unlinkability, okay, cannot be broken
by adversaries even with unbounded computational resources, meaning
perfect, and even in possession of the signer's secret key. If the
signer's key gets compromised, it does not affect the unlinkability of the
proofs. So this guarantees the privacy and hiding properties of the BBS
proofs. Okay. This is called everlasting privacy. It's a strong property
of BBS signatures.

Greg Bernstein: But that's what we care about, because that's going from
the holder to the verifier. Even if an issuer sees a BBS proof, they can't
figure out what holder it came from. Okay, that's unlinkability. For the
pseudonyms, this was the last piece, because somebody said, "Hey, wait a
second." So, this is what our pseudonym calculation looks like. I didn't
make this up. We got advice from cryptographers, people like Jonathan Katz
and Anna Lysyanskaya. Okay. However, let's look at what happens if we only
use a single secret. The pseudonym is calculated from the nym secret and
the value Z. What is Z?

Greg Bernstein: Z is just a hash of the verifier's context ID, which we
can consider public information. If we have a cryptographically relevant
quantum computer, you can take a discrete log, represented symbolically
like this. Don't get down into the details on it; that's not exactly how
we do it, but it's the general effect. So we take the discrete log, and
look: this reveals the nym secret just by doing some simple operations.
Okay. Remember, ordinary, non-discrete logs are easy; we compute those all
the time when we use logarithms, right? But discrete logs are hard. But
not for a cryptographically relevant quantum computer.
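
A toy numeric sketch of this pseudonym calculation and the attack. With a
tiny group, brute-force search plays the role that a quantum discrete-log
step would at full size (all parameters and the hash-to-group mapping here
are illustrative, not the actual BBS construction):

```python
import hashlib

p = 101   # toy prime; real BBS uses elliptic-curve groups

def context_base(context_id: str) -> int:
    # Derive Z from the verifier's public context ID (toy mapping).
    d = int.from_bytes(hashlib.sha256(context_id.encode()).digest(), "big")
    return 2 + d % (p - 3)

nym_secret = 57
Z = context_base("https://verifier.example/ctx")
pseudonym = pow(Z, nym_secret, p)   # pseudonym = Z^nym_secret

# An adversary who can take discrete logs (e.g. via Shor's algorithm)
# recovers the secret exponent from the public pseudonym and context.
recovered = next(x for x in range(p) if pow(Z, x, p) == pseudonym)
```

The recovered exponent reproduces the pseudonym exactly, which is all the
harvester needs to start correlating pseudonyms across contexts.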

Greg Bernstein: So if somebody went and harvested pseudonyms from
verifiers, they could reverse engineer the nym secrets, and they could
then use the nym secret values that they harvest to correlate people.
Hence, don't use it in more than n different contexts. So that's why we
have this constraint. It's constrained. It's not perfect. I mean, yes,
it's perfect secrecy, but there's a little bit of work to do. If you want
perfect secrecy, you have to keep track of the number of different
contexts. Not the number of uses or the number of proofs generated, but
the number of different contexts. Okay?
00:55:00

Greg Bernstein: Under that condition, the system's underdetermined. It'd
be easy for a wallet to keep track of this, because the wallet software
has to compute the pseudonym and the proof, and those take effort. So they
would want to cache at least the pseudonym. Okay? I make it sound like
it'd be simple for people to do that. The problem is you don't really know
which values belong to which context. So I'm going to be running some of
these ideas by some cryptographers to see if we can establish some other
notions, so that maybe we can use it more than n times but, with a
sufficiently large pool of people out there, we can still say some things
about the security.
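
A hypothetical sketch of that wallet-side bookkeeping: cache the pseudonym
per context and enforce the distinct-context budget. The class and method
names are ours, and the pseudonym math reuses the toy mapping above, not
the real BBS construction:

```python
import hashlib

class PseudonymWallet:
    """Caches one pseudonym per verifier context; refuses to exceed the
    number of distinct contexts the nym secret safely supports."""

    def __init__(self, nym_secret: int, max_contexts: int):
        self.nym_secret = nym_secret
        self.max_contexts = max_contexts
        self.cache = {}   # context ID -> cached pseudonym

    def pseudonym_for(self, context_id: str, p: int = 2**127 - 1) -> int:
        if context_id not in self.cache:
            if len(self.cache) >= self.max_contexts:
                raise RuntimeError("distinct-context budget exhausted")
            d = hashlib.sha256(context_id.encode()).digest()
            z = 2 + int.from_bytes(d, "big") % (p - 3)
            self.cache[context_id] = pow(z, self.nym_secret, p)
        return self.cache[context_id]

wallet = PseudonymWallet(nym_secret=12345, max_contexts=2)
a1 = wallet.pseudonym_for("https://a.example")
a2 = wallet.pseudonym_for("https://a.example")  # cached: reuse is free
b = wallet.pseudonym_for("https://b.example")   # second distinct context
```

Note the budget counts distinct contexts, not uses: repeat presentations
to the same verifier hit the cache and cost nothing against the limit.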

Greg Bernstein: For right now, if you use it in fewer than n contexts, you
will have perfect secrecy. It comes at a cost, but not a very big one
compared to anything post-quantum or other types of ZKPs. BBS uses the
notions of ZKPs, and it's one of the most efficient ways of doing a
zero-knowledge proof. Okay, it adds 32 bytes per nym secret to the proofs.
Okay, it doesn't increase the size of the BBS signature.

Greg Bernstein: it does require some additional processing when you do the
commitment and when you do proof generation verification. all these are
fairly reasonable. especially compared to anything postquantum. Can we do
this Data integrity says we can do proofs for multiple crypto suites. So we
have a the citizenship document right this is one of our examples permanent
resident card okay so we come here and in the proof section and this is
from data integrity the proof doesn't have to be a single proof it can be
an array a set of proof proof sets here we show standard ECDSA here we show
a proof for BBS
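
A sketch of what such a proof set can look like as data. The cryptosuite
identifiers follow the W3C Data Integrity cryptosuite specs; the
proofValue strings are placeholders, and other required credential fields
are omitted:

```python
# Data Integrity allows the "proof" field to be an array (a proof set),
# so one credential can carry both an ECDSA and a BBS proof in parallel.
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "PermanentResidentCard"],
    "credentialSubject": {},   # claims omitted in this sketch
    "proof": [
        {
            "type": "DataIntegrityProof",
            "cryptosuite": "ecdsa-rdfc-2019",   # traditional signature
            "proofPurpose": "assertionMethod",
            "proofValue": "zPLACEHOLDER_ECDSA",
        },
        {
            "type": "DataIntegrityProof",
            "cryptosuite": "bbs-2023",          # privacy-preserving BBS
            "proofPurpose": "assertionMethod",
            "proofValue": "zPLACEHOLDER_BBS",
        },
    ],
}
```

A verifier that only trusts ECDSA checks the first proof; a wallet doing
selective disclosure derives its presentation from the BBS proof instead.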

Greg Bernstein: This is coming from the issuer. So once again, BBS
signatures are very, very small, so the proof value here isn't very big.
Okay. The proofs can get larger, but not too bad. That's all it takes. So
if we have a situation where somebody is going to issue us some kind of
credential, and maybe in some situations they only trust the old-fashioned
ECDSA, there's no reason that they can't also sign it with BBS for any
other use that we would like that's privacy-preserving. Okay, I think I
used up almost all our time.

Greg Bernstein: Okay. are there any questions? Stop sharing.

Harrison Tang: Yeah, I think we have time for a few. Any questions?

Greg Bernstein: Or I can jump up and down and say we can have privacy.
Real quick, we're almost done: all the BBS-related specs are working group
documents that are stable at the IETF. This vector of secrets was the last
piece that we needed to guarantee this everlasting privacy. We've got
that, even if you may not need it. We wanted to make sure we had it.

Greg Bernstein: Let people know we can have our privacy and we can do it
just alongside the other things too. But all right,…
01:00:00

Harrison Tang: Cool. Thank you.

Harrison Tang: Thanks, Greg, for another fascinating presentation. I
learned a lot myself. So, thanks a lot. Perfect. Right,…

Greg Bernstein: we're right on time. Sorry, the deck is available and…

Harrison Tang: this concludes this week's W3C CCG meeting. Thanks. Thanks,

Greg Bernstein: I'll post it again. The deck's already been posted. I
always post them before I give the talks, and all the old ones are too.
Meeting ended after 01:00:49 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Tuesday, 28 October 2025 22:06:52 UTC