[MINUTES] Data Integrity 2025-06-27

W3C Credentials Community Group - Data Integrity Meeting Summary -
2025-06-27

*Topics Covered:*

   - *Community Updates:* Greg Bernstein announced progress on BBS pseudonyms
   for everlasting unlinkability and upcoming presentations at IETF and the
   Simons Institute workshop on ZKPs. Jesse Wright discussed ongoing work on
   ZKPs for RDF-serialized credentials and an upcoming presentation on Linked
   Web Storage at the Global Digital Collaboration in Geneva. Rohit Sinha
   introduced himself and his research interests in cryptographic protocols.

   - *Jesse Wright's Questions (Data Integrity):* Discussion focused on
   adding provenance information to signed credentials to indicate the
   originating credential type (e.g., driver's license vs. marriage
   certificate). The group agreed this is a high priority but needs to balance
   practicality and privacy concerns. RDF 1.2 data modeling was briefly
   discussed, but the group opted to avoid its complexities and stick with
   existing mechanisms. String matching on issuer URLs was deemed the most
   efficient approach.

   - *Post-Quantum Cryptography Requirements:* The group discussed the need
   for everlasting unlinkability in post-quantum cryptography, acknowledging
   that unforgeability won't be guaranteed given the eventual breakability of
   ECDSA. The priority was set as: 1) getting ECDSA to work, 2) implementing a
   post-quantum scheme, and 3) demonstrating BBS efficiency compared to ECDSA.

   - *Longfellow ZK Analysis:* Review of Yarm's analysis of Google's
   Longfellow ZK system, a cryptographic circuit-based zero-knowledge
   mechanism. Key points included performance benchmarks (prover ~1 second,
   verifier 0.5-1 second), circuit size optimization (using zlib compression),
   and security considerations (potential attacks related to random number
   generators and flawed circuits). Concerns were raised about the claim of
   post-quantum security due to the reliance on ECDSA.

   - *Quantum-Safe Data Integrity Suite:* The group discussed next steps for
   preparing the quantum-safe crypto suite for W3C standards-track submission.
   The main action items are to decide on crypto suite identifiers (including
   strength levels) and multi-key representation numbers. Future discussions
   will cover selective disclosure approaches (signing per triple vs. using a
   Ligero-based mechanism).

*Key Points:*

   - Provenance information for credentials is highly desired, aiming for
   efficiency in cryptographic circuits.
   - Everlasting unlinkability is crucial for post-quantum security, while
   unforgeability is less critical given ECDSA's limitations.
   - The Longfellow ZK system shows promise but needs further consideration
   of circuit security and vulnerabilities.
   - The quantum-safe crypto suite is nearing completion, with identifier
   and multi-key representation decisions needed before submission to the
   VCWG. Selective disclosure methods require further discussion.
   - Several networking opportunities were identified amongst attendees at
   the upcoming Geneva event.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-06-27.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-06-27.mp4
*Data Integrity - 2025/06/27 09:58 EDT - Transcript* *Attendees*

Greg Bernstein, Hiroyuki Sano, Jesse Wright, John's Notetaker, Manu Sporny,
Parth Bhatt, Phillip Long, Rohit Sinha, Sanjam Garg, Ted Thibodeau Jr,
Tom's Notetaker
*Transcript*

Manu Sporny: All right. Hey folks, let's go ahead and get started. We've
got a decent gathering of folks here. welcome to the Friday data integrity
call for the credentials community group. during this call we try to push
forward some of the data integrity work that we want to go standards track
at W3C and we also do some amount of kind of touching base on R&D that's
happening on future kind of zero knowledge proof systems and how to
integrate them with data integrity. we've got a pretty open agenda today.

Manu Sporny: let me go ahead and, pull up what that is. so if anyone, wants
to add anything to the agenda today, I think we'll have, plenty of time to
do that. the agenda today, let's see, let me go ahead and share my screen.
will consist largely of kind of finishing up answering the questions that
Jesse raised last week. I think I provided some feedback in Slack, but to
get it circulated to the broader group, let's go ahead and kind of review
what was said there. the next set of changes that we might need to make to
the quantum safe crypto suites to prepare it for adoption by the VCWG.

Manu Sporny: we'll want to talk about that. and then I'll go ahead and add
the excellent review that Yarm did on the Longfellow zero knowledge stuff.
This is Google's zero knowledge solution. He did a great analysis of it and
we'll spend a little bit of time talking about any other updates or changes
to the agenda that we want to cover? All

Manu Sporny: Let's go ahead and get into kind of community updates. let's
go ahead and do some introductions. these are all optional. You don't have
to introduce yourself if you haven't been on the call but would love to
hear from you, Rohit, if you'd want to provide some background on kind of
yourself, why you're interested in this group, what you're trying to
achieve, that sort of thing.

Rohit Sinha: Yeah, absolutely. Hi, nice to meet everyone. so yeah, I have an
academic background. I do research on improved cryptographic protocols for
multi-party computation and zero knowledge proofs and yeah I work closely
with Professor Sanjam Garg, maybe some of you are familiar with him, and he
suggested I attend these calls because we've been looking in the identity
space and yeah we think

Rohit Sinha: this could be of interest to us. Yeah. But yeah, look forward
to working with y'all.

Manu Sporny: Okay, wonderful. welcome to the call, wonderful to have you
here and, this is a friendly group, so please feel free to, ask questions
whenever you have them and even, some of the research that you're doing. We
can spend a call,…

Greg Bernstein: Hey just wanted to give a quick update.

Manu Sporny: looking into that to see, what you're working on and then how
it might overlap with some of the other people working on things here. What
we really try to do is kind of connect people together to move the R&D
stuff further along faster.

Rohit Sinha: Great. Thank you.

Manu Sporny: Over to you, Greg.

Greg Bernstein: We are almost there on updating This is BBS pseudonyms for
what we call everlasting unlinkability. and we've got a slot to present
that at the upcoming CFRG for the July IETF meeting.
00:05:00

Greg Bernstein: The other thing is in mid July there's going to be a
workshop at the Simons Institute that's out in Berkeley theory of
computation.

Greg Bernstein: They've been doing a bunch of different things but this is
going to be covering all about ZKPS pretty much and all the big names are
going to be there.

Greg Bernstein: So, I'm going to be sitting in on that myself, probably
remotely. And that's kind of what's happening right now.

Manu Sporny: Thank you for that update,…

Manu Sporny: Greg. go ahead, Jesse.

Jesse Wright: Two things.

Jesse Wright: I wanted to close the loop on the work that I am currently
doing. Some of it is with Sanjam on ZKP for RDF serialized credentials. So
I think that's the connection as to why Sanjam was recommending Rohit join
this call. So I wanted to close the loop on that. The second announcement
is next week I'll be at the global digital collaboration in Geneva. And on
the 2nd at 9:40 I'll be presenting a session on linked web storage which is
one of the W3C groups that I'm an editor for.

Jesse Wright: The positioning of that session is to engage with industry
bodies interested in attached storage for digital credentials and see if
they would like to use LWS or…

Manu Sporny: Wonderful.

Jesse Wright: get involved with LWS to make LWS the grounding standard for
web attach storage.

Manu Sporny: That's great to hear that be we've got a number of other
people from the community that are going to be there. I will try to put you
in touch with them so you can know who they are and meet up with them. Joe
Andrieu is going to be there, who does a lot of stuff with decentralized
identifiers, and Brent Zundel from the verifiable credential working group,
the chair there is going to be there as well. we will have multiple people
from TruAge and Conexxus there. They're the ones that use this stuff in the
retail sector. So they've got a retail event digital wallets, they're going
to be there as well.

Jesse Wright: Awesome. Thank you.

Manu Sporny: I think maybe Kim Duffy from Decentralized Identity
Foundation's going to be there. So, let me try and put you in touch with
those folks so you can meet them in person. since you'll be there anyway.

Jesse Wright: I know Kim because DIF and LFDT are supporting the LWS
session. all the others would be great to have connections with.

Manu Sporny: Okay, wonderful. yeah, please, if I don't get that to you by
the end of this weekend,…

Jesse Wright: Thank you

Manu Sporny: please ping me. So, I'm sure to do that. Yeah, of course. all
right. Any other community news or quick updates before we jump into our
main agenda. All right, let's go ahead and jump into the main agenda.
Jesse, I think we're going to focus on answering the rest of the questions
that you asked that you started raising last week.

Manu Sporny: And then after that we'll talk about the quantum safe crypto
suite changes that we want to make. And the Longfellow ZK stuff I think
will fit in before the quantum safe crypto suite stuff. So it's Jesse the
Longfellow ZK stuff and then the quantum safe crypto suites is the rest of
the agenda. Any other updates changes to the agenda before we proceed?

Manu Sporny: All Jesse, over to you if you don't mind. I don't have the
remainder of your questions in front of me, but if you've got them, please
please let us know. I'm clicking on the Slack channel now to remind myself
as well. Here, I've got them. I've got them.

Jesse Wright: Can you give me one minute to pull them up then?

Jesse Wright: Okay, brilliant.

Manu Sporny: …

Manu Sporny: how can I share these easily?

Manu Sporny: Let yes,…

Jesse Wright: I could share my screen in the Slack channel…

Jesse Wright: if that helps.

Manu Sporny: Yeah, I'm trying to not expose every Slack channel I have to
the world.
00:10:00

Jesse Wright: That's very fast. present now. Entire screen. There we go.
And over to CCG data integrity. So provenance was the first question. So
when we're signing credentials we could include in the signature which
credential they came from, even though typically the data in a VC is in the
default graph.

Jesse Wright: We could in some somehow indicate the broader credential that
this has been signed within so that you have the provenence of this is a
driver's license in which the date of birth was found from as opposed to a
marriage license or as opposed to some other kind of document. And this
kind of just changes the structure of the inputs that we're giving to the
circuit. So it is good to know whether this is a high need or not.

Manu Sporny: Yes, it is definitely a high need. because when you're mixing
multiple types of credentials together, the verifier often wants to know
which general category a particular claim is coming from. And I think that
gets more and more complicated the more types of credentials you accept.
and so there is definitely an upper limit here, Jesse; like at some point it
gets ridiculous, and we want to stop before we get there.
and so I think last time we were saying maybe three to four different types
of core input documents and there's also a privacy consideration here.

Manu Sporny: So, take driver's licenses for instance. when you query, let's
say for whatever reason you want a home address off of a driver's license,
the verifier wants a home address off of driver's license. for example,
you're shopping on an online store and they want a verified delivery
address. they could ask, "We just need a verified home address." we don't
care if it comes off a driver's license or an X or a Y. So they might accept
three different types of credentials, and they don't really care which one;
they just want to know it's one of the 50 verified issuers.

Manu Sporny: for example in the United States there can be 50 issuers of
driver's licenses, and they want to know that the issuer is coming from one
of the people on that list without necessarily identifying specifically who
from that list. That could also be more relevant in the European context,
where you don't really want to know which European state has said that,
although I guess in that case potentially your address gives away who the
issuer would be anyway. But let's say it's a date of birth or something that
doesn't reveal that, or a first and last name, right, and I want a verified
first and last name off of an identification document and I'm okay taking a
passport or driver's license but I don't want to know which country issued
that to you.

Manu Sporny: I accept this list of countries or states, but I don't want to
know which one it is. So it would be nice to be able to support that. I
think the Longfellow ZK stuff is asserting that they
support that kind of check in the cryptographic circuit.

Manu Sporny: And so I think we should try to achieve that. But I think we
also would like to know how much more complex does that make the
cryptographic circuit, right? Mhm. Mhm.

Jesse Wright: Yes. …

Jesse Wright: so I've got a data modeling question with my data modeling
hat on. This provenance would be easier to manage using perhaps more of an
RDF 1.2 data model in that you can…

Manu Sporny: Yep.

Jesse Wright: then have claims like the DVLA asserts and then have as
quoted triples all of the assertions that then form the content of your VC.
And all of this then lives within the same graph rather than needing to
have these two kind of distinct layers of this is the graph of asserted
information and then we're doing proofs over this graph and then we're
doing proofs over the properties of particular credentials or…
00:15:00

Jesse Wright: credential issuers. Has there been a discussion on the data
modeling side now that RDF 1.2 is coming to fruition?

Manu Sporny: So the data integrity stuff kind of triggered the RDF 1.2
work, meaning that when we create the proof graphs those are separate from
so let's see a verifiable credential has a set of statements that have been
asserted in one graph and…

Manu Sporny: in another graph is the proof on the first graph.

Manu Sporny: So, we have some variation of what you're talking about. I
don't know if it's in the best form. I know that what we have right now is
what we're going to have to go with because if what we have now doesn't
work, it is going to require a rechartering and…

Manu Sporny: and an extension to RDF data set canonicalization, which would
take two to four years, I think we have the concept of multiple graphs.

Jesse Wright: Yeah, let's not do that.

Jesse Wright: Fair enough. Okay. RDF 1.2.

Manu Sporny: But I don't know if you're talking about some of the RDF star
stuff. I don't know if you're talking about we have quads like that's
supported and we can refer to triples in a particular graph.

Jesse Wright: I basically mean RDF star as the way of modeling things.

Manu Sporny: Okay. Yeah.

Manu Sporny: I don't think and it depends on which feature, but I don't
think we can depend on RDF star just yet. That would be another four-year,
exercise and we'd have to go through the entire kind of security review and…

Manu Sporny: and that sort of thing. yeah.

Jesse Wright: Let's not touch that. So it is provenance in a quad format
then, where we might just somehow mint a quad term to represent the URL
that the credential was fetched from or…

Jesse Wright: some other way of indicating what type of credential or what
graph this came from it just adds one extra term in the array of inputs
that you're signing that doesn't introduce too much overhead.

Manu Sporny: Yep. Yeah.

Jesse Wright: It might be 30% extra

Manu Sporny: So, one thing: there are two ways that we could discover the
provenance information. For a verifiable credential, there is a predicate
for issuer, and that maps to a URL which is typically a
decentralized identifier but could be any URL. Right?

Manu Sporny: So there is a string match, a string equivalence match, that
we could do on that with an input. The other, more indirect thing has to do
with this information being in a separate graph from the verifiable
credential statements: there's a proof graph and then there's the actual
statements that are being made, meaning the expression of the credential
subject and who issued it.

Manu Sporny: In the separate graph, that proof graph is signed using a
public key where the base of the URL, not the fragment identifier but
everything leading up to it, is typically the same base issuer URL. So
there are two places where we could do this. I would imagine that the
simplest thing to do would be to just do a string match in zero knowledge
against the issuer, so there would be a triple or a quad that said the
issuer of the verifiable credential is X, a fixed string.
Jesse Wright: Okay.

Jesse Wright: So the important piece of provenance missing if we just do
issuer is if we've got one issuer issuing two different types of
credentials.

Manu Sporny: Mhm.

Jesse Wright: So if you've got a government agency that's also issuing I
don't know tickets to enter the parliament building are there use cases in
practice…

Manu Sporny: Mhm. Yes,…

Jesse Wright: where we would need to distinguish between different types of
credentials issued by the same issuer.

Manu Sporny: because, for example, a Department of Motor Vehicles can issue
multiple different types of credentials that assert an entity's driver's
license number.
00:20:00

Manu Sporny: So I think that is for example your driver's license asserts a
driver's license number and your vehicle title could assert your driver's
license number on it.

Manu Sporny: But in some DMVs one of those things is very strongly well
vetted information on the driver's license whereas on the vehicle title it
can deviate. it's not always guaranteed to be clean data and that sort of
thing. So what credential type is asserting what data really does matter in
practice?

Jesse Wright: And that will always be information that is described in the
type of the credential.

Manu Sporny: Yes. Yep. Yep.

Jesse Wright: So we've got within the graph the issuer and the type. So as
long as we're able to say this issuer and this triple all emerge from the
same graph,…

Manu Sporny: Yep. Correct.

Jesse Wright: then we're satisfied that this date of birth is coming from a
driver's license from the DVLA. Awesome.

Manu Sporny: Exactly right.

Jesse Wright: Thanks. Okay, I think I'm happy with the provenance question.

Manu Sporny: And that is always guaranteed to hold in at least verifiable
credentials. Any other input from anyone else on the provenance question?
Anything else that Jesse should keep in mind? And again, Jesse, it would be
wonderful to have this feature. it wouldn't be the end of the world…

Manu Sporny: if we didn't. Okay, great.

Jesse Wright: It shouldn't be hard. I think it's fairly straightforward to
get this, to be honest.

Jesse Wright: So, we'll just do it from the get go.

Manu Sporny: Okay, that sounds great.
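
As a rough, outside-the-circuit illustration of the check Jesse and Manu describe above (the disclosed claim, the credential type, and the issuer all coming out of the same graph), here is a minimal Python sketch. The N-Quads handling is a deliberately naive whitespace split and the predicate IRIs are the VC Data Model ones; it is illustrative only, not the zero-knowledge version of the check.

```python
# Illustrative sketch only: group quads by graph label and confirm that a
# disclosed claim sits in a graph whose type and issuer triples match what
# the verifier asked for. Real deployments would run over canonicalized
# N-Quads (RDFC-1.0); the parsing here is a toy whitespace split.

from collections import defaultdict

TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
ISSUER = "<https://www.w3.org/2018/credentials#issuer>"

def group_by_graph(nquads: str) -> dict:
    """Map each graph label to the list of (s, p, o) triples it contains."""
    graphs = defaultdict(list)
    for line in nquads.strip().splitlines():
        parts = line.rstrip(" .").split()
        s, p, o = parts[0], parts[1], parts[2]
        graph = parts[3] if len(parts) > 3 else "<default>"
        graphs[graph].append((s, p, o))
    return graphs

def claim_has_provenance(graphs, claim, expected_type, allowed_issuers) -> bool:
    """True if some graph contains the claim plus a matching type and issuer."""
    for triples in graphs.values():
        if claim not in triples:
            continue
        types = {o for (_, p, o) in triples if p == TYPE}
        issuers = {o for (_, p, o) in triples if p == ISSUER}
        if expected_type in types and issuers & allowed_issuers:
            return True
    return False
```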

Manu Sporny: the thing I'm most interested in is what happens to the
cryptographic circuit. if we have a list of 100 issuers, my expectation,
the gut says it really shouldn't do much to the cryptographic circuit.

Manu Sporny: Meaning you're just doing a string comparison and the input to
the SPARQL query is like a giant list of URLs. But I don't know if that's
what happens. Yeah, go ahead. Sorry.

Jesse Wright: So the way that we're or…

Jesse Wright: my understanding of the way we're signing or wanting to
represent the triples that get signed is each term including the full
string will get mapped to a single value in the field representation.

Manu Sporny: Yep. Excellent.

Jesse Wright: So we can do a string match by just doing a single equality
between these two integer field values in the circuit. which is very low
effort to do. The thing that may be more difficult if we're wanting to
encode things this way is doing substring matches or…

Manu Sporny: Yeah.

Jesse Wright: doing any kind of operations on strings. However, there is an
option to just encode character by character basically and sign each
character as a different field representation. Then you can do string
matches.

Manu Sporny: Got it. Yeah. And I think we want to avoid that.

Manu Sporny: If we apply the optimizations we think are going to be
beneficial for cryptographic circuits, I think what we're going to ideally
end up with are integer comparisons at the end of the day for full URLs,
right? So each URL will map to a specific integer and then the
cryptographic circuit would just need to do an integer comparison, which the
expectation is that that is a very efficient thing to implement in the
circuit.

Jesse Wright: Exactly.

Manu Sporny: Okay.

Manu Sporny: The unanswered thing, the thing I don't know how we would do,
is I guess with the input query. Let's say that in the United States there
are 50 issuers for driver's licenses; the input query will say I will
accept any one of these issuers. We would need to map the issuer URLs to
integers.

Manu Sporny: But I think we can use the same compression table to do that
mapping.

Manu Sporny: So I think we're okay. the only corner case is what happens if
there isn't a mapping, correct?

Jesse Wright: so that it's not a key value type mapping.

Jesse Wright: Each string has a unique field representation.
00:25:00

Manu Sporny: Okay.

Jesse Wright: So I don't think there needs to be a mapping.

Manu Sporny: All right, again, details we can get to much, much later in
the process.

Jesse Wright: Yes, very happy.

Manu Sporny: Okay, that sounds good. All right. so you're happy with the
answer for number two? okay.
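
A minimal sketch of the encoding settled on above, assuming terms are hashed to field elements with SHA-256 and using the BLS12-381 scalar field purely as an example; the real proof system would dictate both the field and the hash-to-field mapping.

```python
# Illustrative sketch of the encoding discussed above: each full RDF term
# (for example an issuer URL) is hashed to a single field element, so
# "the issuer is one of these 50 URLs" reduces to integer equality checks
# inside the circuit. The modulus and SHA-256 step below are stand-ins.

import hashlib

FIELD_MODULUS = 0x73EDA753299D7D483339D80809A1D80553BDA402FFFE5BFEFFFFFFFF00000001

def term_to_field(term: str) -> int:
    """Map a full term (e.g. an issuer URL) to one field element."""
    digest = hashlib.sha256(term.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % FIELD_MODULUS

def issuer_in_allowed_set(issuer_url: str, allowed_urls: list[str]) -> bool:
    """Outside the circuit this is trivial; inside, it is ~50 equality gates."""
    allowed = {term_to_field(u) for u in allowed_urls}
    return term_to_field(issuer_url) in allowed

# Per-character encoding (one field element per character) would only be
# needed if the circuit had to do substring matching, at roughly N times
# the cost for an N-character term.
```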

Jesse Wright: So the question here is around the postquantum requirements
both for the signature and…

Manu Sporny: Let's go to the next one if you don't mind explaining the

Jesse Wright: When we're talking about post-quantum requirements, my
understanding is that the two main requirements are around forgeability,
i.e. can you forge a signature or can you forge a proof object, and around
everlasting unlinkability, i.e. can you snoop on this credential, collect
it, and then, when post-quantum computing gets to a point later on, decode
the credential and understand what was in there.

Jesse Wright: The current implementation we are looking at would have a BBS
signature on the table of triples and the way we're using this signature
would provide everlasting unlinkability but would not prevent forgery in a
postquantum future. So I wanted to know whether that is satisfiable for an
issued credential right now, and then secondly whether the same requirement
on a proof object of yes it's satisfying everlasting unlinkability but no
it's not satisfying non-forgeability is an acceptable state right

Manu Sporny: Yes, the short answer is that's an acceptable state. So going
back to what's driving the decisions around this. so in the European
digital wallet initiative, mdoc and mDL have been chosen as one of the
formats there. But those are not unlinkable things. So when you present
your mdoc/mDL, it's a super cookie. You're completely trackable, right?

Manu Sporny: So you use it for age verification and…

Jesse Wright: What the f***?

Manu Sporny: then you go and use it somewhere else that expresses the same
age assertion, and you are trackable between those two interactions. And
then a bunch of cryptographers came out and said, hey, that's really bad,
right, don't do that. And this community has been saying that for years;
that's the whole reason we've been working on BBS. And

Manu Sporny: the mDL/mdoc folks were like, no no no, that's not important,
and it turns out yes, it is actually important when you use this stuff at
the scale of society. And so Google's response, I mean their backs are
pretty badly against the wall at this point because all these decisions
have quote unquote been made and written into legislation in the EU, and
they needed some kind of unlinkable mechanism. And that's where Abhi's
great work, along with the rest of the cryptographic community's great
work, on kind of Longfellow ZK, the cryptographic circuit, the Ligero
stuff, has kind of come in and could potentially save the day there.

Manu Sporny: But the other thing driving that is that everyone, and again I
think this is very misguided, but everyone says we have to use HSMs and…

Jesse Wright: Sorry. What was HSM?

Manu Sporny: ECDSA to do the signature, because that is what we know and
trust and what's out there. And, sorry, hardware security module: the HSM
is a hardware security module that exists on Android devices and Apple
devices, so on mobile phones, and HSMs are also largely deployed by
governments to make sure that the digital signatures that they create
can't have the private key stolen easily. The private key is locked in
this hardware security module; usually it's non-exportable and there's tamper
resistance on the HSM.

Manu Sporny: there's a huge industry around protecting these private keys
and these things tend to be very old systems. It takes years to get them
approved through cryptographic testing. some of them are designed such that
if you open the case the chip blows up and self-destructs. All kinds of
protections are around this stuff. and ECDSA is broadly deployed and really
not much else is as broadly deployed as ECDSA. so basically the constraints
on the current system at least that Google's trying to put out there is
that we have to use ECDSA and it has to provide this unlinkable mechanism.
00:30:00

Manu Sporny: the downside of using ECDSA is it's really old and complex and
you end up with cryptographic circuits that are far more complex than you'd
really prefer and that's even made worse when you have to parse a JWT or do
JSON parsing in your cryptographic circuit. so that's just the background
of kind of why we are where we are. There's this presumption that we will
be using ECDSA, which again is a bad assumption so we shouldn't hold
ourselves to the same assumption, but it has to at least work for ECDSA for
feature equivalence. And then what that means is that whatever this
solution is, it is not post-quantum resistant: the second we've got a
cryptographically relevant quantum computer you can forge ECDSA signatures
all day long, which means you can create mDLs

Manu Sporny: and mdocs that are forged, which means that it doesn't matter
if you're doing the presentation in an unlinkable way; the base document's
completely compromised because ECDSA is completely compromised. So that's
why I say here it would be really nice if the solution that we are creating
works for the new post-quantum signatures, ML-DSA and the stateless
hash-based DSA (SLH-DSA), and
in theory what we've heard is that it should work it should be just fine
meaning that the cryptographic circuit can operate on a postquantum
signature just as easily as it can on an ECDSA

Manu Sporny: The algorithms for verification would be different, but that's
in theory the only thing that should differ. Okay, so all that to say: this
scheme has to provide everlasting unlinkability, because we don't want a
cryptographically relevant quantum computer to then be able to go back and
reveal who was actually providing these theoretically unlinkable
presentations. So we don't want people to be found out after the fact. We
don't want people to store now and decrypt later, store all these believed-to-be
unlinkable presentations and then 10 years from now find out that they've
been completely linked across all of their interactions over the course of,…

Jesse Wright: No, picture

Manu Sporny: a decade. so that part has to hold.

Manu Sporny: But unforgeability does not hold with ECDSA, because nobody's
expecting ECDSA to survive a cryptographically relevant quantum computer;
so that's the first set of statements. The second set is: if we can show
this working for post-quantum signatures as well, that is the holy grail.
If we can show it works against a cryptographically relevant quantum
computer and it has everlasting unlinkability, that gives us a very solid
path towards getting
this standardized at kind of a nation state level like NIST will finally
basically be like okay that's worth working on right whereas NIST is not
interested in working on BBS…

Manu Sporny: because it's pairing based cryptography it's going to be broken

Manu Sporny: with, a cryptographically relevant quantum computer. I know
that was a lot of background, but that's kind of the thinking that's going
into that.

Jesse Wright: It's useful background.

Jesse Wright: The implementation that we were looking at at the moment ties
us to BBS as the signature that we're taking as input to the circuit, and
that is not ECDSA. We can work with ECDSA, but that will be a lot slower;
as you mentioned, it increases the circuit complexity quite significantly.
Would it be better for us to go with ECDSA for now despite the cost, so
that we've got this backwards compatibility with ECDSA, or…

Jesse Wright: should we do the proof of concept now with BBS to show it's
fast, and if you use a BBS signature this is what can be achieved? Is there
a priority in your mind between the two of those? Okay.
00:35:00

Manu Sporny: That's that.

Manu Sporny: Yeah, I mean that is very tempting research to do. I mean,
there are only so many hours in the day and only so much time that folks
can put in; we'd love to see how much more efficient BBS is than ECDSA, but
there's this annoying time thing that's in the way.

Manu Sporny: Probably ECDSA has more value, because that is one of the
pretty strong constraints that people have, which is you have to be able to
have HSMs generate the signature here, and that is what people are going
with. Again, I think it's a bit short-sighted. So in order, if we had to
put priorities on this, I would say getting ECDSA to work is the number one
priority, and then getting a post-quantum scheme to work is number two. And
then demonstrating how much more efficient BBS is than ECDSA would be
number three.

Manu Sporny: And the reason I say that is that by the time, we would be
able to get a BBS thing out there, let's say it takes another three years,
we're going to be,…

Jesse Wright: I That's

Manu Sporny: in halfway through 2028. The switch over, time is 2032, which
basically means we're in the market for four years before the thing's
obsolete, right? and given how long it takes to, deploy stuff, it'll take
four years to even get onto the curve where, good adoptions happening and
then all of a sudden people, want to switch away from that.

Manu Sporny: So I think that's largely the reason. Had we been able to move
faster on BBS, for example moved forward a decade ago, I think we'd be
based on BBS right now. But the national standards organizations just do
not want to pay attention to anything that's not post-quantum secure at
this point. Did that help, at least with priorities?

Jesse Wright: Okay, that's very helpful.

Manu Sporny: Sorry. Yep.

Jesse Wright: Thank you so much.

Manu Sporny: Any other comments on kind of thinking around the postquantum
requirements?

Manu Sporny: Any other questions, concerns, Jesse on this before we move on
to our next item? And thank you very much for working on this Jesse.

Jesse Wright: No, I'm good.

Jesse Wright: Thank you.

Manu Sporny: We really appreciate it. It is important research and
development that has very huge impacts if we can get this out there into
society. Let's talk, I mean this is a good segue into the Longfellow
write-up that Yarm did.

Manu Sporny: Longfellow, I don't know where the word Longfellow came from,
I don't know if anybody else does, but this is the same thing that Google
put out. This is Abhi Shelat's work along with Matteo and a number of other
people in the cryptographic community, the Longfellow ZK stuff. Yarm did a
great write-up; I thought it kind of simplifies what the paper's trying to
demonstrate. I guess it's named after a bridge. But this is a cryptographic
circuit based zero knowledge mechanism.

Manu Sporny: There is an IETF draft for it called libzk that Matteo, I
think, and Abhi put forward; that draft is here, libzk, the zero-knowledge
proof library. And in it they kind of highlight how to put this zero
knowledge mechanism together. So there's a good paper here that's finally
been released; we've known about this for about a year, but Google was
finally able to release it recently. So there's all the details here on how
to implement it. They do have an implementation in C++ that they've
provided to anyone that kind of wants to get early access to it.
00:40:00

Manu Sporny: I think that's what Yarm used to do his analysis. And as Phil
says in the chat, the Longfellow Bridge connects Cambridge to Boston, so
that's interesting. I think that's where Abhi was before, he was at
Northeastern, so I don't know. Okay, let's see. So Longfellow, just as a
reminder everyone, uses the Ligero scheme, which is fairly recent. That's
the thing that kind of kicked off all this interest in cryptographic
circuits. It also uses sumcheck, which has been around for a very very
long time.

Manu Sporny: So the argument here is that it's not using bleeding-edge
cryptography, or rather that it's using long-studied versions of
cryptography. I think that statement is highly debatable, because it would
have to go through some kind of national cryptography body, which it might
be doing through ISO, and given the size of Google maybe they get this
through. Okay, I think so. Yarm did a good analysis here of the API,
looking at the inputs. There's some good write-up about how the
calculation's done, how the circuit's generated, what you need to generate
the circuit,

Manu Sporny: what you need to feed as input to the prover, that sort of
thing. There's a view on kind of compliance to standards, who implemented
what cryptographic primitives like ECDSA and SHA-256. He did some
benchmarks on speed, which is pretty decent: the prover takes about 1
second to run, and the verifier can take anywhere from half a second to a
second to run with optimizations. They used a version that wasn't as
optimized but came out with good numbers there. So they've confirmed that
the verification and proving takes about the same time that Google's
asserting. And they provide some input on the size.

Manu Sporny: So the circuit sizes are about 85 to 100 megabytes, but when
you do zlib compression on it, it brings it down to 315 kilobytes, which
tells us that there's a lot of bit repetition in the cryptographic circuit,
which was interesting. So I've got some questions back to Yarm on why he
thinks that is, or if a binary domain-specific language might be more
efficient than just zlib. There's some analysis on privacy and Hamming
distances to make sure that the signatures look random enough, like they're
almost indistinguishable from random noise, and he confirmed that at least
per his measurements that seems to hold true.
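
Two of those observations are easy to reproduce in spirit. A hedged sketch follows, with placeholder file paths rather than actual Longfellow release artifacts; the roughly 0.003 compression ratio (100 MB down to about 315 KB) and the near-0.5 bit density are the figures being checked, not guaranteed outputs.

```python
# Rough reproduction of two observations above: (1) the circuit bytes
# compress extremely well with zlib, suggesting heavy repetition, and
# (2) proof bytes should look close to uniformly random, i.e. a normalized
# Hamming weight near 0.5. The file paths are placeholders.

import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; ~0.003 matches 100 MB -> ~315 KB."""
    return len(zlib.compress(data, level=9)) / len(data)

def bit_density(data: bytes) -> float:
    """Fraction of set bits; random-looking output should be close to 0.5."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (8 * len(data))

if __name__ == "__main__":
    circuit = open("circuit.bin", "rb").read()  # placeholder path
    proof = open("proof.bin", "rb").read()      # placeholder path
    print("circuit compression ratio:", compression_ratio(circuit))
    print("proof bit density:", bit_density(proof))
```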

Manu Sporny: There's some analysis on security considerations, around
whether this can be considered post-quantum secure. Again, I personally
strongly disagree with that since the basis is ECDSA; I think it's very
misleading to say that. Holistically, the circuit-based stuff is, I think,
post-quantum secure; I think that's a fine assertion to make and that's I
think the assertion that everyone's making, but I think people are getting
very confused by that statement into thinking this is post-quantum secure
in a world where a cryptographically relevant quantum computer exists. It's
compatible with TEEs; the TEE is a trusted execution environment.

Manu Sporny: SE, a secure element. These are variations on a hardware
security module, and so they're saying this is where the ECDSA requirement
is coming from. Yarm's noting that if you can be tricked into using a
different cryptographic circuit or a different ZK circuit, meaning one with
a flaw in it, you can get bad results from that. I think that's addressable
fairly simply through just trusted circuits, but there's a question around
how many different types of trusted circuits do you have out there? Where
do you get them from? How do you verify that they're legitimate? So there's
some level of proving that the circuits don't have flaws in them that I
think is an interesting kind of problem.
00:45:00

Manu Sporny: I don't know if there's been a lot of work in proving that
cryptographic circuits don't have flaws in them. He also mentioned that
there are potential attacks if you were able to hijack the random number
generator input, but again it doesn't seem like a super bad thing. He also
mentioned that there's work happening on revocation, which is interesting,
and there's work on verification without having the issuer's public keys;
that's the thing we were talking about earlier in the call. He also
mentioned that SD-JWT is failing as a standards effort and is a waste of
time for everyone. I tend to agree with that,

Manu Sporny: but that was interesting. The reason he's saying that is
because even the Google folks basically just used a JWT; they didn't feel
like SD-JWT was really bringing anything, so they've kind of proven that it
can work on mdoc and JWTs. And we've had discussions with Abhi where Abhi
said he's interested in getting it to work with the data integrity stuff,
but movement on that I think has been pretty slow because they're largely
focused on mdoc for the European Union. That's where they have a big
problem that they need to solve. And Yarm is going to be at GDC in Geneva.
in Geneva.

Manu Sporny: Jesse. So, sorry, I think you dropped,…

Jesse Wright: Brilliant. I will

Manu Sporny: Yarm is going to be at GDC in Geneva, so you might want to
meet up with him and say hi. Okay. All right. That's kind of an overview of
Longfellow. I sent a bunch of other questions back to the mailing list;
hopefully Yarm will have the time to respond. Any questions, concerns, any
other comments on this stuff? All right, that is it. Our next kind of
agenda item is what to do with the quantum-safe data integrity suite.

Manu Sporny: What else do we need to do to make sure that it is ready to go
standards track at W3C? Just to remind everyone, what Will did was he
created crypto suites for each one of these mechanisms: ML-DSA, stateless
hash-based signatures (SLH-DSA), Falcon, which has smaller signature sizes,
and SQIsign, which would be wonderful if SQIsign ended up being a
cryptographic scheme that we could use because it has really small public
key sizes and really

Manu Sporny: small signature sizes. So 128 bytes versus multiple kilobytes
to tens of kilobytes in size. so there are four here ML MLDDSA and SHS are
FIP standards now NIST standards which means that usually other nation
states kind of adopt those standards. Falcon is expected sometime this year
or next year. again, more efficient signature sizes. And then SQI, no ETA
on if that would be standardized. It's still going through the
standardization cycles. So, I think these are the four that we're going to
say we'd like the VCWG, the verifiable credential working group to
standardize. some common algorithms have been pulled out already.

Manu Sporny: So this was work that we wanted to see happen as well, so
that's good. And then the rest of these kind of are pretty simple,
straightforward things. I think what we do want to do at some stage here is
make a decision on what we're calling each one of these: ML-DSA-44, which
includes the strength parameter for the signature, or do we put just the
date and specify that it's ML-DSA-44 in the text.
00:50:00

Manu Sporny: So the identifiers for the crypto suites, we need to probably
come to a decision on that. I think it's fine to keep it experimental for
now so that everyone knows. But this one, SLH-DSA, I don't know if we
mentioned a strength level there; for Falcon and SQIsign we don't. And the
strength levels are usually bound up in the identifier for the crypto
suite, along with the year. So there's that; I think we need to make a
decision there. And then for the multi-key representations, we need to pick
some numbers. So those two things I think are the concrete things we need
to do before handing this over, which means we're almost there. One more
set of pull requests and I think we're in good shape to hand that over.
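
None of these identifiers have been decided yet; purely to make the open questions concrete, a proof using a strength-bearing cryptosuite name might look like the sketch below, where the cryptosuite string and verification method are invented for illustration.

```python
# Hypothetical only: neither the cryptosuite string nor the verification
# method below are decided values. They just make the two open decisions
# concrete: does the strength level (e.g. "44") appear in the cryptosuite
# identifier, and which multikey header numbers get registered for the
# public key encoding.

example_proof = {
    "type": "DataIntegrityProof",
    "cryptosuite": "ml-dsa-44-2025",  # invented; could equally be "ml-dsa-2025"
    "created": "2025-06-27T00:00:00Z",
    "verificationMethod": "did:example:issuer#key-1",  # placeholder
    "proofPurpose": "assertionMethod",
    "proofValue": "z...",  # multibase-encoded ML-DSA signature, elided
}
```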

Manu Sporny: The other question that we should probably cover, not during
this call but on a future call, is what do we want to do with selective
disclosure? So what we could do is the same thing we did with ECDSA, which
is a signature per triple, which would mean that creating a selectively
disclosed ML-DSA thing could take, I forget what the signing speeds are
right now, but I

Manu Sporny: I mean, it could take upwards of multiple seconds to generate
a driver's license, but that's what it would take because you have to sign
every single quad separately. And then the signature size would be massive,
right? Number of triples times a post-quantum signature. But when you go to
present that, you only need to expose the signatures that you want to have
verified. That's one approach that we could take. The other approach we
could take is: we're not going to do the whole sign-per-quad approach for
the post-quantum suites; we're going to use the cryptographic circuit based
approach,

Manu Sporny: meaning we would use some kind of Ligero polynomial commitment
mechanism instead. But that would be fairly bleeding-edge stuff, and the
problem here is that if that stuff is not standardized on an acceptable
time frame, we can't do a global standard for this until it's standardized.
So if for whatever reason the whole Longfellow ZK stuff takes 5 years to
standardize, that means that we couldn't push this standard out for another
5 years; versus if we did not take the Ligero approach for selective
disclosure but took the more traditional sign-each-quad approach, there's
nothing stopping us from doing a global standard fairly quickly.
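
A minimal sketch of the sign-per-quad flavour of selective disclosure described above; Ed25519 from the cryptography package stands in for an ML-DSA signature purely so the example runs, and the real suites would add canonicalization and mandatory-disclosure handling on top.

```python
# Sketch of the "sign each canonical quad separately, disclose only some"
# approach described above. Ed25519 stands in for ML-DSA purely so the
# example runs with a widely available library; with ML-DSA every entry in
# `sigs` would be a multi-kilobyte post-quantum signature, which is the
# size concern raised on the call.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def issue(quads: list[bytes], key: Ed25519PrivateKey) -> list[bytes]:
    """Issuer signs every canonicalized quad independently."""
    return [key.sign(q) for q in quads]

def present(quads, sigs, disclosed_indexes):
    """Holder reveals only the selected quads and their matching signatures."""
    return [(quads[i], sigs[i]) for i in disclosed_indexes]

def verify(disclosure, public_key) -> bool:
    """Verifier checks each disclosed quad against its signature."""
    try:
        for quad, sig in disclosure:
            public_key.verify(sig, quad)
        return True
    except InvalidSignature:
        return False

# Usage: sign three quads, disclose only the first and third.
key = Ed25519PrivateKey.generate()
quads = [b"<a> <b> <c> .", b"<a> <d> <e> .", b"<a> <f> <g> ."]
sigs = issue(quads, key)
assert verify(present(quads, sigs, [0, 2]), key.public_key())
```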

Manu Sporny: So, I think that that's kind of the set of discussions we need
to have on the postquantum suite before we say it's kind of ready to go. In
theory, we could get through those discussions in the next month or so. Let
me pause here, see if there are any questions or anything else, any other
items that folks think we should work on for the quantum safe crypto
suites. so that means that we're pretty close with the quantum safe crypto
suite stuff. we just need to raise some PRs to make some suggestions here
and I think we'll be in good shape to hand that over to the verifiable
credential working group. All right, I think that's our call for today. Are
there any other items that folks wanted to cover?

Manu Sporny: Yes, please.

Jesse Wright: I noticed Sanjam joined the call.

Jesse Wright: Maybe it's good for you to quickly introduce yourself.

Sanjam Garg: Hi. I'm a professor working in cryptography at UC Berkeley. So
we're kind of interested in the space. We've been working with German Reed
on trying to build the first-person credential, the crypto behind that. The
Google system that you mentioned, Abhi's; there's another system from
Microsoft called Crescent that one of my students worked on. Yeah, so we
wanted to hear what the requirements are and what people were thinking. So
thank you
00:55:00

Manu Sporny: Welcome to the call, Sanjam. If you're in Berkeley, Greg on
the call here,…

Sanjam Garg: Fantastic. Maybe we can see sometime.

Manu Sporny: who is also working on kind of unlinkable cryptography and is
the lead on the standardization, is in your area.

Manu Sporny: So, yeah,…

Greg Bernstein: Yes, I'm down in Fremont and…

Greg Bernstein: I frequently get up to Berkeley, and I'm an alum. Are you going
to be at the Simons Institute? They're having a proofs workshop in July.

Sanjam Garg: Yeah, I'm going to be there. Sounds good.

Greg Bernstein: Okay, maybe I'll try and attend in person then. Okay.

Manu Sporny: there we go. Wonderful wonderful to see those connections
happening. all right. that is our call for today. I think we are going to
cancel the calls for next week. I mean it's 4th of July in the United
States and the GDC thing's also happening, so I think everyone's going to
be busy with other things. So I'll send out that cancellation for next
week. But the following week we'll meet again. I'm sure there will be
things to catch up on with the GDC event having happened. and we'll try to
push this quantum safe stuff forward.

Greg Bernstein: Sounds good.

Manu Sporny: And then Greg, if you have any updates, from the BBS stuff, we
can cover that as well. All right, that's it for the call today. have a
wonderful weekend and we'll see each other in two weeks. Take care.
Meeting ended after 00:56:55 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 27 June 2025 22:06:03 UTC