[MINUTES] Data Integrity 2025-08-15

W3C Data Integrity Community Group Meeting Summary - 2025/08/15

*Topics Covered:*

   1. *Handling of Numbers in the Data Integrity Specification (Issue
      343):* The group discussed the challenges of serializing numbers
      (floats, decimals, large integers) across different architectures,
      which can lead to signature verification failures. The consensus was
      to avoid adding normative changes to the spec (version 1.1) that
      would restrict number usage. Instead, the spec will include a warning
      about potential interoperability issues. Implementations *may*
      provide tools to help developers identify problematic numbers, but
      this is not a requirement.
   2. *Generalizations of Algorithms and Spec Location:* The group agreed
      to move common algorithms (transformation, hashing, proof
      configuration, and selective disclosure functions) from other
      specifications (such as the post-quantum spec) into the core data
      integrity specification. Future work may involve rewriting the
      algorithm descriptions in a clearer function-based syntax to improve
      readability and ease of implementation.
   3. *BBS and Post-Quantum Unlinkability Updates:* Greg provided an
      update on the BBS signature scheme, noting that performance data
      from recent work is not yet publicly available. The group discussed
      the potential of integrating Longfellow's ZK approach, particularly
      the use of Pedersen commitments vs. Pedersen hashes, and the need to
      clarify details with the Longfellow team to ensure soundness
      (preventing false proofs). Further investigation and communication
      with the Longfellow and SQIsign teams are planned to explore
      post-quantum unlinkability.

*Key Points:*

   - The group prioritized avoiding breaking changes to the data integrity
   spec to prevent widespread disruption and confusion.
   - A clear warning about potential interoperability issues with number
   serialization will be added to the spec.
   - Several algorithms will be consolidated into the core data integrity
   spec, with future work focused on improving their description and
   presentation.
   - Further investigation is needed to fully understand and incorporate
   aspects of the Longfellow ZK scheme into the data integrity landscape.
   Discussions with relevant researchers are planned.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-08-15.md

Video:
https://meet.w3c-ccg.org/archives/w3c-ccg-data-integrity-2025-08-15.mp4
*Data Integrity - 2025/08/15 09:55 EDT - Transcript*

*Attendees*

Benjamin Young, Dave Longley, Denken Chen, Greg Bernstein, Hiroyuki Sano,
Manu Sporny, Parth Bhatt, Phillip Long, Ted Thibodeau Jr, Victor Lu
*Transcript*

Manu Sporny: All right, we've got a good group of people here. Let's go
ahead and get started. Welcome to the data integrity call for this week.
It's August 15th, 2025. It looks like everyone has found the new meeting
link, which is great. We had to change the meeting link, I guess due to
some kind of weird bug in Google Meet; our meeting was handed over to some
other random person on the internet, and they got the meeting
transcriptions. We don't know who it is other than their name is Zoe Ey.
So if anyone knows Zoe, please ask them if they could please return our
transcript.

Manu Sporny: But in the meantime we are switching to another call link
that we now have control over. So hopefully the transcripts will continue
to work after this meeting. We'll see towards the end of the day. We do
have an agenda today. There is a discussion that is unfortunately kind of
splattered between multiple groups. It started off in the DID Working
Group, in the DID Resolution work item actually, questioning why we
weren't using numbers, floating-point numbers. We said that's because if
you use a floating-point number

Manu Sporny: you can't quite control how it's serialized on different
architectures, and that can result in the digital signature failing to be
verified on other systems, and so we had just said don't use it. It's not
worth it. We got pretty strong pushback on that, primarily by Stephen
Curran at the Province of BC. Since then we discussed it in various
meetings, the DID Working Group and this call, and it looks like the
consensus path forward is to allow numbers and then warn people, and
we're going to have a discussion today about the details of what we
actually want to write. It looks like it's going to be a pull request to
data integrity. So that's the main item we have today.

Manu Sporny: We also want to cover generalizations that need to be made
to the algorithms and talk about which specs they might go in. I think
we're probably not going to get an update on BBS because Vasilis is in
Europe and on vacation this month, but any updates from Greg we'll get
towards the end of the call. And then we'll just talk about some of the
challenges with the Longfellow ZK discussion. That was great two weeks
ago. We really want to move forward on it, but we found some details that
might complicate things, so we might want to talk for about 15 minutes at
the end of the call about that. That is the agenda for today. Are there
any updates or changes to the agenda?
00:05:00

Manu Sporny: Anything in addition we want to discuss? If not, let's go
ahead and jump into the first item. This is going to be in the data
integrity specification, issue 343 here. One thing I forgot to mention is
that the Verifiable Credential Working Group is now meeting again. We took
a break over the summer. We are now very much in maintenance mode in that
working group, with the ability to work on new additional extension specs
like the render method and confidence method. Those are very much in
scope and we can move them forward. But we need people to show up to the
VC Working Group and move them forward.

Manu Sporny: If people don't show up, the work won't get done and we
won't move that stuff forward. So, just a general heads up to folks that
we need to show up to those meetings if we want to move stuff forward.
The cadence of the VCWG meeting is once a month now, and the work mode is
basically that we can raise as many PRs as we want in the interim. We are
going to focus a bit more on incubating the rest of the specs to move
them over to the VCWG. We are hoping to have a new charter for the VCWG
by TPAC in November, which is going to be in Kobe, Japan. All that to say
that the VCWG is now out of its summer hibernation. They're meeting
regularly.

Manu Sporny: We are going to publish a version 1.1 of VC Data Integrity,
and what we're going to talk about today is probably going to go into
that version 1.1 document.

Greg Bernstein: That's bad.

Manu Sporny: So we're not talking about something that's going to take
months to get in there. We're talking about something that, right after
we finish talking about this, we're going to raise a PR, and it's going
to go out in version 1.1, the editor's draft of it. There is an issue 343
in VC Data Integrity: adding a security consideration for the
serialization of numbers. We are going to provide the following guidance:
expressing arbitrary-precision numbers, such as decimals and floats, is
problematic. Integers above what a 32-bit architecture can support are
problematic.

Manu Sporny: Fractions don't always serialize the same way on different
architectures. 64-bit floats don't serialize in the same way as 32-bit
floats. And all of these can result in mismatches between architectures
when you digitally sign something. We also need to make it clear that not
all architectures follow IEEE 754. CPUs did for a while, but there are
some shortcuts that you can take to really speed up calculations, and the
new AI processors do that. They lose precision so that they can do many
more billions to trillions of calculations per second, the CUDA
architecture being an example of that. Okay.
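
The failure modes Manu describes can be illustrated with a short Python
sketch, assuming a pipeline that parses JSON numbers into IEEE 754
doubles (the variable names are illustrative only):

```python
import struct

# A 64-bit integer above 2^53 cannot be represented exactly as an
# IEEE 754 double, so any JSON processor that parses numbers into
# doubles silently changes the value before re-serializing it.
big = 2**53 + 1                     # 9007199254740993
as_double = float(big)              # rounds to 9007199254740992.0
print(int(as_double) == big)        # False: the value changed

# The same decimal also differs between float widths: 0.1 stored as
# a 32-bit float does not compare equal to 0.1 as a 64-bit double,
# so a 32-bit pipeline re-serializes it as different text.
f64 = 0.1
f32 = struct.unpack("f", struct.pack("f", f64))[0]
print(f64 == f32)                   # False
```

Either change means the canonicalized bytes differ between issuer and
verifier, and the signature fails to verify.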

Manu Sporny: So we can put language in the data integrity specification
around this, warning people that if you use numbers in these categories,
you do so at your own peril. I'm wondering if we want to go further and
say things to that effect, and we can't do this in version 1.1. We cannot
add new features, right? So I just want to make it very clear that any
kind of normative text that would change the conformance, or really
anything that adds a new feature, is not allowed unless we feel it's a
serious security vulnerability and we need to fix it. I don't think this
falls into that category.

Manu Sporny: So what we could say, and I'm going to start with the most
extreme thing we could do, is that all data integrity processors operate
in a safe mode by default. They will not allow you to use numbers that
could result in bad signatures or signatures failing to verify across
architectures. So by default they won't let you serialize integers unless
you convert them to a string yourself.
00:10:00

Manu Sporny: If you try to use a number that is above a 32-bit value, if
you try to use a floating-point number, a decimal, things of that nature,
it will kick out an error. It'll say, "Nope, sorry, we're trying to
digitally sign a number that is probably not going to be verifiable
across all architectures." So that's the most extreme thing that we could
do: introduce a safe mode to data integrity processors that does that by
default.
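
A hypothetical sketch of what such a guard might look like; the function
name, thresholds, and error messages below are illustrative only and do
not appear in any spec:

```python
def assert_safe_number(value):
    """Hypothetical 'safe mode' guard: reject numbers that may not
    serialize identically across architectures."""
    if isinstance(value, float):
        raise ValueError(
            "floating-point values may serialize differently across "
            "architectures; convert to a string before signing")
    if isinstance(value, int) and not -2**31 <= value < 2**31:
        raise ValueError(
            "integers outside the 32-bit range may not round-trip on "
            "all architectures; convert to a string before signing")
    return value

assert_safe_number(42)       # passes
# assert_safe_number(0.5)    # would raise ValueError
# assert_safe_number(2**40)  # would raise ValueError
```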

Manu Sporny: The other thing that we could do is say that it should throw
a warning. That would allow people to continue to do this, but they might
end up being surprised when the thing they signed all of a sudden can't
be verified on certain architectures. Or we can do none of this. We could
basically say: what, we warned you, good luck. The major concern... let
me stop there. Those are the options. Let's hear from folks. What are
your thoughts on each one of those options? Go ahead, Dave.

Dave Longley: I think an additional option would be to say that
implementations could offer features that do things like that, and we can
let people experiment with that if they want to, as opposed to making any
specific recommendation that an implementation do a very specific thing,
because I think it's not clear what the end result would be. Especially
with people who want to use numbers, they might end up just always
turning features like that off, and then they're not very effective. It
seems like we don't know if that's a good idea; we understand what effect
we want to have.

Dave Longley: And fundamentally, we understand this is also an
interoperability problem, and the interoperability problem can lead to
other trouble, but the most likely thing to happen is that a signature is
going to fail in a frustrating way. And I'm a little bit concerned about
us trying to tell other implementations that they should have this
special feature, as opposed to: you shouldn't do this because of
interoperability problems unless you really know what you're doing, and
implementations could offer a feature to help with that.

Dave Longley: Yeah, and never mind the fact that we can't make any
normative change right now. So any kind of normative change would be
troublesome at the moment.

Manu Sporny: I heard. That's definitely another option. I am really
concerned that most developers don't understand this completely. Here's
my concern: we have a big federal government, or multiple really big
states, start issuing things. We've had some contact with their
developers, who do not seem to really understand this stuff at depth.
They will use the defaults, and they'll just continue to use things that
they've been using, and they will issue millions upon millions of these
long-lived credentials, and then find out after they're at scale that
their credentials are failing in the field.

Manu Sporny: And then they will blame the specification and the
technology and that sort of thing, right? It's just going to be yet
another complaint that people have around data integrity or
canonicalization. So that's why I'm concerned that us just warning people
is not going to stop the people that don't know any better. They're not
going to read the spec. They're not going to read the docs, if the
libraries even talk about this stuff, and they will just shoot themselves
in the foot. And especially with these things that are digitally signed
and then potentially not verified, or you don't hit a 32-bit architecture
for a long time, it ends up becoming this thing that just doesn't work.
Go ahead, Dave.
00:15:00

Dave Longley: The counterargument to that, I think, is that they'll put
their numbers in. This is a data modeling problem. So they'll design
their data model. They'll have their numbers. They'll go to use data
integrity to protect their data and it won't let them. And the result of
that will be: we're not going to change our data model, we're going to
change our securing mechanism. And then they'll adopt something that is
more troublesome, in a worse way. And that'll happen potentially more
frequently. So I think the number of people that this will affect will
hopefully be very small. And what we don't want them to do is reach for
other technologies that'll let them do the wrong thing.

Dave Longley: It would be better for them to make their data modeling
mistake while choosing better technologies generally, and then have this
error in a small area, but have a better technology choice everywhere
else, because I would expect that they would make the wrong technology
choice as a result of this, and it would be a wider problem. It seems to
me like that would be the more likely outcome than the things that you
were listing.

Dave Longley: I don't understand how they would come to that conclusion,
in fact. When I compare the two scenarios, it seems to me like what
you're suggesting is that they're going to not make the right technology
choice no matter what. And so then we have to decide whether we're going
to give them a safety feature or let them proceed and have
interoperability problems. And I don't think either one of those is
really that accurate. But go ahead.

Manu Sporny: Yeah, I mean they can always turn safe mode off, but at
least they're consciously doing it in that case. Then they have to go and
read about it and understand it and that sort of thing, versus them not
even knowing that they're making a mistake,…

Manu Sporny: right? That's…

Dave Longley: Yeah, but they're making it at a totally different level from
data integrity.

Dave Longley: This is a data modeling problem that they're going to make.
So it seems like this isn't even the right spec for that to happen in.
They're making a data modeling choice, and then later they want to
digitally sign that data, but they've already made this choice for their
data model. And so it's too late. And since it's too late, they'll say:
let's not have this annoying problem that everyone's going to hit if they
try to protect their data using this choice, and they'll choose this
other option. And now, because of that, we have worse problems than the
interop problem. In fact, we'll have additional interop problems, we'll
have different interop problems, and we'll have power imbalance problems
and centralization problems. So that seems worse to me, and that's all
because we had a flag that encouraged people to go in a certain
direction, where that flag is not in the right place, because it doesn't
trigger when they're modeling their data.

Manu Sporny: Go ahead, Ted.

Ted Thibodeau Jr: I can't keep the choose-your-own-adventure paths
straight in my head,…

Ted Thibodeau Jr: but you said one thing there that caught me: when you
have to go and read about this option that you need to turn on or off to
avoid some error or whatever. In my experience, that does not necessarily
lead to understanding anything at all, except "use this switch."

Ted Thibodeau Jr: There are plenty of deployments out there that have some
switch or other turned on because it eliminates an error that was
irritating people at some point and it was the wrong choice except that it
eliminated that error at that point in their development and they just left
it that way because it was easy.

Manu Sporny: Go ahead,…

Ted Thibodeau Jr: That's all I got.

Manu Sporny: Dave. Yep. Thanks. Go ahead.

Dave Longley: Yeah, that I agree with that.

Dave Longley: And I think there are situations where having that flag be
in the way makes sense, because it's such a small number of use cases
where you're really making a big mistake, versus there being legitimate
use cases for using floats or whatever. And any implementation we come up
with that's going to try to split that and draw some lines somewhere is
going to be a guess that we have about the technologies that people are
commonly using. We can't write a sensible implementation for that, I
don't think. And there are going to be use cases that will flip the flag
that are going to be fine.
00:20:00

Dave Longley: And that's totally different from putting a flag into a
system where, if you're doing this, it's really, really challenging to
justify what you're doing. That's not the case with numbers for a lot of
use cases. And so I expect people to find the flag annoying, to find it
too late, after they've already designed their data model, and to just be
turning the annoying thing off, or saying, "Let's not even use this,
because everyone has to turn this annoying thing off. Let's go somewhere
else." And that means, unless someone's got a better solution than what
data integrity is offering, they're now choosing a worse technology to
solve their problem.

Manu Sporny: All right. So, it sounds like the consensus is no safe mode,
no flag, not even suggesting implementations provide a mechanism. The
text will just warn people that using these classes of numbers could
cause interoperability problems, and to be aware of that when they
serialize, or that it could be an issue when they serialize.

Dave Longley: So I don't know; I don't feel strongly as to whether or not
we say implementations can provide you something to flag whether or not
this might be a problem. But I don't think we should be telling
implementations that they have to do that, or that they should do that,
or exactly how they ought to. So we can leave people some space to sort
of work with that, and that's better than telling the whole community
that any of these implementations that you pick up are going to have this
annoying problem.

Manu Sporny: Okay. go ahead, Phil.

Phillip Long: Yeah, this may be a naive question, probably is, but is
there any mechanism by which a test environment can be offered that
flags, before they deploy, whether this is likely to cause an issue? Or
are the outcomes that can result in a problem just too diverse for a test
environment to anticipate?

Manu Sporny: Go ahead.

Dave Longley: So, there's another specification called I-JSON. It's an
RFC out of the IETF, and I think we've linked to it; that's probably one
of these links here. We could say that an implementation could test to
see whether or not a number works with that specification. That doesn't
mean it's going to work everywhere, but we could say you could run your
data through this, or your implementation could check for it, and that
could give you some slightly higher assurance that you're going to have
interoperable JSON. That is something we could talk about, but as I said,
not something that I think we should tell implementations they have to
use or have to default to.
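
A minimal illustration of the kind of I-JSON-style check Dave describes:
RFC 7493 recommends staying within the range and precision of an IEEE
754 double. The helper name below is hypothetical, not from the RFC:

```python
def within_ijson_precision(n: int) -> bool:
    """Check whether an integer survives a round trip through an
    IEEE 754 double, the precision I-JSON (RFC 7493) recommends
    staying within. Illustrative helper, not from the RFC."""
    try:
        return int(float(n)) == n
    except OverflowError:
        # Value exceeds the double's range entirely.
        return False

print(within_ijson_precision(2**53))      # True: exactly representable
print(within_ijson_precision(2**53 + 1))  # False: rounds to 2**53
```

An implementation could run a document's numbers through a check like
this and warn, without the spec mandating any particular behavior.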

Manu Sporny: Yeah, I think this is just boiling down to: we can say
implementations may provide additional tooling that provides errors or
warnings when problematic numbers are discovered, but it's a "may", and
then we leave the door open for implementations doing whatever they need
to do. Meaning, Phil, for your suggestion, even the test environment
would have to have a feature that flagged these
interoperability-challenged numbers.

Phillip Long: And I appreciate that it's not going to be one that's
universally going to pick up all the major problems and give people a
heads-up on what to do. I'm just trying to narrow things down in a way
that gives people directions to pursue that aren't just throwing things
away and saying the heck with it.

Dave Longley: Yeah, it'll be challenging to write that interface as well,
because you might put something into that interface that has to be
converted before it can be checked. So it can be detailed work to write
that thing…
00:25:00

Phillip Long: Yeah. Good.

Dave Longley: but I appreciate the idea of having such a tool, and if it
is possible to write a sensible one with a sensible interface, it would
be a useful tool.

Phillip Long: Thanks for the consideration.

Manu Sporny: All right. So I think we have enough for the note that
implementations may provide features that help developers avoid these
interoperability issues, but leave it vague so that can be up to the
implementation. Okay, I think that's it. Any other comments? Let's see.
Hold on.

Manu Sporny: Let me also note it down: data integrity will not specify a
safe mode or "should" requirements around avoiding
interoperability-problematic numbers.

Manu Sporny: Anything else that we should say before we move on from this
item? Mhm.

Dave Longley: Yeah, my one comment would be about calling it a safe mode.

Dave Longley: I know we're not going to do it anyway. It felt more like
an interrupt mode than a safe mode. This isn't a security consideration,
and we don't want people to think that it is.

Dave Longley: Anything can become a security consideration, but here you
would have an interop problem: your signature would not validate, would
not verify, and you would not proceed. So it's like an interrupt mode.

Manu Sporny: Mhm. Yep.

Manu Sporny: I updated the text. Anything else we want to cover on this
issue? So this will remain editorial. I can create a PR for this, and
we'll merge it following the Verifiable Credential Working Group's
process. On to the next item: let's see, generalizations that we should
make to the algorithms and the specs.

Manu Sporny: I note that I don't think he's here. I forget, was it in the
quantum-safe spec that some common algorithms were pulled out:
transformation, hashing, and proof configuration? So these common
algorithms, where are they linked to from here? Right. Okay.

Manu Sporny: So this refactoring has already been done, at least for
proof configuration, transformation, and hashing. We could move this into
the data integrity specification. It would be an editorial change,
because the algorithms wouldn't actually change what the output is. So
there are at least these ones that I think we could move into data
integrity at this point, and that would simplify some of this stuff.

Manu Sporny: Are there other algorithms we want to consider? I know that,
I think, the ECDSA-SD work specified selective disclosure functions. And
so we could move all of these to the data integrity spec. Let me pause
there. I'm seeing a thumbs up from Dave and a thumbs up from Greg. I
think the first thing is we just move them, full stop, right? And then
just fix up the references.
00:30:00

Manu Sporny: I think this entire section 3.4 would move to data
integrity, and then we'd move these common algorithms to data integrity,
and that would be the first pass. Do we want to do anything more than
that, or rather, is there a second pass? And what would we want the
second pass to be? Go ahead, Dave.

Dave Longley: Depending on how much work we are willing and able to take
on at the moment, something that would improve the readability of the
spec would be to have a better syntax for expressing these algorithms as
functions. Have some pseudo-syntax that's not specific to any language.
It's not a hard requirement that you implement to that specific syntax,
but each one of these algorithms has inputs and outputs and a name
associated with it. And we kind of did that a little bit in the
specification that you have up right now, the ECDSA spec. We talk about a
label map factory function; that's an actual function somewhere else.

Dave Longley: I feel like somewhere else we kind of talked about these as
functions that you pass things into, sort of the way that other IETF
specs do it. It's a little bit cleaner to assemble and put those things
together and understand that they're individual primitives that you can
reuse. And that's certainly what all these functions are. These are
selective disclosure primitives that can be reused in any cryptosuite. So
future work, if we have time and want to do it, is to come up with better
names and clear inputs and outputs and kind of clean it up that way.

Dave Longley: So everything kind of is a separate function, which is
close to what we have in this spec, but that's certainly not the case
with the other algorithms that we were going to move into the core spec
from the post-quantum spec.

Dave Longley: I don't think we've defined those as clearly, as "this is a
function" with that notation, and if we have better notation it's easier,
I think, for implementers if they want to follow it. Well, yeah, so I
don't know…

Manu Sporny: Okay. …

Manu Sporny: I kind of hear at a high level what you're asking for, but I
wouldn't know how to turn that into a PR. Maybe an example would help.

Dave Longley: This is probably the best example: if you looked at the BBS
spec in the IETF, or if you looked at, I think it's Hybrid Public Key
Encryption (HPKE), they define interfaces and they use notation that
looks like mathematical functions. Somewhere in here there's going to be
an interface, yeah.

Manu Sporny: This

Dave Longley: Yeah, so here, OS2IP takes x as a parameter and returns a
value. Some of these look a little better, and they actually have
pseudocode for what these things are, but you're defining each of these
using this sort of notation, and then you can refer to it in algorithms
that call into these functions, and then if you define one of these
primitives you can reuse it.

Dave Longley: It's sort of a defined set of inputs and outputs. Yeah,
that might be a better example right there; they explain how the stuff
works. Then if you're writing an implementation, you know which pieces
you need to write, and then you have all the primitives and how to put
them together for the higher-level stuff, and it makes your life easier
as an implementer. Yes, that's a good example of how this works, and it's
all very high level.

Dave Longley: You can see there's no specific implementation details around
these interfaces.
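
As a rough sketch of the function-style decomposition Dave describes,
with each algorithm as a named primitive with explicit inputs and
outputs: all names and bodies below are illustrative placeholders, not
taken from any W3C draft:

```python
from hashlib import sha256

# Each primitive has a name, typed inputs, and a typed output, the way
# IETF specs like HPKE present interfaces. Higher-level algorithms are
# then just compositions of the primitives.

def transform(document: dict, options: dict) -> list[str]:
    """transform(document, options) -> canonical statements.
    Placeholder canonicalization: sorted key/value strings."""
    return sorted(f"{k}:{v}" for k, v in document.items())

def hash_data(statements: list[str]) -> bytes:
    """hashData(statements) -> digest."""
    return sha256("\n".join(statements).encode()).digest()

def create_proof_hash(document: dict, options: dict) -> bytes:
    """A higher-level algorithm composed from the primitives above."""
    return hash_data(transform(document, options))

digest = create_proof_hash({"b": 2, "a": 1}, {})
print(len(digest))  # 32
```

The payoff is that an implementer can see exactly which primitives to
write and how the higher-level algorithms assemble them.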

Manu Sporny: Okay. …

Manu Sporny: Acknowledged. It's a significant amount of work, I think. Yep.

Dave Longley: Yes, it would be

Manu Sporny: So, we need someone to volunteer to do that, because I
definitely do not have the time to go through all the algorithms and do
it. Any other things? I think the order of operations here is: at least
just move the content as it is, put it in the right places, and fix up
the references in the first pass.
00:35:00

Manu Sporny: We're going to do the common algorithms, and we're going to
do the selective disclosure common algorithms. That'll be the first pass,
and then the second pass might be to start from the bottom, the primitive
functions, see what a transformation like Dave is talking about might
look like, and then just kind of go from there.

Manu Sporny: try to find a pattern that seems to work and then move forward
there. go ahead, Greg.

Greg Bernstein: We may not be that far off on the selective disclosure
side…

Greg Bernstein: because, I mean, it's been a while since I implemented
these things, but they are sub-functions and things like that. So when we
move it and do a review, maybe we can make a decision, because some of
those may not be that far off, and it may just be helpful because,…

Greg Bernstein: we do have a number of small sub-functions and such, and
they feel like that. So that's my only input on that: we may be closer
there than we might think.

Manu Sporny: Mhm.

Manu Sporny: Okay, sounds good. All right. Anything else we want to talk
about with respect to generalizations to algorithms and the specs they
should go in? Then in the remaining time we can get an update from Greg
on anything BBS-related he wants to mention, and then point out something
that we found out about what Abhi might have been talking about. We need
to ask him some questions around that.

Manu Sporny: I did have another item here: we're looking for post-quantum
unlinkability. That is one of the things we asked Abhi and Matteo and
Sergey about, whether they're working on it, and they said they're
thinking about it, but nothing concrete that they're actually working on.
So we want to see whether what we're able to accomplish with BBS is
achievable with a post-quantum scheme. There have been multiple
cryptographers that have said, "yeah, no problem, you just do this," but
there's no detail necessarily on what they mean. SQIsign is the only one
that I know of that has kind of a claim, and multiple authors that we
might be able to get in touch with.

Manu Sporny: So, we may want to reach out to the SQIsign folks and say,
"Hey, some of the papers that you've written say that you can do
pairing-based stuff with the approaches you're using here. Do you have
more information on that? Are you interested in talking with us about
it?", getting some requirements and maybe getting some more focus from
them on that. Okay. Greg, over to you.

Manu Sporny: Any updates on BBS, and/or do you want to mention the thing
that we found out about the difference between Pedersen commitments and
Pedersen hashes?

Greg Bernstein: Okay.

Greg Bernstein: Unfortunately, I do not have any performance update. The
Rust code that Vasilis used to generate the numbers at the IETF meeting
is not available yet; he hasn't made that public, and so we'll just have
to see. But the numbers were very impressive, so we really do want to
double-check on those.

Greg Bernstein: One thing I did the other night was check to see if I
could get the Longfellow public code to build, and after putzing around a
few hours I got it to build. It has lots of performance sub-benchmarking
stuff, and I got some of those to run, which I know does not sound like a
big deal,
00:40:00

Greg Bernstein: But any of the ZKP stuff can be quite a challenge, and
this was the first time getting something with Ligero and sumcheck kinds
of things to build for me, because the arkworks stuff is Rust. And so,
yeah, you can get it to build, but it's not complete. They don't have a
complete Ligero type of thing; they have some commitments and such. So I
take each little piece of being able to reproduce something as we go
along.

Greg Bernstein: As far as the things that got us excited when we last met
with Abhi and folks: they were saying, if we drop worrying about JWTs or
mDL or mdoc, whatever it's called, we may be able to do much better, and
it would

Greg Bernstein: look kind of this way. And they mentioned something that
looked like either what's known as a Pedersen commitment or a Pedersen
hash. Pedersen commitments are very much something that is, if not
stated, inherent to BBS, and they are well known to work with some other
ZKP techniques known as sigma protocols, which can be very efficient and
which we know how to deal with from BBS.

Greg Bernstein: However, there's another flavor of the same thing that
looks very similar, known as a Pedersen hash. And used as a hash
function, that's got some concerns. And so we're wondering: do they mean
a Pedersen commitment or a Pedersen hash? And should we try to engage
back? Should we have multiple people question them back and say, hey, we
got very excited, we haven't heard from you? So we're doing the
herding-of-cryptographers thing, which some of us unfortunately are
getting used to, where it's like: that sounds great, okay, what did you
mean, and how would this work?

Greg Bernstein: So that's my understanding, because both Dave and I did some lookups on Pedersen hashes, and we saw there are some concerns. Some of those same concerns, though, are reflected in things that we do with BBS. There are these things called generators, and it's very important to be careful about how you build them. Actually, I found out that a lot of consideration was taken in making sure that when we built those things there were no special hidden relationships that could be exploited, because somebody was asking on the CFRG list, can't you

Greg Bernstein: make these things easier and such and such. And I had a
discussion with Basilis on that and he was saying, " we don't want these
special relationships." And then we in one of the papers Dave founder blog
posts found out, that applies to Patterson hashes and such like that. So,
it's not clear that some of those relationships or concerns about pets and
hashes would cause problems because they're kind of well known and kind of
were taken into account in the BBS kind of point of view.

Greg Bernstein: But we still need to get clarity on what this technique would be, because we haven't had a chance to bounce it around with any other cryptographers; I don't know who has seen it besides those at the presentation we received two weeks ago. Other folks have comments on that: Dave and Manu.
00:45:00

Dave Longley: Yeah, I just think it's not clear what they're using in the Longfellow scheme that was proposed. In BBS, clearly, when we go and calculate certain parameters, there's a B parameter that's calculated by combining a domain and…

Greg Bernstein: Yeah. Looks just like

Dave Longley: some other things and that thing if I remember correctly that
is a Person hash it's just you're multiplying yes you're just and that was
what was up on the slide when Obby was presenting which was you take each
individual message and you assume that those messages are independent
messages and you multiply each one by an independently generated generator
which is a public point on the curve that you're using and then you add and
you combine all those things up. and that is my understanding that I think
that's all a Person hash is and if they're using it in the same way that
we're using it in BBS and they're not doing any other tricks there.

Dave Longley: The only question is: because you put this into Longfellow, into whatever they put around the circuit, could you prove something that was false about it? That would be the security concern. I would presume that would all be in Abhi's mind too, and when he presents whatever it is they're doing specifically, I would hope that would not be a problem. I just know that this was happening at the same time that other paper came out saying that if you're not careful with how you hash things, or with what parameters you take into a zero-knowledge circuit program, then you can end up proving things that are false through your circuit.

Dave Longley: And that involved a program that had its own hash going into it. So, is there any possibility of a similar problem here, where you could make one of the messages cause the hash to have this sort of property with the circuit?

Dave Longley: And that seems to me like where there might be a concern, and I would think there would be a way to ensure that it isn't a concern, or to mitigate it.

Greg Bernstein: Yeah, that's exactly…

Greg Bernstein: because when you look at the history of Longfellow, they say sumcheck is the technique they're using. But when you read the paper, it's really a whole set of optimizations: there's sumcheck, then there's GKR, and then they put optimizations on GKR, and all those kinds of things. It's a long history of people doing better and better.

Greg Bernstein: That paper on proving false things was actually about Fiat-Shamir, how you take these interactive protocols and turn them into non-interactive ones. It was an attack based on not doing the Fiat-Shamir transformation, this heuristic, correctly, and it was an attack on GKR, and that's what surprised people. So it's a valid concern, but there are solutions for it too.
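[Scribe note: the Fiat-Shamir transformation Greg describes, replacing the verifier's random challenge with a hash of the transcript, can be sketched on a toy Schnorr-style protocol. This is an illustrative sketch with insecure parameters, not the GKR instantiation from the paper; the point it shows is that the challenge hash must bind the statement as well as the prover's commitment, which is the kind of omission the attacked instantiations are generally blamed for.]

```python
# Toy Fiat-Shamir over a Schnorr-style proof of knowledge of x with
# y = g^x mod p. Illustration only; parameters are NOT secure.
import hashlib

p = 2**127 - 1
g = 5

def challenge(statement: bytes, commitment: bytes) -> int:
    # Hash the full context: statement AND commitment. "Weak" Fiat-Shamir
    # (hashing only the commitment) is the kind of mistake that enables
    # proving false things.
    h = hashlib.sha256(b"FS-v1" + statement + commitment).digest()
    return int.from_bytes(h, "big")

def prove(x: int, r: int) -> tuple[int, int, int]:
    y = pow(g, x, p)                      # public statement
    t = pow(g, r, p)                      # prover's commitment
    c = challenge(str(y).encode(), str(t).encode())
    s = (r + c * x) % (p - 1)             # response
    return t, c, s

def verify(y: int, t: int, s: int) -> bool:
    c = challenge(str(y).encode(), str(t).encode())
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```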

Greg Bernstein: So this is why, when people say there's no new cryptography, that's true for the ECDSA part. The proof system is what, for Longfellow, we'd want cryptographers to review, and what would need cryptographer review is not being able to prove false things; that's what people call soundness. So: establishing that it's sound, meaning that you can't prove false things.

Greg Bernstein: So in ZKP literature terms, there's the zero-knowledge part and there's the soundness part, and the soundness part means the holder could not falsely prove an assertion to the verifier. So that would be the work of getting Longfellow through review, not as a signature scheme, because they're doing a zero-knowledge proof on a well-known signature scheme; it's the proof technique.
00:50:00

Greg Bernstein: out. Yes.

Manu Sporny: All right. So, what are the next steps here? We just have to ask Abhi and Matteo and Sergey. I guess we need more details; we want to analyze this. Ideally, we'd like to see some code that we could actually use. I think that's where we are right now. Go ahead, Dave.

Dave Longley: And I think the other thing could be to link to that other paper and ask what will be said in the Longfellow RFC, or in this very specific scheme, that would provide confidence that this sort of problem isn't happening with this approach; they should be able to do that. It seems like there shouldn't be a problem, you just have to be careful to mitigate that issue.

Manu Sporny: Okay. …

Manu Sporny: So, who wants to take an action to get in touch with them? I
don't know, Greg, if you wanted to or

Greg Bernstein: Yeah. …

Greg Bernstein: I'll do it. I'll get an email out to them, saying that everybody on the data integrity call was excited and…

Greg Bernstein: we need some details and I even got their code to compile
and a couple benchmarks to run. So, it's like, guys, we're excited.

Manu Sporny: Yep.

Manu Sporny: So thanks for taking that, Greg. I'll take an action to contact the SQIsign folks. It might be good to just have them come in here, at a time that works for them; this slot is probably worse for most of them, since they're in Europe I think, a German, French, Swiss kind of coalition of academics. So maybe we can bring them in to talk a bit more about SQIsign and let them know, hey, we're also very interested in the pairing stuff. I'll go ahead and send that out and see if I can get them to respond.

Manu Sporny: Luckily, there's just a contact sqisign.org email address, so I'll use that. I think that's it for this week. Anything else before we go? All right, we might meet next week if there are things to discuss; if not, we might cancel. We probably need to start seeing PRs on the Quantum Safe crypto suite to meet more regularly. We need to talk with Brent to figure out how, or if, we should split work between the VCWG and this group, since this group is meeting regularly.

Manu Sporny: It'd be good to have everybody focus on some of the PRs we make to the data integrity specs in the VCWG, but there are IPR concerns there: some people in here are not in the VCWG, and so we could not accept input from those folks. So I'll need to chat with the chairs and staff about whether we can use these calls to discuss some data integrity related items. I think that's it for this week. Have a wonderful weekend, and we'll be in touch about the agenda for next week. Thanks everyone. Take care.
Meeting ended after 00:53:54 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Friday, 15 August 2025 22:06:03 UTC