[MINUTES] CCG Weekly 2025-07-22

W3C CCG Meeting Summary - 2025/07/22

*Topics Covered:*

   - *Introductions and Administrative Matters:* Welcome to new attendees,
   review of code of conduct and IPR agreements. Clarification that
   participation doesn't require signing agreements, but substantive
   contributions do.
   - *Data Transfer Initiative (DTI) Trust Registry:* Lisa Dusseault
   presented the DTI's work on a trust registry for secure personal data
   transfer. The focus is on personal data portability, addressing both
   competitive and non-competitive use cases (e.g., transferring data between
   competing services, or transferring data to a preferred printing service).
   - *Trust Registry Architecture and Functionality:* The registry uses a
   centralized (but not necessarily single-server) model, simplifying the
   verification process for data transfer destinations. It aims to reduce risk
   and liability for large platforms while providing a cost-effective solution
   for smaller companies. The system relies on existing verification methods
   used by large platforms and leverages HTTPS GET requests for verifiable
   data.
   - *Data Donation Platform Prototype:* Discussion of a prototype data
   donation platform built upon the trust registry, enabling researchers to
   access user-donated data for studies while addressing privacy and
   anonymization concerns.
   - *Funding and Sustainability:* Exploration of funding models for the
   DTI's trust registry, including grant applications and potential
   usage-based fees, while aiming to avoid models that hinder participation by
   smaller companies.
   - *International Scope and Regulatory Considerations:* The registry aims
   for an international scope, acknowledging the challenges of adapting to
   country-specific regulations.
   - *Trust Signals and Nuance:* Discussion of the importance of nuanced
   trust signals, emphasizing the need to balance caution with the need to
   allow companies to develop track records and learn from experience.
   - *Future Integration of DIDs and VCs:* The group discussed the
   potential future integration of Decentralized Identifiers (DIDs) and
   Verifiable Credentials (VCs) into the data portability ecosystem, with the
   need for further investigation into specific use cases.

*Key Points:*

   - The DTI's trust registry aims to simplify and standardize the
   verification process for secure personal data transfer, benefiting both
   large platforms and smaller companies.
   - The registry currently takes a practical, low-tech approach, with
   plans to integrate more advanced technologies like DIDs and VCs when
   appropriate use cases emerge.
   - A key value proposition for large platforms is reducing legal and
   reputational liability, while for smaller companies, it reduces the high
   cost of individual verification processes.
   - A prototype data donation platform is under development, offering a
   privacy-preserving way for researchers to obtain data for studies.
   - The funding model for the registry is still under development, aiming
   for a balance between sustainability and accessibility for smaller
   organizations. The international scope necessitates careful consideration
   of regulatory landscapes.
   - The group highlighted the significance of nuanced trust signals and a
   graduated approach to dealing with potential issues from participants
   within the registry.

Text: https://meet.w3c-ccg.org/archives/w3c-ccg-ccg-weekly-2025-07-22.md

Video: https://meet.w3c-ccg.org/archives/w3c-ccg-ccg-weekly-2025-07-22.mp4
*CCG Weekly - 2025/07/22 11:56 EDT - Transcript*

*Attendees*

Alex Higuera, Chandima Cumaranatunge, Dmitri Zagidulin, Erica Connell,
Fireflies.ai Notetaker Ivan, Greg Bernstein, Gregory Natran, Harrison Tang,
Ivan Dzheferov, James Chartrand, JeffO - HumanOS, Jennie Meier, Joe
Andrieu, Kaliya Identity Woman, Lisa Dusseault, Mahmoud Alkhraishi, Phillip
Long, Przemek P, Ted Thibodeau Jr, Will Abramson
*Transcript*

Lisa Dusseault: Okay.

Harrison Tang: Thanks, Lisa, for actually jumping on and taking the time to
present here today.

Lisa Dusseault: Happy to be here.

Lisa Dusseault: Is there any preamble that you all do?

Harrison Tang: Yeah, we'll go through the administrative stuff, but I'm
actually catching a flight right now, so Mahmoud will be leading the call.
Yeah, we usually start around 9:03 or 9:04. So

Mahmoud Alkhraishi: Hi. We're just giving it three more minutes before we
get started, just to get everybody on the call.

Mahmoud Alkhraishi: Okay, let's get started. Thank you, Lisa, for joining us
today. As a reminder to everyone, please make sure that you have signed our
code of conduct (that's our code of ethics and professional conduct) and
that you've also signed our IPR agreements. Any substantive contributions
to the W3C CCG will require those.

Mahmoud Alkhraishi: Will, I see you're on the queue?

Will Abramson: Yeah, sorry.

Will Abramson: I think I just wanted to add my interpretation of what you
just said, which I think is slightly different. I think anybody can
participate in these calls, right? You don't need to sign this agreement to
participate, but if you want to make substantive contributions to any CCG
work items, then we require you to sign the IPR agreement. And that's about
intellectual property stuff, so you don't introduce things that you have a
patent for. Maybe that's just how I understand it. Anyway, obviously we
want everyone to agree with our code of ethics and conduct.

Mahmoud Alkhraishi: No, you're absolutely right. And I will link the IPR
agreement in chat for anybody who has not had the opportunity to review it.
Introductions and reintroductions: is there anybody on the call who would
like to introduce themselves who hasn't been here before? Obviously, Lisa,
you're going to have an opportunity to go through all that in a second, but
anybody else? Ivan, please.
00:05:00

Ivan Dzheferov: Hey, pleasure to meet you all. I was invited here by Will.
We met at Geneva at GDC. I'm a product manager working in the field. I want
to build wonderful products in this space, and I observe closely what's
going on. So, I'm interested to learn and grow. That's all. Thank you.

Mahmoud Alkhraishi: Welcome, Ivan. We're super excited to have you join us.
Anybody else who would like to introduce themselves, or perhaps reintroduce
themselves? Announcements and reminders: does anybody have any announcements
they would like to make or any reminders they'd like to talk about? All
right. Hearing none, then I guess, Lisa, the floor is yours. Would you mind
introducing yourself? You're also on mute right now.

Lisa Dusseault: Yes, thank you. I'm the CTO of the Data Transfer
Initiative, which is a nonprofit seeking to protect and extend the right to
data portability, which might not be the most high-urgency human right that
comes to mind. And there are many human rights that we also defend, but
this one, we think, supports privacy and choice in a way that is a little
bit subtle and underrated. So, that's what the Data Transfer Initiative
does. And what I came to talk about today was the trust registry we're
building to support data portability and secure data transfer, and how
that works.

Lisa Dusseault: I don't even know if I want to do the slideshow, if that's
all right with you, because I know you all know a lot of very technical
stuff about this space. And so I've been thinking that the relevance of
this presentation for you is to have me dive into a corner of the internet:
a use case, or a selection of transfers, where DIDs and verifiable
credentials aren't actually necessary.

Lisa Dusseault: And it remains to be seen what use cases would make them
necessary, or make them valuable enough to integrate, but why we've gotten
to this point with very basic technology and what we expect to build in the
future. So we're currently in a pilot program with the data trust registry.
And like I said, I'm going to skip around slides, so I apologize if my
narrative is not as smooth as it normally would be, but there's just stuff
you all already know, so it's pointless to dive into it. I will mention a
few common examples. There are lots of examples of users transferring
their personal data, and our scope is purely personal data.

Lisa Dusseault: We're not talking about trying to establish trust for
enterprise data transfer; enterprises can solve their own trust problems.
But for personal data, there's lots of use cases. Some of them are
competitive, which is where some of the regulatory emphasis has been. If
you've heard about data portability in a DMA context, in the Digital
Markets Act, or in the UK, the organization that is concerned with this
data portability right is even more focused on the ability for other
companies to compete with platforms.

Lisa Dusseault: Then you might think that this problem was all about
competitive data transfer: a user wanting to be able to transfer their
playlist to a different service that is not one of the major platforms,
giving them choice, perhaps being able to choose a service that offers more
privacy, or just getting off of one of the major platforms. But not all the
examples are either-ors. A lot of them are yes-ands, like being able to
transfer to a photo service that I want to pick to print my photos. Last
time I went to print some photos, since I have an iPhone, Apple presented
me with a list of Apple-selected photo printing services, some of which
were cheap and some of which were not as cheap. But it was Apple that
selected that list of photo printers, and I could not add to them or choose
a different list.
00:10:00

Lisa Dusseault: And that's not in any way competitive with Apple holding my
photos. That's a different issue. But it's still the kind of thing that
ought to be solved by any decent personal data transfer ecosystem. The
state of the art before our trust registry has been companies doing
verification checks that are a combination of automatic and manual. An
automatic check is that these companies all check to see that the applicant
has a privacy policy and that the link resolves. A manual review is to read
the SOC 2 or CASA report and make sure that it covers data security and
doesn't have any red-flag issues uncovered in a data security audit.
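
As a rough illustration, the automatic privacy-policy check described
here could be as simple as the following sketch; the function name and
exact behavior are assumptions for illustration, not the DTI's actual
tooling:

```typescript
// Hypothetical sketch: automated check that an applicant's privacy
// policy link actually resolves. Illustrative only.
async function privacyPolicyResolves(policyUrl: string): Promise<boolean> {
  try {
    const res = await fetch(policyUrl, { method: "HEAD", redirect: "follow" });
    return res.ok; // a 2xx response after redirects counts as "resolves"
  } catch {
    return false; // DNS failure, TLS error, network timeout, etc.
  }
}
```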

Lisa Dusseault: But these different platforms have inconsistent
requirements, and that adds a lot of uncertainty, risk, and overhead for
any startup that wants to apply to A, B, and C and get verified by all of
them. We also risk having different outcomes. If a startup was suspected of
being involved in identity theft by one of these giant platforms, there's
no process that would spread that information to other platforms that
approved that data transfer destination in any reasonable time frame. It's
also a problem when it's a false negative: when a startup is having trouble
getting approved by one company, but they've been approved by two others.

Lisa Dusseault: Then they're so close to being able to compete, but they
might need to support all the major platforms in order to really get off
the ground, raise their money, sign up their beta users, and focus on their
product and their innovation rather than on whatever is blocking them from
getting approved by one of the platforms that hold most of this kind of
data. The data trust registry, I often diagram it as centralized, compared
to different companies holding different unilateral trust registries. And
to be clear, when a company has a verified list of third-party partners
that they will work with, that is that company's idiosyncratic, unilateral,
internal trust registry, right? Trust registries are in far more places
than we call them trust registries.

Lisa Dusseault: Whether you call them directories or registries, the same
thing exists in a lot more places than we recognize, and we're starting to
recognize this. You all know that in the DID and VC space, which people are
trusted to issue VCs is another trust registry problem, solved in different
ways by different parties. So compared to the different companies that all
have their own completely different trust registries, our nonprofit trust
registry is centralized, but it need not be run on a single server. It is
today, but there's no reason, as you all know, that it needs to be run on a
single server by a single entity in the longer run.

Lisa Dusseault: We didn't get started on this until we really understood
the value proposition for the platforms to shift their verification
processes to a third party. One of the reasons I highlight this now is
because this is the history of how we got here and why, as you'll see, the
trust registry that we are building depends on an extremely similar
verification process to what these large platforms are doing. If they're
going to reduce their risk and liability, which we need them to do to be
able to get this out of their hands and hand it over to a nonprofit acting
in the public interest, we need to make this seem safe and normal and well
understood, not a very big change. And one of the key ways of doing that is
doing pretty much exactly what these big platforms did internally, and now
we're doing it in a nonprofit.

Lisa Dusseault: So that allows them to reduce the legal liability, the
worry that if they approve a company and that company does identity theft,
are people going to sue the large platform for making that decision? Great:
they get to take the exact thing they were doing, hand it off to somebody
else, and not be responsible for making that decision. And the reputational
liability is such a fuzzy thing. It depends on things like, when you go to
one of these platforms and transfer your data to somewhere else, they say
you're transferring this to a destination that was vetted by the data trust
registry, not by us; so don't come back and blame us if you've made a bad
decision. And that reputational liability is an important asset, or
important risk, to those companies.
00:15:00

Lisa Dusseault: But it's also helping them deal with regulatory pressure,
which is why some of this is coming to a head right now and really actually
seeing activity. As for the value proposition for smaller companies: we are
really, really trying to keep it simple. We're trying to push back on the
larger companies, who want us to ask for all the things, right? The larger
companies would like the barrier to be high because, the more barriers you
have (I know this is a bit of a fallacy, but nevertheless it's a commonly
held one), the harder it is for bad actors to get past all the barriers and
become part of the trusted ecosystem when they shouldn't be.

Lisa Dusseault: But we don't think that's appropriate. We want applications
to be simple and low-cost as far as possible, and to focus on the best ways
of mitigating risk, not on applying all ways of mitigating risk. This is
one of those well-known situations where, if you're doing a perfect job,
you're probably bringing everything to a halt. You can't run a large system
like this with zero risk. There are going to be some bad actors that slip
through. You can't know what is in a founder's heart when they start their
company. They could be a perfectly good company to all involved and then
decide to suddenly start using their repository of personal data for
identity theft, or get bought, or get hacked. We can't prevent all of those
risks.

Lisa Dusseault: We can only try to balance the value of allowing users to
transfer their data, and the information they have when choosing to do so,
against that risk. We are currently in our pilot program. We have a number
of companies in the trust registry. If you go and look at our actual site,
you can see the one that chose to make its entry public. We have several
others that have been reviewed but haven't yet decided to make their entry
public, because we're still putting together the pieces that make the
connection work. The pilot has companies verified, but not companies
connecting via the registry yet. We're still locking that down with a
couple of big anchors.

Lisa Dusseault: But we do have a couple of smaller platforms interested in
using the registry during the pilot to vet their partners. While large
companies have a reputational and legal liability in vetting the partners
who get access to data, small companies just have a cost that they cannot
afford. A startup can't afford to teach somebody how to run verification
processes, have an application process, have their partners supply
documentation, and review their privacy policies. They don't even have the
expertise in house to do this. And so they just end up saying: I know you,
so of course you can be our third-party developer and have an API key, and
if users grant access to you, you can have their data.

Lisa Dusseault: But the "I know you" system doesn't scale very well, so
small companies need a third-party platform to vet their partners even more
than large companies do. When we mature the system, start to scale it, and
open it up to all applicants rather than just invite-only additions to the
pilot, then we will need to know more about how it's funded. We are
currently applying for grants; any pointers on how to get this funded are
welcome. There is an opportunity here to have the registry be usage-funded
when it becomes a valuable enough ecosystem to be worth paying to join.

Lisa Dusseault: Although we're very aware of the problems of a pay-to-join
model that penalizes the smallest companies, who we most want to encourage
and allow into the ecosystem. That would undermine the competitive aims of
the Digital Markets Act and the European Commission, which wants to
encourage competition, and of others who want to allow the long tail to
enter this ecosystem and be able to participate. So, usage-level fees are
another possible model to make this a self-sustaining nonprofit in the
medium to long run.
00:20:00

Lisa Dusseault: Unlike a lot of registries that have some relationship to
regulatory measures, this is currently international. We don't envision
taking a lot of steps to handle different countries' regulatory
environments differently. I spoke with financial services registry
operators recently, and they were talking about how their platform is
designed so that it's all containerized: they can roll it out to Brazil,
and now Brazil has a financial services trust registry. It'd be nice to not
have to do that model, because it's clearly more expensive to the internet,
to the users of the internet, for every country

Lisa Dusseault: to have its own registry. And user data is not subject to
as many detailed per-country regulations yet. So we still have a chance to
make an international system, which helps it to be low-cost and
self-sustaining. If it were per-country, I don't know how the economics
would work, and per-country would definitely be a damper on cross-border
competitiveness in an internet where some things are winner-takes-all. That
would be a serious dampener.

Lisa Dusseault: One of the documents I refer to (I have a link on the
reference link slide) defines the pillars of trust, which we worked with a
fantastic cybersecurity expert at Venable to develop. And this is a way of
saying: companies who are asking to verify their partners for access to
personal data were asking all kinds of questions. Which of these questions
are legitimate? Which of them relate to a real threat? And so we went from
the ground up to justify and explain and organize and have an ontology for
the things we were asking in our verification process. And that gives it a
legitimacy, not merely in the sense of "here's what we're asking and you
should provide it to us."

Lisa Dusseault: So if a startup doesn't want to provide a privacy policy,
we can explain why it's justified that they should. And when the companies
say "you need to ask companies if they're planning on serving ads," we can
say no. The plan to serve ads is not related to whether or not a user can
trust the service that they're bringing photos to. There are things related
to serving ads that are relevant, but let's ask about those things, not
just "do you serve ads?" That's more of a possibly anti-competitive
question to ask an API applicant.

Lisa Dusseault: This is where I get to my reference slides, so this is a
good point to pause and ask if there are any questions. There are some
things in the reference slides: an architecture diagram (a little bit
simplistic for y'all, but still, it's a diagram), what the API of the
registry actually looks like, and, a couple of more interesting things, a
data donation platform mock-up. But before I get into that, I thought this
was a good opportunity to pause for questions about the overall system.

Mahmoud Alkhraishi: I'm not hearing any questions, and there's nothing on
the queue. Take it away.

Lisa Dusseault: Okay.

Lisa Dusseault: I think I'll just copy these links into the chat and then I
can move past this slide. You will all be completely unsurprised by the
transfer architecture, the protocol communication architecture that we
proposed for data transfer, where the user asks for the transfer. One point
that we've really harped on with the platforms we're working with is that
we think that a user's request for data transfer is usually
00:25:00

Lisa Dusseault: about where they're going, not where they're coming from.
It's a much more positive reason, I guess, to say "I want to bring my
photos to this service, which I think will be great" than to say "I just
want to leave this service that has my photos; how do I get away from it?"
So, we want to make sure that the architecture supports
destination-initiated transfer. The user is attracted to sign up to a new
service. Maybe a friend invites them, and they create an account. Now
they're identified at the new service, but they're also identified at the
service that holds their existing data. So during their onboarding, the
destination has an opportunity to offer: do you want to transfer some data
from another service to get you started here on this new service you're
trying? And now the user can agree to provide that.

Lisa Dusseault: And because they're identified already to both parties,
they log in using whatever login mechanisms both sides use. We don't have
to present any credentials for the transfer to work. We can just use OAuth,
have the destination identify itself, and have the source verify that with
the trust registry, which is live and can provide data about the
destination to be cached for an hour or a day; and then the data transfer
can take place between verified source and destination. We also have in
this diagram the opportunity for the destination to verify the source. And
this is not as important as the source verifying the destination, to see
that it has good data security and privacy policies and checks a few boxes
there.
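
A minimal sketch of what that source-side check could look like, with the
short-lived caching described above; the registry endpoint, response
shape, and TTL are assumptions for illustration, not the DTI's published
API:

```typescript
// Hypothetical sketch: a source verifies a destination against the trust
// registry over plain HTTPS and caches the answer "for an hour or a day".
type RegistryEntry = {
  domain: string;
  status: "approved" | "advisory" | "delisted"; // invented status values
};

const cache = new Map<string, { entry: RegistryEntry; expires: number }>();
const TTL_MS = 60 * 60 * 1000; // one hour, the short end of the range

async function verifyDestination(domain: string): Promise<RegistryEntry> {
  const hit = cache.get(domain);
  if (hit && hit.expires > Date.now()) return hit.entry; // cached answer

  // Plain HTTPS GET against a made-up registry lookup URL.
  const res = await fetch(`https://registry.example/entries/${domain}`);
  if (!res.ok) throw new Error(`registry lookup failed: ${res.status}`);
  const entry = (await res.json()) as RegistryEntry;

  cache.set(domain, { entry, expires: Date.now() + TTL_MS });
  return entry;
}
```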

Lisa Dusseault: But the destination also frequently wants to know if the
source does content scanning for CSAM or copyright infringement, so that
basically the source can vet the content that it's about to transfer, that
it at least passes some filters for harmful content. As for what is
literally in the registry, I'm going to make this a little bigger for you
all.

Lisa Dusseault: The connection and trust information is broken into whether
the service is acting as a source or acting as a destination, but it's all
in one very cacheable JSON response to an HTTPS GET. So each service that
acts as a source or a destination can provide the URL

Lisa Dusseault: that returns this response, and it has enough information
to be able to verify the parties. And so once again, we're doing something
that's very unsurprising to our partners, and we're relying on verified
domains to make sure that the entity that the trust registry has verified
is the same entity that's talking to the platform and asking for data. This
is how these services currently operate: they verify a company's domain to
make sure they know who they're talking to. And so we're also verifying
domains, and then using the same verified domain to request the service
entry, so that the company knows that they're talking to the same verified
entity. The words "trust" and "entry" come up way too often in these
conversations.
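
For illustration only, a single registry entry of the kind described (one
cacheable JSON document per service, split into source and destination
roles) might look something like the guess below; every field name here is
an assumption, not the DTI's actual schema:

```typescript
// Invented example of a cacheable registry entry, split by role.
const exampleEntry = {
  entity: { name: "Example Photos", verifiedDomain: "photos.example" },
  asSource: {
    // What a destination may want to know about a source.
    contentScanning: ["csam", "copyright"],
  },
  asDestination: {
    // What a source may want to know about a destination.
    privacyPolicy: "https://photos.example/privacy",
    securityAuditType: "SOC 2",
  },
  status: "approved",
  reviewedAt: "2025-07-01",
};
```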

Lisa Dusseault: I need to disambiguate. One of the things that we're really
excited about and working on a lot this summer (because we have a PhD
student intern who's a great software engineer, and he's already been here
with us two weeks and is already landing code) is a prototype service for
data donation: the idea that if we have a workable ecosystem for data
transfer, it can also be a workable ecosystem for data donation. And in a
world where major companies are shutting down their data access APIs,
because they don't want them used to train other companies' AIs, or
whatever reasons they have, researchers are sometimes at a loss for where
to get their data.

Lisa Dusseault: Several social media research groups have grown up being
able to access Twitter and Reddit APIs and Meta APIs. And with those APIs
being changed, reduced, restricted, even shut down, those researchers still
have budget, and yet they don't have a good way of getting new data for
their studies. There are also a lot of kinds of data that have not really
been available at all, and so there isn't as strong a community of
researchers there; but if we make it available, it's an "if you build it,
they will come" kind of proposition. How many researchers would be
interested in somebody's search history, or their commenting history on a
few platforms, if they knew they had a reasonable path to get it from users
who choose to donate it?
00:30:00

Lisa Dusseault: And this choosing to donate, rather than forcing the
platforms to open up research APIs, really solves a lot of privacy and
anonymization (I can never say that word) problems. You don't have to worry
about anonymizing a bulk set of data, or the worry that a researcher or
somebody with access to the data set will deanonymize it, if the user has
donated it. You still want it to be securely stored and accessible only to
the researchers who legitimately should have access to that data, but the
users donated it, and they might even be willing to donate it not just for
one study but for many.

Lisa Dusseault: So, I know this is probably too small, but we have this
idea that you post a research study homepage, and you post your call on X
and Threads: hey world, we have a study. If you want to participate in a
study on nutrition and diabetes and your search history (whether you search
for buffalo wings or sprouted wheat bread, for example), then come and
donate your search history to our study. And the page that attracts
somebody to say "yes, I want to participate in the study" can have things
like: here's the survey, if there's a survey. Here's the button to donate
your Google search history. Here's the button to click to donate your
DuckDuckGo

Lisa Dusseault: search history and provide it, and with that, a flag that
defaults to "I want to share my search history with all of the researchers
that are part of this particular research group." And I predict that a lot
of users will want to share with a whole group. It's a lot less than making
your data public, but it makes you feel like you're helping not just one
study but others too. And how easy is that? What does the research group
look like? What is the unit of a research group? It's hard to say yet what
this could become, but right now we've got research groups like Indiana
University's OSoMe, Princeton's aggregator, and Stanford's Internet
Observatory.

Lisa Dusseault: These are all existing data aggregation groups that support
teams of researchers with one data center. They hire a DB administrator and
a few other people to securely host research data in a way that supports
multiple researchers. So, these data aggregators for research already
exist, and we're talking to a bunch of them about this work in this
ecosystem. So this is what it would look like for one of those studies to
ask a user for access to their TikTok data, if TikTok were part of this.
We're participating in a couple of standards areas, which you probably know
about.

Lisa Dusseault: In particular, I've been working on (I should update my
links here, where I talk about OAuth, to refer to DPoP) because I've come
to think that DPoP is the right OAuth extension to add so that sources and
destinations that don't previously have a relationship can identify each
other. And we're tracking the trust registry query protocol work going on
in the Trust Over IP group. And there's more: trust marks are a standard
we're interested in in the long run.
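
For context, DPoP (RFC 9449) binds an OAuth request to a key the client
proves it holds, which is what lets parties with no prior relationship
identify each other. Below is a minimal proof-construction sketch using
the jose library; the endpoint URL is a made-up placeholder, and a real
client would reuse one key pair across requests rather than generating a
fresh one each time:

```typescript
import { SignJWT, generateKeyPair, exportJWK } from "jose";

// Minimal DPoP (RFC 9449) proof sketch: a JWT whose public key rides
// along in the header, proving possession of the private key.
async function makeDpopProof(method: string, url: string): Promise<string> {
  const { publicKey, privateKey } = await generateKeyPair("ES256");
  const jwk = await exportJWK(publicKey); // public half, embedded below
  return new SignJWT({
    htm: method,              // HTTP method this proof covers
    htu: url,                 // HTTP URI this proof covers
    jti: crypto.randomUUID(), // unique ID so the proof can't be replayed
  })
    .setProtectedHeader({ alg: "ES256", typ: "dpop+jwt", jwk })
    .setIssuedAt()
    .sign(privateKey);
}

// Usage: send as a "DPoP" header alongside the OAuth token request.
// const proof = await makeDpopProof("POST", "https://source.example/token");
```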

Lisa Dusseault: Trust marks, pre-authorizations, federation, verifiable
credentials, the trust registry query protocol: yeah, we think that the
very boring technology we use right now, which is a SQL database and a
website, can really be extended to use much more interesting, advanced
technology as we become established with the boring, expected stuff, to do
things that are more powerful, more flexible, and more secure. Although,
fundamentally, it's the processes that are the assurance in this system:
the processes that say we need to make sure that you are a registered
organization, that you're responsible to some legal authority, before
becoming part of the trust registry.
00:35:00

Lisa Dusseault: That's an important assurance, and it's not a shiny
technical solution. And there's the process that says: if we get enough
complaints about a data destination (say they claim they're getting your
photos to contribute to research or to size them for printing, but they're
actually turning around and selling them for identity theft), then we
delist them from the registry, starting by putting them under a warning or
advisory status and then delisting them. One of the things we've learned
talking to the big platforms about how they do their internal unilateral
trust registries is that they have a lot of trouble delisting somebody.
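
The graduated response described here (approved, then a warning or
advisory status, then delisted) is easy to picture as a small state
machine. The status names and transitions below are assumptions based on
the talk, not the registry's actual vocabulary:

```typescript
// Illustrative state machine for graduated delisting; invented names.
type Status = "approved" | "warning" | "advisory" | "delisted";

const allowedTransitions: Record<Status, Status[]> = {
  approved: ["warning"],              // first response to complaints
  warning: ["advisory", "approved"],  // escalate, or clear after review
  advisory: ["delisted", "approved"], // last stop before removal
  delisted: [],                       // terminal, absent re-application
};

function transition(current: Status, next: Status): Status {
  if (!allowedTransitions[current].includes(next)) {
    throw new Error(`cannot move from ${current} to ${next}`);
  }
  return next;
}
```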

Lisa Dusseault: Once somebody has access to their APIs, there's a high
barrier to removing them from access to those APIs, because it can be seen
as a very anti-competitive, destabilizing move that the platform might have
one of a number of motivations to make; whereas a nonprofit operating in
the public interest can make the decision to delist a participant in the
registry, frankly, much more easily.

Lisa Dusseault: I think it's worth saying that another thing I've learned
about this is that I'm applying some things I've learned from some past
history I have in trust in social systems. How do you maintain civility in
systems, with moderation or without moderation? How do you make
participants in an online forum or social media site able to trust each
other and interact with each other in a safe way? And one of the things
that we've learned in that arena is that nuance is important: showing
whether somebody's a new user who has made 20 posts, or somebody who has
been around for a long time and made 20,000 posts. That's the nuance of
trust in social media.

Lisa Dusseault: And there needs to be nuanced trust in a trust registry.
Some of that is in which level somebody's trusted at, but there are more
opportunities for nuanced trust signals besides just "have you been
approved at level one, have you been approved at level two?" Okay, so,
great.

Lisa Dusseault: This looks like a good spot to stop and leave this on
screen so people can see and take my first question. Hey, Will. Yeah.

Will Abramson: Yeah. Hey,…

Will Abramson: Lisa. I think you were in the IIW session I ran, right? It
was kind of around this. I just wanted to say I really appreciate that
concept, that trust is nuanced, and finding multiple factors, or a
multitude of factors, of trust signals that we can aggregate and combine
together to make better sense of who's showing up in these spaces is really
powerful, right? I think, like you said, a big signal is constancy: this
same entity has been representing themselves and…

Will Abramson: contributing with a constancy across time. That's really
valuable to know. So I just want to say this is great, thanks. Maybe I'll
ask a question…

Lisa Dusseault: Damn it.

Will Abramson: then, because I didn't ask another question. What do you
think are some of the most interesting, powerful trust signals that we are
ignoring today, or not paying enough attention to?

Lisa Dusseault: Good question. And I need to think a lot more about that
one in this space.

Lisa Dusseault: I think right now we're erring on the side of excessive
caution and blocking access too often. So, if we start to move the needle
more towards allowing companies to participate and develop a track record
(because until you develop a track record, how do you get permission to
develop a track record?), then we can start to think about what the signals
are that really allow you to see danger as early as possible. I would love
to be evidence-based on that, and…
00:40:00

Will Abramson: Cool.

Lisa Dusseault: we're way too early to be evidence-based on that. So this
is such a dodge, but I'll say it:

Lisa Dusseault: We need to gather evidence about what information is
associated with later finding that an entity really could not be trusted,
or had really been lying, and then start to use those signals.

Will Abramson: I mean, your answer reminded me of Ostrom's rules for
governing the commons, right? And graduated (can't remember the exact term)
punishments, right? The initial punishment isn't just "kick them out"; it's
like, maybe sit them down, try and…

Lisa Dusseault: Yeah. Yeah,…

Will Abramson: get on a call and have a conversation with them, and then,
if they keep doing that, we can gradually increase the punishment, as it
were. Cool, thanks. Maybe I'll ask one more general…

Lisa Dusseault: I'm a big fan of Ostrom's work, and that has definitely
influenced my thinking.

Will Abramson: where can we find out more like…

Will Abramson: where do we follow this work if we want to? Cool.

Lisa Dusseault: Of the links that I put into the chat, the first one is the
actual live site.

Lisa Dusseault: Like I said, there's only one company that has decided to
publicly list their approved registry entry, but there are more that are
just flying under the radar right now that have been approved in our
registry. I should mention we have a blog: DTI has a blog, and occasionally
we post about this, as well as other data portability related issues and
privacy related issues. Hey, Ted.

Ted Thibodeau Jr: Hey. So, this is a timely, tangential question, and it's
not for you to answer, but others will have to think about it. I've put it
into the chat.

Ted Thibodeau Jr: Why does Fireflies want access to all of my calendars in
order to let me see the transcription and notations on this meeting? It
seems a very, very big ask. And certainly I'm not going to give it to them,
because I have calendars that have bits of a sensitive nature, and that's
just the way it is. So, yeah, that.

Lisa Dusseault: Yeah, you're right.

Lisa Dusseault: I can't answer that specific question, but I do have an
optimistic view of what's possible with the architecture, where OAuth
allows both sides of an authorization dance to present any web pages they
like, and any number of them, until they finish the process and hand it
back to the other party.

Lisa Dusseault: That offers a lot of possibilities for this to go well.
Sure, Fireflies should ask for access to less information, but also the
calendar server could say, "Do you want to limit the calendars that this
system is asking for?" when they do their side of the OAuth dance. One of
the examples I've been telling people is when they say, "It's just too
dangerous to allow access to your photos on Apple, so we can't do that."
And then I say: be smart. Be clever, developers. You have lots of clever
developers.

Lisa Dusseault: You can filter somebody's photos for things that are likely
passport photos, driver's licenses, or medical records, or tax records that
they've taken photos of to supply to their tax accountant at the last
minute at tax deadline. Not that I've ever, ever done that. And of course
there are no such things in my Apple photos, because that would really be
an identity theft problem. But anyway, there's nothing to prevent the
platform hosting my photos from filtering those out, from saying: we by
default apply a filter that doesn't include those photos when you share
them with another party, and if you'd like to disable that filter, you can.
And that protects users.
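
As a toy illustration of the kind of default filter being suggested, a
platform might exclude photos its own classifier has already tagged as
likely documents. The tag names and data shape below are invented for
illustration:

```typescript
// Toy sketch of a default share-time filter that drops photos tagged as
// likely identity documents; tags and types are invented.
type Photo = { id: string; tags: string[] };

const SENSITIVE_TAGS = [
  "passport",
  "drivers-license",
  "medical-record",
  "tax-document",
];

function defaultShareFilter(photos: Photo[], filterEnabled = true): Photo[] {
  if (!filterEnabled) return photos; // the user chose to disable the filter
  return photos.filter(
    (photo) => !photo.tags.some((tag) => SENSITIVE_TAGS.includes(tag))
  );
}
```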

Lisa Dusseault: Ivan. …

Ivan Dzheferov: Yeah, I was wondering: big companies have this bundling
model of keeping users in their ecosystems. For example, Apple: you buy a
MacBook because all your contacts are there, and all the rest of your
different types of data, and suddenly this requirement comes into place to
have data portability. I was wondering, how do they see this big change,
and how do you believe it's going to impact them?

Lisa Dusseault: Yeah, we thought a lot about this. This was our biggest
open question before starting this work a year ago. We started to prototype
it a year ago and showed some of these slides in more of a pitch deck for
what we should do.
00:45:00

Lisa Dusseault: There's certainly a temptation to keep all the data inside
a moat and keep customers, but it's becoming untenable and costly to do. So
if regulatory compliance means that these platforms have to open up, then
the barriers they've built become kind of just a cost center for them. The
barriers that they put up no longer make them a lot of money by keeping
customers inside the moat.

Lisa Dusseault: They merely are an extra cost, and still don't protect them
from liability, like the checks that Meta had on Cambridge Analytica.
Because Meta had a process back then to vet who could have access to their
API, and they had a consent process, where Cambridge Analytica asked people
for consent to have access to all of their friends lists in order to
participate in some fun social survey, before it was discovered that
Cambridge Analytica was massively exfiltrating data: users, friends,
graphs, and all of their data. And that was a huge reputational problem for
Meta, or for Facebook as it was back in the day, even though they vetted
Cambridge Analytica and Cambridge Analytica got consent.

Lisa Dusseault: So their vetting processes aren't protecting them and
cannot reliably protect them. So I think the regulatory pressure has caused
a shift and a collapse in the value of these systems they'd set up. That
doesn't mean it's going to be fast for these systems to disappear. It's
always very slow to get a bureaucracy to change its procedures and its
approach, and the app verification systems inside these big platforms are a
bureaucracy. Did that answer your question, Ivan?

Ivan Dzheferov: Yes.

Mahmoud Alkhraishi: Are there any other questions on the queue? I don't see
any. Will, please go ahead.

Will Abramson: This is not a question for you, Lisa. Apologies. This is, I
guess, back to community announcements. I wondered if anybody knows what
the status of our website move is. I haven't heard if Manu executed that or
not yet, but the website that he put up, there's an example below. So I
just wanted to check. Okay,

Mahmoud Alkhraishi: I don't believe that's done.

Will Abramson: Yeah.

Mahmoud Alkhraishi: Are there any questions for Lisa that anybody would
like to add? Lisa, is there anything you would like to bring up?

Lisa Dusseault: Since I'm talking to a community of experts, I'd love to
know what you all think about DIDs and VCs: what is your prediction of the
use case that will tilt in favor of introducing those technologies to the
data portability ecosystem?

Mahmoud Alkhraishi: Do we have any volunteers?

Lisa Dusseault: I do think it will happen, but I'm kind of cloudy in my
head about exactly when, and which technology, and for which reason. Maybe
that's just a question to leave with y'all.

Lisa Dusseault: I'm reachable on the internet.

Lisa Dusseault: Pretty easy to find my email and hit me up.

Mahmoud Alkhraishi: Thank you so much for sharing,…

Mahmoud Alkhraishi: Lisa, and thank you to everybody who attended the call
with us today. I appreciate everybody showing up. And thank you again,
Lisa, for the wonderful presentation. Please feel free to reach out to
Lisa, as she just offered. And if anybody has anything else they'd like to
share, please do it right now. All right, I'm not seeing anything. Thank
you, everyone. Have a wonderful rest of your day.

Will Abramson: Sweet. Thanks. Thank you.
Meeting ended after 00:49:58 👋

*This editable transcript was computer generated and might contain errors.
People can also change the text after it was created.*

Received on Tuesday, 22 July 2025 22:17:01 UTC