- From: Manu Sporny <msporny@digitalbazaar.com>
- Date: Sun, 20 Jul 2025 11:42:24 -0400
- To: "public-credentials (public-credentials@w3.org)" <public-credentials@w3.org>
On Fri, Jul 18, 2025 at 6:44 AM Pryvit NZ <kyle@pryvit.tech> wrote:

> Will, I think it’s interesting to see your faith in institutional trust remains, because globally it’s on the decline: https://www.oecd.org/en/publications/lack-of-trust-in-institutions-and-political-engagement_83351a47-en.html

I don't think that is Will's point; his point is that, generally speaking, we (as a global society) have identified certain centralized institutions to do some of this credentialing and enforcement for us because it's more efficient (and safer) for it to happen that way. I don't think it's "faith"... to me, at least, it's reality.

That's why issuers matter -- because none of this credentialing stuff works if you don't have issuers that people trust today. That doesn't mean that we optimize for issuers over holders... but we do realize that relevant issuers matter. We do actively optimize for holders -- because it's their privacy and autonomy that we're trying to protect. Now, I do think that there are other technical communities that ARE optimizing primarily for issuers, but I don't think that's what's going on in the CCG (but am happy to have the debate if folks think otherwise).

I'm still having a hard time understanding what you (and Christopher) mean when you say "an alternate architecture" (I did read your blog post; more on that below). For us to shift the dynamic further away from issuers, we would, as a society, need to find alternate institutions to do some of that "trust establishment" work, and that sort of societal change, taken to an extreme, seems unrealistic to some of us. Now, that doesn't mean that there aren't certain institutions providing centralized trust that could give way to a more decentralized solution... but society has to agree on what that new mechanism is (and we're building technology here, such as DIDs, to help provide better alternatives to things like the accidentally-centralized-and-overused Social Security Number).

Take driving, for instance. Your locality has something akin to a Department of Motor Vehicles whose job it is to test and license drivers of motor vehicles. I, personally, don't want to be involved in testing and enforcing whether other people are allowed to operate such a lethal device. I certainly don't trust some of the people in my local community to make that determination... so we've all gotten together and formed this centralized institution called a DMV to do that trust work for us.

> Here's another blog post I wrote that I think provides a legitimate example of how we can shift who plays what roles within the SSI triangle to achieve a more decentralized and private means of content moderation to protect children. I hope it helps take things from the abstract to the concrete like Manu mentioned previously.
>
> https://kyledenhartog.com/decentralized-age-verification/

It does help quite a bit; thank you, Kyle, for taking the time to write the blog post and providing something concrete that we can analyze. One of the things it helped clarify for me is that by "different architecture" you seem to be saying "Let's take the primitives we have -- DIDs, VCs, etc. -- but put them together in a different way so that the protocols delegate responsibilities to the edge -- to the browsers, parents, and school teachers instead of the adult content and social media sites."

Speaking as a parent who is stretched very thin, and who sees how thin teachers are stretched in my country -- I really dislike that idea :).
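To make sure I'm pushing back on the right thing, here is roughly the flow I picture from the post. This is a sketch only, under my own assumptions -- every name in it (GuardianApprovalCredential, getApproval, mayRender) is my invention, not something from the blog post or any real browser/OS API:

```typescript
// A sketch of how I read the proposed edge-delegation flow. All names
// here are hypothetical illustrations, not a real browser or OS API.

interface GuardianApprovalCredential {
  issuer: string;          // the guardian's DID
  subject: string;         // the child's DID
  approvedOrigin: string;  // e.g. "https://social.example"
  expires: Date;
}

// Assumed OS-side holder interface (hypothetical).
interface HolderOS {
  getApproval(origin: string): Promise<GuardianApprovalCredential | null>;
}

// The browser (verifier) asks the OS (holder) for a guardian-issued
// approval credential before rendering an age-gated origin.
async function mayRender(os: HolderOS, origin: string): Promise<boolean> {
  const vc = await os.getApproval(origin);
  if (vc === null) return false;  // no approval on file: deny by default
  return vc.approvedOrigin === origin && vc.expires.getTime() > Date.now();
}
```

Note who does the work in that sketch: the guardian has to issue (and keep current) an approval for every origin, and the OS/browser has to enforce it.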
Why is the burden on me, as a parent, to stop my kid from being pulled into a social media website that is designed to be addicting? :) No, I want a fence put around that thing with a "deny by default" rule around it. So, putting a "this site could be dangerous to the mental health of kids under the age of 14, and honestly, it's probably dangerous for adults too" warning on the site isn't very effective.

Now, I know that in your blog post, you also mention that it's really the child's web browser's job to get a credential from the operating system, which might get it from the child's guardian, to make the determination to show the content to the child. If this is the case, you're shifting a massive amount of liability onto the operating systems and web browsers, putting them in the position of policing content. I don't understand how that is not a really scary and massive centralization of power into the OS/browser layer (a worse state than what we have now)... not to mention a massive shift in liability that the OS/browser vendors probably don't want.

When a lawsuit happens, how do the OS/browser vendors prove that they checked with the guardian? Do they (invasively) subpoena the browser history from the individual? Or do they just allow the government to grab the credential log from the OS/browser?

How does the browser determine what content is being shown on the website? Content can vary wildly on a social media platform, and even within a single stream. I've seen G content turn into debatably R content in a single show watched by 8-year-olds. Different localities have different views on offensive content. All that to say, these reasons are often why the burden of proof is shifted to the content/product provider. If you want to sell that stuff, you have to do so responsibly -- which seems to be where society largely is these days. So, you're asking for a pretty big shift in the way society operates.

The other thing that struck me with your blog post was that, while you were moving the roles around (browser becomes the verifier, operating system becomes the holder, guardian becomes the issuer), there was fundamentally no change in what the roles do. That is, I didn't see an architectural change... I saw a re-assignment of roles (issuer, holder, verifier) to different entities in the ecosystem... but at the end of the day, it was still a 3-party model with massive centralization and liability shifted to the OS/browser layer (see the sketch just below). There was also no explanation of how the guardian proves that the child is their responsibility -- birth certificate, maybe? Now we have to start issuing digital birth certificates worldwide in order to use age-gated websites? Even if we do that, we still depend on a government institution as the root of trust.

IOW, it seems to me like the architecture you're proposing is, in practice, an even more centralized system, with a much higher day-to-day burden on parents and teachers, with unworkable liability for the OS/browser vendors, that still requires centralized institutional trust (birth certificates) to work.

I do, however, appreciate that the approach you're describing pushes the decision out to the edges. The benefits seem to be that centralized institutional trust (a birth certificate) bootstraps the system, and once that happens, the decisioning is opaque to the centralized institution and the age-gated website (it's between the guardian and the OS/browser layer), with minimal changes to the age-gated website.
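To put that "same triangle, different labels" observation in concrete terms -- again a sketch, with my own illustrative labels rather than anyone's real API:

```typescript
// The generic three-party model -- the structure never changes,
// only who occupies each slot.
interface ThreePartyModel {
  issuer: string;    // attests to a claim
  holder: string;    // carries the credential
  verifier: string;  // checks the credential before granting access
}

// Today's assignment for an unlinkable "age over" credential:
const statusQuo: ThreePartyModel = {
  issuer: "DMV (or similar trusted institution)",
  holder: "individual's digital wallet",
  verifier: "age-gated website",
};

// The blog post's assignment, as I read it:
const proposal: ThreePartyModel = {
  issuer: "guardian",
  holder: "operating system",
  verifier: "web browser",
};
```

Same shape either way; the trust, the liability, and the centralization risk simply move to whoever occupies each slot.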
I think the hardest thing might be getting the OS/browser vendors to agree to take on that responsibility and liability. You'd also need a global standard for a "Guardian approval to use Website X" credential, but that is probably easy to do if the browser/OS vendors are on board. Legislation would also have to change to recognize that as a legitimate mechanism.

... or, alternatively, the website just receives an unlinkable "over 14/18" age credential under the current regime. I'm not quite seeing the downside in including centralized issuer authorities in the solution that issue unlinkable credentials containing "age over" information. There are 50+ jurisdictions among DMVs alone that issue that sort of credential in the US -- hardly centralized. In any case, one of those is far easier to achieve (technically, politically, and from a privacy perspective) than the other, IMHO.

-- manu

PS: Note that I didn't really take a position on this whole "you need a digital credential to view age-gated websites" debate. It feels like a solution in search of a problem -- website porn and social media addiction were supposed to destroy my generation -- and we had NO guardrails, nor were our parents aware of the "dangers". In the meantime, it looks like we've found more effective ways to destroy civilization, so if "age-gated websites" is among our leading use cases, I suggest we're not tackling the most impactful societal problems (scaling fair access to social services, combating fraud and other societal inefficiencies, providing alternatives to surveillance capitalism, combating misinformation, mitigating climate change, etc.). :)

--
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
https://www.digitalbazaar.com/
Received on Sunday, 20 July 2025 15:43:05 UTC