Re: When Technical Standards Meet Geopolitical Reality

Thanks Manu for the long post in response. I’m responding inline to try and break it down a bit more, but as usual I tend to over-author things a bit, so apologies to everyone for another long post. I do think there’s good content here that will help everyone understand the value of focusing on a use case to better see why the trust architectures we choose introduce subtle but powerful design choices later. I think that’s the most important thing for us to take away from this as technologists, irrespective of where each of us individually falls on the topic of age verification laws. Given that, I’m going to engage because I know Manu is responding in good faith, but if the topic becomes too separated from the point that this is about trust architectures, I’ll likely attempt to redirect back to that point or disengage.

> We do, actively optimize for holders --
> because it's their privacy and autonomy that we're trying to protect.

I don’t necessarily agree with that position. If I’m no longer allowed to attest my own name or date of birth, how do I have greater autonomy? If I can’t attest false information on a site, how do I have greater privacy? This is how the introduction of a trust architecture changes how these principles play out in practice.

> Now, I do think that there are other technical communities that ARE
> optimizing primarily for issuers, but I don't think that's what's
> going on in the CCG

The CCG is largely ambivalent about “how” the technology is used. At best, for the most part, we’ve turned the things we see as wrong into non-normative notes (both for the people who participate in the CCG and the WGs, and in the CCG reports we produce), even though we spend significant time debating these topics. We do want to make it better, but we can hardly ever reach consensus on how.

> I'm still having a hard time understanding what you (and Christopher)
> mean when you say "an alternate architecture"

I don’t think you’re alone in that, but I’m wondering if it might be confirmation bias rather than a lack of understanding. Your constructive critiques suggest you understand what I’m saying, but disagree with the outcomes and differences in principle it will lead to. That’s okay, since I expect reasonable minds can differ, especially on complex topics. Many of us in this community attached ourselves early on to the trust architecture that x509 uses for TLS certs (even though we wanted to “reboot the Web of Trust”). If we believe this is the only viable solution, then because of that bias we tend to stop asking ourselves, “how else can I solve this problem, and what do I gain or lose by doing it differently?” That’s why I’m challenging the idea that the best way to achieve this is with the trust architecture that’s currently assumed.

> For us to shift the dynamic further away from issuers, we would, as a
> society, need to find alternate institutions to do some of that "trust
> establishment" work and that sort of societal change, taken to an
> extreme, seems unrealistic to some of us.

Ahh, this is where I think my historical nerding out has painted a different picture for me. When we look throughout history, including far back, we realize that there have been many different forms of trust established relative to the problem being solved. One example I found interesting: when NZ first formed, its governance structure was monarchy by decree, as it was a colony of the British Empire. In practice, however, waiting months (due to geography) for instructions from the monarch on how they felt about a law was impractical. So the government here operated semi-autonomously, with a governor acting as a delegate for the decisions of the monarch. This effectively made the governance structure distributed rather than fully centralized around the empire. In this way, the practical problems of the day led to different solutions and different approaches to governing. Let’s take your driving example too and modify the problem in a hypothetical scenario to show how the problem at hand makes a difference. If a transportation method existed where everyone could travel anywhere on the planet within an hour, and it required a similar skill set to driving that needed testing, do you think we’d have the same structure of DMVs as we do today? Probably not, and this points out how important it is to shape the architecture to the problem at hand. I’ll point out later some of the structural flaws that occur because we’re misapplying the trust architecture to solve content moderation for children.

> "Let's take the primitives we have
> -- DIDs, VCs, etc., but put them together in a different way so that
> the protocols delegate responsibilities to the edge

In the specific case of age verification on the Web, yes, that’s what I’m arguing for. In the case of TruAge, I wouldn’t, because it’s largely impractical and also largely accepted today (although I’m sure it was a moral crisis when it was first introduced). I think each use case needs to be evaluated separately rather than always bootstrapped onto some other trust architecture. As you point out, we’ve got new problems with Social Security Numbers because of the over-reliance on that approach. We’ve Hyrum’s law’d our way into new problems doing that.

————

Note for the reader: this is where I’ll start breaking down the structural tradeoffs we face under the current design and under my proposed design.

> Why is the burden on me, as a parent, to stop my kid from being pulled into a social media website that is designed to be addicting?

Isn’t that the social responsibility and agency we implicitly accept when we have children in today’s world? Aren’t those the same responsibilities a parent gives up when making the very tough decision to put their child up for adoption, or has taken from them when child protective services steps in? Even with the currently proposed solution, at some point parents have to be the enforcement mechanism, because they’re more involved in the child’s life than anyone else except maybe the teachers. How can someone who’s never met the child determine their developmental needs better than the parent or teacher? Furthermore, as I alluded to in the post, this is already largely managed by school IT teams today. Like, literally a full-time job for many of them. So parents can delegate to them as they see fit, and IT admins can delegate to teachers as they see fit (e.g. so the teacher can override access to sites needed for a lesson rather than calling up the IT admin); a rough sketch of what that delegation chain could look like is below. This delegation places the responsibility in the correct hands, while ensuring effective and scalable enforcement (which the IT admins already have within their network). In this way it’s more adaptable to the needs of the child and the desires of the parents too.
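
To make that delegation a little more concrete, here’s a minimal sketch of what a scoped delegation chain could look like. All of the names and fields are hypothetical (this isn’t an existing credential format or spec); it just illustrates how a parent could delegate broadly to a school IT admin while the IT admin hands a teacher a narrow, time-boxed override:

```typescript
// Illustrative only: a made-up shape for a scoped delegation chain,
// not an existing credential format or spec.
interface Delegation {
  from: string;              // e.g. "did:example:parent"
  to: string;                // e.g. "did:example:school-it"
  scope: {
    allowOverrides: boolean; // can the delegate unblock individual sites?
    origins?: string[];      // optionally limit the delegate to specific sites
    expires?: string;        // ISO date so narrow delegations lapse automatically
  };
}

// Parent delegates broadly to the school IT admin; the IT admin delegates a
// narrow, time-boxed override to a teacher for one lesson.
const chain: Delegation[] = [
  {
    from: "did:example:parent",
    to: "did:example:school-it",
    scope: { allowOverrides: true },
  },
  {
    from: "did:example:school-it",
    to: "did:example:teacher",
    scope: {
      allowOverrides: true,
      origins: ["https://educational-video.example"],
      expires: "2025-08-01",
    },
  },
];
```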

> No, I want a fence put around that thing with a "deny by
> default" rule around it.

Kind of like the DNS filters used on school networks today, which address the specific issues encountered by the school and work well while students are on that network?

> So, putting a "this site could be dangerous
> to mental health for kids under the age of 14, and honestly, it's
> probably dangerous for adults too." warning on the site isn't very
> effective.

Yeah, I’m not thinking of this as just a warning, as pointed out by the separate guardianship problem. At least not in the case of children, who do require additional oversight to meet their developmental needs. Adults looking to self-filter would just use the content moderation tools and act as their own “guardian” (but without the need for most of the OS tooling). That’s largely because I think my architecture can also solve a separate misinformation problem for adults, but I’ve left that out of scope for this discussion. Courts or family members could step in as guardians there, but I’ve not done a deep critical analysis to say it would work in practice. The reason I feel reasonably confident in this approach for children is that it’s what’s already been effective for kids at schools. It’s once they leave the school that the systems no longer work.

> If this is the case,
> you're shifting a massive amount of liability onto the operating
> systems and web browser, putting them in the position of policing
> content.

Ahh, I don’t think this point was clear enough. They aren’t policing the content; the user or guardian is. And there’s no assumed strict liability in the legal sense, like what’s being placed on sites today, which regulators will selectively enforce against and won’t be able to enforce against effectively. At best, I think sites have a strict liability to make sure content is tagged appropriately.
As Ada Palmer points out, censors aren’t the all-knowing organization that George Orwell makes them out to be in 1984. At best they’re underfunded agencies that rely on making examples to create a chilling effect so people or businesses self-censor. So if self-censorship is the effective measure, why not just build our tools to achieve this? This is already what we’re seeing from Ofcom: their public statements take this “talk tough” stance, likely because they know they won’t be able to enforce beyond big tech. If I remember correctly, the law only applies to large sites anyway.

For example, how are they going to stop children from downloading an open source, third-party ATProto client (Bluesky’s protocol) that doesn’t enforce age verification? Or how are they going to stop a kid from setting up their own ActivityPub node, or from using one they have no jurisdiction over? Or how are they going to enforce effectively on the comments sections of news organizations? (Because yes, these laws are sufficiently vague; but that’s not an issue for us to debate.) History, from the inquisitions to the more recent comic book bans, suggests they won’t be able to. That’s why building tools that align with self-moderation (with the ability for guardians to control it) is more scalable and effective. We just so happen to also get decentralization, because that’s how self-moderation scales best, not because it’s necessary or because centralization is evil. Privacy only comes in so far as the guardian wants to allow it, and in most cases today the IT admins can already see everything happening on the managed devices children use.

I’d suggest people speak with their local IT admin to get a full understanding of what they already do, and can do, because of the likes of ChromeOS and Windows. Furthermore, this aligns with the practical reality of who already understands the problem and effectively mitigates it now to protect children. Do you really expect the product managers at Instagram, YouTube, and Twitch to understand these laws and see it as their responsibility to protect your child? That’s what we’re effectively saying today, assuming they don’t get it wrong, get a wrist slap from regulators, and have to change it.

> When a lawsuit happens, how do the OS/browser vendors prove that they
> checked with the guardian? Do they (invasively) subpoena the browser
> history from the individual? Or do they just allow the government to
> grab the credential log from the OS/browser?

I’m not a lawyer, but I would guess any lawyer would say it depends on the facts and circumstances. In the architecture I propose, though, the goal isn’t to use hard power, because it’s largely ineffective except as a means of creating a chilling effect. In practice, I don’t expect a massive change in liability from how it works today. IT admins don’t get sued because a site was missed, and parents and teachers only face action (criminal, not civil) if gross negligence occurs that harms the child. Furthermore, sites will only get sued if they don’t block something at the regulator’s request. By recognizing that content on the Web gets generated faster than it can be classified, we also have to understand that this liability will only be enforced when gross negligence can be shown, and that the default is going to be to age-gate features as a heuristic, not content. In other words, this just gives regulators the power to go to a select number of large sites and require them to block information they missed that was brought to their attention. It doesn’t actually provide a scalable moderation system, unless we assume all communication happens only on those sites, which is a bad assumption considering we are communicating as a group over a mailing list that anyone can join. I’m looking forward to seeing what regulators come back and ask for from technologists when kids figure out they can set up unmoderated mailing lists and IRC channels to get around this. :)

> Different localities have different views on
> offensive content.

Which is exactly why I’m suggesting that the responsibility falls to the parents, because this same reasoning applies all the way down to the individual level. We’re fundamentally deciding who gets to set the rules of what is moral within speech when we moderate. Now, I don’t dispute that at least some of this still remains at the centralized level. That’s also why I’m not making an extremely illogical call to change CSAM laws; those are effective and meaningful and should stay. But do we really believe that the same measures we use for CSAM are also the most effective measures for talking about mental health (depression is a topic that’s censored in the UK), or for that matter any determination of what is right and wrong to disseminate on the Web within any particular jurisdiction? This is why it’s important that our trust architectures consider where power is best placed, through the selection, representation, or creation of roles.

> All that to say, these reasons are often why the
> burden of proof is shifted to the content/product provider. If you
> want to sell that stuff, you have to do so responsibly -- which seems
> to be where society largely is these days. So, you're asking for a
> pretty big shift in the way society operates.

That’s because no one else sells a global platform for connecting and sharing ideas today. After all, that’s what makes the Web such a foundational shift and is what has caused us to spend a few decades grappling with this problem. In that sense, as I alluded to earlier, because we’ve got a fundamentally new problem we have to search for fundamentally new solutions rather than reverting back to our old solutions that haven’t worked since the printing press days.

Fun fact: books used to be censored by people crossing out portions of text. Some censors would put extra effort into making sure a passage couldn’t be read, and then on a different page only put a small line through it. Additionally, elites such as academics and politicians had access to the uncensored versions in limited circumstances, such as when they needed to argue against the censored works. It also led to changes in ideas. For example, it’s believed that Descartes may have radically altered his theories on mind-body dualism due to the Galileo heresy trial over his heliocentric views, which was the moral crisis of their day. Just think how regulators’ desire to moderate mental health content may alter history and our ideas on mental health 200 years from now. Those are the long-run effects our trust architectures can have due to the butterfly effect.

> there was
> fundamentally no change in what the roles do. That is, I didn't see an
> architectural change... I saw a re-assignment of roles (issuer,
> holder, verifier) to different entities in the ecosystem

To me, that seemed to be all that’s necessary to show the shift in power, because it aligned better (not perfectly) with my principles. I’d be interested in evaluating alternative solutions from others that suggest different tradeoffs than the two currently proposed. For example, mine does have a bit more configuration complexity, which I chose to delegate to school IT admins to mitigate. This is also why I call on use case designers at the end of the blog post to think differently each time: this evaluation needs to occur every time we utilize these primitives. If new roles are needed (such as trust anchors, distributed ledgers, etc.), then they should be introduced specifically as the requirements dictate.

> but at the end of the day, it was still a 3-party model with massive centralization and liability shifted to the OS/browser layer.

Yes, I agree it centralizes the design of the tools, but not the enforcement mechanisms, which is where the hard power lies. That still falls back to the guardians, which may be where my design fails. It may not move the needle enough for us to have agency, because that very same agency is what limits adoption and therefore the protections. If that’s the case, though, then we should probably also look at moving away from school IT admins doing content filtering on their networks and instead make Cisco, DNS servers, and ISPs manage this enforcement, because that’s been working well for anti-piracy laws (which are just another form of content moderation) over the past two decades. :)

> There
> was also no explanation of how the guardian proves that the child is
> their responsibility -- birth certificate, maybe? Now we have to start
> issuing digital birth certificates worldwide in order to use the
> age-gated websites? Even if we do that, we still depend on a
> government institution as the root of trust.

Ahh, I see you’ve fallen back into a deference-to-authority architecture here. In practice, it will be on the parent or school, who likely paid for the device. I would like to think parents at least know what private devices their children use to access content. For public devices, it would be managed the same way it is on public WiFi or at a library.

If they don’t do it (or feel incapable of doing it), they can defer that authority in a limited, scoped manner, such as to a school IT admin. This is how parental controls worked on televisions too (without the delegation capability), so it’s not that different. Similarly, because this right is granted to the parent or deferred by them, it grants them the moral discretion to determine right and wrong, which is an advantage. The reason people get upset about school book bans is that the moral discretion is too broadly applied today. That’s also what happens if we defer this power to governments and large tech providers, but at a much larger scale. It will exacerbate the underlying problem we’ve found with the broader Web, which is fundamentally “who gets to moderate?”. Today, it’s been large corporations who opt to do it as little as possible because it’s expensive and affects their bottom line. Tomorrow, when it’s placed in the hands of regulators, it’s going to be an endless political fight over every topic. Why not decentralize this power to the parents and, most importantly, make it easier for them (so it is still convenient), so they can choose how they want to handle it and just end the debate?

> I think the hardest thing might be getting the
> OS/browser vendors to agree to take on that responsibility and
> liability. You'd also need a global standard for a "Guardian approval
> to use Website X" credential, but that is probably easy to do if the
> browser/OS vendors are on board. Legislation would also have to change
> to recognize that as a legitimate mechanism.

Correct, there are a few different standards I see as necessary here:

1. The OS needs the ability to send and receive credentials and to store well-known, trusted identifiers of guardians. We need a protocol for this, but we’ve already got the data model and identifiers. We also need the OS vendors to build APIs for apps to rely on here (see the sketch after this list).

2. We need an HTML extension to tag content so that browsers, and apps via WebView, can understand what to remove. We’ve already built something like this at Brave called PageGraph, but it’s not a standard; we just use it to attribute changes to a page so we can identify webcompat problems introduced by ad blockers. Others may be able to come up with a better solution here too. Alternatively, to make this work better, we should likely build content classification into the browser later on, since that avoids the same problem we have today where only a limited set of sites will adhere to the tagging. Tagging isn’t a simple or scalable thing for every site to do, so the long tail of sites will either need that built-in classification or may end up blocked as the capabilities get expanded. A rough sketch of how tagging plus a guardian policy could fit together is below.

Luckily, the OS vendors are already on board with helping to solve the problem (because most need this for sites they also operate), so the question is whether they’d be willing to take a different approach. That’s going to come down to whether regulators want them to. That’s the part I’m less convinced will happen in the near term. I’m not certain regulators want to give up the control to tell big tech how to moderate their platforms, which I believe may be an underlying motivation. My hope is they see the effectiveness of this alternative solution that operates at the edge (either now or in the future) and correct course. The alternative is that users start routing around these sites by migrating to decentralized social media platforms and the long tail of the Web. Then we’re back at the drawing board, with a system in place that didn’t work and that will likely be repurposed when the next moral crisis occurs, as has been the case for most moderation capabilities throughout history. I guess big tech is no longer influential in that scenario, though, which some may cheer on.
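
To make items 1 and 2 a bit more concrete, here’s a minimal sketch of what the browser-side half could look like, assuming a hypothetical `data-content-rating` attribute and a guardian policy object that the OS hands to the browser after verifying a guardian-issued credential. None of these names come from an existing standard; they’re purely illustrative.

```typescript
// Hypothetical shapes only; none of these names are from an existing spec.

// A guardian policy the OS could hand the browser after verifying a
// guardian-issued credential (item 1 above).
interface GuardianPolicy {
  guardianDid: string;          // e.g. "did:example:guardian123"
  subjectDid: string;           // the child's device/profile identifier
  maxRating: "general" | "teen" | "mature";
  allowedOrigins: string[];     // sites the guardian has explicitly approved
}

const ratingOrder = ["general", "teen", "mature"] as const;

// Item 2: sites tag blocks of content with something like
// <section data-content-rating="mature">...</section> and the browser (or a
// WebView-based app) removes anything above the guardian's ceiling.
function applyGuardianPolicy(doc: Document, policy: GuardianPolicy): void {
  if (policy.allowedOrigins.includes(doc.location?.origin ?? "")) {
    return; // guardian explicitly approved this site, show everything
  }
  const ceiling = ratingOrder.indexOf(policy.maxRating);
  doc.querySelectorAll<HTMLElement>("[data-content-rating]").forEach((el) => {
    const rating = el.dataset.contentRating ?? "mature";
    const idx = ratingOrder.indexOf(rating as (typeof ratingOrder)[number]);
    // Unknown or above-ceiling ratings are removed client-side; nothing is
    // reported back to the site about what was hidden.
    if (idx === -1 || idx > ceiling) {
      el.remove();
    }
  });
}
```

The important design property in this sketch is that the decision happens entirely on the client: the site only ever sees less engagement with the removed content, never a verification request.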

> Note that I didn't really take a position on this whole "you need
> a digital credential to view age-gated websites" debate. It feels like
> a solution in search of a problem -- website porn and social media
> addiction was supposed to destroy my generation -- and we had NO
> guardrails, nor were our parents aware of the "dangers". In the
> meantime, it looks like we've found more effective ways to destroy
> civilization, so if "age-gated websites" is among our leading use
> cases, I suggest we're not tackling the most impactful societal
> problems

I too take the personal stance that we’ve got bigger problems on our hands. I find these measures to be an inconvenience and a motivation to stop using more large tech platforms. However, I recognize the moral crisis people feel they legitimately face raising their children in these modern times, so I wanted to offer an alternative solution that addresses the concerns of parents and general netizens.

Also, I’m not sure if people read between the lines to understand that my architecture may be able to solve the generic misinformation problem too. I largely left that out of scope because it’s not the main topic at hand, but I do think it’s worth exploring. It works through the social layer, where we individually determine our own morals and use client-side control of the algorithms and the content we engage with by subscribing to moderation lists that help with this. Then, when the sites see us engaging less with that content because it’s blocked client-side, they will stop serving it to us. This is the same way Bluesky is approaching moderation with their content moderation lists, so I won’t claim this as some radically new idea. I’d say the only novel thing I’m adding is the guardianship proposal and generalizing it across the Web.
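
For what that client-side layer could look like mechanically, here’s a rough sketch: subscribe to one or more moderation lists that label accounts or URLs, then filter the feed locally before rendering. The list format here is made up for illustration; it’s not Bluesky’s actual labeler API or any existing standard.

```typescript
// Hypothetical moderation-list format; illustrative only.
interface ModerationList {
  name: string;
  // subject (account id or URL) mapped to labels, e.g. { "did:plc:abc": ["gambling"] }
  labels: Record<string, string[]>;
}

interface FeedItem {
  author: string; // account identifier
  url: string;
  text: string;
}

// The user (or their guardian) picks which lists to subscribe to and which
// labels to hide; everything below runs on the client.
function filterFeed(
  feed: FeedItem[],
  subscriptions: ModerationList[],
  hiddenLabels: Set<string>
): FeedItem[] {
  const labelsFor = (subject: string): string[] =>
    subscriptions.flatMap((list) => list.labels[subject] ?? []);

  return feed.filter((item) => {
    const labels = [...labelsFor(item.author), ...labelsFor(item.url)];
    // Drop the item locally if any of its labels are on the hidden set;
    // the site only observes reduced engagement, not the filtering itself.
    return !labels.some((label) => hiddenLabels.has(label));
  });
}
```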

To the original point though, the key thing we must consider when designing use cases is who plays what role. We need to look beyond defaulting to institutional trust and learn to apply the technology in unique ways; sometimes institutions will be useful, but not always. In doing this we will be able to build a new web of trust that better suits our needs and takes the Web in a direction that better reflects the foundational principles it was built on.

- Kyle

On Mon, Jul 21, 2025 at 3:44 AM, Manu Sporny <msporny@digitalbazaar.com> wrote:

> On Fri, Jul 18, 2025 at 6:44 AM Pryvit NZ <kyle@pryvit.tech> wrote:
>> Will, I think it’s interesting to see your faith in institutional trust remains, because globally it’s on the decline: https://www.oecd.org/en/publications/lack-of-trust-in-institutions-and-political-engagement_83351a47-en.html
>
> I don't think that is Will's point; his point is that, generally
> speaking, we (as a global society) have identified certain centralizeds
> institutions to do some of this credentialing and enforcement for us
> because it's more efficient (and safer) for it to happen that way. I
> don't think it's "faith"... to me, at least, it's reality.
>
> That's why issuers matter -- because none of this credentialing stuff
> works if you don't have issuers that people trust today. That doesn't
> mean that we optimize for issuers over holders... but we do realize
> that relevant issuers matter. We do, actively optimize for holders --
> because it's their privacy and autonomy that we're trying to protect.
>
> Now, I do think that there are other technical communities that ARE
> optimizing primarily for issuers, but I don't think that's what's
> going on in the CCG (but am happy to have the debate if folks think
> otherwise).
>
> I'm still having a hard time understanding what you (and Christopher)
> mean when you say "an alternate architecture" (I did read your blog
> post, more on that below).
>
> For us to shift the dynamic further away from issuers, we would, as a
> society, need to find alternate institutions to do some of that "trust
> establishment" work and that sort of societal change, taken to an
> extreme, seems unrealistic to some of us. Now, that doesn't mean that
> there are certain institutions that provide centralized trust that can
> go away with a more decentralized solution... but society has to agree
> on what that new mechanism is (and we're building technology here,
> such as DIDs, to help provide better alternatives to things like the
> accidentally-centralized-and-over-used Social Security Number).
>
> Take driving, for instance. Your locality has something akin to a
> Department of Motor Vehicles whose job it is to test and license
> driver's of motor vehicles. I, personally, don't want to be involved
> in testing and enforcing if other people are allowed to operate such a
> lethal device. I certainly don't trust some of the people in my local
> community to make that determination... so, we've all gotten together
> and formed this centralized institution called a DMV to do that trust
> work for us.
>
>> Here's another blog post I wrote that I think provides a legitimate example to how we can shift who plays what roles within the SSI triangle to achieve a more decentralized and private means of content moderation to protect children. I hope it helps take things from the abstract to the concrete like Manu mentioned previously.
>>
>> https://kyledenhartog.com/decentralized-age-verification/
>
> It does help quite a bit, thank you Kyle for taking the time to write
> the blog post and providing something concrete that we can analyze.
> One of the things it helped clarify for me is that by "different
> architecture" you seem to be saying "Let's take the primitives we have
> -- DIDs, VCs, etc., but put them together in a different way so that
> the protocols delegate responsibilities to the edge -- to the
> browsers, parents, and school teachers instead of the adult content
> and social media sites."
>
> Speaking as a parent that is stretched very thin, and who sees how
> thin teachers are stretched in my country -- I really dislike that
> idea :). Why is the burden on me, as a parent, to stop my kid from
> being pulled into a social media website that is designed to be
> addicting? :) No, I want a fence put around that thing with a "deny by
> default" rule around it. So, putting a "this site could be dangerous
> to mental health for kids under the age of 14, and honestly, it's
> probably dangerous for adults too." warning on the site isn't very
> effective.
>
> Now, I know that in your blog post, you also mention that it's really
> the child's web browsers job to get a credential from the operating
> system, which might get it from the child's guardian to make the
> determination to show the content to the child. If this is the case,
> you're shifting a massive amount of liability onto the operating
> systems and web browser, putting them in the position of policing
> content. I don't understand how that is not a really scary and massive
> centralization of power into the OS/browser layer (worse state than
> what we have now)... not to mention a massive shift in liability that
> the OS/browser vendors probably don't want.
>
> When a lawsuit happens, how do the OS/browser vendors prove that they
> checked with the guardian? Do they (invasively) subpoena the browser
> history from the individual? Or do they just allow the government to
> grab the credential log from the OS/browser? How does the browser
> determine what content is being shown on the website? Content can vary
> wildly on a social media platform, and even within a single stream.
> I've seen G content turn into debatably R content in a single show
> watched by 8 year olds. Different localities have different views on
> offensive content. All that to say, these reasons are often why the
> burden of proof is shifted to the content/product provider. If you
> want to sell that stuff, you have to do so responsibly -- which seems
> to be where society largely is these days. So, you're asking for a
> pretty big shift in the way society operates.
>
> The other thing that struck me with your blog post was that, while you
> were moving the roles around (browser becomes the verifier, operating
> system becomes the holder, guardian becomes the issuer), there was
> fundamentally no change in what the roles do. That is, I didn't see an
> architectural change... I saw a re-assignment of roles (issuer,
> holder, verifier) to different entities in the ecosystem... but at the
> end of the day, it was still a 3-party model with massive
> centralization and liability shifted to the OS/browser layer. There
> was also no explanation of how the guardian proves that the child is
> their responsibility -- birth certificate, maybe? Now we have to start
> issuing digital birth certificates worldwide in order to use the
> age-gated websites? Even if we do that, we still depend on a
> government institution as the root of trust.
>
> IOW, it seems to me like the architecture you're proposing is, in
> practice, an even more centralized system, with a much higher
> day-to-day burden on parents and teachers, with unworkable liability
> for the OS/browser vendors, that still requires centralized
> institutional trust (birth certificates) to work.
>
> I do, however, appreciate that the approach you're describing pushes
> the decision out to the edges. The benefits seem to be that
> centralized institutional trust (birth certificate) bootstraps the
> system, and once that happens, the decisioning is opaque to the
> centralized institution and the age-gated website (it's between the
> guardian and the os/browser layer) with minimal changes to the
> age-gated website. I think the hardest thing might be getting the
> OS/browser vendors to agree to take on that responsibility and
> liability. You'd also need a global standard for a "Guardian approval
> to use Website X" credential, but that is probably easy to do if the
> browser/OS vendors are on board. Legislation would also have to change
> to recognize that as a legitimate mechanism.
>
> ... or, alternatively, the website just receives an unlinkable "over
> 14/18" age credential under the current regime. I'm not quite seeing
> the downside in including centralized issuer authorities in the
> solution that issue unlinkable credentials containing "age over"
> information. There are 50+ jurisdictions among DMVs alone that issue
> that sort of credential in the US -- hardly centralized.
>
> In any case, one of those is far easier to achieve (both technically,
> politically, and from a privacy perspective) than the other, IMHO.
>
> -- manu
>
> PS: Note that I didn't really take a position on this whole "you need
> a digital credential to view age-gated websites" debate. It feels like
> a solution in search of a problem -- website porn and social media
> addiction was supposed to destroy my generation -- and we had NO
> guardrails, nor were our parents aware of the "dangers". In the
> meantime, it looks like we've found more effective ways to destroy
> civilization, so if "age-gated websites" is among our leading use
> cases, I suggest we're not tackling the most impactful societal
> problems (scaling fair access to social services, combating fraud and
> other societal inefficiences, providing alternatives to surveillance
> capitalism, combating misinformation, mitigating climate change,
> etc.). :)
>
> --
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> https://www.digitalbazaar.com/
