Re: The Slopification of the CCG

Kyle,

You say

Identity credentials are highly unlikely to stop this either which I suspect is where many in this community would want to turn. Identity credentials just turn the issue back into a key management problem and we don’t really have a great way to prevent a user from sharing their keys with their agent. That problem persists whether the system has a delegation solution or not too.
I think there may be an important "but" to this. Some of the things you suggest later may relate to it, as may some of the ideas Will discussed. I'm sure there has been far more discussion of things like this, and more attempted approaches to similar ideas, than I am aware of, as I still consider myself a newbie here. However, let me state my view...

While you can't prevent a user from sharing their keys with their agent, you can, as you said, attach "pseudo-reputation" systems to keys: reputation that takes time and good contributions to build, and that deteriorates when lower-quality contributions are made. I believe this can be achieved without systematically breaking sovereignty. These hypothetical systems could span multiple mediums rather than being constrained to single contexts, and could be optional and complementary rather than strictly enforced. They could help both as a deterrent against haphazardly sharing unfiltered AI content (I refuse to use the word slop, because I feel it has connotations that challenge civil conversation and is pretty much a slur, even if I understand what people mean by it) and as a way for people to identify and neutralize persistent sources of it.
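To make the shape of this concrete, here is a toy sketch in Python of a key-attached reputation score. All names and constants here are hypothetical illustrations, not any existing system: the point is only the asymmetry, where reputation accrues slowly and additively with well-received contributions but decays sharply and multiplicatively on low-quality ones, and where the output is a soft attention weight rather than a ban.

```python
from dataclasses import dataclass

@dataclass
class KeyReputation:
    """Hypothetical reputation attached to a signing key."""
    score: float = 0.0          # a fresh key starts with no reputation
    gain_per_good: float = 1.0  # slow additive growth per good contribution
    bad_penalty: float = 0.5    # sharp multiplicative decay per bad one

    def record_contribution(self, well_received: bool) -> None:
        if well_received:
            self.score += self.gain_per_good
        else:
            self.score *= self.bad_penalty

    def weight(self) -> float:
        """Saturating attention weight in [0, 1); advisory, never a hard ban."""
        return self.score / (self.score + 10.0)

rep = KeyReputation()
for _ in range(20):              # twenty well-received contributions...
    rep.record_contribution(True)
rep.record_contribution(False)   # ...then one bad one halves the score
print(rep.score)    # 10.0
print(rep.weight()) # 0.5
```

The additive-gain / multiplicative-loss asymmetry captures "takes time to build, deteriorates quickly", and the saturating weight keeps this a personal filtering aid rather than an enforcement mechanism, which is the sovereignty-preserving part.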

In my view, this is no different from what we already do in our physical, embodied lives. Most of us have face recognition built in, and we learn to form an internal opinion of other people based on their interactions with us. When somebody consistently steals our time with pointless drivel and unfiltered contributions, we don't need to put them in jail, hang a sign over their heads declaring them unworthy, or (generally speaking) prohibit them from participating in public life. We simply pay them less attention, because we know who they are and what their usual approach to contributing is. Online identity can replace that face recognition in a way that is more flexible, preserves sovereignty better, and is better equipped to deal with the volume.

As I said, I'm sure I'm unaware of the extent to which similar ideas have been proposed and explored. I am also very aware that, just as some people here are leaning on questionable predictions of what AI will become (predictions that, grounded or not, remain just that, and not a current reality that can be wielded as a definitive argument for what to do right now), what I am discussing here is also a prediction, or a hope, rather than a current reality. But just as I think it's valid to work towards better AI tools, I think it's valid to work towards systems that let us filter the ocean of information in ways that respect sovereignty on all sides, can be personalized, and respect our own intelligence. I think it's a dream worth pursuing, and I believe it relates directly to the matter at hand.

But in the meantime, I feel that discussions like this one are already shaping many moderate people's views towards compromises that may make this mailing list more comfortable for everybody involved. One way or another, we will find out.


Juan Casanova Jaquete

Assistant Professor – School of Engineering and Physical Sciences – Data Science GA Programme

j.casanova@hw.ac.uk – Earl Mountbatten Building 1.31 (Heriot-Watt Edinburgh campus)



Email is an asynchronous communication method. I do not expect and others should not expect immediate replies. Reply at your earliest convenience and working hours.



I am affected by Delayed Sleep Phase Disorder. This means that I am an extreme night owl. My work day usually begins at 14:00 Edinburgh time, and I often work late into the evening and on weekends. Please try to take this into account where possible.




________________________________
From: Kyle Den Hartog <kyle@pryvit.tech>
Sent: Sunday, April 19, 2026 06:28
To: Steve Capell <steve.capell@gmail.com>
Cc: Melvin Carvalho <melvincarvalho@gmail.com>; Marcus Engvall <marcus@engvall.email>; Manu Sporny <msporny@digitalbazaar.com>; public-credentials@w3.org <public-credentials@w3.org>
Subject: Re: The Slopification of the CCG


In case it helps, here’s how things are going in terms of AIPREFs WG and the impact on search crawlers:

https://x.com/grittygrease/status/2044152662673752454?s=20


In other words, we don’t really have any enforcement mechanisms here to stop this. In fact, I strongly suspect some people are using them in this conversation right now, unless their writing styles have dramatically changed in the past few years. My email client, I suspect via machine learning, started noticing it and filtering threads like this to my spam inbox most of the time, given that I engage a lot less these days. Personally, that’s been a good enough solution for me.

Identity credentials are highly unlikely to stop this either which I suspect is where many in this community would want to turn. Identity credentials just turn the issue back into a key management problem and we don’t really have a great way to prevent a user from sharing their keys with their agent. That problem persists whether the system has a delegation solution or not too.

So where do we go? I’m not exactly sure. Here are the leading theories, and their tradeoffs, that stand out to me for a generalized solution to AI-generated content:

1. https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/ - users just stop engaging in these spaces and retreat to closed door forums. Then we lose the open collaboration that made the Web great.

2. Re-hash the DRM debate by making it so users can’t actually access the keys used to sign their identity credentials. This seems to be the current path governments like. It optimizes enforcement, but it also entrenches access to the Web around a select number of OSes and reduces who’s allowed to access and contribute to conversations on the Web. I also see that as a bit short-sighted.

3. Re-introduce fingerprinting-based identity (with pseudo-reputation attached to that fingerprint), like what CAPTCHAs do. That works well for service-side enforcement, but not so much in mailing lists like these. So we will likely also need user-controlled filtering, like the spam filters in email clients.

4. The most interesting but most unproven: we shift how people are reachable and build out something like the Horton Protocol that Mark Miller proposed years ago at ActivityPub conf. They may have already tried this and hit issues; I’m not exactly sure: https://www.youtube.com/watch?v=NAfjEnu6R2g


In any case, we don’t have much of a solution right now in our particular forum, and outside of things like (3) I don’t expect much to change in a coordinated manner. Looking forward to seeing what we come up with over the next decade, though; hopefully the tradeoffs we make don’t take away too much of what originally made the Web great.

-Kyle


-------- Original Message --------
On Sunday, 04/19/26 at 13:10 Steve Capell <steve.capell@gmail.com> wrote:
Challenge : there’s an increasing amount of AI generated content that, whilst possibly containing useful insights, takes more time to read than to generate and, given the size of this mailing list, is likely to lead most of us to unsubscribe, rendering the list worthless

Constraint : AI used well is a genuinely useful tool and can dramatically improve quality of output.  “Used well” is key and, unfortunately, many do not use it so well.  Nevertheless, this group can’t become anti-LLM luddites or this list may equally become worthless for the opposite reason

Goal : to continue to enjoy intelligent discussions between real humans that feel empowered to use AI to improve the value of their human contributions.  So the goal, it seems to me is not to block AI content but rather to block content that has little evidence of human analysis and interpretation.  Perhaps counterintuitively, LLMs themselves might be the best tool to detect such content

Proposal : rather than continuing to discuss whether AI content on this list is good or bad, let’s collectively agree a rubric in the form of an AI prompt that can act as an automated list moderator.  The rubric should focus on requiring evidence of human assessment rather than blocking AI content

I had a go at this myself with several of the messages in this thread and earlier ones, and it seemed quite effective at blocking the ones that I would have blocked myself. I know there is a token cost associated with such a moderator, but I for one would be delighted to contribute.

Disclaimer : this message was written with blurry eyes and fat thumbs on my iPhone - with no AI assistance whatsoever

Kind regards

Steven Capell
UN/CEFACT Vice-Chair
Mob: +61 410 437854

On 19 Apr 2026, at 10:03 am, Melvin Carvalho <melvincarvalho@gmail.com> wrote:




On Sun, 19 Apr 2026 at 1:49, Marcus Engvall <marcus@engvall.email> wrote:
Hi all,

I’m glad to see that we have some healthy discourse in this thread with a variety of views. I would like to address some of the points made.

On 18 Apr 2026, at 01:50, Melvin Carvalho <melvincarvalho@gmail.com> wrote:

LLMs have the advantage that they know most or all of the specs inside-out, due to their training. Most humans (with notable exceptions), including on this list, have partial understanding of the complete works of web standards.

This is a real advantage that these tools have, and it should not be understated. I use them professionally for referential lookups and for confirming hypotheses, and I have no doubt that they can accelerate otherwise excellent standards work. But I am also careful not to fall into the trap of assuming that their lexical consistency can fully substitute for human judgement. LLMs are probabilistic models with encyclopaedic knowledge; they are not deterministic oracles with the capacity to rigorously derive that same knowledge. In the context of the kind of work done in this group, I think it is important not to confuse the two. I trust an LLM to give me a comprehensive overview of a standards framework; I do not, however, trust it to prescribe the framework itself without human review and editorial judgement.

I do however concede on your point on testing methodology, and I think you raise a good point that Manu eloquently touched on.

Good points. However, LLMs outperform humans on medical exams, olympiad questions, and many other tests, often by wide margins. They are much more than prediction machines or probabilistic guessers. What I'm saying is that I predict LLMs would exceed humans in the standards setting on any quantitative evaluation. We just don't have the tools to evaluate this yet. However, I believe the picture will be much clearer a year from now.


On 18 Apr 2026, at 02:24, Manu Sporny <msporny@digitalbazaar.com> wrote:

Technology transitions, especially ones around human communication can
be rough to navigate. This one is no different, and sometimes it takes
decades to figure out the norms around a new medium (the printed page,
radio, television, BBSes, mailing lists, AOL, ICQ, Napster, Twitter,
Digg/Reddit/Discord, and so on).

You are completely right that this is a transition, and I think we are all trying to map this new technology onto our existing mental models of what discourse should and could be. Friction and contention are bound to arise. It is clearly counterproductive, as you and later Amir rightly stated, to enforce neo-Luddism and reject the technology wholesale.

My point, however, is that the ability to passively follow and occasionally contribute to developments and discussions in this group is immensely valuable, both commercially and technically. Degrading the signal-to-noise ratio raises the bar for both comprehension and participation, and my fear is that the resulting intractability will, as you pointed out in the other thread, overwhelm and alienate people, especially those of us with many other commitments who do not have the time or ability to participate in every group call. That said, it is, as you suggested, our responsibility to moderate our own information ingestion, as has been the case since time immemorial in any rhetorical forum.

Perhaps LLMs will simply change the structure of how discourse is conducted in forums like these rather than drown it out, as some other writers have suggested in the thread. If the cost to contribute text tends to zero, naturally the valuable discussions will shift elsewhere to forums that still have a cost, such as the group calls. I just hope the work doesn’t lose the diversity of opinions that is crucial to develop a refined and well-considered standard.

--
Marcus Engvall

Principal—M. Engvall & Co.
mengvall.com

________________________________

Founded in 1821, Heriot-Watt is a leader in ideas and solutions. With campuses and students across the entire globe we span the world, delivering innovation and educational excellence in business, engineering, design and the physical, social and life sciences. This email is generated from the Heriot-Watt University Group, which includes:

  1.  Heriot-Watt University, a Scottish charity registered under number SC000278
  2.  Heriot-Watt Services Limited (Oriam), Scotland's national performance centre for sport. Heriot-Watt Services Limited is a private limited company registered in Scotland with registered number SC271030 and registered office at Research & Enterprise Services, Heriot-Watt University, Riccarton, Edinburgh, EH14 4AS.

The contents (including any attachments) are confidential. If you are not the intended recipient of this e-mail, any disclosure, copying, distribution or use of its contents is strictly prohibited, and you should please notify the sender immediately and then delete it (including any attachments) from your system.

Received on Thursday, 23 April 2026 06:15:08 UTC