Re: The Slopification of the CCG

Hi all,

This is an interesting, thoughtful thread. I appreciate the discussion.

Marcus, thank you for sharing your concerns with the CCG - I know you are
not alone.

The CCG - like most open source, open participation digital communities -
is navigating a challenging period.
All of us have different experiences of AI, both in terms of our own
personal use and our expectations of others' use in these forums.

As Manu pointed out, we do not yet have the social norms and shared
expectations to regulate these public forums in the face of LLM generated
content.

I think all of us feel some of the overwhelm at the volume of content
we have to wade through. I certainly do.

> if the slop is getting to you, ignore it

While this is not bad advice, I do worry that as more people check out,
the mailing list will become effectively useless.
This may be unavoidable, but I hope not.

I wonder if, instead of ignoring it, we should be calling out this
behavior - privately at first, asking people to stop.
We should develop some clear guidance and policies we can point to here.

In the other thread Manu referred to an "asymmetric information overloading
attack". Effectively a DoS attack, but on us as thinking humans, not
machines.
These types of behavior should be explicitly banned IMO, with clear
escalation to removal of the participant from the mailing list.

We should be explicit that this mailing list is for humans engaging,
thinking, debating and collaborating with other humans. This is the hard
work of standards.
Spamming the list with AI slop that took you minutes to generate but
requires hours of other participants' time to process is not collaborative
behavior.

While Steve's idea that we could use an AI to detect and moderate slop is
interesting, I worry it would just add more noise into the mix.

Instead, I think we as human participants of the CCG should start defining
and upholding shared participation norms and figuring out how to navigate
this transition together.

We should all take responsibility for our contributions to this public
space and respect the time of others we wish to collaborate with through
our contributions.

No easy answers, but I know we have a strong community of considerate,
thoughtful humans here at the CCG.
Hoping we can lean into that humanity as we continue to exchange and craft
ideas together.

Best,
Will



On Sun, Apr 19, 2026 at 6:30 AM Kyle Den Hartog <kyle@pryvit.tech> wrote:

> In case it helps, here’s how things are going in terms of AIPREFs WG and
> the impact on search crawlers:
>
> https://x.com/grittygrease/status/2044152662673752454?s=20
>
> In other words, we don’t really have any enforcement mechanisms here to
> stop this. In fact I highly suspect some people are using them in this
> conversation right now unless their writing styles dramatically changed in
> the past few years. My email client, I suspect, started noticing it via
> machine learning and now filters threads like this to my spam folder most
> of the time, given that I engage a lot less these days. Personally that’s
> been a good enough solution for me.
>
> Identity credentials are highly unlikely to stop this either which I
> suspect is where many in this community would want to turn. Identity
> credentials just turn the issue back into a key management problem and we
> don’t really have a great way to prevent a user from sharing their keys
> with their agent. That problem persists whether or not the system has a
> delegation solution.
>
> So where do we go? I’m not exactly sure. Here are the leading theories and
> their tradeoffs that stand out to me for the generalized problem of
> AI-generated content:
>
> 1. https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/ -
> users just stop engaging in these spaces and retreat to closed door forums.
> Then we lose the open collaboration that made the Web great.
>
> 2. Re-hash the DRM debate by making it so users can’t actually access their
> keys used to sign their identity credentials. This seems to be the current
> path governments like. It optimizes enforcement but also entrenches access
> to the Web around a select number of OSes and reduces who’s allowed to
> access and contribute to conversations on the Web. I also see that as a bit
> short-sighted.
>
> 3. Re-introduce fingerprinting-based identity (and pseudo-reputation tied
> to that fingerprint) like what CAPTCHAs do. That works well for server-side
> enforcement, but not so much in mailing lists like these, so we will likely
> also need user-controlled filtering like what spam filters in email
> clients do.
>
> 4. The most interesting but most unproven: we shift how people are
> reachable and build out something like the Horton Protocol, which Mark
> Miller proposed years ago at an ActivityPub conf. They may have already
> tried this and had issues. I’m not exactly sure:
> https://www.youtube.com/watch?v=NAfjEnu6R2g
>
> In any case though, we don’t have much of a solution right now in our
> particular forum and outside things like 3, I don’t expect much to change
> in a coordinated manner right now. Looking forward to seeing what we come
> up with though over the next decade and hopefully the trade offs we make
> don’t take away too much of what originally made the Web great.
>
> -Kyle
>
>
> -------- Original Message --------
> On Sunday, 04/19/26 at 13:10 Steve Capell <steve.capell@gmail.com> wrote:
>
> Challenge : there’s an increasing amount of AI generated content that,
> whilst possibly containing useful insights, takes more time to read than to
> generate and, given the size of this mailing list, is likely to lead most
> of us to unsubscribe, rendering the list worthless
>
> Constraint : AI used well is a genuinely useful tool and can dramatically
> improve quality of output.  “Used well” is key and, unfortunately, many do
> not use it so well.  Nevertheless, this group can’t become anti-LLM
> Luddites or this list may equally become worthless for the opposite reason
>
> Goal : to continue to enjoy intelligent discussions between real humans
> who feel empowered to use AI to improve the value of their human
> contributions.  So the goal, it seems to me, is not to block AI content but
> rather to block content that has little evidence of human analysis and
> interpretation.  Perhaps counterintuitively, LLMs themselves might be the
> best tool to detect such content
>
> Proposal : rather than continuing to discuss whether AI content on this
> list is good or bad, let’s collectively agree a rubric in the form of an AI
> prompt that can act as an automated list moderator.  The rubric should
> focus on requiring evidence of human assessment rather than blocking AI
> content
>
> I had a go at this myself with several of the messages in this thread and
> earlier ones and it seemed quite effective at blocking the ones that I
> would have blocked myself.  I know that there is a token cost associated
> with such a moderator, but I for one would be delighted to contribute.
>
> Disclaimer : this message was written with blurry eyes and fat thumbs on
> my iPhone - with no AI assistance whatsoever
>
> Kind regards
>
> Steven Capell
> UN/CEFACT Vice-Chair
> Mob: +61 410 437854
>
> On 19 Apr 2026, at 10:03 am, Melvin Carvalho <melvincarvalho@gmail.com>
> wrote:
>
>
> On Sun, 19 Apr 2026 at 1:49, Marcus Engvall <marcus@engvall.email>
> wrote:
>
>> Hi all,
>>
>> I’m glad to see that we have some healthy discourse in this thread with a
>> variety of views. I would like to address some of the points made.
>>
>> On 18 Apr 2026, at 01:50, Melvin Carvalho <melvincarvalho@gmail.com>
>> wrote:
>>
>> LLMs have the advantage that they know most or all of the specs
>> inside-out, due to their training. Most humans (with notable exceptions),
>> including on this list, have partial understanding of the complete works of
>> web standards.
>>
>>
>> This is a real advantage that these tools have and it should not be
>> understated. I use them professionally for referential lookups and for
>> confirming hypotheses, and I have no doubt that they have the ability to
>> accelerate otherwise excellent standards work. But I am also careful to not
>> fall into the trap of assuming that their lexical consistency can fully
>> substitute for human judgement. LLMs are probabilistic models with
>> encyclopaedic knowledge, they are not deterministic oracles with the
>> capacity to rigorously derive that same knowledge. In the context of the
>> kind of work done in this group I think it is important to not confuse the
>> two. I trust an LLM to give me a comprehensive overview of a standards
>> framework - I do not, however, trust it to prescribe the framework itself
>> without human review and editorial judgement.
>>
>> I do, however, concede your point on testing methodology, and I think
>> you raise a good point that Manu eloquently touched on.
>>
>
> Good points. However, LLMs outperform humans on medical exams,
> olympiad questions and many other tests, often by wide margins. They are
> much more than prediction machines or probabilistic guessers. What I'm
> saying is that I predict LLMs would exceed humans in the standards setting
> on any quantitative evaluation. We just do not have the tools to evaluate
> this yet. However, I believe the picture will be much clearer one year
> from now.
>
>
>>
>> On 18 Apr 2026, at 02:24, Manu Sporny <msporny@digitalbazaar.com> wrote:
>>
>> Technology transitions, especially ones around human communication can
>> be rough to navigate. This one is no different, and sometimes it takes
>> decades to figure out the norms around a new medium (the printed page,
>> radio, television, BBSes, mailing lists, AOL, ICQ, Napster, Twitter,
>> Digg/Reddit/Discord, and so on).
>>
>>
>> You are completely right that this is a transition, and I think we are
>> all trying to map this new technology onto our existing mental models of
>> what discourse should and could be. Friction and contention are bound to
>> arise. It is clearly counterproductive, as you and later Amir rightly
>> stated, to enforce neo-Luddism and reject the technology wholesale.
>>
>> My point however is that the ability to passively follow and occasionally
>> contribute to developments and discussions in this group is immensely
>> valuable, both commercially and technically. Lowering the
>> signal-to-noise ratio raises the bar for both comprehension and
>> participation, and my fear is that the inevitable intractability will, as
>> you pointed out in the other thread, overwhelm people and alienate them,
>> especially those of us with many other commitments and who do not have the
>> time or ability to participate in every group call. That said, it is, as
>> you suggested, our responsibility to moderate our own information
>> ingestion, as has been the case since time immemorial in any rhetorical
>> forum.
>>
>> Perhaps LLMs will simply change the structure of how discourse is
>> conducted in forums like these rather than drown it out, as some other
>> writers have suggested in the thread. If the cost to contribute text tends
>> to zero, naturally the valuable discussions will shift elsewhere to forums
>> that still have a cost, such as the group calls. I just hope the work
>> doesn’t lose the diversity of opinions that is crucial to develop a refined
>> and well-considered standard.
>>
>> --
>> Marcus Engvall
>>
>> Principal—M. Engvall & Co.
>> mengvall.com
>>
>>

Received on Monday, 20 April 2026 14:50:24 UTC