Re: The Slopification of the CCG

On Fri, Apr 17, 2026 at 6:42 PM Marcus Engvall <marcus@engvall.email> wrote:
> I think a serious discussion should be opened to consider migrating to a discussion channel that is more resistant to AI agents, or at least consensus be formed to institute and enforce a strict code of conduct with zero-tolerance for AI slop. Openness is important, and exclusionary dynamics must be avoided to the extent possible, but the integrity of the standardisation process and the important work done in this group depends on humanity and not artificiality.

Hey Marcus, first thank you for engaging and speaking up. I do
appreciate what you wrote, and I too have been moderately annoyed by
the amount of AI slop circulating these days, in various forums
(including this one).

I think the most challenging part of this "problem" is that the
"technology" is using humans as a conduit. I don't think there will be
a single online communication mechanism in the near future that will
be "resistant to AI agents"... maybe we're going back to the days of
having a drink with a friend at a local pub.

Technology transitions, especially ones around human communication, can
be rough to navigate. This one is no different, and sometimes it takes
decades to figure out the norms around a new medium (the printed page,
radio, television, BBSes, mailing lists, AOL, ICQ, Napster, Twitter,
Digg/Reddit/Discord, and so on).

One thing that has been fairly consistent through all of that is
people who engage with groups in ways that are not effective. That is
especially true in groups, such as this one, that try really hard to
be open and welcoming and operate with a code of ethics and
professional conduct that favors engaging with each other and
listening, respecting others' viewpoints, and yes, that also means
tolerating people who are learning, are misguided, don't quite get it
(or the hint), are borderline spamming, have difficulty navigating
social cues, and a whole host of other things that are not tolerated
in other communities. It leads to much more noise than many of us
would prefer.

There is also a limit to what most people will accept, and that will
mean we lose very good people because the noise-to-signal ratio gets
too high.

If you want a higher signal-to-noise ratio, join the groups that meet
weekly to build something. Those conversations are very human and have
very little LLM use. They're certainly far from perfect... they're
messy and human, sometimes we argue for months about the name of a
particular concept, or a particular algorithm. That is the good, hard
work that needs to be done, and people continue to do that work (both
assisted by LLMs and without).

I know this particular technology transition can be frustrating, and I
guess what I'm trying to say is -- if the slop is getting to you,
ignore it -- you're probably not missing anything. I know that I'm
hitting DELETE far more often than I used to, the second I hit
something that looks like slop or doesn't seem like it's going
anywhere.

You don't have to give your attention to every electron spilled on
this mailing list... only about 10% of these ideas are going
anywhere... the trick is understanding which ones have a chance and
which ones don't.

-- manu

-- 
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
https://www.digitalbazaar.com/

Received on Saturday, 18 April 2026 00:24:45 UTC