Re: The Slopification of the CCG

If you think the role/goal of this group is code generation, that's a fair
argument.

On Sat, Apr 18, 2026 at 2:42 PM Adrian Gropper <agropper@healthurl.com>
wrote:

> The structure of workgroups and SDOs will change with increasingly
> capable AI.
>
> For almost a year now, I've been able to ask LLMs how to accomplish what I
> want. They read the API documentation, answered my questions and
> implemented the API. I never once looked at a standard or the code.
> However, I do ask Claude Code to generate markdown files for documentation,
> which helps me feel in control and reduces the cost and risk of future
> changes. The documentation includes very valuable analyses of security
> vulnerabilities.
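>
> A minimal sketch of what I mean, assuming the `claude` CLI is installed
> (the prompt and paths are illustrative, not a recipe):
>
> # Ask Claude Code (non-interactive print mode) for markdown docs and
> # write them out ourselves, avoiding file-write permission prompts.
> import pathlib
> import subprocess
>
> prompt = (
>     "Read the code under src/ and write markdown documentation for "
>     "each endpoint, including a section analyzing potential security "
>     "vulnerabilities. Output only the markdown."
> )
> result = subprocess.run(
>     ["claude", "-p", prompt], capture_output=True, text=True, check=True
> )
> pathlib.Path("docs").mkdir(exist_ok=True)
> pathlib.Path("docs/API.md").write_text(result.stdout)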
>
> Consequently, the role of groups like CCG and standards workgroups is
> changing, at least for me. I look forward to learning about business
> realities and real-world experience.
>
> I no longer care about new standards. If a vendor or service provider
> wants my business, it's up to them to provide and document the APIs.
> Standardized APIs can reduce risk and switching costs, of course, but if
> the tradeoff is 5+ years of discussions on CCG and related forums, the
> juice is no longer worth the squeeze.
>
> Adrian
>
> On Sat, Apr 18, 2026 at 4:22 PM Kim Hamilton <kimdhamilton@gmail.com>
> wrote:
>
>> Marcus represents a growing contingent of increasingly concerned folks,
>> and I'm grateful to him for speaking up.
>>
>> Several factors:
>> 1. LLMs trained on the existing corpus of internet standards will tend to
>> reproduce the assumptions baked into it. Particularly given the work of
>> this group, our experience as humans* is critical for detecting /
>> evaluating any such bias.
>> 2. Particularly for a standards generation/incubation body, it's
>> essential to know that a contribution comes from an individual/org/entity
>> with a stake in the outcome and accountability for the direction it implies.
>> 3. Of particular concern is (what I consider) a category error:
>> prematurely attributing properties like "knowledge" and "understanding" to
>> LLMs, with accompanying statements implying we humans are now off the
>> hook for deeper critical evaluation.
>>
>> Our human agency, judgment, and accountability, feeble though our
>> little brains may be, are needed now more than ever.
>>
>> To Manu's point, perhaps the venue for authentic human discourse in the
>> CCG is now restricted to the group calls, until someone creates a
>> vocally convincing agent...
>>
>> Kim
>>
>> * Not saying you cannot use an LLM to help with this work, but see the
>> other points.
>>
>> On Sat, Apr 18, 2026 at 12:38 PM Michael Herman (Trusted Digital Web) <
>> mwherman@parallelspace.net> wrote:
>>
>>> [I know I’m saying too much but I have a lot to say.]
>>>
>>>
>>>
>>> RE: LLMs will slowly absorb all pattern matching and synthesis work.
>>>
>>>
>>>
>>> +1 Christoph
>>>
>>>
>>>
>>> At Davos this past January, noted expert Yuval Noah Harari shared the
>>> following (https://youtu.be/QiT2yK-5-yg?si=x71xvnZou_72o9u2&t=227):
>>>
>>>
>>>
>>> *Some people argue that AI is just glorified autocomplete. It merely
>>> predicts the next word in a sentence. But is that so different from what
>>> the human mind is doing? Try to observe - to catch - the next word that
>>> pops up in your mind. Do you really know why you thought of that word?
>>> …where did it come from? Why did you think of this particular word and not
>>> some other word? Do you know? *
>>>
>>>
>>>
>>> *As far as putting words in order is concerned, AI already thinks better
>>> than many of us. Therefore, anything made of words will be taken over by
>>> AI. If laws are made of words, then AI will take over the legal system. If
>>> books are just combinations of words, then AI will take over books. *
>>>
>>>
>>>
>>> *If religion is built from words, then AI will take over religion. This
>>> is particularly true of religions based on books like Islam, Christianity,
>>> or Judaism. Judaism called itself the religion of the book and it grants
>>> ultimate authority not to humans but to words in books. Humans have
>>> authority in Judaism not because of our experiences but only because we
>>> learn words in books. Now, no human can read and remember all the words in
>>> all the Jewish books. But AI can easily do that. What happens to a
>>> “religion of the book” when the greatest expert on the holy book is an AI?*
>>>
>>> [Yuval Noah Harari, 2026]
>>>
>>>
>>>
>>> *In effect, all the standards we need already exist.  It’s a simple
>>> matter of choosing the right words and placing them in the right order.  AI
>>> can do this more completely, more correctly, with greater precision, and
>>> with less time and effort than any human or (working) group of humans. [Michael
>>> Herman, 2026]*
>>>
>>>
>>>
>>> I’ll try to pipe down,
>>>
>>> Michael Herman
>>>
>>> Chief Digital Officer
>>>
>>> Web 7.0 Foundation
>>>
>>>
>>>
>>> *From:* Christoph <christoph@christophdorn.com>
>>> *Sent:* Saturday, April 18, 2026 8:23 AM
>>> *To:* Melvin Carvalho <melvincarvalho@gmail.com>; Eduardo C. <
>>> e.chongkan@gmail.com>
>>> *Cc:* Marcus Engvall <marcus@engvall.email>; W3C Credentials CG <
>>> public-credentials@w3.org>
>>> *Subject:* Re: The Slopification of the CCG
>>>
>>>
>>>
>>> On Sat, Apr 18, 2026, at 2:04 AM, Melvin Carvalho wrote:
>>>
>>> On Sat, Apr 18, 2026 at 5:39 AM Eduardo C. <e.chongkan@gmail.com>
>>> wrote:
>>>
>>> "I find it difficult to trust a contribution in this group if it has
>>> been generated by an LLM"
>>>
>>>
>>>
>>> A- I wonder how everyone can tell if something was written by an LLM?
>>> Aside from the now-infamous "--" here and there that it uses, how can you
>>> guys tell? (How do you know it is not a Grammarly plugin?)
>>>
>>> B- Also wondering if the embedded Gemini would detect whether an email
>>> or text was generated by an LLM, and more importantly, detect slop in
>>> that email or content. E.g., I normally use two different LLMs to do
>>> manual adversarial checks on each other's outputs and analysis, Gemini +
>>> Claude; they usually find improvements or catch issues, and I also find
>>> deviations and correct the alignment. (A sketch of this loop follows
>>> point C below.)
>>>
>>> C- Most slop happens when one is researching or asking for things that
>>> are not in the model itself. E.g., you ask for a certain uncommon thing
>>> and the models (all of them) keep drifting towards what the probability
>>> says they should answer. One needs to be aware of that.
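>>>
>>>
>>>
>>> For anyone curious, the cross-check loop from point B looks roughly
>>> like this in Python. A minimal sketch assuming the `anthropic` and
>>> `google-generativeai` packages; the model names and prompts are
>>> illustrative:
>>>
>>> # Adversarial cross-check: draft with one LLM, critique with another,
>>> # then revise. Assumes ANTHROPIC_API_KEY and GOOGLE_API_KEY are set.
>>> import os
>>> import anthropic
>>> import google.generativeai as genai
>>>
>>> genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
>>> claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
>>> gemini = genai.GenerativeModel("gemini-1.5-pro")
>>>
>>> def ask_claude(prompt: str) -> str:
>>>     msg = claude.messages.create(
>>>         model="claude-3-5-sonnet-latest",
>>>         max_tokens=1024,
>>>         messages=[{"role": "user", "content": prompt}],
>>>     )
>>>     return msg.content[0].text
>>>
>>> draft = ask_claude("Explain the tradeoffs of did:web versus did:key.")
>>>
>>> # The second model attacks the first model's answer.
>>> critique = gemini.generate_content(
>>>     "Act as an adversarial reviewer. List factual errors, unsupported "
>>>     "claims, and omissions in this answer:\n\n" + draft
>>> ).text
>>>
>>> # Feed the critique back so the first model can revise or push back.
>>> revised = ask_claude(
>>>     "Revise your answer to address this critique, rejecting any points "
>>>     "that are wrong.\n\nAnswer:\n" + draft + "\n\nCritique:\n" + critique
>>> )
>>> print(revised)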
>>>
>>>
>>>
>>> LLM content is reasonably easy to identify as many signals are inserted
>>> by default.
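>>>
>>>
>>>
>>> As a toy illustration (nothing like a reliable detector, just the kind
>>> of surface signals people point to), counting a few common tells:
>>>
>>> # Toy heuristic: count a few surface tells often associated with LLM
>>> # output. Illustrative only; this is NOT a reliable detector.
>>> TELLS = ["\u2014", "delve", "it's important to note", "as an ai"]
>>>
>>> def tell_count(text: str) -> int:
>>>     t = text.lower()
>>>     return sum(t.count(tell) for tell in TELLS)
>>>
>>> print(tell_count("Let's delve in\u2014it's important to note..."))  # 3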
>>>
>>>
>>>
>>> If we consider content on the internet over the last 2-3 years, it's gone
>>> from small LLM contributions to majority LLM content.
>>>
>>>
>>>
>>> I see the same happening with standards, as LLMs get smarter.
>>>
>>>
>>>
>>> IMHO we're in the last phase of human-authored standards, and LLMs will
>>> end up becoming the majority of content in standards. But that's nothing to
>>> fear. It just means we get things over the line faster and at a higher
>>> quality than ever before.
>>>
>>>
>>>
>>> The standards that went before will be building blocks for what comes
>>> next.
>>>
>>>
>>>
>>> I share this point of view. LLMs will slowly absorb all pattern matching
>>> and synthesis work.
>>>
>>>
>>>
>>> There is currently a huge spectrum of LLM practitioner competence,
>>> leading to different assessments of LLM usefulness and capability. This
>>> leads to opinions that are not well grounded.
>>>
>>>
>>>
>>> What humans will be able to do is to manage the complexity budget,
>>> present use cases and help standards work gain adoption.
>>>
>>>
>>>
>>> Humans will provide the judgement to direct LLMs, and thus standards,
>>> towards what matters, which is something LLMs cannot do.
>>>
>>>
>>>
>>> Humans provide direction, LLMs execute.
>>>
>>>
>>>
>>> IMO the focus of this list will evolve towards making such judgements,
>>> and discussions about meaningful direction will become more and more
>>> important in the future. When you can go any direction rapidly, you might
>>> as well go in directions that really matter.
>>>
>>>
>>>
>>> Christoph
>>>
>>>
>>>
>>> BTW, I agree with Michael Herman 100%.
>>>
>>> --
>>>
>>> Eduardo Chongkan
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Apr 17, 2026 at 4:43 PM Marcus Engvall <marcus@engvall.email>
>>> wrote:
>>>
>>> Hi all,
>>>
>>>
>>>
>>> I have been a passive observer of the CCG and have found the discussions
>>> in this group to have been remarkably considered, professional, and above
>>> all else clear in both intent and direction. I hesitate to comment on the
>>> current state of the mailing list as my tenure is minuscule compared to
>>> some of my brilliant co-participants, but the quality of recent
>>> contributions has compelled me to share some thoughts.
>>>
>>>
>>>
>>> Standards work is fundamentally a rigorous process of deriving a
>>> synthesis of human knowledge and judgement through healthy debate and,
>>> particularly in this group, decentralised knowledge discovery. It is
>>> precisely the provenance of consideration that establishes the trust basis
>>> necessary for the voluntary adoption of standards. Without trust, there is
>>> no standard. It follows then that preserving the integrity of the
>>> standardisation process is existential for any group working on standards.
>>>
>>>
>>>
>>> AI has improved the accessibility of standardisation to a larger and
>>> more diverse group of participants, which is incredibly valuable for
>>> standardisation and should be encouraged. However, it should not come at
>>> the cost of compromising the integrity of the process itself, something I
>>> fear is happening in this group.
>>>
>>>
>>>
>>> Many recent contributions on this mailing list bear the hallmarks of LLM
>>> generation. To be clear, it is my view that there is nothing wrong with
>>> using AI agents to assist with research, proofreading, and other similar
>>> tasks. I use these tools every day professionally and their value is
>>> undeniable. That said, they are not replacements for human judgement, a
>>> view I think is shared by most people in this group.
>>>
>>>
>>>
>>> I find it difficult to trust a contribution in this group if it has been
>>> generated by an LLM, and it is becoming increasingly difficult to follow
>>> discussions as they seem to inevitably degenerate into chatbots arguing with
>>> each other. Inferring the direction of standardisation, which has a direct
>>> impact on commercial and technical planning, becomes impossible. I find it
>>> quite ironic that the recent thread discussing LLMs and agents in the CCG
>>> contains responses that appear themselves to have been generated by
>>> an AI. If anything, I think it is proof enough of how acute this problem is.
>>>
>>>
>>>
>>> There is also the somewhat primal and adversarial aspect of evaluating
>>> human judgement and reaching consensus. A debate is a contest between two
>>> humans arguing for their position, which presupposes real agency and, well,
>>> humanity. An AI agent is not, and will never be, a real human, and nobody
>>> wants to seriously evaluate the arguments of a robot.
>>>
>>>
>>>
>>> I am not sure what the solution is, but I feel that the effects of this
>>> are severe and will almost certainly discourage participants from
>>> contributing, the downstream consequences of which I think are clear to
>>> everyone.
>>>
>>>
>>>
>>> I would like to close out this lengthy email with this: I think a
>>> serious discussion should be opened to consider migrating to a discussion
>>> channel that is more resistant to AI agents, or at least that consensus be
>>> formed to institute and enforce a strict code of conduct with
>>> zero tolerance for AI slop. Openness is important, and exclusionary
>>> dynamics must be avoided to the extent possible, but the integrity of the
>>> standardisation process and the important work done in this group depends
>>> on humanity and not artificiality.
>>>
>>>
>>>
>>> Sincerely,
>>>
>>>
>>>
>>> --
>>>
>>> Marcus Engvall
>>>
>>>
>>>
>>> Principal—M. Engvall & Co.
>>>
>>> mengvall.com
>>>
>>>
>>>
>>

Received on Saturday, 18 April 2026 21:46:34 UTC