- From: Melvin Carvalho <melvincarvalho@gmail.com>
- Date: Sat, 18 Apr 2026 01:50:54 +0200
- To: Marcus Engvall <marcus@engvall.email>
- Cc: W3C Credentials Community Group <public-credentials@w3.org>
- Message-ID: <CAKaEYhL7OYNKcXtG+q1eC4p4nho2H3e6NW+Ai84b_Sx6VtFqTA@mail.gmail.com>
On Sat, 18 Apr 2026 at 0:43, Marcus Engvall <marcus@engvall.email> wrote:

> Hi all,
>
> I have been a passive observer of the CCG and have found the discussions
> in this group to be remarkably considered, professional, and above all
> else clear in both intent and direction. I hesitate to comment on the
> current state of the mailing list, as my tenure is minuscule compared to
> some of my brilliant co-participants, but the quality of recent
> contributions has compelled me to share some thoughts.
>
> Standards work is fundamentally a rigorous process of deriving a
> synthesis of human knowledge and judgement through healthy debate and,
> particularly in this group, decentralised knowledge discovery. It is
> precisely the provenance of consideration that establishes the trust
> basis necessary for the voluntary adoption of standards. Without trust,
> there is no standard. It follows that preserving the integrity of the
> standardisation process is existential for any group working on
> standards.
>
> AI has improved the accessibility of standardisation to a larger and
> more diverse group of participants, which is incredibly valuable for
> standardisation and should be encouraged. However, it should not come at
> the cost of compromising the integrity of the process itself, something
> I fear is happening in this group.
>
> Many recent contributions on this mailing list bear the hallmarks of LLM
> generation. To be clear, it is my view that there is nothing wrong with
> using AI agents to assist with research, proofreading, and other similar
> tasks. I use these tools every day professionally and their value is
> undeniable. That said, they are not replacements for human judgement, a
> view I think is shared by most people in this group.
>
> I find it difficult to trust a contribution in this group if it has been
> generated by an LLM, and it is becoming increasingly intractable to
> follow discussions as they seem to inevitably degenerate into chatbots
> arguing with each other. Inferring the direction of standardisation,
> which has a direct impact on commercial and technical planning, becomes
> impossible. I find it quite ironic that the recent thread discussing
> LLMs and agents in the CCG contains responses that suggest they
> themselves have been generated by an AI. If anything, I think that is
> proof enough of how acute this problem is.
>
> There is also the somewhat primal and adversarial aspect of evaluating
> human judgement and reaching consensus. A debate is a contest between
> two humans arguing for their positions, which presupposes real agency
> and, well, humanity. An AI agent is not, and will never be, a real
> human, and nobody wants to credibly evaluate the arguments of a robot.
>
> I am not sure what the solution is, but I feel that the effects of this
> are severe and will almost certainly discourage participants from
> contributing, the downstream consequences of which I think are clear to
> everyone.
>
> I would like to close out this lengthy email with this: I think a
> serious discussion should be opened to consider migrating to a
> discussion channel that is more resistant to AI agents, or at least
> consensus should be formed to institute and enforce a strict code of
> conduct with zero tolerance for AI slop. Openness is important, and
> exclusionary dynamics must be avoided to the extent possible, but the
> integrity of the standardisation process and the important work done in
> this group depend on humanity, not artificiality.

Having followed web standards for about two decades, I think it's hard to
actually test this thesis.

While there may be aspects of style and taste that appeal more or less
subjectively, particularly around verbosity, it's useful to look at the
substance. LLMs have the advantage that they know most or all of the specs
inside out, due to their training. Most humans (with notable exceptions),
including on this list, have only a partial understanding of the complete
body of web standards. For example, most LLMs could put together a correct
RDF vocabulary, and it would be to a high standard. Most humans could not.

I sympathize with the view expressed, but I would ask how you would test
it when it comes down to quality. Personally I have seen an uptick in
quality, even if there is more to read.

Just my 2 cents.

> Sincerely,
>
> --
> Marcus Engvall
>
> Principal, M. Engvall & Co.
> mengvall.com
Received on Friday, 17 April 2026 23:51:10 UTC