- From: Melvin Carvalho <melvincarvalho@gmail.com>
- Date: Sat, 18 Apr 2026 09:04:22 +0200
- To: "Eduardo C." <e.chongkan@gmail.com>
- Cc: Marcus Engvall <marcus@engvall.email>, W3C Credentials Community Group <public-credentials@w3.org>
- Message-ID: <CAKaEYhKPxmDsDsQKt=9M11j41vne_tUrwoqZFBxvvSu6L2pkAg@mail.gmail.com>
On Sat, 18 Apr 2026 at 5:39, Eduardo C. <e.chongkan@gmail.com> wrote:

> "I find it difficult to trust a contribution in this group if it has been generated by an LLM"
>
> A- I wonder how everyone can tell if something was written by an LLM? Aside from the now infamous "--" here and there that it uses, how can you guys tell? (How do you know it is not a Grammarly plugin?)
>
> B- Also wondering if the embedded Gemini would detect whether an email or text was generated by an LLM. And more importantly, detect slop in that email or content. E.g. I normally use two different LLMs, Gemini + Claude, to do manual adversarial checks on each other's outputs and analysis, and they usually find improvements or catches, and I also find deviations and correct the alignment.
>
> C- Most slop happens when one is researching or asking for things that are not in the model itself. E.g. you ask for a certain uncommon thing and the models, all of them, keep gravitating towards what the probability says they should answer. One needs to be aware of that.

LLM content is reasonably easy to identify, as many signals are inserted by default.

If we consider content on the internet over the last 2-3 years, it has gone from small LLM contributions to majority LLM content. I see the same happening with standards as LLMs get smarter. IMHO we are in the last phase of human-authored standards, and LLMs will end up producing the majority of content in standards.

But that's nothing to fear. It just means we get things over the line faster and at a higher quality than ever before. The standards that went before will be building blocks for what comes next. What humans will be able to do is manage the complexity budget, present use cases, and help standards work gain adoption.

> BTW, I agree with Michael Herman 100%.
>
> --
> Eduardo Chongkan
>
> On Fri, Apr 17, 2026 at 4:43 PM Marcus Engvall <marcus@engvall.email> wrote:
>
>> Hi all,
>>
>> I have been a passive observer of the CCG and have found the discussions in this group to be remarkably considered, professional, and above all else clear in both intent and direction. I hesitate to comment on the current state of the mailing list, as my tenure is minuscule compared to some of my brilliant co-participants, but the quality of recent contributions has compelled me to share some thoughts.
>>
>> Standards work is fundamentally a rigorous process of deriving a synthesis of human knowledge and judgement through healthy debate and, particularly in this group, decentralised knowledge discovery. It is precisely this provenance of consideration that establishes the basis of trust necessary for the voluntary adoption of standards. Without trust, there is no standard. It follows that preserving the integrity of the standardisation process is existential for any group working on standards.
>>
>> AI has improved the accessibility of standardisation to a larger and more diverse group of participants, which is incredibly valuable and should be encouraged. However, it should not come at the cost of compromising the integrity of the process itself, something I fear is happening in this group.
>>
>> Many recent contributions on this mailing list bear the hallmarks of LLM generation. To be clear, it is my view that there is nothing wrong with using AI agents to assist with research, proofreading, and other similar tasks.
>> I use these tools every day professionally, and their value is undeniable. That said, they are not replacements for human judgement, a view I think is shared by most people in this group.
>>
>> I find it difficult to trust a contribution in this group if it has been generated by an LLM, and it is becoming increasingly difficult to follow discussions as they seem to inevitably degenerate into chatbots arguing with each other. Inferring the direction of standardisation, which has a direct impact on commercial and technical planning, becomes impossible. I find it quite ironic that the recent thread discussing LLMs and agents in the CCG contains responses that appear themselves to have been generated by an AI. If anything, that is proof enough of how acute this problem is.
>>
>> There is also the somewhat primal and adversarial aspect of evaluating human judgement and reaching consensus. A debate is a contest between two humans arguing for their positions, which presupposes real agency and, well, humanity. An AI agent is not, and will never be, a real human, and nobody wants to credibly evaluate the arguments of a robot.
>>
>> I am not sure what the solution is, but I feel that the effects are severe and will almost certainly discourage participants from contributing, the downstream consequences of which I think are clear to everyone.
>>
>> I would like to close out this lengthy email with this: I think a serious discussion should be opened to consider migrating to a discussion channel that is more resistant to AI agents, or at least that consensus be formed to institute and enforce a strict code of conduct with zero tolerance for AI slop. Openness is important, and exclusionary dynamics must be avoided to the extent possible, but the integrity of the standardisation process and the important work done in this group depend on humanity, not artificiality.
>>
>> Sincerely,
>>
>> --
>> Marcus Engvall
>>
>> Principal, M. Engvall & Co.
>> mengvall.com
Received on Saturday, 18 April 2026 07:04:38 UTC