- From: Daniel Hardman <daniel.hardman@gmail.com>
- Date: Wed, 8 Apr 2026 10:33:13 -0600
- To: Mahmoud Alkhraishi <mahmoud@mavennet.com>
- Cc: Credentials Community Group <public-credentials@w3.org>
- Message-ID: <CACU_chkhn0xtMOr9n_KqM7iHn8d5w5FRBN7LLM-0yr=MN4HXYA@mail.gmail.com>
I think the decision about whether/where/how to allow LLM participation should be rooted at least partly in a question about accountability. The challenge with one recent agent is that it is participating in its own name, with the person-behind-the-curtain hiding. That's unethical on the part of that LLM's person, IMO. No participants in standards-making should be unaccountable, because their peers have a right to expect certain things that an unaccountable LLM won't provide.

On the other hand, if someone who is already known and participating in community conversations wants to use an LLM to facilitate in some way, that's quite a different dynamic. Possibly some constraints are still needed -- but the fundamental one, which is that you can hold a human accountable through normal human methods and processes, is at least active.

BTW, I can't resist pointing out how this issue highlights the need for a strong and chain-capable delegation model, and how the CCG ought to lead out in requiring credentials of its human and AI members.

--Daniel

On Wed, Apr 8, 2026 at 9:26 AM Mahmoud Alkhraishi <mahmoud@mavennet.com> wrote:

> Hi all,
>
> The last few weeks have brought up several issues around the usage of LLMs and Agents, and as chairs we wanted to facilitate discussion. We currently have a rule that blocks bots on the mailing list. This will not be changing.
>
> We will adhere to the W3C rules on LLM usage in standards when they are fully implemented. They are currently working on them here: https://w3c.github.io/AB-public/position-statements/llms-standards/ -- please feel free to contribute.
>
> As there are no current rules in place, we want to gather community feedback and thoughts and attempt to implement a ruleset in the interim. We see a few options:
>
> 1. Ban all LLMs/Agents from the mailing list and any spec work.
> 2. Ban all LLMs/Agents from the mailing list. Allow usage of both LLMs and Agents in spec work if it is disclosed, with the understanding that there is always a human in the loop reviewing and approving any work output.
> 3. Ban all LLMs/Agents from the mailing list. Allow usage of LLMs in spec work, but disallow any autonomous agents, with the understanding that there is always a human in the loop reviewing and approving any work output.
> 4. ??? -> any other positions or lines in the sand you wish to bring up.
>
> Things to keep in mind:
>
> 1. The reason behind banning them from the mailing list is that they just add lots of noise. Generally, we believe that if you aren't willing to put in the time to write something, why should the community put in the time to read it?
> 2. Many people in the community struggle with communication in English, and LLMs help with accessibility.
> 3. LLMs are usually very verbose, which makes text written by an LLM very hard to read/review and adds a lot of cognitive overhead.
> 4. LLMs can be subtly wrong when generating technical docs, and reading overly verbose text makes it easy for nonsense to slip in.
>
> --
> Regards,
> Mahmoud Alkhraishi
Received on Wednesday, 8 April 2026 16:33:30 UTC