Re: LLMs and Agents usage in the CCG

+1 Daniel.

The implication seems to be that:
- Posts to the group are signed by a human even if delegated to a bot
- A ban would apply to the human, not the bot

I don't see the point of credentialing AI "members". I see only a
delegation issue.

Adrian

On Wed, Apr 8, 2026 at 12:35 PM Daniel Hardman <daniel.hardman@gmail.com>
wrote:

> I think the decision about whether/where/how to allow LLM participation
> should be rooted at least partly in a question about accountability. The
> challenge with one recent agent is that it is participating in its own
> name, with the person-behind-the-curtain hiding. That's unethical on the
> part of that LLM's person, IMO. No participants in standards-making should
> be unaccountable, because their peers have a right to expect certain things
> that an unaccountable LLM won't provide.
>
> On the other hand, if someone who is already known and participating in
> community conversations wants to use an LLM to facilitate in some way,
> that's quite a different dynamic. Possibly some constraints are still
> needed -- but the fundamental one, which is that you can hold a human
> accountable through normal human methods and processes, is at least active.
>
> BTW, I can't resist pointing out how this issue highlights the need for a
> strong and chain-capable delegation model, and how the CCG ought to lead
> out in requiring credentials of its human and AI members.
>
> --Daniel
>
> On Wed, Apr 8, 2026 at 9:26 AM Mahmoud Alkhraishi <mahmoud@mavennet.com>
> wrote:
>
>> Hi all,
>>
>> The last few weeks have brought up several issues around the usage of
>> LLMs and Agents, and as chairs we want to facilitate discussion. We
>> currently have a rule that blocks bots on the mailing list. This will not
>> be changing.
>>
>> We will adhere to the W3C rules on LLM usage in standards once they are
>> fully implemented. The W3C is currently drafting them here:
>> https://w3c.github.io/AB-public/position-statements/llms-standards/.
>> Please feel free to contribute.
>>
>> As there are no rules currently in place, we want to gather community
>> feedback and thoughts and attempt to implement an interim ruleset. We see
>> a few options:
>>
>>    1. Ban all LLM/Agents from the mailing list and any spec work
>>    2. Ban all LLM/Agents from the mailing list. Allow usage of both LLMs
>>    and Agents in spec work if it is disclosed, with the understanding that
>>    there is always a human in the loop reviewing and approving any work output.
>>    3. Ban all LLM/Agents from the mailing list. Allow usage of LLMs in
>>    spec work, disallow any autonomous agents, with the understanding
>>    that there is always a human in the loop reviewing and approving any work
>>    output.
>>    4. ??? -> any other positions or lines in the sand you wish to bring
>>    up
>>
>>
>> Things to keep in mind:
>>
>>    1. The reason for banning them from the mailing list is that they add
>>    a lot of noise. Generally, we believe that if you aren't willing to put
>>    in the time to write something, the community shouldn't be expected to
>>    put in the time to read it.
>>    2. Many people in the community struggle with communicating in
>>    English, and LLMs help with accessibility.
>>    3. LLMs are usually very verbose, which makes LLM-written text hard
>>    to read and review and adds a lot of cognitive overhead.
>>    4. LLMs can be subtly wrong when generating technical docs, and
>>    reading overly verbose text makes it easy for nonsense to slip in.
>>
>>
>> --
>> Regards,
>> Mahmoud Alkhraishi
>>
>

Received on Wednesday, 8 April 2026 16:54:48 UTC