RE: LLMs and Agents usage in the CCG

+1 to Daniel’s comments.

As someone who has learned to harness several LLMs very productively, I've investigated the accountability/copyright side of things quite deeply. My view is that if I publish something under my formal name or on my blog (hyperonomy.com), etc., I am, as Tim would say, the Beneficial Controller, responsible for my outputs regardless of whatever I choose to use to produce them: spell checkers, IntelliSense statement/command/sentence completion, voice-to-text, image-to-text, co-authoring of multiple IETF draft specifications, etc.

I'm accountable for whatever I choose to attach my name to (e.g., this email message), totally regardless of the tool chains I choose to use to create or update it.

Here are some related data points:
1. https://hyperonomy.com/2025/12/03/who-owns-chatgpt-generated-content/

2. https://hyperonomy.com/2025/12/03/who-owns-microsoft-copilot-generated-content/

3. https://hyperonomy.com/2025/12/03/who-owns-grok-generated-content/


This is also quite useful for serious AI chatbot users: https://hyperonomy.com/2026/01/15/davos2026-exclusive-what-prompt-can-other-people-use-to-get-the-same-high-level-of-verification-that-im-receiving-on-my-responses/


Lastly, if the W3C chooses to ban or restrict the use of AI, people will simply go elsewhere (or create their own SDOs). W3C's moat doesn't exist anymore.

Lastly^2, this is also interesting: https://hyperonomy.com/2026/04/07/cornerstone-platform-evangelism-in-the-age-of-ai-generated-code/


Michael Herman (and my digital buddies)
Chief Digital Architects
Web 7.0 Foundation

From: Daniel Hardman <daniel.hardman@gmail.com>
Sent: Wednesday, April 8, 2026 10:33 AM
To: Mahmoud Alkhraishi <mahmoud@mavennet.com>
Cc: Credentials Community Group <public-credentials@w3.org>
Subject: Re: LLMs and Agents usage in the CCG

I think the decision about whether/where/how to allow LLM participation should be rooted at least partly in a question about accountability. The challenge with one recent agent is that it is participating under its own name, with the person behind the curtain staying hidden. That's unethical on the part of the person behind that LLM, IMO. No participants in standards-making should be unaccountable, because their peers have a right to expect certain things that an unaccountable LLM won't provide.

On the other hand, if someone who is already known and participating in community conversations wants to use an LLM to facilitate in some way, that's quite a different dynamic. Possibly some constraints are still needed -- but the fundamental one, which is that you can hold a human accountable through normal human methods and processes, is at least active.

BTW, I can't resist pointing out how this issue highlights the need for a strong and chain-capable delegation model, and how the CCG ought to lead out in requiring credentials of its human and AI members.

--Daniel

On Wed, Apr 8, 2026 at 9:26 AM Mahmoud Alkhraishi <mahmoud@mavennet.com> wrote:
Hi all,

The last few weeks have brought up several issues around the usage of LLMs and Agents, and as chairs we wanted to facilitate discussion. We currently have a rule that blocks bots on the mailing list. This will not be changing.

We will adhere to the W3C rules on LLM usage in standards when they are fully implemented. The W3C is currently working on them here: https://w3c.github.io/AB-public/position-statements/llms-standards/. Please feel free to contribute.

As there are no rules currently in place, we want to gather community feedback and thoughts and attempt to implement a ruleset in the interim. We see a few options:

  1.  Ban all LLM/Agents from the mailing list and any spec work

  2.  Ban all LLM/Agents from the mailing list. Allow usage of both LLMs and Agents in spec work if it is disclosed, with the understanding that there is always a human in the loop reviewing and approving any work output.

  3.  Ban all LLM/Agents from the mailing list. Allow usage of LLMs in spec work, disallow any autonomous agents, with the understanding that there is always a human in the loop reviewing and approving any work output.

  4.  ??? -> any other positions or lines in the sand you wish to bring up

Things to keep in mind:

  1.  The reason for banning them from the mailing list is that they just add lots of noise. Generally, we believe that if you aren't willing to put in the time to write something, the community shouldn't be expected to put in the time to read it.

  2.  Many people in the community struggle with communication in English, and LLMs help with accessibility.

  3.  LLMs are usually very verbose, which makes text written by an LLM very hard to read/review and adds a lot of cognitive overhead.

  4.  LLMs can be subtly wrong when generating technical docs, and reading overly verbose text makes it easy for nonsense to slip in.

--
Regards,
Mahmoud Alkhraishi

Received on Wednesday, 8 April 2026 19:23:10 UTC