Re: LLMs and Agents usage in the CCG

Hi all,

I think it’s more useful to clearly define boundaries. LLMs can help with how
something is written, but they should not influence what is being decided.
Areas like security, cryptography, privacy, interoperability, core
architecture, and working group decisions should remain strictly
human-driven, because these require intent, accountability, and real
understanding.

At the same time, LLMs can be useful in a limited way—mainly for improving
language, fixing grammar, helping non-native English speakers express
themselves better, and structuring drafts. They can also help with examples
or explanations, but not with anything normative. As a general principle,
any information should be verified before being shared further, especially
when LLMs are involved. In all cases, there must be a human author who has
properly reviewed the content, understands it, and takes full
responsibility for it.

On the mailing list, keeping it human-only makes sense to avoid noise and
maintain meaningful discussion. I would lean toward Option 3, with one
addition: light disclosure of LLM assistance (for example, noting if it was
used for structuring or language), so the group can apply appropriate
scrutiny. LLMs and agents are tools we’ve built over time, and it’s
important we continue to use them in line with their intended role—as
assistants, not decision-makers—while keeping accountability firmly human.
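
To make the disclosure idea concrete: a one-line note at the end of a message
would be enough, along the lines of the following (purely illustrative, not a
proposed format):

  [LLM assistance: language and structure only; content reviewed and owned
  by the author]

Anything machine-readable can be layered on later if the group wants it; a
simple human-readable convention seems like a reasonable starting point.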


Regards
Amir Hameed Mir

On Thu, 9 Apr 2026 at 1:04 AM, Taylor Kendal <taylor@learningeconomy.io>
wrote:

> Great thread, and I appreciate Mahmoud and the chairs for facilitating
> this proactively. I want to build on what Daniel and Dmitri are raising, as
> I think there are a few important points underneath the surface-level
> policy question.
>
> The real issue isn't whether agents participate, but whether we can
> maintain accountability, provenance, and trust when they do. And that's a
> problem this community is arguably more qualified to address than almost
> any other group. The CCG has spent over a decade building the conceptual
> and technical foundations for exactly this kind of challenge (verifiable
> claims, delegation, HiTL trust chains). The question of *"who said this,
> on whose authority, and can I verify it?"* is what this group does/has
> always done.
>
> Dmitri's point about infra resonates. The gap between what we know needs
> to exist and what actually exists today is very real. It's not a
> theoretical gap, but one of resources and institutional commitment. Like
> many here, we work in this space every day: building open,
> standards-aligned credential infrastructure that can make things like
> verifiable delegation, accountable AI participation, and human-anchored
> trust practically possible, not just conceptually sound. We'd welcome the
> opportunity to inform or support the CCG in standing up that kind of infra
> if there's appetite for it.
>
> One other non-trivial dimension to flag: the CCG's archive (a decade+ of
> meeting transcripts, mailing list threads, spec work, etc.) represents an
> extraordinary knowledge base. If/when responsible AI tooling is applied to
> standards work, the quality and integrity of that contextual foundation
> matters A LOT. A well-structured, verifiable archive paired with
> responsible AI could meaningfully lower barriers to participation and
> strengthen institutional memory. Done poorly or without the trust
> scaffolding this
> group knows how to build, it just produces over-confident noise. Imho, that
> distinction is exactly why the CCG should lead this conversation, not just
> react to it.
>
> On the immediate policy question: Option 2 seems like the most pragmatic
> starting point, with the addition of Melvin's point about branch protection
> and review processes. Disclosure + human accountability ftw. Seems a good
> opportunity to dog-food our own work and use the credential and delegation
> models we've been developing to lead and solve this problem for ourselves
> first.
>
> On Wed, Apr 8, 2026 at 11:10 AM Dmitri Zagidulin <dzagidulin@gmail.com>
> wrote:
>
>> >  and how the CCG ought to lead out in requiring credentials of its
>> human and AI members.
>>
>> Daniel,
>> I think about this part every day. And my suspicion is -- the inability
>> of our CCG to do that rests on a lack of infrastructure resources.
>> (We'll put aside the other major obstacle, which is - folks are very
>> rightly concerned that there will be a big fight about the exact credential
>> serializations and protocols.)
>>
>> As it is, key community group infrastructure (like meeting auto-scribe
>> models) has been maintained by single companies or even individual
>> volunteers, at their own expense.
>>
>> And something like the ability to have human and AI membership
>> credentials -- and I agree that it's CRUCIAL for any kind of standards
>> group -- will require some basic infrastructure. Not a lot (some server
>> space, a domain or two). And even more importantly, institutional buy-in to
>> stand behind it. "Yes, this is the list of anchoring identity registries
>> we're keeping an eye on, etc."
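>>
>> Purely for illustration (and deliberately hand-wavy, given the
>> serialization concerns above), such a membership credential might look
>> roughly like an ordinary VC. Every name below is a placeholder, not a
>> proposal:
>>
>>   {
>>     "type": ["VerifiableCredential", "CCGMembershipCredential"],
>>     "issuer": "did:example:ccg-registry",
>>     "credentialSubject": {
>>       "id": "did:example:member-or-agent",
>>       "memberType": "human or ai-agent",
>>       "delegatedBy": "did:example:human-member (agents only)"
>>     }
>>   }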
>>
>> I don't know what can be done about that.
>>
>> Part of me is tempted to informally pass the hat around among CG
>> membership -- a minimal voluntary donation to set up this
>> infrastructure.
>> Another part knows that this is really in the realm of the standards body
>> itself, that W3C should provide it for all its CGs. And I don't have
>> enough institutional process knowledge of how to make this happen.
>>
>> But just wanted to flag that your statement really resonated with me.
>>

Received on Thursday, 9 April 2026 05:23:48 UTC