Re: LLMs and Agents usage in the CCG

Hello everyone,

I have been reading this list for a couple of months, but this is the first
time I have participated; I am not even sure whether this message will make
it past the list. Perhaps I should introduce myself properly at a later
time, but I wanted to bring up something about this subject that I feel has
not been said very clearly, and that matters not just for this list but for
the broader discussion (as many others have mentioned), which is one of the
main reasons that brought me to this community to begin with.

While most people seem to agree that *accountability* is something LLMs
cannot have at the same level as a human, and that accountability is
important for the correct functioning of interactions like this one (and
therefore that a human should always be responsible), what seems more
controversial is whether LLM use should be *disclosed*. Some people argue
that the tools a person uses to craft their responses do not matter as long
as they take accountability for the result, and I completely understand
that point of view and believe it is "true" and important to a large
degree. Indeed, I don't think people must disclose every tool they use, and
there are multiple reasons why they might not want to.

However, I think there is an important reason why disclosure is so often
asked for or required, one that has not been clearly articulated. When you
speak to a human, there are certain assumptions you can safely make about
how they work. Some are questionable, such as whether they have "common
sense" or "good faith"; not all humans have these. But others are so basic
that we forget we make them at all. These include *finite energy and
time*, *self-preservation* (arguably some humans lack it, but that is very
extreme), a *human experience of the real world*, and a certain level of
inherent *identity* they cannot throw away (i.e. even if they try to hide
it, a human can be held accountable because they have identifiable elements
they cannot easily get rid of). LLMs do not have these, or at the very
least do not have them in the same way a human does. And if they did, those
properties would be artificially added rather than inescapable. With
humans, we *know* that these things are true; with LLMs, at best we can
hope. This changes things, in ways others have already brought up: an LLM
can generate virtually infinite amounts of "stuff" for others to read, very
quickly, at very little cost. This is not a concern we have with humans,
simply because of their physical limitations. Similar arguments can and
have been made about how this affects accountability and traceability.

In my opinion, these issues cannot simply be overcome by having a human
delegate or take accountability for the responses. A human without an LLM
cannot produce infinite amounts of content that is hard to distinguish from
human content, even if they wanted to. A human without an LLM will very
rarely produce large amounts of content that is *almost right* but not
*quite right*, because their own human experience of the real world, and
the technical barrier to posting on the list in the first place, severely
constrain which humans post on the list and what their expectations are. A
human *with* an LLM, even one who takes responsibility, can suddenly
overcome these barriers, whether on purpose or, much more likely, by
accident, and change the game. This is where disclosure really helps: it
lets readers know that these assumptions do not hold, and puts them on the
lookout for large amounts of information that is not entirely right,
vigilance that would not be nearly as necessary when reading a human
response written without LLM support. Humans using LLMs without disclosure
may cause others to spend large amounts of energy deciphering complex
messages that contain subtle errors. In turn, people would end up doing
this with every message they interact with, which would make interaction
more costly and drive people to withdraw from conversations out of
self-preservation. With disclosure, people can treat purely human messages
differently from messages created with LLM support, and therefore interact
more effectively.

This does not mean I think LLM disclosure should always be mandatory. There
are good reasons why it is unnecessary or even counterproductive on certain
occasions, some of which have been discussed. But I do think it is
important to challenge the argument that disclosure is completely
unnecessary; it is not. Even when a real, identifiable human takes full
accountability, disclosure of LLM use may still be important for productive
conversations and constructive interactions. Deciding when, what, and how
that disclosure needs to happen is nuanced, and I don't have all the
answers, but I do strongly feel that discarding disclosure entirely is not
the right call at all.

A basic suggestion from me would be to require disclosure of LLM use when
actual *content* has been taken from an LLM, but not when it is used only
for language aspects (translation, grammar checks, etc.).

By the way, I think this problem is analogous to the difference between,
for example, generated video and real video, and it is why I think
generated video should almost always be labelled: it changes the
assumptions people make when looking at it, in ways that matter for the
continued productivity of interactions. I joined this list hoping that this
community would have tools, or would be in the process of creating tools,
to help with these problems, among other things.

Thanks for your time. I find it funny that my tendency toward long messages,
and my being a first-time participant, might make others suspect that I am
the LLM here, arguing for disclosure of LLM use :P . No LLMs were used in
making this email, for what it's worth.

Sincerely,
Juan Casanova.

On Thu, 9 Apr 2026 at 06:28, Amir Hameed <amsaalegal@gmail.com> wrote:

> Hi all,
>
> I think it’s more useful to clearly define boundaries. LLMs can help in
> how something is written, but they should not influence what is being
> decided. Areas like security, cryptography, privacy, interoperability, core
> architecture, and working group decisions should remain strictly
> human-driven, because these require intent, accountability, and real
> understanding.
>
> At the same time, LLMs can be useful in a limited way—mainly for improving
> language, fixing grammar, helping non-native English speakers express
> themselves better, and structuring drafts. They can also help with examples
> or explanations, but not with anything normative. As a general principle,
> any information should be verified before being shared further, especially
> when LLMs are involved. In all cases, there must be a human author who has
> properly reviewed the content, understands it, and takes full
> responsibility for it.
>
> On the mailing list, keeping it human-only makes sense to avoid noise and
> maintain meaningful discussion. I would lean toward Option 3, with one
> addition: light disclosure of LLM assistance (for example, noting if it was
> used for structuring or language), so the group can apply appropriate
> scrutiny. LLMs and agents are tools we’ve built over time, and it’s
> important we continue to use them in line with their intended role—as
> assistants, not decision-makers—while keeping accountability firmly human.
>
>
> Regards
> Amir Hameed Mir
>
> On Thu, 9 Apr 2026 at 1:04 AM, Taylor Kendal <taylor@learningeconomy.io>
> wrote:
>
>> Great thread, and I appreciate Mahmoud and the chairs for facilitating
>> this proactively. I want to build on what Daniel and Dmitri are raising as
>> I think there are a few important points underneath the surface-level
>> policy question.
>>
>> The real issue isn't whether agents participate, but whether we can
>> maintain accountability, provenance, and trust when they do. And that's a
>> problem this community is arguably more qualified to address than almost
>> any other group. The CCG has spent over a decade building the conceptual
>> and technical foundations for exactly this kind of challenge (verifiable
>> claims, delegation, HiTL trust chains). The question of *"who said this,
>> on whose authority, and can I verify it?"* is what this group does/has
>> always done.
>>
>> Dmitri's point about infra resonates. The gap between what we know needs
>> to exist and what actually exists today is very real. It's not a
>> theoretical gap, but one of resources and institutional commitment. Like
>> many here, we work in this space every day: building open,
>> standards-aligned credential infrastructure that can make things like
>> verifiable delegation, accountable AI participation, and human-anchored
>> trust practically possible, not just conceptually sound. We'd welcome the
>> opportunity to inform or support the CCG in standing up that kind of infra
>> if there's appetite for it.
>>
>> One other non-trivial dimension to flag: the CCG's archive (a decade+ of
>> meeting transcripts, mailing list threads, spec work, etc.) represents an
>> extraordinary knowledge base. If/when responsible AI tooling is applied to
>> standards work, the quality and integrity of that contextual foundation
>> matters A LOT. A well-structured, verifiable archive paired with
>> responsible AI could meaningfully lower barriers to participation and
>> institutional memory. Done poorly or without the trust scaffolding this
>> group knows how to build, it just produces over-confident noise. Imho, that
>> distinction is exactly why the CCG should lead this conversation, not just
>> react to it.
>>
>> On the immediate policy question: Option 2 seems like the most pragmatic
>> starting point, with the addition of Melvin's point about branch protection
>> and review processes. Disclosure + human accountability ftw. Seems a good
>> opportunity to dog-food our own work and use the credential and delegation
>> models we've been developing to lead and solve this problem for ourselves
>> first.
>>
>> On Wed, Apr 8, 2026 at 11:10 AM Dmitri Zagidulin <dzagidulin@gmail.com>
>> wrote:
>>
>>> >  and how the CCG ought to lead out in requiring credentials of its
>>> human and AI members.
>>>
>>> Daniel,
>>> I think about this part every day. And my suspicion is -- the inability
>>> of our CCG to do that rests in a lack of infrastructure resources.
>>> (We'll put aside the other major obstacle, which is - folks are very
>>> rightly concerned that there will be a big fight about the exact credential
>>> serializations and protocols.)
>>>
>>> As it is, key community group infrastructure (like meeting auto-scribe
>>> models) has been maintained by single companies or even single individual
>>> volunteers, at their own expense.
>>>
>>> And something like the ability to have human and AI membership
>>> credentials -- and I agree that it's CRUCIAL for any kind of standards
>>> groups -- will require some basic infrastructure. Not a lot (some server
>>> space, a domain or two). And even more importantly, institutional buy-in to
>>> stand behind it. "Yes, this is the list of anchoring identity registries
>>> we're keeping an eye on, etc"
>>>
>>> I don't know what can be done about that.
>>>
>>> Part of me is tempted to informally pass around the hat, among CG
>>> membership, a minimal voluntary membership donation, to set up this
>>> infrastructure.
>>> Another part knows that this is really in the realm of the standards
>>> body itself, that W3C should provide it for all their CGs. And I don't have
>>> enough institutional process knowledge of how to make this happen.
>>>
>>> But just wanted to flag that your statement really resonated with me.
>>>

Received on Monday, 13 April 2026 13:15:41 UTC