Re: LLMs and Agents usage in the CCG

po 13. 4. 2026 v 15:57 odesílatel Manu Sporny <msporny@digitalbazaar.com>
napsal:

> On Mon, Apr 13, 2026 at 9:18 AM Juan Casanova
> <juan.casanova.undeceiver@gmail.com> wrote:
> > I have been reading this list for a couple of months, but this is the
> first time I participate.
>
> Welcome to the list, Juan. :) We're very happy to have you and your
> input on the important work being done here.
>
> > Thanks for your time. I find it funny because my tendency to long
> messages and being a first time participant might make others feel like I
> am the LLM here, arguing for disclosure of LLM use :P . No LLMs were used
> in making this email, for what it's worth.
>
> FWIW, your response felt strongly human to me... and I found myself
> nodding along in agreement with much of it.
>
> Amir wrote:
> > On the mailing list, keeping it human-only makes sense to avoid noise
> and maintain meaningful discussion. I would lean toward Option 3, with one
> addition: light disclosure of LLM assistance (for example, noting if it was
> used for structuring or language), so the group can apply appropriate
> scrutiny. LLMs and agents are tools we’ve built over time, and it’s
> important we continue to use them in line with their intended role—as
> assistants, not decision-makers—while keeping accountability firmly human.
>
> What Amir wrote above resonates strongly with me. As I'm sure with a
> number of you, my LLM use has been climbing steadily over the last
> several years. It was cute at first, but there are some things I'm
> doing with the various models today (mostly code / theory analysis and
> refactoring) that I would be unable to do on my own (on the scale at
> which I am doing it).
>
> There are some work products where I feel like the output is mostly
> mine (ideas, architecture, theory, etc.)... but there was very heavy
> LLM usage (perhaps more than I'm giving the LLM credit for). The lines
> are blurred for me... email is one place where I definitely do not use
> LLMs... but specs, and code are certainly becoming increasingly
> blurred and graphics/illustrations are an almost complete outsourcing
> to LLMs.
>
> I don't disclose that I use compilers, linters, macros, spell
> checkers, scripts, and code coverage tools... and I'm pretty sure
> we're going to eventually stop disclosing usage of LLMs as they
> increasingly exceed human capabilities. I think we're all concerned
> about others that outsource their thinking to LLMs, but perhaps that
> could be better than what we have today, which is some outsourcing
> their thinking to institutions that don't have their best interests at
> heart? Perhaps this is the dawn of personalized thinking? And yes, it
> is easy for that to slide into personalized coercion.
>
> ... but for now, I think I'm ok w/ mentioning when I use them when I
> think others need to be aware (mostly because I'm concerned about
> accidental slop, even though I've reviewed it multiple times, and I
> need other humans to help me keep what's produced in check). I also
> don't want others to feel what I feel when I realize I'm reading
> LLM-generated content without being warned -- it feels like a lie by
> omission; a minor betrayal -- and I have to reset the context in which
> I'm reading the work, usually going all the way back to the beginning
> and reading it knowing that it's LLM generated (which feels like it
> takes much more effort to catch the subtle errors that make the entire
> argument/architecture/theory fall apart).
>
> From a concrete standpoint, this means disclosing medium-to-heavy LLM
> usage in pull requests for specs and code. I'm not sure if I care if
> LLM usage is disclosed when clearing up spelling, grammar, and flow as
> long as the original content/concepts were written by a human.
>
> Just thinking out loud, not suggesting any particular direction --
> just mostly trying to get feedback from others so the Chairs can
> establish a coherent policy for the community.
>

A small point: in some countries, the creative process is protected by privacy law.

AI, like any tool, can be a force for good or a time sink. I think moderation is
key, and there is no single rule that fits every group.

What I tend to do is formulate ideas myself, then use AI as a grammar tool to
refine the wording for mailing lists. AI is also increasingly good at
understanding the nuances of standards work, where humans may not be as
detail-oriented.

We could adopt a convention suggesting that contributions be kept relatively
brief.


>
> -- manu
>
> --
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> https://www.digitalbazaar.com/
>
>

Received on Monday, 13 April 2026 14:07:49 UTC