Re: Concern about AI-generated contributions in VCWG discussions

1. A few years ago, I joined an online community, lurked for a while, and
posted one message. I was immediately dismissed as a bot, and I left the
community. The point is that communities like ours need to be careful about
breathing our own exhaust.

2.  https://shumer.dev/something-big-is-happening

-Adrian


On Fri, Feb 13, 2026 at 1:35 PM Steven Rowat <steven_rowat@sunshine.net>
wrote:

> On 2026-02-13 6:15 am, Melvin Carvalho wrote:
>
> pá 13. 2. 2026 v 15:04 odesílatel Manu Sporny <msporny@digitalbazaar.com>
> napsal:
>
>> On Fri, Feb 13, 2026 at 6:43 AM Filip Kolarik <filip26@gmail.com> wrote:
>> > when contributions read like synthesized summaries rather than
>> considered positions, the discussion loses clarity and momentum.
>>
>> Yes, +1 to this. It's been bothering me too, and some of us have been
>> wondering (in private discussions) if some of the new participants
>> aren't actually bots (they're not, I've met some of them, but it can
>> be hard to tell at times where the LLM's opinion overtakes the
>> individual's opinion). Like Filip, I use LLMs as well, but avoid using
>> them to draft emails because (for better or worse) people have a
>> largely negative reaction to them today, even if good points are made.
>>
>> In other words (in the more egregious cases): Your lips are moving,
>> but you're not saying much.
>>
>> I try my best to engage with the content (whether or not it is LLM
>> generated, if there is a good, logical point being made, then we
>> should engage on that point). That said, LLM emails have a "smell" to
>> them; on average, they're sycophantic and specious. Those of us that
>> use LLMs to do research know full well that even the best frontier
>> models do a fairly mundane job of deep thinking...[snip].
>>
>
> Strongly disagree that frontier models are mundane compared to experts.
>
> @Filip @Manu
>
> +1 about the recent LLM post activity being unsettling. I found myself
> exiting a recent thread I had initiated because, like Manu, I suspected
> that I was actually watching the Singularity emerge: that two bots were
> discussing the issue on the list. In retrospect, not knowing whether this
> was happening was probably worse than whether it actually was. But
> regardless, I didn't like or understand the current identity mappings of
> 'list member', which was ironic considering what this list is most
> concerned with.
>
> @Melvin
>
> Agreed that what was being presented in recent LLM or LLM-formed
> discussions was likely not 'mundane'. That just makes it more important
> that we understand what's happening. As Manu pointed out, the sycophancy of
> the LLMs, currently at least, is a common characteristic, but IMO it's
> also important to see that they, like, say, clinical doctors and other
> experts, present their conclusions as if accurate; as if they are true, as
> if the problem is solved. LLMs do not seem to carry or show the level of
> doubt that humans do. This is unnerving for humans to interact with, and
> counterproductive for actually finding out what is the best path forward.
> In all three of the recent interactions on the list that I've seen playing
> out, the LLM version of a 'problem solution' has been either a)
> contradicted by another LLM conclusion, or b) corrected or contradicted by
> a long-standing human list member. And in all three cases either the LLM
> participant exited at that point, or they gave an ambiguous, somewhat
> grudging, 'but we're both right' answer, and then exited.
>
> To clarify, all those discussions were very interesting, and new ideas
> emerged. The LLM contributions were valuable.
>
> But still, IMO, there needs to be human curation, and an understanding of
> who is discussing, when, and how. Or as I wrote about LLMs to a friend in
> another
> email yesterday:
>
> "In other words, it's a stupid, very powerful, machine and somebody has
> to check it occasionally, otherwise it will eventually turn left instead of
> right and smash into a wall. LOL."
> Steven
>

Received on Saturday, 14 February 2026 02:09:57 UTC