Re: Concern about AI-generated contributions in VCWG discussions

On Fri, Feb 13, 2026 at 6:43 AM Filip Kolarik <filip26@gmail.com> wrote:
> when contributions read like synthesized summaries rather than considered positions, the discussion loses clarity and momentum.

Yes, +1 to this. It's been bothering me too, and some of us have been
wondering (in private discussions) whether some of the new participants
are actually bots (they're not; I've met some of them, but it can
be hard to tell at times where the LLM's opinion overtakes the
individual's). Like Filip, I use LLMs as well, but avoid using
them to draft emails because (for better or worse) people have a
largely negative reaction to them today, even when good points are made.

In other words (in the more egregious cases): Your lips are moving,
but you're not saying much.

I try my best to engage with the content (whether or not it is LLM
generated; if there is a good, logical point being made, then we
should engage on that point). That said, LLM emails have a "smell" to
them; on average, they're sycophantic and specious. Those of us who
use LLMs to do research know full well that even the best frontier
models do a fairly mundane job of deep thinking and deep research in
fields in which we are experts. They can find content, sure, but
they tend to misanalyze it because we're at the frontier and haven't
written down much of our tribal knowledge yet... and even if we had,
there are nuances with significant outcomes that the LLMs just
don't synthesize into a cohesive narrative (yet).

But, man, they sure do draw pretty pictures, build prototypes quickly,
and speed up guided research.

When LLMs are used in this forum, what ends up happening is that the
experts are left combating an asymmetric misinformation engine, where
the volume of things we need to dispel outstrips our ability to
engage. We're all busy, and the sheer volume of misguided statements
in some of these exchanges causes me to just throw my hands up and go:
well, hopefully someone else will correct them on those fallacies.

... Perhaps this is a generational thing, but I prefer to know that
the person I'm speaking with is fully engaged in thinking deeply about
what they're writing, instead of having part of their thinking on
auto-pilot, done by a machine that is just auto-completing thoughts
based on some sort of sycophantic mean. And when I can't tell whether
something is a human opinion or an LLM opinion, I choose not to burn
my precious cycles engaging unless what is being said is a danger to
the work of this community.

Food for thought for those of you who are using LLMs to engage in
discussion threads on this mailing list. It's being tolerated for now,
but it might be harming your ability to engage with the community in
the long term.

-- manu

-- 
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
https://www.digitalbazaar.com/

Received on Friday, 13 February 2026 14:01:19 UTC