Re: Concern about AI-generated contributions in VCWG discussions

Melvin,
Thank you for taking the time. I’m not questioning the usefulness of LLMs;
my concern is about discussions being flooded with generic, AI-generated
synthesis. This is neither helpful nor productive, and over the long term it
effectively turns the mailing list into AIs talking to each other.

Use whatever tools you like to form your understanding, explore ideas, or
clarify your thinking, but please, don’t paste generated text to the group
as-is. Instead, focus on what you personally want to share, contribute, or
argue, and please value other people's time and expertise - after all, they
can use tools just as well as you!

Best regards,
Filip

On Fri, Feb 13, 2026 at 3:18 PM Melvin Carvalho <melvincarvalho@gmail.com>
wrote:

>
>
> On Fri, Feb 13, 2026 at 3:04 PM Manu Sporny <msporny@digitalbazaar.com>
> wrote:
>
>> On Fri, Feb 13, 2026 at 6:43 AM Filip Kolarik <filip26@gmail.com> wrote:
>> > when contributions read like synthesized summaries rather than
>> considered positions, the discussion loses clarity and momentum.
>>
>> Yes, +1 to this. It's been bothering me too, and some of us have been
>> wondering (in private discussions) if some of the new participants
>> aren't actually bots (they're not, I've met some of them, but it can
>> be hard to tell at times where the LLM's opinion overtakes the
>> individual's opinion). Like Filip, I use LLMs as well, but avoid using
>> them to draft emails because (for better or worse) people have a
>> largely negative reaction to them today, even if good points are made.
>>
>> In other words (in the more egregious cases): Your lips are moving,
>> but you're not saying much.
>>
>> I try my best to engage with the content (whether or not it is LLM
>> generated, if there is a good, logical point being made, then we
>> should engage on that point). That said, LLM emails have a "smell" to
>> them; on average, they're sycophantic and specious. Those of us that
>> use LLMs to do research know full well that even the best frontier
>> models do a fairly mundane job of deep thinking and deep research in
>> fields in which we are experts. They can find content, sure, but
>> they tend to misanalyze it because we're at the frontier and haven't
>> written down much of our tribal knowledge yet... and even if we did,
>> there are nuances that have significant outcomes that the LLMs just
>> don't synthesize into a cohesive narrative (yet).
>>
>
> Strongly disagree that frontier models are mundane compared to experts.
> Having worked closely with some great minds in the field over the decades
> (Manu, Dave Longley, Nathan Rixham, Roy Fielding, Tim Berners-Lee, Dan
> Brickley), I can say the models' output evaluates cleanly, and they are
> now catching things even the best group would miss. The frontier models
> are already well up to the standard of the best experts, and in some ways
> ahead because they have deeper knowledge. And they will only get better
> this year. You do have to ask them the right questions though (garbage in,
> garbage out), and that is where I hope experience will be an advantage.
> That said, every group will have to adapt to this rather than fight the
> ecosystem, and that's a balance that Manu and co. normally get right.
>
>
>>
>> But, man, they sure do draw pretty pictures, build prototypes quickly,
>> and speed up guided research.
>>
>> When used in this forum, what ends up happening is the experts
>> combating an asymmetric misinformation engine, where the volume of
>> things we need to dispel outstrips our ability to engage. We're all
>> busy, and the sheer volume of misguided statements in some of these
>> exchanges causes me to just throw my hands up and go: Well, hopefully
>> someone else will correct them on those fallacies.
>>
>> ... perhaps this is a generational thing, but I prefer to know that
>> the person I'm speaking with is fully engaged in thinking deeply about
>> what they're writing instead of having part of their thinking on
>> auto-pilot, done by a machine that is just auto-completing thoughts
>> based on some sort of sycophantic mean... and when I can't tell if
>> this is a human opinion, or an LLM opinion, I choose to not burn my
>> precious cycles engaging unless what is being said is a danger to the
>> work of this community.
>>
>> Food for thought for those of you that are using LLMs to engage in
>> discussion threads on this mailing list. It's being tolerated for now,
>> but it might be harming your ability to engage with the community in
>> the long term.
>>
>> -- manu
>>
>> --
>> Manu Sporny - https://www.linkedin.com/in/manusporny/
>> Founder/CEO - Digital Bazaar, Inc.
>> https://www.digitalbazaar.com/
>>
>>

Received on Friday, 13 February 2026 14:39:56 UTC