Re: Concern about AI-generated contributions in VCWG discussions

Based on a year of intensive experience using LLMs for coding and for
helping a group edit a wonderful book by a deceased author, my gut feeling
is that we're underestimating the role generative AI could play in this
group.

State-of-the-art LLMs excel at:
- organizing and summarizing the text these workgroups generate
- cutting through redundancy
- finding relevant information from outside the group

To get these benefits:
- the archives of this group and related groups must be available as
context to the LLMs
- each participant should have access to an LLM to help craft their
contributions
- we would have the LLMs produce and maintain working demonstrations
of the current state

This is just my observer perspective. I am not a principal in the
discussions and do not represent any commercial interest in the outcome. I
just want to get something I can use and demonstrate.

Adrian



On Sun, Feb 22, 2026 at 4:52 PM Manu Sporny <msporny@digitalbazaar.com>
wrote:

> On Fri, Feb 20, 2026 at 1:24 PM Moses Ma
> <moses.ma@futurelabconsulting.com> wrote:
> > 25% of readers misidentified the LLM output as human-generated.
>
> Interesting, I thought it would be higher... closer to 40%; I had a
> hard time telling the difference (there was a spelling/grammar anomaly
> and a Moses-ism that tipped me off).
>
> > Here's my point: if AI materially improves clarity, structure, rigor, or
> speed—and you choose not to use it—are you protecting integrity, or just
> degrading output for the sake of red tape?
> > The bar is: did it advance understanding? was it effective in creating a
> breakthrough?
> > Anyway, that’s my position. As a group, let’s optimize for better
> work—not nostalgia for the way it used to be. Those days aren’t coming
> back, people!
>
> Hmm, I would be surprised if anyone were arguing against those points.
> IOW, I agree.
>
> The issue that I'm trying to point out is the asymmetry and the effect
> it has on engagement. Someone sloppily creates prose in 5 minutes that
> takes 30 minutes to wade through, only for readers to find critical
> flaws in the reasoning... or maybe there are no flaws at all, but it
> doesn't engage with the point of contention... or it really doesn't
> add much to the discussion (but takes forever to wade through)...
> well, the most likely outcome is that people stop engaging with you.
>
> It's the same as watching a talking head go on for an hour about a
> topic you were interested in, only to find out that they haven't
> really said anything of substance over the past hour and have wasted
> your time. Time you could have used to be more productive. AI-written
> responses weaponize that pattern.
>
> In any case, I do think we've come to some sort of shared
> understanding here -- it's fine to use AI, but make sure you stand
> behind what it is saying, and make sure it's actually saying something
> of substance instead of being a zero-calorie sycophantic embellishment
> engine. :P
>
> -- manu
>
> --
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> https://www.digitalbazaar.com/
>
>

Received on Sunday, 22 February 2026 23:29:21 UTC