Re: Concern about AI-generated contributions in VCWG discussions

Hi everyone,

Daniel is right: the results so far are very interesting. I won't spoil 
it by saying any more.
Please participate in the experiment:
https://forms.gle/42mWD8HAouAhM9kVA

Moses



On 2/14/26 12:52 PM, Daniel Hardman wrote:
> Good experiment, Moses. I'll be very curious to see the results.
>
> FWIW, I have been trying to hold myself to the following standard: 
> https://dhh1128.github.io/papers/ai-coca.html
>
> The part that I'm not sure about is: "I will acknowledge the AI’s 
> contribution according to the reasonable expectations of my audience." 
> Are "the reasonable expectations of my audience" shifting? Should 
> they? On several technical papers I've written recently, I've found 
> that AI helps me articulate complex ideas a lot faster than I could do 
> on my own -- but the ideas are still entirely from me, and I take 
> responsibility for them. Even the AI's language is highly constrained 
> by the guidelines I give, the other content I have written as starters 
> for the piece, and by my own substantial post-edits. So I haven't 
> credited an AI. Was this the right call? I'm not sure. Here is how I 
> described the contributions of myself and AIs on a recent album I 
> released with suno's help (where my contribution was mostly as a 
> lyricist): https://sivanea.com/ai-collab. When I shared the album with 
> FB friends (along with the same caveats about how much of the album 
> came from me versus AI), one of my friends who's a musician told me he 
> felt discouraged because he had worked for years to play the guitar as 
> nicely as the guitar riff in one of my songs -- it didn't matter that 
> I was transparent; it still felt uncomfortable. And I get it. He's not 
> wrong. And yet, I feel like I found a creative outlet that was 
> meaningful and absolutely represents my own investment and 
> personality, too...
>
> I guess this is something we'll be wrestling with for a while...
>
> On Sat, Feb 14, 2026 at 1:19 PM Moses Ma 
> <moses.ma@futurelabconsulting.com> wrote:
>
>     Hi all,
>
>     Just as an experiment, I’m providing two responses, one written
>     organically and the other generated by an LLM. Please vote on
>     which one you think is human-generated. Doing this allows me to
>     explore the nature of human- versus AI-generated content.
>
>     Vote here: https://forms.gle/42mWD8HAouAhM9kVA
>
>     I'll reply with the survey results in about a week.
>
>     Moses
>
>     ---
>
>     Version A:
>
>     I am also worried about the slopification of not only this forum,
>     but the entire practice of strategic collaboration. First, I
>     recently wrote something where there was concern that my work was
>     AI-generated simply because I used em dashes—I tend to use them a
>     lot, as they offer the reading equivalent of a thoughtful pause. I
>     had to use an AI detector on an extensive article I blogged ten
>     years ago to show that my natural writing style triggered the
>     detector, when it was simply, well, good writing. (The article I
>     blogged received over half a million page views.) I subsequently
>     discovered that most “humanizers” simply swap words for less
>     likely ones, making text appear human-generated by reducing the
>     quality of the writing.
>
>     The reality is that AI tools are now part of the cognitive
>     environment. Clearly, the key issue isn’t “used an LLM” vs “didn’t
>     use an LLM.” It’s authorship and responsibility. If someone uses a
>     model to structure thoughts, tighten prose, and check logic — but
>     the ideas, judgments, and commitments are theirs — that’s still
>     meaningful participation. If the model is doing the thinking, and
>     the human is just "phoning it in" by forwarding output, then yes,
>     we’re drifting toward a lot more than just model collapse.
>
>     The other issue is where we’re heading. Just as AI-generated
>     coding and video have crossed their critical thresholds of
>     realism, collaborative ideation will also do so within a few
>     years. As LLMs have slowly transitioned from “stochastic
>     plagiarism” to reasoning models, they will at some point cross a
>     similar threshold, the “uncanny valley” of cognitive realism,
>     especially when multi-agent systems start emulating what we do
>     here. At that point, our internal slop detectors—even set to a
>     sensitivity so high that anything we read could “feel”
>     AI-generated—will not be able to tell the difference.
>
>     Version B:
>
>     Thanks for raising this. I think you're pointing at something real
>     and worth discussing openly.
>
>     The distinction you draw between using an LLM as a tool and
>     outsourcing your thinking to one is important, and I'd argue it's
>     actually sharper than it might first appear. Proofreading
>     preserves the structure of your reasoning while cleaning up
>     expression. Generation replaces the reasoning itself. Those are
>     fundamentally different activities, even if the output can
>     sometimes look similar on the surface.
>
>     Where I think this gets genuinely difficult is that the line isn't
>     always clean. Someone might use an LLM to help organize scattered
>     thoughts into a coherent draft, then revise it substantially.
>     That's somewhere in the middle, and I'd hesitate to call it
>     illegitimate. The question probably isn't whether an LLM touched
>     the text at all, but whether the person behind it actually
>     wrestled with the problem, made judgment calls, and can defend
>     what they wrote if pressed.
>
>     That said, I think the practical concern you're raising stands
>     regardless of where we draw the line. When responses on a list
>     like this start reading like they were produced by someone who
>     spent 30 seconds prompting rather than 30 minutes thinking, it
>     does erode trust. You start reading differently. You skim more.
>     You engage less. And that's corrosive to exactly the kind of
>     deliberation this group exists for.
>
>     I don't know what the right intervention is. Norms are probably
>     more useful than rules here. Something like: if you wouldn't be
>     comfortable explaining and defending every claim in your message
>     during a live conversation, maybe reconsider sending it. That's
>     not a perfect filter, but it at least recenters the expectation
>     that contributions reflect genuine engagement rather than
>     generated fluency.
>
>
>
>     On 2/13/26 3:41 AM, Filip Kolarik wrote:
>>     Dear VCWG,
>>     I want to raise a concern that’s been bothering me lately. It
>>     feels like this mailing list is being flooded by LLM-generated
>>     responses.
>>
>>     Whether or not that’s intentional, meaningful work depends on
>>     people engaging directly with arguments and tradeoffs, and when
>>     contributions read like synthesized summaries rather than
>>     considered positions, the discussion loses clarity and momentum.
>>
>>     I’m not arguing against using tools; I use LLMs to proofread my
>>     own writing. But there is a difference between proofreading text
>>     you wrote and letting an LLM generate the entire response. If that
>>     becomes normalized, we risk damaging the effectiveness of the group
>>     and turning this mailing list into a swamp to be ignored.
>>
>>     Best regards,
>>     Filip
>>     https://www.linkedin.com/in/filipkolarik/
>
>     -- 
>     *Moses Ma | Managing Partner*
>     moses.ma@futurelabconsulting.com (public) |
>     moses@futurelab.venture (private)
>     v+1.415.568.1068 | allmylinks.com/moses-ma
>
>

-- 
*Moses Ma | Managing Partner*
moses.ma@futurelabconsulting.com (public) | moses@futurelab.venture 
(private)
v+1.415.568.1068 | allmylinks.com/moses-ma
Learn more at futurelabconsulting.com
For calendar invites, please cc: mosesma@gmail.com - but please don't 
email me there

Received on Sunday, 15 February 2026 02:33:52 UTC