Re: Concern about AI-generated contributions in VCWG discussions

On 2026-02-14 12:52 pm, Daniel Hardman wrote:
> Good experiment, Moses. I'll be very curious to see the results.
>
> FWIW, I have been trying to hold myself to the following standard: https://dhh1128.github.io/papers/ai-coca.html
>
> The part that I'm not sure about is: "I will acknowledge the AI’s contribution according to the reasonable expectations of my audience." Are "the reasonable expectations of my audience" shifting?

There's an analysis appearing in the Atlantic today, "Words Without Consequence," that bears directly on this. It's written by Deb Roy <https://www.theatlantic.com/author/deb-roy/>, "a professor of Media Arts and Sciences at MIT, where he directs the MIT Center for Constructive Communication."

https://www.theatlantic.com/technology/2026/02/words-without-consequence/685974/

I found it deeply reasoned and convincing. The key point is that the attribution link to a given person is what makes moral responsibility possible; current AI practice has cut that link, and we risk major damage if this is not rectified.

Of course the range of attribution needs to be a continuum, as your example of the song lyric shows. But it still needs to exist, and we need new norms for ensuring that it does.

My current suggestion: just as the norm for a scientific paper is to cite a source for each stated fact, I believe the use of AI should at the very least entail naming the specific AI model and the way it was applied: "Claude Code xx.03, Cloud Version was used to generate this text," or "ChatGPT 4.5.xx running on a local machine I own generated the original of this paper based on documents I fed it. Then I substantially revised that result."... etc.

Does this seem plausible or viable?

Steven Rowat

Received on Sunday, 15 February 2026 19:01:52 UTC