Re: Concern about AI-generated contributions in VCWG discussions

As a professional writer, I claim ownership over everything I produce, and as a professional I feel no compulsion to divulge my methods or tools... although sometimes I do:

  1. https://hyperonomy.com/2025/11/22/truly-effective-communication/

  2. https://hyperonomy.com/2026/01/15/davos2026-exclusive-what-prompt-can-other-people-use-to-get-the-same-high-level-of-verification-that-im-receiving-on-my-responses/


It doesn't matter if I used an AI tool, zero AI tools, or 100 AI tools.

I mark everything with:
Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
(Free to use with Attribution)

I categorically own everything I produce regardless of how many AI tools I used: https://hyperonomy.com/?s=Copyright%3A


Michael Herman
Chief Digital Officer
Web 7.0 Foundation

Disclosure: My robot vacuum just ran over my right little toe.
No humans or digital agents were harmed during the writing of this email.




________________________________
From: Thanh M. Le <vnlemanhthanh@gmail.com>
Sent: Sunday, February 15, 2026 3:00:17 PM
To: Adrian Gropper <agropper@healthurl.com>
Cc: Steven Rowat <steven_rowat@sunshine.net>; Daniel Hardman <daniel.hardman@gmail.com>; W3C Credentials CG (Public List) <public-credentials@w3.org>
Subject: Re: Concern about AI-generated contributions in VCWG discussions

Proof of Provability

Thanh M. Le
---------------------------------------------------------------------------------------
SHA-256("") — From nothing, truth emerges<https://github.com/glogos-org/glogos/blob/main/shared/artifacts/genesis-artifact.json>
code · cel · cell · citizen · card · cluster · consortium · civilization · cosmos

On Mon, Feb 16, 2026 at 04:46, Adrian Gropper <agropper@healthurl.com> wrote:
Thank you for sharing this article. It’s the best argument for human reputation being more important than identity that I have seen so far.

It also explains, quite clearly, why insisting on disclosure of AI use is not just futile but also deeply damaging to humanity.

Which brings us to the issue of “proof of humanity”, which the article does not explicitly discuss.

I want your reputation to be contextual and irrevocably tied to your Sybil-resistant biometric identity. That doesn’t mean I want your biometrics, because that would allow me to track you across contexts à la Chinese social credit scoring.

I don’t want to be asked if I used an LLM to author something I said. If you ask, I will be offended and may leave the conversation - if I can. As an employee, I don’t expect to be restricted in how I use whatever AI I choose to be responsible for.

From my perspective as "invited expert on privacy and community interest" to the VC and DID standards, the lack of human reputation as an essential use-case is disappointing. My disappointment is not important, because I am not the intended audience for the standards, so all I can do is remind the community of this issue every once in a while.

However, the problem for this community may turn out to be that LLMs highlight the use-case of human agency, responsibility and reputation to such an extent that our work is ignored.

Adrian


On Sun, Feb 15, 2026 at 2:04 PM Steven Rowat <steven_rowat@sunshine.net> wrote:
On 2026-02-14 12:52 pm, Daniel Hardman wrote:
Good experiment, Moses. I'll be very curious to see the results.

FWIW, I have been trying to hold myself to the following standard: https://dhh1128.github.io/papers/ai-coca.html


The part that I'm not sure about is: "I will acknowledge the AI’s contribution according to the reasonable expectations of my audience." Are "the reasonable expectations of my audience" shifting?

There's an analysis appearing in the Atlantic today, "Words Without Consequence", that bears directly on this, written by Deb Roy<https://www.theatlantic.com/author/deb-roy/>, "a professor of Media Arts and Sciences at MIT, where he directs the MIT Center for Constructive Communication."

https://www.theatlantic.com/technology/2026/02/words-without-consequence/685974/


I found it deeply reasoned, and convincing. The key point is that the attribution link to a given person is what enables moral responsibility; current AI practice has severed that link, and we risk major damage if this is not rectified.

Of course the range of attribution needs to be a continuum, as your example of the song lyric shows. But it still needs to exist, and we need new norms for ensuring that it does.

My current suggestion: just as the norm in a scientific paper is to cite a source for each stated fact, I believe any use of AI should at the very least disclose the specific AI model and how it was applied: "Claude Code xx.03, Cloud Version was used to generate this text", or "ChatGPT 4.5.xx running on a local machine I own generated the original of this paper based on documents I fed it. Then I substantially revised that result."... etc.
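
For concreteness, the same statement could also travel with the document in machine-readable form. Here's a rough sketch in Python; all the field names are invented for illustration and don't follow any existing vocabulary or standard:

    # Rough sketch of an AI-use disclosure record. Field names are
    # invented for illustration; this is just one way the prose
    # statement above could be structured, not an existing standard.
    import json

    disclosure = {
        "model": "Claude Code xx.03",                     # the specific AI model used
        "deployment": "cloud",                            # cloud service vs. a local machine
        "role": "generated this text",                    # what the AI actually did
        "human_revision": "substantially revised by me",  # the human's contribution
    }

    print(json.dumps(disclosure, indent=2))  # attach alongside the document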

Does this seem plausible or viable?

Steven Rowat

Received on Monday, 16 February 2026 05:57:31 UTC