Re: Concern about AI-generated contributions in VCWG discussions

Proof of Provability

Thanh M. Le
---------------------------------------------------------------------------------------
SHA-256("") — From nothing, truth emerges
<https://github.com/glogos-org/glogos/blob/main/shared/artifacts/genesis-artifact.json>
code · cel · cell · citizen · card · cluster · consortium · civilization · cosmos

On Mon, Feb 16, 2026 at 04:46 Adrian Gropper <agropper@healthurl.com> wrote:

> Thank you for sharing this article. It’s the best argument for human
> reputation being more important than identity that I have seen so far.
>
> It also explains, quite clearly, why insisting on disclosure of AI use is
> not just futile but also deeply damaging to humanity.
>
> Which brings us to the issue of “proof of humanity”, which the article
> does not explicitly discuss.
>
> I want your reputation to be contextual and irrevocably tied to your
> Sybil-resistant biometric identity. That doesn’t mean I want your
> biometrics, because that would allow me to track you across contexts à la
> Chinese social credit scoring.
>
> I don’t want to be asked if I used an LLM to author something I said. If
> you ask, I will be offended and may leave the conversation - if I can. As
> an employee, I don’t expect to be restricted in how I use whatever AI I
> choose to be responsible for.
>
> From my perspective as “invited expert on privacy and community interest”
> to the VC and DID standards, the lack of human reputation as an essential
> use-case is disappointing. My disappointment is not important, because I am
> not the intended audience for the standards, so all I can do is remind
> the community of this issue every once in a while.
>
> However, the problem for this community may turn out to be that LLMs
> highlight the use-case of human agency, responsibility and reputation to
> such an extent that our work is ignored.
>
> Adrian
>
>
> On Sun, Feb 15, 2026 at 2:04 PM Steven Rowat <steven_rowat@sunshine.net>
> wrote:
>
>> On 2026-02-14 12:52 pm, Daniel Hardman wrote:
>>
>>> Good experiment, Moses. I'll be very curious to see the results.
>>>
>>> FWIW, I have been trying to hold myself to the following standard:
>>> https://dhh1128.github.io/papers/ai-coca.html
>>>
>>> The part that I'm not sure about is: "I will acknowledge the AI’s
>>> contribution according to the reasonable expectations of my audience." Are
>>> "the reasonable expectations of my audience" shifting?
>>
>> There's an analysis appearing today in the Atlantic, "Words Without
>> Consequence," that bears directly on this, written by Deb Roy
>> <https://www.theatlantic.com/author/deb-roy/>, "a professor of Media
>> Arts and Sciences at MIT, where he directs the MIT Center for Constructive
>> Communication."
>>
>> https://www.theatlantic.com/technology/2026/02/words-without-consequence/685974/
>>
>> I found it deeply reasoned and convincing. The key point is that the
>> attribution link from words to a given person is what makes moral
>> responsibility possible; current AI practice has cut that link, and we
>> risk major damage if this is not rectified.
>>
>> Of course the range of attribution needs to be a continuum, as your
>> example of the song lyric shows. But it still needs to exist, and we
>> need new norms for ensuring that it does.
>>
>> My current suggestion: just as the norms of scientific writing require a
>> stated fact to cite its source, I believe any use of AI should at the very
>> least disclose the specific AI model and how it was applied: "Claude Code
>> xx.03, Cloud Version was used to generate this text", or "ChatGPT 4.5.xx
>> running on a local machine I own generated the original of this paper based
>> on documents I fed it. Then I substantially revised that result."... etc.
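>>
>> In machine-readable form, such a statement might look something like the
>> sketch below. It's only an illustration in TypeScript; the interface and
>> field names are invented here, not drawn from any existing VC or DID
>> vocabulary:
>>
>> // Hypothetical record of AI involvement in a piece of text.
>> // Every name below is illustrative, not a proposed standard.
>> interface AIContributionStatement {
>>   model: string;                 // e.g. "Claude Code xx.03"
>>   deployment: "cloud" | "local"; // where the model ran
>>   role: "generated" | "drafted-then-revised" | "assisted";
>>   inputs?: string;               // prose note on what the author fed it
>> }
>>
>> const disclosure: AIContributionStatement = {
>>   model: "ChatGPT 4.5.xx",
>>   deployment: "local",
>>   role: "drafted-then-revised",
>>   inputs: "documents I supplied; I then substantially revised the draft",
>> };
>>
>> Something this small could travel with a post the way a citation travels
>> with a claim.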
>>
>> Does this seem plausible or viable?
>>
>> Steven Rowat
>>
>

Received on Sunday, 15 February 2026 22:00:33 UTC