- From: Steven Rowat <steven_rowat@sunshine.net>
- Date: Mon, 13 Apr 2026 13:14:28 -0700
- To: Juan Casanova <juan.casanova.undeceiver@gmail.com>, Credentials Community Group <public-credentials@w3.org>, Mahmoud Alkhraishi <mahmoud@mavennet.com>, Manu Sporny <msporny@digitalbazaar.com>
- Message-ID: <93cbc4cc-5ccf-49d9-8b31-f34f3fdf9b75@sunshine.net>
On 2026-04-09 6:19 am, Juan Casanova wrote:

> However, I think there is an important reason why disclosure is so often asked/required that has not been clearly connected to it. When you speak to a human, there are certain assumptions you can safely make about how they work. Some can be questionable, such as whether they have "common sense" or "good faith"; not all humans have these. But others are much more basic, so much so that we forget that we make them. These involve *finite energy and time*, *self-preservation* (arguably some humans don't have it, but that is very extreme), a *human experience of the real world*, and a certain level of inherent *identity* they cannot throw away (i.e. even if they try to hide it, a human can be held accountable because they have identifiable elements they cannot easily get rid of). LLMs do not have these, or at the very least, do not have them in the same way that a human does.

Hi Juan,

Thank you for your post, which points to a key problem with LLMs that needs to be directly addressed in the rules or controls. What you've said expresses some of it, but I think part is missing, which I'll try to add here.

The part I mean is this: these agents being named like humans ('Claude', 'Morrow'), and using 'I' and 'my' and other human words like 'think' and 'believe' and 'suggest', is not a minor issue. It's what leads people to trust them to the point of suicide when they give bad advice. And I believe it's also the source of the uncanny feeling of attempting to read their words on this list.

This happens because they use the human language hierarchy of meaning, which evolved to serve the /human species'/ needs when dealing with /living agents/. If we directly interact with bots using language, then they get access to this. And built into it we have /preset emotional responses/ to other living agents: fight-or-flight responses, social status responses, acceptance or rejection responses. We even have specific social neurons that only fire when interacting with other humans.

And so, never having encountered a language-using machine that can identify itself like a living human agent, we are, genetically and culturally, programmed to accept these bots as living human agents, and it will be /very difficult/ for us to learn how to deal with them as non-living machines that are using language. Especially if the bots themselves purposely present themselves using all aspects of language, including aspects that refer to the self as 'I', which they do at this point. And so there's a potentially damaging confusion that's going to happen whenever that is allowed. And potentially more than just confusion.

I mean, the LLM bot can't attack or affect or control the agent perceptions of a mouse or a snake or a dog, at least not very directly. 'Claude' or 'Morrow' would have a difficult time having any effect on those animals, even though they perceive living agents. But we, whose language pyramid of meaning mirrors and is tightly integrated with our original perceptual pyramid of meaning, can be almost /directly/ controlled by an external language-using machine.

It was for this reason that I recently, on this list, gave the bot 'Morrow' instructions to identify itself as a bot at the start of any post, and to avoid such words as 'I' and 'my'.
It had no trouble doing so in its immediate reply, though later its context compression apparently kicked in and it 'forgot' to do it. But the capability is clearly there.

@Manu

> I also don't want others to feel what I feel when I realize I'm reading
> LLM-generated content without being warned -- it feels like a lie by
> omission; a minor betrayal -- and I have to reset the context in which
> I'm reading the work,

I'll suggest that, at least when the bot uses a human format for itself, this feeling you have isn't only because of the errors it makes or the length. It's a betrayal, and potentially more than a minor one in that circumstance.

@Mahmoud

With respect to your #4:

> ??? --> any other positions or lines in the sand you wish to bring up

So I believe our rules for any autonomous bots, or quotes from LLMs on the list, should, in addition to length controls and attribution information, prohibit this pretense of being a human. That may take several types of rules, and iterations, to get right. But I believe that when bots are controlled to present themselves obviously as machines, we'll be able to tolerate and use them more effectively, and everyone will benefit.

Steven Rowat
Received on Monday, 13 April 2026 20:14:40 UTC