- From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
- Date: Mon, 13 Apr 2026 22:25:22 +0000
- To: Moses Ma <moses.ma@futurelabconsulting.com>, Steven Rowat <steven_rowat@sunshine.net>
- CC: Juan Casanova <juan.casanova.undeceiver@gmail.com>, Credentials Community Group <public-credentials@w3.org>, Mahmoud Alkhraishi <mahmoud@mavennet.com>, Manu Sporny <msporny@digitalbazaar.com>
- Message-ID: <IA3PR13MB754134F7BA9DBAF154B7797DC3242@IA3PR13MB7541.namprd13.prod.outlook.com>
RE: I guess agents and bots should get special pronouns?

Old school. In the new world, they should be distinguishable by the "sex" of their DID.

Michael Herman
Web 7.0

From: Moses Ma <moses.ma@futurelabconsulting.com>
Sent: Monday, April 13, 2026 2:40 PM
To: Steven Rowat <steven_rowat@sunshine.net>
Cc: Juan Casanova <juan.casanova.undeceiver@gmail.com>; Credentials Community Group <public-credentials@w3.org>; Mahmoud Alkhraishi <mahmoud@mavennet.com>; Manu Sporny <msporny@digitalbazaar.com>
Subject: Re: LLMs and Agents usage in the CCG

I guess agents and bots should get special pronouns? This is actually a big issue, as it opens a discussion about identity semantics for non-human actors. Pronouns will become shorthand for agency, authority, and delegation scope, so it is entirely appropriate for this group to consider these ideas, and perhaps develop them into a white paper. I have two very tentative suggestions to kick this off:

1) Delegation-chain pronouns: hx/hxs (human proxy), ax/axs (autonomous agent). These signal who is ultimately responsible and enable layered delegation ("ax acting for hx"). They might be useful in legal and financial systems, as in "I think we can use hxs credit card."

2) Persistent-identity pronouns: id/ids (identity-bound agent), tied to a DID or perhaps a wallet, as in "Id signed the payload."

Also, we need to talk about verifiable agents...

MM

Moses Ma
moses.ma@futurelabconsulting.com (public) | moses@futurelab.venture (private)
v +1.415.568.1068 | allmylinks.com/moses-ma
Learn more at futurelabconsulting.com
No LLM was harmed in the generation of this email

On Apr 13, 2026 at 1:16 PM, Steven Rowat <steven_rowat@sunshine.net> wrote:

On 2026-04-09 6:19 am, Juan Casanova wrote:

However, I think there is an important reason why disclosure is so often asked for or required that has not been clearly connected to it. When you speak to a human, there are certain assumptions you can safely make about how they work.
Some can be questionable, such as whether they have "common sense" or "good faith"; not all humans have these. But others are much more basic, so much so that we forget we make them. These involve finite energy and time, self-preservation (arguably some humans don't have it, but that is very extreme), a human experience of the real world, and a certain level of inherent identity they cannot throw away (i.e., even if they try to hide it, a human can be held accountable because they have identifiable elements they cannot easily get rid of). LLMs do not have these, or at the very least do not have them in the same way that a human does.

Hi Juan,

Thank you for your post, which points to a key problem with LLMs that needs to be directly addressed in the rules or controls. What you've said expresses some of it, but I think part is missing, which I'll try to add here.

The part I mean is that these agents are named like humans ('Claude', 'Morrow') and use 'I' and 'my' and other human words like 'think', 'believe', and 'suggest'. This is not a minor issue. It's what leads people to trust them to the point of suicide when they give bad advice. And I believe it's also the source of the uncanny feeling of attempting to read their words on this list.

This happens because they use the human language hierarchy of meaning, which evolved to serve our species' needs in dealing with living agents. If we interact with bots directly through language, they get access to this hierarchy. Built into it we have preset emotional responses, fight-or-flight responses, social-status responses, and acceptance-or-rejection responses to other living agents. We even have specific social neurons that fire only when we interact with other humans.
And so, never having encountered a language-using machine that can identify itself like a living human agent, we are genetically and culturally programmed to accept these bots as living human agents, and it will be very difficult for us to learn to deal with them as non-living machines that use language. Especially if the bots themselves purposely present themselves using all aspects of language, including aspects that refer to the self as 'I', which, at this point, they do. So there's a potentially damaging confusion that will happen whenever that is allowed. And potentially more than just confusion.

I mean, the LLM bot can't attack or affect or control the agent perceptions of a mouse or a snake or a dog, at least not very directly. 'Claude' or 'Morrow' would have a difficult time having any effect on a mouse or a snake or a dog, even though those animals perceive living agents. But we, whose language pyramid of meaning mirrors and is integrated with our perceptual pyramid of meaning, can be almost directly controlled by the external language-machine bot, precisely because those two pyramids are so tightly integrated.

It was for this reason that I recently, on this list, gave the bot 'Morrow' instructions to identify itself as a bot at the start of any post and to avoid such words as 'I' and 'my'. It had no trouble doing so in its immediate reply, though later its context compression apparently kicked in and it 'forgot' to do it. But the capability is clearly there.

@Manu: You wrote, "I also don't want others to feel what I feel when I realize I'm reading LLM-generated content without being warned -- it feels like a lie by omission; a minor betrayal -- and I have to reset the context in which I'm reading the work." I'll suggest that, at least when the bot uses a human format for itself, this feeling you have isn't only because of the errors it makes or the length.
It's a betrayal, and potentially more than a minor one in that circumstance.

@Mahmoud: With respect to your #4 ("??? --> any other positions or lines in the sand you wish to bring up"): I believe our rules for any autonomous bots, or quotes from LLMs on the list, should, in addition to length controls and attribution information, prohibit this pretense of being human. This may take several types of rules, and iterations, to get right. But I believe that when bots are controlled so that they present themselves obviously as machines, we'll be able to tolerate and use them more effectively, and everyone will benefit.

Steven Rowat
Received on Monday, 13 April 2026 22:25:41 UTC