RE: LLMs and Agents usage in the CCG

What about the real scenario where digital agents progress from the role of Apprentice (Software) Masons to Master (Software) Masons?

Full story: https://hyperonomy.com/2026/02/16/ssg-solutions-software-guild-manifesto-and-rubric/#rubic


…nothing to do with researching, researchers, and/or theoretical frameworks.

Michael Herman
Chief Digital Officer
Web 7.0 Foundation

From: Alan Karp <alanhkarp@gmail.com>
Sent: Monday, April 13, 2026 3:22 PM
To: Moses Ma <moses.ma@futurelabconsulting.com>
Cc: Steven Rowat <steven_rowat@sunshine.net>; Juan Casanova <juan.casanova.undeceiver@gmail.com>; Credentials Community Group <public-credentials@w3.org>; Mahmoud Alkhraishi <mahmoud@mavennet.com>; Manu Sporny <msporny@digitalbazaar.com>
Subject: Re: LLMs and Agents usage in the CCG

Paul Borrill, a former boss of mine, adds the following to his papers.  (The specifics in the first paragraph depend on the paper.)  It's too long to include in every email, but I like the way he explains how he used AI.


Acknowledgement on the Use of AI Tools



The theoretical framework presented in this paper—the application of category mistake analysis, FITO assumptions, and the ontic/epistemic distinction to the foundations of distributed computing—is the product of more than twenty years of the author’s independent research in distributed computing, network architecture, and the foundations of concurrency theory. The core arguments, the identification of the category mistake in distributed computing’s impossibility results, and the transactional alternative derive from the author’s prior published and unpublished work, including the Category Mistake monograph, the FITO analysis series, the Pratt/Pomsets analysis, and the Open Atomic Ethernet specification programme.

Large language models (Anthropic’s Claude) were used as research instruments during the preparation of this manuscript: for literature search and verification, for testing the robustness of arguments against counterexamples, and for drafting prose from the author’s detailed outlines and technical notes. This usage is analogous to the use of any computational research tool—a telescope extends the eye, a calculator extends arithmetic, and a language model extends the capacity to search, draft, and stress-test arguments at scale. The tool does not originate the ideas any more than a telescope originates the stars.

All intellectual content, theoretical claims, original analysis, and conclusions are the author’s own.

--------------
Alan Karp


On Mon, Apr 13, 2026 at 1:43 PM Moses Ma <moses.ma@futurelabconsulting.com> wrote:
I guess agents and bots should get special pronouns?

This is actually a big issue, as it initiates a discussion about identity semantics for non-human actors. Pronouns will become shorthand for agency, authority, and delegation scope, so it is entirely appropriate for this group to consider these ideas, and maybe develop them into a white paper.

I have two very tentative suggestions to kick this off:


1) Delegation Chain Pronouns

hx/hxs (human proxy), ax/axs (autonomous agent)

These signal who is ultimately responsible, and enable layered delegation (“ax acting for hx”)

Might be useful in legal / financial systems
As in “I think we can use hxs credit card”


2) Persistent Identity Pronouns

id/ids (identity-bound agent)

Tied to a DID or maybe a wallet

As in “Id signed the payload”
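The delegation-chain idea above can be made concrete with a small sketch. This is purely illustrative under my own assumptions — the `Actor` and `DelegationChain` names are hypothetical, not a proposed API — but it shows how “ax acting for hx” could be derived mechanically from an ordered chain of DID-bound actors, with ultimate responsibility always resolving to the human principal:

```python
# Hypothetical sketch of delegation-chain pronouns. All class and field
# names are illustrative; the DIDs are made-up examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class Actor:
    did: str   # decentralized identifier, e.g. "did:example:alice"
    kind: str  # "human" or "agent"

    @property
    def pronoun(self) -> str:
        # "hx" marks a human principal, "ax" an autonomous agent
        return "hx" if self.kind == "human" else "ax"


@dataclass(frozen=True)
class DelegationChain:
    links: tuple[Actor, ...]  # ordered: principal first, acting agent last

    def responsible_party(self) -> Actor:
        # ultimate responsibility rests with the first link (the principal)
        return self.links[0]

    def describe(self) -> str:
        # read the chain from the acting agent back to the principal
        return " acting for ".join(a.pronoun for a in reversed(self.links))


alice = Actor(did="did:example:alice", kind="human")
bot = Actor(did="did:example:bot-7", kind="agent")
chain = DelegationChain(links=(alice, bot))

print(chain.describe())               # -> "ax acting for hx"
print(chain.responsible_party().did)  # -> "did:example:alice"
```

A legal or financial system could log `chain.describe()` alongside each signed payload, so the delegation scope is auditable without parsing prose.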


Also, we need to talk about verifiable agents…

MM



Moses Ma
moses.ma@futurelabconsulting.com (public) | moses@futurelab.venture (private)
v+1.415.568.1068 | allmylinks.com/moses-ma
Learn more at futurelabconsulting.com
No LLM was harmed in the generation of this email



On Apr 13, 2026 at 1:16 PM, Steven Rowat <steven_rowat@sunshine.net> wrote:
On 2026-04-09 6:19 am, Juan Casanova wrote:
However, I think there is an important reason why disclosure is so often asked/required that has not been clearly connected to it. When you speak to a human, there are certain assumptions you can safely make about how they work. Some can be questionable, such as whether they have "common sense" or "good faith", not all humans have these. But others are much more basic, so much that we forget that we make them. These involve finite energy and time, self-preservation (arguably some humans don't have it but that is very extreme), a human experience of the real world, and a certain level of inherent identity they cannot throw away (i.e. even if they try to hide it, a human can be held accountable because they have identifiable elements they cannot easily get rid of). LLMs do not have these, or at the very least, do not have them in the same way that a human does.

Hi Juan,

Thank you for your post, which points to a key problem with LLMs that needs to be directly addressed in the rules or controls. What you've said expresses some of it, but I think part is missing, which I'll try to add here.

The part I mean is that these agents are named like humans ('Claude', 'Morrow') and use 'I' and 'my' and other human words like 'think', 'believe', and 'suggest'. That is not a minor issue. It's what leads people to trust them to the point of suicide when they give bad advice. And I believe it's also the source of the uncanny feeling of attempting to read their words on this list.

This happens because they use the human language hierarchy of meaning, which evolved to serve our species' needs when dealing with living agents. If we directly interact with bots using language, then they get access to this hierarchy. And in it we have preset emotional responses to other living agents: fight-or-flight responses, social status responses, acceptance or rejection responses. We even have specific social neurons that fire only when interacting with other humans.

And so, never having encountered a language-using machine that can identify itself like a living human agent, we are genetically and culturally programmed to accept such machines as living human agents, and it will be very difficult for us to learn to deal with them as non-living machines that use language. Especially if the bots themselves purposely present themselves using all aspects of language, including aspects that refer to the self as ‘I’, which they do at this point. And so there’s a potentially damaging confusion that will happen whenever that is allowed.

And potentially more than just confusion. I mean, an LLM bot can’t attack or affect or control the agent perceptions of a mouse or a snake or a dog—at least not very directly. 'Claude' or 'Morrow' would have a difficult time having any effect on a mouse or a snake or a dog, even though those animals perceive living agents. But we, whose language pyramid of meaning mirrors and is integrated with our original perceptual pyramid of meaning, can be almost directly controlled by the external language-using machine, precisely because those two pyramids are so tightly integrated.

It was for this reason that I recently, on this list, gave the bot 'Morrow' instructions to identify itself as a bot at the start of any post, and avoid such words as 'I' and 'my'. It had no trouble doing so in its immediate reply—though later its context compression kicked in (apparently), and it 'forgot' to do it. But this capability is clearly there.

@Manu

I also don't want others to feel what I feel when I realize I'm reading LLM-generated content without being warned -- it feels like a lie by omission; a minor betrayal -- and I have to reset the context in which I'm reading the work,

I'll suggest that, at least when the bot uses a human format for itself, this feeling you have isn't only because of the errors it makes or the length. It's a betrayal, and potentially more than a minor one in that circumstance.



@Mahmoud

With respect to your #4:
??? —> any other positions or lines in the sand you wish to bring up

So I believe our rules for any autonomous bots, or for quotes from LLMs on the list, should (alongside length controls and attribution requirements) prohibit this pretense of being human. That may take several types of rules, and several iterations, to get right.

But I believe that when bots are controlled so that they present themselves obviously as machines, we'll be able to tolerate and use them more effectively, and everyone will benefit.

Steven Rowat

Received on Monday, 13 April 2026 22:33:41 UTC