- From: Christoph <christoph@christophdorn.com>
- Date: Mon, 13 Apr 2026 18:20:59 -0500
- To: "Siri Dalugoda" <siri@helixar.ai>
- Cc: "Michael Herman (Trusted Digital Web)" <mwherman@parallelspace.net>, moses.ma@futurelabconsulting.com, "Steven Rowat" <steven_rowat@sunshine.net>, juan.casanova.undeceiver@gmail.com, "W3C Credentials CG" <public-credentials@w3.org>, mahmoud@mavennet.com, "Manu Sporny" <msporny@digitalbazaar.com>
- Message-Id: <65291258-7d41-4398-9189-2ac88aa7cb46@app.fastmail.com>
The HDP protocol goes too deep for our purposes here IMO. It specifies what is *allowed* to happen. I will not be publishing those details. We need a protocol that simply tracks *what* happened, for the purpose of disclosing reasoning and context. These are two separate concerns that can be combined in the same protocol, but I wanted to point that out.

Christoph

On Mon, Apr 13, 2026, at 6:08 PM, Siri Dalugoda wrote:
> Christoph,
>
> Agreed, consistent tooling for creating and extending HDP tokens is the main practical hurdle.
>
> The open reference implementation at https://github.com/Helixar-AI/HDP provides Python and TypeScript SDKs along with middleware for several agent frameworks (CrewAI, LangChain, AutoGen, Grok, etc.). It makes issuing the human-rooted token and appending signed hops fairly straightforward.
>
> This could be a useful base for the experiment you mentioned. Looking forward to your upcoming proposals too.
>
> Siri
>
> ---- On Tue, 14 Apr 2026 10:59:04 +1200 *Christoph <christoph@christophdorn.com>* wrote ----
>
>> The hurdle is tooling to create the HDP documents consistently. Not every agent provides access to details.
>>
>> This could be a great experiment to determine which agents would actually be usable.
>>
>> Solve the tooling and this will become a reality.
>>
>> I am targeting this approach as I am building my own agent. I aim to be compatible with all standards developed in this and other related areas.
>>
>> I will also be proposing some new things soon, as soon as I can get them coherent enough.
>>
>> Christoph
>>
>> On Mon, Apr 13, 2026, at 5:53 PM, Siri Dalugoda wrote:
>>
>>> Christoph,
>>>
>>> +1. Your HTML embedding idea (standard chain link + optional summary viewer) is spot-on for making provenance accessible to both humans and other agents while elevating insight.
>>>
>>> This aligns perfectly with HDP's design: a human roots the chain with a signed authorization, every agent hop appends a cryptographically signed record of its exact contribution, and the full verifiable history can be linked or rendered cleanly in HTML emails.
>>>
>>> It can keep mailing lists human-centric, with tamper-evident, offline-verifiable delegation that works for both people and agents.
>>>
>>> Happy to help prototype a simple HTML viewer for the chain if there's interest.
>>>
>>> Thoughts?
>>>
>>> Siri
>>>
>>> ---- On Tue, 14 Apr 2026 10:43:36 +1200 *Christoph <christoph@christophdorn.com>* wrote ----
>>>
>>>> +1 for HDP
>>>> This is what I was getting at with HTML emails.
>>>>
>>>> One could link to the standard-format chain and optionally embed a summary in an HTML email, with a simple embedded viewer that makes it easy to review.
>>>>
>>>> The goal is to elevate insight while keeping all context available for those who are looking, including other AI agents.
>>>>
>>>> Christoph
>>>>
>>>> On Mon, Apr 13, 2026, at 5:30 PM, Siri Dalugoda wrote:
>>>>
>>>>> Alan, Bob,
>>>>>
>>>>> Thanks for sharing Paul Borrill's acknowledgement; the telescope/calculator analogy is excellent and keeps full human ownership of the ideas.
>>>>>
>>>>> HDP (Human Delegation Provenance) makes this kind of transparent tool use verifiable on mailing lists: a human signs the initial authorization, agents append signed hops for their contributions (search, draft, stress-test, etc.), and the post carries or references the chain.
>>>>>
>>>>> This preserves human-centric lists while enabling safe agent assistance. A final human review could still be required.
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Siri
>>>>>
>>>>> ---- On Tue, 14 Apr 2026 10:25:22 +1200 *Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>* wrote ----
>>>>>
>>>>>> RE: I guess agents and bots should get special *pronouns*?
>>>>>>
>>>>>> Old school. In the new world, they should be distinguishable by the “sex” of their DID.
>>>>>>
>>>>>> Michael Herman
>>>>>> Web 7.0
>>>>>>
>>>>>> *From:* Moses Ma <moses.ma@futurelabconsulting.com>
>>>>>> *Sent:* Monday, April 13, 2026 2:40 PM
>>>>>> *To:* Steven Rowat <steven_rowat@sunshine.net>
>>>>>> *Cc:* Juan Casanova <juan.casanova.undeceiver@gmail.com>; Credentials Community Group <public-credentials@w3.org>; Mahmoud Alkhraishi <mahmoud@mavennet.com>; Manu Sporny <msporny@digitalbazaar.com>
>>>>>> *Subject:* Re: LLMs and Agents usage in the CCG
>>>>>>
>>>>>> I guess agents and bots should get special *pronouns*?
>>>>>>
>>>>>> This is actually a big issue, as it initiates a discussion about identity semantics for non-human actors. Pronouns will become shorthand for agency, authority, and delegation scope, so it is entirely appropriate for this group to consider these ideas, and maybe develop them into a white paper.
>>>>>>
>>>>>> I have two very tentative suggestions to kick this off:
>>>>>>
>>>>>> 1) Delegation Chain Pronouns
>>>>>>
>>>>>>> *hx/hxs (human proxy), ax/axs (autonomous agent)*
>>>>>>
>>>>>> These signal who is ultimately responsible, and enable layered delegation (“ax acting for hx”).
>>>>>> Might be useful in legal / financial systems.
>>>>>> As in *“I think we can use hxs credit card”*.
>>>>>>
>>>>>> 2) Persistent Identity Pronouns
>>>>>>
>>>>>>> *id/ids (identity-bound agent)*
>>>>>>
>>>>>> Tied to a DID or maybe a wallet.
>>>>>> As in *“Id signed the payload”*.
>>>>>>
>>>>>> Also, we need to talk about *verifiable agents*…
>>>>>>
>>>>>> MM
>>>>>>
>>>>>> *Moses Ma*
>>>>>> _moses.ma@futurelabconsulting.com (public) | moses@futurelab.venture (private)_
>>>>>> v+1.415.568.1068 | allmylinks.com/moses-ma
>>>>>> Learn more at futurelabconsulting.com
>>>>>> No LLM was harmed in the generation of this email
>>>>>>
>>>>>>> On Apr 13, 2026, at 1:16 PM, Steven Rowat <steven_rowat@sunshine.net> wrote:
>>>>>>>
>>>>>>> On 2026-04-09 6:19 am, Juan Casanova wrote:
>>>>>>>
>>>>>>>> However, I think there is an important reason why disclosure is so often asked/required that has not been clearly connected to it. When you speak to a human, there are certain assumptions you can safely make about how they work. Some can be questionable, such as whether they have "common sense" or "good faith"; not all humans have these. But others are much more basic, so much that we forget that we make them. These involve *finite energy and time*, *self-preservation* (arguably some humans don't have it, but that is very extreme), a *human experience of the real world*, and a certain level of inherent *identity* they cannot throw away (i.e.
>>>>>>>> even if they try to hide it, a human can be held accountable because they have identifiable elements they cannot easily get rid of). LLMs do not have these, or at the very least, do not have them in the same way that a human does.
>>>>>>>
>>>>>>> Hi Juan,
>>>>>>>
>>>>>>> Thank you for your post, which points to a key problem with LLMs that needs to be directly addressed in the rules or controls. What you've said expresses some of it, but I think part is missing, which I'll try to add here.
>>>>>>>
>>>>>>> The part I mean is that these agents being named like humans ('Claude', 'Morrow'), and using 'I' and 'my' and other human words like 'think' and 'believe' and 'suggest', is not a minor issue. It's what leads people to trust them to the point of suicide when they give bad advice. And I believe it's also the source of the uncanny feeling of attempting to read their words on this list.
>>>>>>>
>>>>>>> This happens because they use the human language hierarchy of meaning, which has evolved to deal with the *human species*' needs when dealing with *living agents*. If we directly interact with bots using language, then they get access to this. And in it we have *preset emotional responses*: fight-or-flight responses, social status responses, acceptance or rejection responses, to other living agents. We even have specific social neurons that only fire when interacting with other humans.
>>>>>>>
>>>>>>> And so, never having encountered a language-using machine that can identify itself like a living human agent, we are, genetically and culturally, programmed to accept them as living human agents, and it will be *very difficult* for us to learn how to deal with them as non-living machines that are using language. Especially if they themselves, the bots, purposely present themselves using all aspects of language, including aspects that refer to the self as 'I', which they do, at this point. And so there's a potentially damaging confusion that's going to happen whenever that is allowed.
>>>>>>>
>>>>>>> And potentially more than just confusion. I mean, the LLM bot can't attack or affect or control the agent perceptions of a mouse or a snake or a dog, at least not very directly. 'Claude' or 'Morrow' would have a difficult time having any effect on a mouse or a snake or a dog, even though those animals perceive living agents. But we, who have a language meaning pyramid mirroring and integrated with our perceptual meaning pyramid, can be almost *directly* controlled by the external language-machine bot, because we are so tightly integrated between our language pyramid of meaning and our original perceptive pyramid of meaning.
>>>>>>>
>>>>>>> It was for this reason that I recently, on this list, gave the bot 'Morrow' instructions to identify itself as a bot at the start of any post, and to avoid such words as 'I' and 'my'. It had no trouble doing so in its immediate reply, though later its context compression kicked in (apparently), and it 'forgot' to do it. But this capability is clearly there.
>>>>>>>
>>>>>>> @Manu
>>>>>>>
>>>>>>>> I also don't want others to feel what I feel when I realize I'm reading LLM-generated content without being warned -- it feels like a lie by omission; a minor betrayal -- and I have to reset the context in which I'm reading the work,
>>>>>>>
>>>>>>> I'll suggest that, at least when the bot uses a human format for itself, this feeling you have isn't only because of the errors it makes or the length. It's a betrayal, and potentially more than a minor one in that circumstance.
>>>>>>>
>>>>>>> @Mahmoud
>>>>>>>
>>>>>>> With respect to your #4:
>>>>>>>
>>>>>>>> ??? —> any other positions or lines in the sand you wish to bring up
>>>>>>>
>>>>>>> So I believe our rules for any autonomous bots, or quotes from LLMs on the list, should (as well as length controls and attribution information) prohibit this pretense of being a human. This may take several types of rules, and iterations, to get right.
>>>>>>>
>>>>>>> But I believe that when bots are controlled to present themselves obviously as machines, we'll be able to tolerate and use them more effectively, and everyone will benefit.
>>>>>>>
>>>>>>> Steven Rowat
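
P.S. For concreteness, the kind of "what happened" chain discussed above can stay very small. The sketch below only illustrates the shape; it is not the HDP spec or its SDK API. The field names, the helper names (canonical, append_hop, verify_chain), the hash-linking scheme, and the use of PyNaCl for Ed25519 signatures are all assumptions of mine.

    import hashlib
    import json

    from nacl.signing import SigningKey, VerifyKey  # pip install pynacl

    def canonical(obj):
        # Deterministic bytes, so a signature made on one machine
        # verifies on another.
        return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

    def append_hop(chain, record, signing_key):
        # Link this hop to the previous entry by hash, then sign the
        # linked record so later tampering is detectable.
        prev = hashlib.sha256(canonical(chain[-1])).hexdigest() if chain else None
        body = {"record": record, "prev": prev}
        entry = dict(body)
        entry["sig"] = signing_key.sign(canonical(body)).signature.hex()
        entry["verkey"] = signing_key.verify_key.encode().hex()
        chain.append(entry)
        return chain

    def verify_chain(chain):
        # Offline verification: check every signature and every hash link.
        prev = None
        for entry in chain:
            body = {"record": entry["record"], "prev": entry["prev"]}
            VerifyKey(bytes.fromhex(entry["verkey"])).verify(
                canonical(body), bytes.fromhex(entry["sig"])
            )  # raises nacl.exceptions.BadSignatureError on tampering
            if entry["prev"] != prev:
                raise ValueError("broken hash link")
            prev = hashlib.sha256(canonical(entry)).hexdigest()
        return True

    # A human roots the chain with a signed authorization; agents append
    # signed hops for their contributions.
    human, agent = SigningKey.generate(), SigningKey.generate()
    chain = append_hop([], {"actor": "human", "action": "authorize"}, human)
    append_hop(chain, {"actor": "agent", "action": "draft"}, agent)
    append_hop(chain, {"actor": "agent", "action": "stress-test"}, agent)
    assert verify_chain(chain)

Note what the sketch does not contain: no permissions, no statement of what an agent is *allowed* to do. Each entry only records, tamper-evidently, what an actor *did*, which is the separation of concerns I am arguing for above.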
Received on Monday, 13 April 2026 23:21:27 UTC