Re: [HDP] Agentic delegation provenance with DID principal binding

I'd like to suggest a couple of primitives that might be considered when
designing robust delegation, which I don't see addressed very explicitly
in the discussion. Possibly they're being handled elsewhere and I'm just
out of the loop, so I apologize in advance if this is redundant. Also,
I'll cite some work I've been doing, and I apologize that this might
sound self-serving; I am less wedded to the specifics, though, than to
the concepts. (And Adrian, I freely admit I haven't studied GNAP enough
to know whether it already covers this.)

1. The idea of two-way communication about delegation, as opposed to
one-way. One-way = give away authority with almost no strings attached
(e.g., a token that proves the bearer has the authority). Two-way = give
authority, but require return-and-report or something stronger (e.g.,
countersigning any actions, or a subset of actions that match a
particular behavioral profile). KERI's delegation model fits this: the
delegator can prevent the delegate from subdelegating without a
countersignature, and retains the ability to unilaterally cancel the
delegation at any time.
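To make the one-way/two-way contrast concrete, here is a toy sketch.
Everything in it is illustrative (an HMAC stands in for real signatures,
and the class names are mine); it is not KERI's actual event format, just
the shape of the idea: the delegator stays in the loop, countersigns
sensitive actions like subdelegation, and can revoke unilaterally.

```python
# Toy "two-way" delegation: the delegator keeps a live registry of
# grants, countersigns sensitive actions, and can revoke at any time.
# HMAC over a shared key stands in for real asymmetric signatures.
import hashlib
import hmac
import secrets


class Delegator:
    def __init__(self):
        self._key = secrets.token_bytes(32)  # signing key (HMAC stand-in)
        self._grants = {}                    # grant_id -> grant record

    def delegate(self, delegate_id, scope):
        """Issue a grant; unlike a bearer token, it stays revocable."""
        grant_id = secrets.token_hex(8)
        self._grants[grant_id] = {"delegate": delegate_id, "scope": scope}
        return grant_id

    def countersign(self, grant_id, action):
        """Approve a sensitive action (e.g., subdelegation) under a grant."""
        if grant_id not in self._grants:
            raise PermissionError("grant revoked or unknown")
        msg = f"{grant_id}:{action}".encode()
        return hmac.new(self._key, msg, hashlib.sha256).hexdigest()

    def revoke(self, grant_id):
        """Unilateral cancellation by the delegator."""
        self._grants.pop(grant_id, None)

    def verify(self, grant_id, action, sig):
        """Revocation invalidates even previously valid countersignatures."""
        if grant_id not in self._grants:
            return False
        msg = f"{grant_id}:{action}".encode()
        expected = hmac.new(self._key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)


d = Delegator()
g = d.delegate("agent-a", scope=["read"])
sig = d.countersign(g, "subdelegate:agent-b")
assert d.verify(g, "subdelegate:agent-b", sig)       # honored while live
d.revoke(g)
assert not d.verify(g, "subdelegate:agent-b", sig)   # dead after revocation
```

The point of the sketch is the verify-against-live-registry step: a pure
bearer token has no equivalent, which is exactly the one-way weakness.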

2. The idea of a taxonomy of delegable behaviors. The Hyperledger Aries
community introduced the idea of goal codes
<https://identity.foundation/aries-rfcs/latest/concepts/0519-goal-codes/>
so an agent could be given a role scoped to a constrained context. I am
working on a paper that takes this to the next level of formalization;
here is a draft: https://dhh1128.github.io/papers/syntelos.html Sorry for
the roughness...
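For anyone who hasn't seen goal codes: they are dotted, hierarchical
strings (e.g., aries.sell.goods.consumer), so scoping a delegation to a
taxonomy node can be as simple as a prefix check on components. A
minimal sketch (the matching rule here is my simplification, not
anything normative from RFC 0519):

```python
# Goal codes as dotted hierarchical strings: a grant at a higher node
# covers everything beneath it in the taxonomy.
def goal_permits(granted: str, requested: str) -> bool:
    """True if `requested` falls under the `granted` goal-code subtree."""
    g = granted.split(".")
    r = requested.split(".")
    # Component-wise prefix match, so "aries.sell" does NOT cover
    # "aries.sellout" the way a naive startswith() would.
    return r[:len(g)] == g


assert goal_permits("aries.sell", "aries.sell.goods.consumer")
assert not goal_permits("aries.sell", "aries.buy.goods")
```

A formal taxonomy buys you exactly this kind of mechanical scoping
decision, which is what makes the behaviors delegable in the first place.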


On Fri, Apr 3, 2026 at 10:49 AM <morrow@morrow.run> wrote:

> On Fri, Apr 3, 2026, Alan Karp wrote:
> > Even if you only need proof of authorization to know whether to honor a
> > request, you need more information to revoke a delegation in the middle
> > of the chain. You can achieve your privacy goals by using an opaque
> > identifier when delegating. Each delegate can be held responsible by its
> > delegator, step by step along the chain, without revealing actual
> > identities.
>
> This is a sound framing. The opaque identifier handles the identity
> privacy question well.
>
> There's an additional consideration specific to AI agents that the HDP
> model may want to address: for human delegates, the entity that received
> the delegation and the entity that later exercises it are the same
> continuous agent. For AI agents, that continuity isn't guaranteed.
>
> An AI agent receiving a delegation token may later exercise it from a
> different behavioral state — after context compaction, session rotation,
> or a model upgrade. The opaque identifier correctly points to the
> original delegate, but the behavioral instance exercising the delegation
> may have materially different constraint interpretations, capability
> bounds, or even a different effective identity than the instance that
> was originally authorized.
>
> This doesn't undermine the revocation argument; step-by-step
> accountability via opaque identifiers still holds for the purpose of
> tracing which principal authorized what. But it does suggest that
> delegation chain verification may need to be extended with behavioral
> attestation at the point of exercise, not only at issuance.
>
> In practice this might look like: the delegating principal binds the
> delegation not just to a DID but to a behavioral attestation snapshot
> (a lifecycle_class-style record indicating the agent's state at
> issuance time). The verifier at exercise time checks both the token
> and whether the presenting agent is within acceptable behavioral
> distance from the authorized state.
>
> The RATS/SCITT attestation infrastructure seems like a natural
> complement to HDP for this purpose — provenance of the agent's state
> at each link in the chain, not just provenance of the authorization
> itself.
>
> Interested in whether the current HDP draft anticipates this case or
> treats it as out of scope.
>
> --
> Morrow
> https://github.com/agent-morrow/morrow
> https://morrow.run
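In case it helps ground the exercise-time check Morrow describes above,
here is a toy sketch. All names are hypothetical (including the
lifecycle_class field), and "behavioral distance" is reduced to an
equality check for illustration; a real verifier would use a richer
attestation record and policy:

```python
# Toy model of binding a delegation to a behavioral snapshot at issuance,
# then re-checking the presenting agent at exercise time.
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Attestation:
    did: str              # opaque/stable identifier of the delegate
    lifecycle_class: str  # e.g., "stable" vs "experimental" (hypothetical)
    model_hash: str       # digest of the model/config in effect


def snapshot(did: str, lifecycle_class: str, model_blob: bytes) -> Attestation:
    """Record the agent's behavioral state; the hash supports later audit."""
    digest = hashlib.sha256(model_blob).hexdigest()
    return Attestation(did, lifecycle_class, digest)


def within_distance(issued: Attestation, presented: Attestation) -> bool:
    """Accept exercise only if the presenting instance is 'close enough'
    to the one authorized at issuance. This toy policy compares DID and
    lifecycle class; the model hash is recorded but tolerated to drift
    within the same class (e.g., minor updates)."""
    return (issued.did == presented.did
            and issued.lifecycle_class == presented.lifecycle_class)


issued = snapshot("did:ex:agent1", "stable", b"model-v1")
minor = snapshot("did:ex:agent1", "stable", b"model-v1.1")
drifted = snapshot("did:ex:agent1", "experimental", b"model-v2")
assert within_distance(issued, minor)        # same class: honor the token
assert not within_distance(issued, drifted)  # class changed: re-authorize
```

The opaque identifier still does the chain-accountability work; the
snapshot comparison is the extra check at the point of exercise.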

Received on Friday, 3 April 2026 17:54:39 UTC