- From: Alan Karp <alanhkarp@gmail.com>
- Date: Fri, 3 Apr 2026 09:35:31 -0700
- To: sankarshan <sankarshan.mukhopadhyay@gmail.com>
- Cc: Siri Dalugoda <siri@helixar.ai>, public-credentials <public-credentials@w3.org>
- Message-ID: <CANpA1Z20nRWA+85_wcMbe455taLYtj_P9Dy6eTM6ao+b-isibQ@mail.gmail.com>
On Fri, Apr 3, 2026 at 5:13 AM sankarshan <sankarshan.mukhopadhyay@gmail.com> wrote:

> The VC alignment is interesting. Expressing agent authority as a
> Verifiable Presentation derived from the root token makes sense. It
> may also be worth exploring whether parts of the chain, or claims
> derived from it, can be selectively disclosed rather than always
> sharing the full delegation lineage, especially in cases where only
> proof of authorization is needed rather than full traceability.

Even if you only need proof of authorization to know whether to honor a request, you need more information to revoke a delegation in the middle of the chain. You can achieve your privacy goals by using an opaque identifier when delegating. Each delegate can be held responsible by its delegator, step by step along the chain, without revealing actual identities.

--------------
Alan Karp

On Fri, Apr 3, 2026 at 5:13 AM sankarshan <sankarshan.mukhopadhyay@gmail.com> wrote:
>
> The VC alignment is interesting. Expressing agent authority as a
> Verifiable Presentation derived from the root token makes sense. It
> may also be worth exploring whether parts of the chain, or claims
> derived from it, can be selectively disclosed rather than always
> sharing the full delegation lineage, especially in cases where only
> proof of authorization is needed rather than full traceability.
>
> /sankarshan
>
> On Tue, 31 Mar 2026 at 01:30, Siri Dalugoda <siri@helixar.ai> wrote:
> >
> > Hi Credentials CG Team,
> >
> > I'd like to share a protocol that addresses a gap in the agentic AI
> > space that I believe is directly relevant to this group's work on
> > Verifiable Credentials and decentralized identity.
> >
> > HDP (Human Delegation Provenance Protocol) defines a signed token
> > chain that creates a cryptographic audit trail from an authorising
> > human to every AI agent acting downstream.
> > The principal identity model supports id_type: "did" natively,
> > meaning a W3C DID can be used as the root authorising identity,
> > binding HDP delegation chains to existing decentralised identity
> > infrastructure.
> >
> > IETF draft:
> > https://datatracker.ietf.org/doc/draft-helixar-hdp-agentic-delegation/
> > Spec: https://helixar.ai/about/labs/hdp/
> > Repository: https://github.com/Helixar-AI/HDP
> >
> > Key properties:
> > - Ed25519 signatures over RFC 8785 canonical JSON
> > - Fully offline verification, no registry or network dependency
> > - DID-compatible principal binding at the root token level
> > - Compact enough for HTTP header transport (X-HDP-Token)
> >
> > I see a natural alignment between HDP's delegation model and the VC
> > data model, specifically around how an AI agent's authority could be
> > expressed as a Verifiable Presentation derived from the root
> > delegation token. I'm actively exploring this in the next draft
> > revision and would welcome input from this group on how best to
> > formalise that binding.
> >
> > Happy to answer questions or take feedback on the draft.
> >
> > Best regards,
> > Siri
> > Helixar Limited
>
> --
> sankarshan mukhopadhyay
> <https://about.me/sankarshan.mukhopadhyay>
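The opaque-identifier idea in Alan's reply can be sketched in a few lines. This is a hypothetical illustration, not the HDP wire format: the field names (parent, sub, scope) and the helpers delegate/verify_chain are invented for this sketch, HMAC-SHA256 stands in for Ed25519 so it runs without a crypto dependency, and canonical() only approximates RFC 8785 (sorted keys, minimal separators; the RFC also fixes number and string serialization not reproduced here).

```python
# Sketch: delegation chain with opaque per-delegate identifiers.
# Each delegator mints an opaque id for its delegate, so a verifier can
# check authorization (and a delegator can revoke its own delegate)
# without any real identity below the root being revealed.
import hashlib
import hmac
import json
import secrets

def canonical(payload: dict) -> bytes:
    # Approximation of RFC 8785 canonical JSON: sorted keys, no whitespace.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

def sign(key: bytes, payload: dict) -> str:
    # HMAC-SHA256 stand-in for an Ed25519 signature over canonical JSON.
    return hmac.new(key, canonical(payload), hashlib.sha256).hexdigest()

def delegate(parent_token: dict, parent_key: bytes, scope: list) -> tuple:
    # The delegator issues a token for an opaque identifier it minted;
    # only the delegator can map that id back to a real party, so
    # responsibility flows step by step along the chain.
    opaque_id = secrets.token_hex(8)
    delegate_key = secrets.token_bytes(32)
    payload = {
        "parent": parent_token["payload"]["sub"],
        "sub": opaque_id,
        "scope": scope,
    }
    return {"payload": payload, "sig": sign(parent_key, payload)}, delegate_key

def verify_chain(tokens: list, issuer_keys: list) -> bool:
    # Offline verification: each token must carry a valid signature from
    # its issuer, and each token's parent must be the previous subject.
    prev_sub = None
    for token, key in zip(tokens, issuer_keys):
        if sign(key, token["payload"]) != token["sig"]:
            return False
        if prev_sub is not None and token["payload"]["parent"] != prev_sub:
            return False
        prev_sub = token["payload"]["sub"]
    return True
```

A verifier holding the chain and the issuer keys can accept or reject a request with no registry lookup, and a delegator in the middle can revoke only its own link because it knows which opaque identifier it issued.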
Received on Friday, 3 April 2026 16:35:48 UTC