Re: LLMs and Agents usage in the CCG

po 13. 4. 2026 v 15:59 odesílatel Michael Herman (Trusted Digital Web) <
mwherman@parallelspace.net> napsal:

> How do you enforce traceability in terms of every artifact an agent
> produces?
>

Each contributor signs the W3C contributor agreement on signup, and is then
responsible for checking their own contributions. However, I have already
seen copyrighted or unlicensed material enter CGs. It's an issue.


>
> ------------------------------
> *From:* Christian Hommrich <christian.hommrich@gmail.com>
> *Sent:* Thursday, April 9, 2026 3:36:18 PM
> *To:* public-credentials@w3.org <public-credentials@w3.org>
> *Subject:* Re: LLMs and Agents usage in the CCG
>
> Daniel's point about delegation and credentials for AI members is the
> crux of this.
>
> We've been working on did:trail
> (https://github.com/trailprotocol/trail-did-method) — a W3C DID method
> for AI agent identity. The core idea: the deploying organization
> registers and signs for its agents, creating a verifiable
> accountability chain that traces back to a known human.
>
> Yesterday's Anthropic Managed Agents launch made the gap concrete:
> platform-hosted agents are dynamically provisioned per session — no
> persistent identity in the classical sense. We posted a spec extension
> proposal today addressing this directly:
> https://github.com/trailprotocol/trail-did-method/discussions/10
>
> The accountability model is the same whether the agent is on
> Anthropic, Azure, or self-hosted. The deployer is always accountable.
> The credential is verifiable without platform cooperation.
>
> Happy to discuss whether this fits what the CCG is looking for.
>
> Christian Hommrich
> TRAIL Protocol Initiative
> https://trailprotocol.org
>
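The accountability chain Christian describes — a deploying organization
signs for its agents, and the resulting credential is verifiable without
platform cooperation — can be sketched as follows. This is a hypothetical
illustration, not the did:trail spec: real DID methods use asymmetric
signatures (e.g. Ed25519) so verifiers only need the org's published public
key, whereas this stdlib-only sketch substitutes HMAC (a shared key) purely
to make the chain structure visible. All identifiers and key names here are
made up.

```python
# Hypothetical sketch of a deployer-to-agent accountability chain.
# HMAC stands in for an asymmetric signature (e.g. Ed25519) so the
# example runs with the standard library alone; in a real DID method
# the verifier would hold only the org's public key.
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    """Sign a canonicalized JSON payload with the given key."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, payload: dict, sig: str) -> bool:
    """Check a signature in constant time."""
    return hmac.compare_digest(sign(key, payload), sig)

# 1. A known human / organization controls the root key.
org_key = b"acme-org-root-key"

# 2. The org registers an agent by signing a credential that binds
#    the agent identifier to the deployer identifier.
agent_credential = {
    "agent": "did:trail:example:agent-1",
    "deployer": "did:trail:example:acme-org",
}
credential_sig = sign(org_key, agent_credential)

# 3. A verifier with the org's key material checks the binding
#    without any help from the hosting platform.
assert verify(org_key, agent_credential, credential_sig)

# Tampering with the deployer field breaks the chain.
forged = dict(agent_credential, deployer="did:trail:example:mallory")
assert not verify(org_key, forged, credential_sig)

print("accountability chain verified")
```

The key property shown: the credential stays valid or invalid based only on
the org's key and the payload, which is why hosting the agent on Anthropic,
Azure, or self-hosted infrastructure makes no difference to verification.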

Received on Monday, 13 April 2026 14:10:17 UTC