Re: Introducing AIR - Open Trust Infrastructure for AI Agents

  Alejandro,

  Thanks for surfacing both of these — I hadn't seen OAID and appreciate you pulling them together for the group.

  On AIR: The registry API is clean and the five-component trust framework (Provenance, Behavioral, Transparency, Security,
  Peer Attestations) is a genuinely thoughtful conceptual model — it's asking the right questions about what trust for
  an agent should look like. The W3C DID and JSON-LD integration is also the right standards lineage to be building on.

  Where it's still early: the behavioral and peer attestation scores are currently flat defaults rather than live data
  pipelines, and the Ed25519 verification referenced in the spec isn't yet wired into the registration flow — agents
  currently register as self-verified. The grade thresholds also differ slightly across the spec, trust methodology doc,
  and API implementation, which suggests the scoring model is still stabilizing. None of that is unusual for a v0.1 —
  it's a solid foundation to build on.

  On OAID: The DID design is genuinely interesting. The CREATE2 deterministic address derivation is elegant, and the
  dual-domain Ed25519 signing spec (separate domains for HTTP request auth and agent-to-agent messaging) is properly
  done with test vectors. The Rust MCP server with key isolation is more production-ready than most projects at this
  stage.
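
  For anyone who hasn't worked with domain-separated signing: the point is that a signature produced in one context can
  never be replayed in another. A minimal sketch of the pattern in Python, using HMAC-SHA256 as a stand-in for Ed25519
  (the domain tag strings here are illustrative, not OAID's actual values):

```python
import hashlib
import hmac

# Illustrative domain tags -- OAID's real tag strings and wire format differ.
HTTP_DOMAIN = b"example/http-auth/v1"
MSG_DOMAIN = b"example/agent-msg/v1"

def sign(key: bytes, domain: bytes, payload: bytes) -> bytes:
    # Prefix the payload with its domain tag before signing, so a signature
    # made for HTTP request auth is invalid as an agent-to-agent message.
    return hmac.new(key, domain + b"\x00" + payload, hashlib.sha256).digest()

def verify(key: bytes, domain: bytes, payload: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, domain, payload), sig)

key = b"shared-demo-key"
payload = b'{"action": "transfer", "amount": 10}'

http_sig = sign(key, HTTP_DOMAIN, payload)
assert verify(key, HTTP_DOMAIN, payload, http_sig)     # valid in its own domain
assert not verify(key, MSG_DOMAIN, payload, http_sig)  # rejected cross-domain
```

  With a real Ed25519 keypair the structure is the same; only the primitive changes.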

  That said, after reading through the source code, one thing worth flagging: TrustPayment.sol is a fee-collection
  contract — it accepts $10 USDC, emits an event, and accumulates funds withdrawable by the admin. The trust score
  itself lives in a centralized REST API at api.openagentid.org. There's no staking, no slashing, and no escrow against
  specific transaction outcomes. The Sybil resistance is real but limited: an agent's score means "paid $10 and hasn't
  been reported," not "completed X transactions reliably." It's an identity cost, not a behavioral signal. Also worth
  noting it's currently testnet-only on Base Sepolia — the mainnet addresses on their website aren't reflected in the
  deployments.md in their repo.

  On where our suite fits relative to both: Both AIR and OAID are identity-centric — they answer "who is this agent?"
  What neither addresses is "what did this agent do, provably?" or "how do we hold funds in escrow pending a transaction
  outcome?" Those are the gaps our suite covers:

  - AIVS produces a hash-chained, Ed25519-signed audit log per session — every action is linked to the previous one so
  the entire sequence is tamper-evident and verifiable offline, no API call required. Neither AIR nor OAID has an
  analog to this.
  - VCAP is a three-state cryptographic escrow (HELD → RELEASED / REFUNDED) tied to specific transaction outcomes
  between two parties. Neither project addresses commerce at this layer.
  - SwarmScore derives a 1000-point composite reputation from actual session execution data (AIVS) and payment
  completion rates (VCAP) over a 90-day rolling window, volume-scaled — rather than a static registration event or a
  fee payment.
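
  To make the hash-chaining point concrete, here's a minimal sketch of the idea in Python (illustrative field names,
  not the actual AIVS entry format, and omitting the Ed25519 signature layer):

```python
import hashlib
import json

GENESIS = "0" * 64  # chain anchor for the first entry

def append_entry(log: list, action: dict) -> None:
    # Each entry commits to the hash of the previous entry, so no action
    # can be altered, inserted, or dropped without breaking the chain.
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "action": action}, sort_keys=True)
    log.append({"prev": prev, "action": action,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    # Fully offline verification: recompute every hash in sequence.
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev, "action": entry["action"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"step": 1, "tool": "search"})
append_entry(log, {"step": 2, "tool": "pay"})
assert verify_chain(log)

log[0]["action"]["tool"] = "delete"  # tamper with an early entry
assert not verify_chain(log)         # the whole chain fails verification
```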
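
  And the escrow comparison as a toy three-state machine (illustrative only; in VCAP each transition is authorized
  cryptographically by the relevant party rather than by a bare method call):

```python
from enum import Enum, auto

class EscrowState(Enum):
    HELD = auto()
    RELEASED = auto()
    REFUNDED = auto()

class Escrow:
    """Toy escrow: funds enter HELD and settle exactly once."""

    def __init__(self, amount: int):
        self.amount = amount
        self.state = EscrowState.HELD

    def _settle(self, target: EscrowState) -> None:
        # RELEASED and REFUNDED are terminal; only HELD may transition.
        if self.state is not EscrowState.HELD:
            raise ValueError(f"escrow already settled as {self.state.name}")
        self.state = target

    def release(self) -> None:  # transaction outcome confirmed
        self._settle(EscrowState.RELEASED)

    def refund(self) -> None:   # outcome failed or timed out
        self._settle(EscrowState.REFUNDED)

e = Escrow(amount=1000)
e.release()
assert e.state is EscrowState.RELEASED
try:
    e.refund()                  # terminal state rejects further transitions
except ValueError:
    pass
```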

  I'd frame all three as potentially complementary: OAID's DID format or AIR's trust taxonomy could plausibly serve as
  the identity foundation that AIVS audit logs anchor to. The interesting design question for the group is whether
  identity and behavioral integrity belong in one spec or two.

  Happy to share the AIVS spec and verify.py verifier if useful for comparison.

  Ben Stone
  SwarmSync.AI
  benstone@swarmsync.ai

On Thu, Apr 9, 2026, at 12:17 PM, Alejandro Seaah wrote:
> Hi Peter,
> 
> Thanks for sharing AIR — great to see more work on agent identity infrastructure. This reminded me of another project I came across recently that takes a somewhat different approach to the same problem:
> 
> Open Agent ID (OAID) — https://openagentid.org
> 
> A few things I found interesting about their design:
> - Instant DID issuance — they use deterministic CREATE2 derivation so an agent gets a valid W3C DID (did:oaid:{chain}:{address}) locally in milliseconds, without waiting for on-chain confirmation. Chain anchoring is batched asynchronously.
> - Economic trust signals — instead of behavioral scoring, trust actions (verification, reporting, appeals) are tied to on-chain payments, which they argue creates Sybil resistance at the protocol level.
> - Identity + messaging in one layer — the registry includes signed agent-to-agent messaging (Ed25519, domain-separated), so identity and communication share the same trust foundation rather than being separate systems.
> - MCP server for AI agents — lets agents like Claude or GPT manage their own credentials through tool calls, with key isolation.
> 
> Spec and source are on GitHub: https://github.com/openagentid (Apache 2.0)
> 
> Might be worth this group looking at both AIR and OAID together — they seem to make different trade-offs (behavioral scoring vs. economic staking, standalone identity vs. identity+communication) that could inform the broader discussion on what an agent trust layer should look like.
> 
> Best,
> Alejandro
> 
> 
> On Tue, Apr 7, 2026 at 9:10 PM Kwang Wook Ahn <ahnkwangwook@gmail.com> wrote:
>>> Dear colleagues,
>>> 
>>> 
>>> 
>>> I'd like to introduce the Agent Identity Registry (AIR) — an open-source project building neutral identity and trust scoring infrastructure for AI agents, natively on W3C DIDs and Verifiable Credentials.
>>> 
>>> We see our work as complementary to this group's efforts on agent discovery and communication protocols. AIR addresses the trust layer: given that agents can find each other, should they trust each other? Our registry provides verifiable identity (AIR IDs linked to W3C DIDs) and a transparent five-component trust score (Provenance, Behavioral, Transparency, Security, Peer Attestations) scored 0-1000.
>>> 
>>> What's live today:
>>> 
>>>  • Working registry API with agent registration and trust scoring: https://agentidentityregistry.org
>>>  • Full specification, trust methodology, and governance docs: https://github.com/ahnkwangwook-oss/agent-identity-registry
>>>  • Recent public comment submitted to NIST CAISI on AI agent identity
>>> We’d be glad to share more detail, answer questions, or discuss alignment opportunities via email. 
>>> Best regards, 
>>> 
>>> Kwangwook (Peter) Ahn 
>>> 
>>> Agent Identity Registry Foundation 
>>> 
>>> ahnkwangwook@gmail.com | foundation@agentidentityregistry.org https://agentidentityregistry.org
>>> 

Received on Friday, 10 April 2026 19:00:32 UTC