- From: Ben Stone <benstone@swarmsync.ai>
- Date: Mon, 13 Apr 2026 12:59:56 -0600
- To: paoladimaio10@googlemail.com, "Kwang Wook Ahn" <ahnkwangwook@gmail.com>
- Cc: public-agentprotocol@w3.org
- Message-Id: <b3fbb069-f120-463f-bb16-fcca1392ed53@app.fastmail.com>
Paola,
Thank you for sharing this — it's the most rigorous independent assessment of this space I've seen written up in one
place. Your framing of the governance gap (aspiration vs. demonstrated) and the peer attestation gaming problem are
exactly the right questions to press on.
On the comparative analysis: I'd add one gap to the trust scoring dimension. The analysis notes that "only AIR
provides a graduated trust score" — that's accurate for the published initiatives you surveyed, but I want to flag
that SwarmSync's SwarmScore takes a structurally different approach worth including.
The distinction: AIR derives trust scores from assertions and attestations. SwarmScore derives them from verified
execution records — hash-chained, Ed25519-signed audit logs of actual agent sessions (AIVS), and cryptographic escrow
settlement outcomes (VCAP). The score is a 1000-point composite: Technical Execution (up to 400 points) sourced from
session integrity data, and Commercial Reliability (up to 600 points) sourced from payment outcomes. Both inputs are
tamper-evident and offline-verifiable without contacting any central registry.
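For concreteness, here is a minimal sketch of the two ideas: a hash-chained audit log and the 1000-point composite. All names and weights here are my own illustration of the structure, not the actual AIVS/VCAP wire formats, and I've elided the Ed25519 signing step (in the real design each entry would also carry a signature over its hash):

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record, linking it to the previous entry's hash.

    The first entry links to a well-known zero hash; every later entry
    commits to its predecessor, so rewriting history changes every
    subsequent hash.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({"prev": prev_hash, "record": record, "hash": digest})

def swarm_score(technical_ratio, commercial_ratio):
    """Illustrative composite: up to 400 pts from session-integrity data,
    up to 600 pts from settlement outcomes, both in [0, 1]."""
    return round(400 * technical_ratio + 600 * commercial_ratio)

chain = []
append_record(chain, {"session": "s1", "outcome": "completed"})
append_record(chain, {"session": "s2", "outcome": "completed"})

print(swarm_score(0.9, 0.95))  # -> 930
```

The point of the sketch is only the shape: the score's inputs are entries in a tamper-evident log, not free-standing assertions.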
This matters for the gaming question you raise. A score derived from verified behavioral data is structurally harder
to game than a score derived from peer attestations, because you cannot forge the underlying session records without
breaking the hash chain.
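Offline verification is then just a linear walk over the chain: recompute each entry's hash from its body and check the back-link. Mutating any earlier record invalidates everything after it. Again a sketch under the same illustrative record format as above, not the actual AIVS verification procedure:

```python
import hashlib
import json

def entry_hash(prev, record):
    """Deterministic hash over an entry body (canonical JSON)."""
    body = {"prev": prev, "record": record}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify_chain(chain):
    """Walk the chain; return False on any broken link or altered record."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev:
            return False
        if entry_hash(entry["prev"], entry["record"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Build a two-entry chain, then tamper with the first record.
chain, prev = [], "0" * 64
for rec in ({"session": "s1"}, {"session": "s2"}):
    h = entry_hash(prev, rec)
    chain.append({"prev": prev, "record": rec, "hash": h})
    prev = h

assert verify_chain(chain)
chain[0]["record"]["session"] = "forged"  # rewrite history
assert not verify_chain(chain)            # detected: hash no longer matches
```

No registry lookup is needed for this check, which is what I mean by offline-verifiable.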
The IETF drafts (draft-stone-aivs, draft-stone-vcap, draft-stone-atep) are on Datatracker if you'd like to include
them in a future comparative revision. I'd welcome your critical read — the same rigorous framing you applied to AIR
is exactly what these specs deserve.
— Ben Stone
SwarmSync.AI
benstone@swarmsync.ai
On Sun, Apr 12, 2026, at 9:55 PM, Paola Di Maio wrote:
> Kwang
> sorry I did not attend your presentation, and thanks for sharing. Timely and important
> Based on the specs/repo
> Here are snips from a commentary I published as a technical note/advisory, which perhaps you'd like to comment on?
> (essentially praising the effort but challenging some of the claims)
> In particular I am interested in your feedback about the comparative analysis of related initiatives
> Best
> Paola Di Maio
>
> ...excerpt from advisory......
> The problem AIR addresses is both obvious and surprisingly unsolved. AI agents -- autonomous software entities that can take actions, make decisions, call APIs, execute transactions, and interact with other agents -- are proliferating at a rate that makes the early web's growth look gradual. Every major cloud provider, every enterprise software company, every AI startup is deploying agents. These agents book flights, manage customer service conversations, execute trades, coordinate supply chains, write and deploy code, and interact with each other in multi-agent workflows. Yet there is no standardized way for one agent to verify the identity of another. There is no equivalent of the domain name system for agents. There is no equivalent of a TLS certificate. There is no way for a human or a system to look up an agent and determine, with cryptographic certainty, who built it, what it is authorized to do, and whether it has a history of reliable behavior.
>
> ...with broad industry adoption. AIR claims alignment with the NIST AI Agent Standards Initiative and states that its trust scoring is designed to be EU AI Act ready.
>
> The governance model makes strong claims. The initiative is structured as an independent nonprofit. No single AI company controls the standard. All specification changes go through public review via GitHub Discussions. Scoring algorithms, verification processes, and registry rules are published openly. The project commits to data portability -- an agent's identity and history are portable across systems, with no vendor lock-in.
>
> So far, so good. The architecture is sound. The standards choices are correct. The trust score is genuinely clever -- most competing approaches give you a binary verified/unverified signal, whereas a graduated score with auditable components is more useful for real-world decision-making. But several things deserve scrutiny.
>
> The project is early-stage in ways that the polished website somewhat obscures. The GitHub repository is under a personal account (ahnkwangwook-oss), not an organizational one. The advisory board page is listed but not populated. The SDKs are "coming soon." Third-party security audits are "planned." The specification exists but there is no evidence of production deployment or real-world adoption at scale. This is not unusual for a new initiative -- every project starts somewhere -- but the framing as a "global standard" and "foundation" is ahead of where the project actually is. A domain registered in 2026, a specification on GitHub, and a well-designed landing page do not yet constitute a standard. They constitute a proposal.
>
> The peer attestation dimension (15 percent of the trust score) creates a social graph of trust among agents, which is powerful. But without robust anti-gaming mechanisms -- which are not described in the current documentation -- it is vulnerable to well-known attacks. Sybil attacks, in which a single operator creates multiple fake agents that endorse each other, would inflate trust scores without reflecting genuine trustworthiness. Collusion rings, in which groups of real but coordinated agents systematically endorse each other, would produce the same effect. Adversarial attestation, in which competing agents file negative reports to suppress a rival's score, would weaponize the system against legitimate operators. Credit rating agencies spent decades developing methodologies to resist exactly these kinds of manipulation, and they still get gamed. A new nonprofit starting from scratch faces the same challenges without the institutional depth or the regulatory backing that credit rating agencies (eventually) received.
>
> The governance claim -- independent, no single company controls it -- is the right aspiration but it is currently aspirational rather than demonstrated. A nonprofit controlled by a single founder or a small team is not the same as a multi-stakeholder governance body. The comparison to ICANN or the W3C is premature. Those organizations earn their governance credibility through years of contested, messy, multi-party decision-making. The W3C process, for example, involves formal working groups with chartered scopes, public comment periods, implementation requirements, and director review. AIR's governance currently consists of GitHub Discussions. That is a starting point, not a governance framework.
>
> AIR is a well-designed early-stage project solving a real problem, with governance aspirations that exceed its current institutional maturity, entering a fragmented landscape where the winner will be determined by adoption momentum rather than technical superiority.
>
> And the landscape is genuinely fragmented. At least seven other initiatives are competing for overlapping territory. Understanding where AIR fits requires a comparative view.
>
> THE COMPETING APPROACHES
>
> 1. Agent Name Service (ANS) -- Developed with GoDaddy, described in arXiv:2505.10609. ANS takes a DNS-inspired approach, providing a globally unique naming system for AI agents. It uses Public Key Infrastructure (PKI) certificates for verifiable identity and a modular Protocol Adapter Layer that supports A2A, MCP, ACP, and other communication standards. The DNS analogy is its strength -- DNS is the single most successful naming infrastructure in the history of computing, and building on its architectural patterns gives ANS a familiar, proven model. ANS focuses on discovery and naming rather than trust scoring. It tells you who an agent is and where to find it. It does not tell you how much you should trust it. AIR's trust score fills a gap that ANS leaves open.
>
> 2. Google A2A Agent Cards -- Google's Agent-to-Agent protocol uses self-describing JSON capability manifests that agents publish. This is a decentralized approach -- each agent advertises its own capabilities without a central registry. The advantage is simplicity and low coordination cost. The disadvantage is that there is no independent verification. An agent describes itself, and you take its word for it. A2A Agent Cards are capability advertisements, not identity credentials. They solve the discovery problem ("what can this agent do") but not the trust problem ("should I believe what this agent says about itself").
>
> 3. Microsoft Entra Agent ID -- The enterprise SaaS approach. Entra Agent ID provides a centralized directory with policy enforcement, zero-trust integration, and tight coupling to the Microsoft ecosystem. For enterprises already invested in Microsoft identity infrastructure, this is the path of least resistance. It offers strong security and governance within the enterprise boundary but does not extend to cross-organizational or open-internet agent interactions. It is a walled garden, not a public utility. For organizations that need to trust agents from outside their own ecosystem, Entra Agent ID is insufficient on its own.
>
> 4. NANDA (Networked Agents and Decentralized AI) -- A decentralized registry that maps agent identifiers to cryptographically verifiable AgentFacts -- capabilities, endpoints, and trust metadata. NANDA supports privacy-preserving discovery and dynamic updates with short-lived credentials (often under 5 minutes) and real-time revocation. Its AgentFacts documents are signed as W3C Verifiable Credentials v2. NANDA is architecturally the most sophisticated of the competing approaches. It is also the most complex to implement. Its privacy-preserving features make it attractive for regulated industries (healthcare, finance) where agent metadata itself may be sensitive.
>
> 5. Solana Agent Registry -- An on-chain protocol on the Solana blockchain providing verifiable identity, portable reputation, and trust infrastructure for AI agents. Interoperable with ERC-8004 on Ethereum. The blockchain approach offers immutability and permissionless participation but introduces latency, cost, and complexity tradeoffs that may not suit high-frequency agent-to-agent interactions. The crypto ecosystem's enthusiasm for agent registries is real, but blockchain-based identity has struggled to achieve mainstream adoption outside the crypto community despite years of effort.
>
> 6. AGNTCY Agent Directory Service (ADS) -- Uses IPFS content-addressed storage with OCI (Open Container Initiative) artifact alignment and Sigstore-backed integrity verification. This approach treats agent metadata as content-addressed, immutable artifacts -- you look up an agent by the hash of its capability description rather than by a name. The advantage is strong integrity guarantees. The disadvantage is that content-addressed systems are harder for humans to navigate than named systems.
>
> 7. MCP Registry -- The centralized publication of mcp.json descriptors associated with the Model Context Protocol ecosystem. This is the simplest approach -- agents publish JSON files describing their capabilities, and a registry indexes them. It lacks the cryptographic identity guarantees of the other approaches but has the advantage of ecosystem momentum, given MCP's growing adoption.
>
> 8. Credo AI -- An enterprise governance platform that combines agent registration with risk assessment, compliance mapping, and automated control suggestions. Credo AI is less a standards initiative than a commercial product, but it addresses the enterprise governance dimension that the standards-focused approaches largely ignore.
>
> COMPARATIVE ASSESSMENT
>
> The approaches differ along several axes. On the centralization spectrum, Entra Agent ID and MCP Registry are centralized, AIR and ANS are federated nonprofits, A2A and AGNTCY are decentralized, NANDA is decentralized with federation options, and Solana is decentralized on-chain. On the trust dimension, only AIR provides a graduated trust score. NANDA provides cryptographic verification of specific claims. The others provide binary identity verification or none at all. On standards alignment, AIR, NANDA, and ANS build on W3C DIDs and Verifiable Credentials. Entra Agent ID uses Microsoft's proprietary identity stack. Solana uses blockchain-native primitives. A2A uses Google-defined JSON schemas. On governance, AIR and ANS claim nonprofit independence. MCP Registry is Anthropic-adjacent. Entra Agent ID is Microsoft-controlled. Solana is protocol-governed. AGNTCY is community-governed. On regulatory readiness, AIR claims EU AI Act and NIST alignment. Entra Agent ID benefits from Microsoft's existing regulatory relationships. The others are largely silent on regulatory compliance.
>
> A survey paper published on arXiv (2508.03095) by Singh, Ehtesham, and colleagues systematically compared five of these approaches and concluded that no single approach dominates across all dimensions. The paper recommended that the emerging Internet of AI Agents will require verifiable identity, adaptive discovery flows, and interoperable capability semantics -- likely drawing on multiple approaches rather than converging on a single winner. The paper also made a governance observation that resonates with the AIR assessment: the most resilient internet infrastructure -- DNS, HTTP, email -- emerged from open, multi-stakeholder governance. Proprietary platforms serve specific enterprise needs, but the broader agent ecosystem requires community-governed registries that can evolve independently of any single company.
>
> This is exactly the gap that the W3C AI Knowledge Representation Community Group has identified as "Layer Zero" -- the pre-negotiation capability advertisement layer that sits beneath all agent communication protocols. Before two agents can communicate via A2A, MCP, or any other protocol, they need to discover each other, verify each other's identity, and assess each other's trustworthiness. That Layer Zero infrastructure does not yet exist in a standardized form. AIR, ANS, NANDA, and the others are all competing proposals for different slices of it.
>
> THE BOTTOM LINE
>
> AIR is a well-designed initiative solving a real and urgent problem. Its standards choices (W3C DIDs, Verifiable Credentials, IETF RATS) are correct. Its trust scoring methodology is the most differentiated feature in the competitive landscape -- no one else is doing graduated, auditable, multi-dimensional trust assessment for agents. Its nonprofit governance model is the right institutional form for infrastructure that should be neutral.
>
> But the project is early. The governance is aspirational rather than demonstrated. The peer attestation system needs anti-gaming mechanisms that are not yet specified. The competitive landscape is fragmented, with well-resourced players (Google, Microsoft, Solana) pursuing their own approaches. And the history of internet infrastructure suggests that the winning standard is rarely the first or the most technically elegant -- it is the one that achieves adoption momentum through a combination of technical merit, institutional credibility, and ecosystem support.
>
> AIR has the technical merit. It has the right institutional aspiration. It does not yet have the ecosystem support or the demonstrated governance to claim the role it is positioning itself for. Worth watching. Worth engaging with. Not yet worth building critical infrastructure on.
>
> The agent identity layer will be built. The question is by whom and under what governance. The answer will determine whether the agentic era has a trust infrastructure comparable to the web's TLS certificate system -- or whether it remains the fragmented, unverified environment it is today.
>
> Sources: agentidentityregistry.org, arXiv 2505.10609 (ANS) <https://arxiv.org/abs/2505.10609>, arXiv 2508.03095 (Registry Survey) <https://arxiv.org/abs/2508.03095>, Cloud Security Alliance <https://cloudsecurityalliance.org/blog/2025/03/11/agentic-ai-identity-management-approach>
>
> online on Factiva Dow Jones (Document CWRE000020260411em4b00002)
>
> On Tue, Apr 7, 2026 at 9:10 PM Kwang Wook Ahn <ahnkwangwook@gmail.com> wrote:
>>> Dear colleagues,
>>>
>>> I'd like to introduce the Agent Identity Registry (AIR) — an open-source project building neutral identity and trust scoring infrastructure for AI agents, built natively on W3C DIDs and Verifiable Credentials.
>>>
>>> We see our work as complementary to this group's efforts on agent discovery and communication protocols. AIR addresses the trust layer: given that agents can find each other, should they trust each other? Our registry provides verifiable identity (AIR IDs linked to W3C DIDs) and a transparent five-component trust score (Provenance, Behavioral, Transparency, Security, Peer Attestations) scored 0-1000.
>>>
>>> What's live today:
>>>
>>> • Working registry API with agent registration and trust scoring: https://agentidentityregistry.org
>>> • Full specification, trust methodology, and governance docs: https://github.com/ahnkwangwook-oss/agent-identity-registry
>>> • Recent public comment submitted to NIST CAISI on AI agent identity
>>> We’d be glad to share more detail, answer questions, or discuss alignment opportunities via email.
>>> Best regards,
>>>
>>> Kwangwook (Peter) Ahn
>>>
>>> Agent Identity Registry Foundation
>>>
>>> ahnkwangwook@gmail.com | foundation@agentidentityregistry.org | https://agentidentityregistry.org
>>>
Received on Monday, 13 April 2026 19:00:23 UTC