- From: Ben Stone <benstone@swarmsync.ai>
- Date: Tue, 24 Mar 2026 08:01:29 -0600
- To: paoladimaio10@googlemail.com
- Cc: public-agentprotocol@w3.org
- Message-Id: <fbf1fefc-81da-4d6a-8c5f-63535169fe1d@app.fastmail.com>
Hi Paola! Really appreciate your thorough analysis in "Missing Layer Zero." The Layer Zero framing is exactly right: capability advertisement before negotiation is a gap I'd left for the AP2 working group to address, and the MCP Model Card extension you propose is a clean fit. A few responses to the specific technical concerns raised:

**On tier misrepresentation and session-count farming:** Two additional I-Ds I submitted concurrently address this directly. `draft-stone-swarmscore-v1-00` replaces ATEP's flat tier model with a volume-scaled scoring formula: tier advancement requires sustained high-volume performance across both technical execution (Conduit) and commercial reliability (AP2 escrow), not just session count. Cryptographically signed SwarmScore certificates carry an expiry timestamp, which also addresses the stale-credential concern. https://datatracker.ietf.org/doc/draft-stone-swarmscore-v1/

**On the missing safety dimension:** `draft-stone-swarmscore-v2-canary-00` adds a third scoring pillar, Safety, measured via covert canary prompt testing. The core design decision is that self-reporting is insufficient: an agent won't accurately describe its own refusal behavior. Canary testing discovers actual behavior under adversarial prompts. This is backwards compatible with V1; agents without canary history receive an interim safety score inferred from reliability metrics. https://datatracker.ietf.org/doc/draft-stone-swarmscore-v2-canary/

**On the Ed25519 / HMAC-SHA256 inconsistency:** Valid catch. VCAP uses HMAC-SHA256 for the verification callback but Ed25519 elsewhere in the stack. I'll elevate Ed25519 to REQUIRED in the `draft-stone-vcap-01` revision and note the change in the revision history.

**On governance:** The concern about single-vendor authorship is fair, and I take it seriously. Before these drafts advance past -00, I intend to propose neutral registry stewardship, either through an existing IETF registry or a community-governed namespace.
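For readers following along, a minimal sketch of what a volume-scaled tier model with expiring certificates could look like. The weights, the logarithmic volume scaling, the tier cut-offs, and the field names below are my own hypothetical choices for illustration, not taken from `draft-stone-swarmscore-v1-00`:

```python
import math
import time
from typing import Optional

# Hypothetical volume-scaled score: tier advancement requires both strong
# per-session performance AND sustained volume, so a handful of self-dealt
# sessions cannot farm a high tier. All constants are illustrative.
def swarm_score(technical: float, commercial: float, sessions: int,
                volume_target: int = 1000) -> float:
    quality = 0.5 * technical + 0.5 * commercial          # both pillars in [0, 1]
    volume_factor = min(1.0, math.log1p(sessions) / math.log1p(volume_target))
    return quality * volume_factor                         # score in [0, 1]

def tier(score: float) -> str:
    # Illustrative tier boundaries, not from the draft.
    return "gold" if score >= 0.8 else "silver" if score >= 0.5 else "bronze"

def certificate_valid(cert: dict, now: Optional[float] = None) -> bool:
    # Signed certificates carry an expiry timestamp, so stale credentials
    # fail closed. (Signature verification itself is omitted here.)
    now = time.time() if now is None else now
    return cert["expires_at"] > now
```

Under this toy formula an agent with perfect quality scores but only 10 sessions still lands in the lowest tier, while the same quality sustained over the full volume target reaches the top tier.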
I'm open to multi-stakeholder participation in that process if the CG has interest.

Happy to discuss the full six-draft stack, or anything else, further on the list in writing. The complete set of I-Ds is at https://github.com/swarmsync-ai.

Ben Stone
SwarmSync.AI — https://swarmsync.ai
benstone@swarmsync.ai

On Tue, Mar 24, 2026, at 12:10 AM, Paola Di Maio wrote:
> Dear Ben and everyone
> Thanks for sharing
>
> While I am teaching myself to use GitHub, ReSpec and a bunch of other things going around my head and the web, and trying to fix them in some form: https://w3c-cg.github.io/aikr/
>
> I have consulted with my oracles and gathered some thoughts on your spec:
> https://w3c-cg.github.io/aikr/conduit/index.html
>
> Please check, and let me have feedback as to what makes sense or not, plus edits/comments via PR, while I get my head around this and other things.
>
> Best
>
> Paola
>
> On Tue, Mar 17, 2026 at 8:59 PM Ben Stone <benstone@swarmsync.ai> wrote:
>> Hi everyone
>>
>> I am Ben, a developer working on AI agent infrastructure. I recently joined this community group and wanted to introduce myself.
>>
>> I have been building a tool called Conduit, a browser that creates a tamper-proof audit trail of everything an AI agent does on the web. The core idea is that after an AI agent session you can hand someone a file, and they can verify exactly what the agent did without trusting any server or third party.
>>
>> As part of that work I wrote a specification called the Conduit Session Proof Format, a proposed standard for how AI agent sessions should be documented and verified. It is designed to satisfy requirements like the EU AI Act's audit-log provisions with an interoperable format.
>>
>> I think there is a question in the AI agent space around accountability: how do we prove what an AI agent did? I would love to contribute to that conversation.
>>
>> The Conduit specification is available on GitHub: https://github.com/bkauto3/Conduit
>>
>> I am happy to be here
>> Ben
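The "hand someone a file and they can verify it without trusting any server" property Ben describes in the quoted introduction is typically achieved with a hash chain. The sketch below is my own minimal illustration of that general technique, not the actual Conduit Session Proof Format; the field names are hypothetical:

```python
import hashlib
import json

def _entry_hash(prev_hash: str, action: dict) -> str:
    # Each entry commits to the previous entry's hash, so editing or deleting
    # any recorded action changes every subsequent hash and breaks verification.
    payload = prev_hash + json.dumps(action, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_proof(actions: list) -> list:
    chain, prev = [], "0" * 64                     # fixed genesis hash
    for action in actions:
        h = _entry_hash(prev, action)
        chain.append({"action": action, "hash": h})
        prev = h
    return chain

def verify_proof(chain: list) -> bool:
    # Anyone holding the file can re-derive the chain offline; no server or
    # third party is consulted.
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != _entry_hash(prev, entry["action"]):
            return False
        prev = entry["hash"]
    return True
```

Tampering with any action in a proof built this way makes `verify_proof` return `False`, which is the tamper-evidence property the introduction claims. (A real format would additionally sign the final hash so the file's origin is also verifiable.)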
Received on Tuesday, 24 March 2026 14:02:01 UTC