- From: Paola Di Maio <paola.dimaio@gmail.com>
- Date: Tue, 24 Mar 2026 14:10:05 +0800
- To: Ben Stone <benstone@swarmsync.ai>
- Cc: public-agentprotocol@w3.org
- Message-ID: <CAMXe=Sqqi+Jge9UrZxMa+8Rn=oj_NnwBpLh3s0T7KqPrTPC0Ag@mail.gmail.com>
Dear Ben and everyone,

Thanks for sharing. While I am teaching myself to use GitHub, ReSpec, and a bunch of other things going around my head and the web, and trying to fix them in some form:
https://w3c-cg.github.io/aikr/

I have consulted with my oracles and gathered some thoughts on your spec:
https://w3c-cg.github.io/aikr/conduit/index.html

Please check, and let me have feedback as to what makes sense or not, plus edits/comments via PR, while I get my head around this and other things.

Best
Paola

On Tue, Mar 17, 2026 at 8:59 PM Ben Stone <benstone@swarmsync.ai> wrote:
> Hi everyone,
>
> I am Ben, a developer working on AI agent infrastructure. I recently
> joined this community group and wanted to introduce myself.
>
> I have been building a tool called Conduit: a browser that creates a
> tamper-proof audit trail of everything an AI agent does on the web. The
> core idea is that after an AI agent session you can hand someone a file,
> and they can verify exactly what the agent did without trusting any
> server or third party.
>
> As part of that work I wrote a specification called the Conduit Session
> Proof Format, a proposed standard for how AI agent sessions should be
> documented and verified. It is designed to satisfy requirements such as
> the EU AI Act's audit log provisions with an interoperable format.
>
> I think there is an open question in the AI agent space around
> accountability: how do we prove what an AI agent did? I would love to
> contribute to that conversation.
>
> The Conduit specification is available on GitHub:
> https://github.com/bkauto3/Conduit
>
> I am happy to be here.
> Ben
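For readers following the thread: the "hand someone a file and verify it offline" idea Ben describes can be illustrated with a hash-chained event log. This is only a minimal sketch of that general technique, not the actual Conduit Session Proof Format; all names here (`make_entry`, `append_event`, `verify_log`) are hypothetical and assumed for illustration.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry in the chain

def make_entry(prev_hash: str, event: dict) -> dict:
    """Build a log entry whose hash commits to the event and its predecessor."""
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return {"prev": prev_hash, "event": event,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def append_event(log: list, event: dict) -> None:
    """Append an agent action, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    log.append(make_entry(prev_hash, event))

def verify_log(log: list) -> bool:
    """Re-derive every hash; any edit to history breaks the chain."""
    prev_hash = GENESIS
    for entry in log:
        expected = make_entry(prev_hash, entry["event"])["hash"]
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_event(log, {"action": "navigate", "url": "https://example.org"})
append_event(log, {"action": "click", "selector": "#submit"})
print(verify_log(log))  # True: untampered log verifies offline
log[0]["event"]["url"] = "https://evil.example"  # tamper with past history
print(verify_log(log))  # False: the chain no longer checks out
```

Because verification only recomputes SHA-256 hashes over the file's own contents, anyone holding the file can check it without contacting a server, which is the trust property the spec aims for. A real format would add signatures and timestamps on top.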
Received on Tuesday, 24 March 2026 06:10:41 UTC