Re: interesting post - AI Agent Protocols, the HTTP of AI agents—a shared language for coordination

Hi all,

Thanks all around for the input.

Quickly jumping in before our regular meeting (in a few minutes):
there is indeed a large body of research to be considered; we have already included FIPA in the discussions, but FIPA is just the starting point
as Lorenzo correctly pointed out, one of the ambitious aims behind the Interoperability TF report is to integrate recent and more classical developments into a coherent conceptual framework for Web-based multi-agent systems
one nice thing about the WebAgents CG is that it brings together senior members of the research community, so although ambitious, I think we are in a good position to produce this report
in preparation for today’s discussion, I’ve bootstrapped some terminology and an initial draft for the conceptual view, which might help clarify how some of the points fit together
see PR #88: https://github.com/w3c-cg/webagents/pull/88
report preview: https://deploy-preview-88--w3cwebagentscg.netlify.app/

We’ll pick up this discussion in the meeting.

As always, the details for the regular meeting are available in the CG's calendar: https://www.w3.org/events/meetings/5a77b997-467b-4d5d-bb40-b84789c17d2e/20250704T090000/

Best wishes,
Andrei

--
Andrei Ciortea

Assistant Professor
School of Computer Science
University of St.Gallen, Switzerland
https://interactions.ics.unisg.ch/

External Collaborator
Wimmics
Inria, Université Côte d’Azur, CNRS, I3S, France
https://wimmics.inria.fr/

> On 3 Jul 2025, at 18:03, Lorenzo Moriondo <tunedconsulting@gmail.com> wrote:
> 
> Hello,
> 
> Thanks for your links; in my opinion they are well targeted at what the work on Web Agents should focus on. The paper on Agora also brings in some minimal practical examples that are very helpful.
> Some notes below that may interest whoever wants to carry on the discussion:
> 
> On Thu, 3 Jul 2025 at 14:45, Samuele Marro <samuele.marro@trinity.ox.ac.uk> wrote:
>> You might also be interested in this paper with Wooldridge, where we discuss the relationship between old-school, FIPA-ACL-style agents and modern agents, as well as, more generally, what lessons we can learn from traditional multi-agent systems.
>> https://arxiv.org/abs/2505.21298
>> 
>> I'd argue, though, that ontology-based languages do not map well to modern, general-purpose, NL-based agents. See here for more from our research side:
>> https://arxiv.org/abs/2410.11905
> 
> Summary: the paper presents Agora, a decentralised "meta-protocol" that allows agents to negotiate their own data exchange protocols.
> 
> Here are some notes that could be good working points:
> A. As far as I understand, the paper is similar in intent to what has been done in W3C WoT (a "layer 0" underneath the layers and standards already in place).
> B. The possibility for a communication protocol (called a PD, Protocol Description, in the paper) to take the form of an RFC. This sounds interesting to me because it also mirrors engineering workflows in the field of software engineering inside commercial companies: any developer who wants to implement a system starts from a well-designed RFC template with a draft, including the required information in required/optional sections of the document. Once started, the process invites incremental adjustments and reviews, and once approved the outcome guides the development effort and also serves as a record of the design principles behind the system implementation. For example, in this scenario the task of the Web Agents group would be to define an algorithm/protocol to start, process, and finalise an RFC negotiation; this implies an RFC store shared by the agents involved in the communication and by all subsequent implementations of the related subsystems (and maybe a decision on a standard to describe such subsystems?).
> C. Let the LLMs negotiate their own protocol on a case-by-case basis. This is the main concept of the paper: whatever PD format/formation process is chosen (see B), the agents should be the ones generating it according to the use case. This makes non-arbitrary rules necessary for (again similar to what happens in an RFC; see the sketch after this list):
>      * section on use case description in terms of actions required
>      * section on use-case description in terms of what kind of data needs to be exchanged
>      * section on selection of a standard interface (from a list of possible options)
>      * section on endpoints for the interfaces
>      * etc
> (this is a non-exhaustive list, just to give a general idea)
> Defining these constraints may be the objective of the working group as currently laid out in the Interoperability draft.
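> 
> As a rough illustration of points B and C only, here is a minimal sketch (in Python) of what an RFC-like PD could look like as a data structure, together with a toy start/process/finalise step. Every name in it (ProtocolDescription, the section fields, negotiate_pd) is an assumption made purely for the example, not something defined in the Agora paper or by the group:
> 
>     # Minimal, hypothetical sketch of an RFC-like Protocol Description (PD).
>     # All field names are illustrative assumptions, not defined anywhere.
>     from dataclasses import dataclass, field
>     from typing import List
> 
>     @dataclass
>     class ProtocolDescription:
>         use_case_actions: str       # section: use case in terms of required actions
>         data_to_exchange: str       # section: what kind of data needs to be exchanged
>         standard_interface: str     # section: selected standard interface (e.g. "HTTP")
>         endpoints: List[str] = field(default_factory=list)  # section: endpoints for the interface
>         status: str = "draft"       # RFC-like lifecycle: draft -> review -> approved
> 
>     def negotiate_pd(proposal: ProtocolDescription, review_comments: List[str]) -> ProtocolDescription:
>         """Toy placeholder for the start/process/finalise loop from point B:
>         the PD stays under review while comments remain, then is approved."""
>         proposal.status = "review" if review_comments else "approved"
>         return proposal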
> 
> Possible follow-ups:
> D1. It would be nice to have a list of options for other possible formats besides RFC; RFC sounds to me like a good choice because it is itself plain natural language, so it leverages the potential zero overhead on readability for the human side. I can imagine there could be stricter specifications to reduce ambiguity on both the human and the machine side, but a lot of questions arise: is using plain-text RFCs feasible? What would be the cost of operating millions of agents negotiating their own data exchanges (the paper mentions a reduction in cost assuming that the negotiation happens once, but what is the computing cost of generating a PD for each exchange in the worst-case scenario of two agents negotiating constantly changing terms)? At which scale is it possible to reuse the same PD (see the agents classification below, or any other approach; this is important, as I guess it is difficult to have a robust algorithm that decides efficiently, for each data exchange, whether it needs a new PD or whether it can reuse one already available in the same system of transactions; this may require a dedicated agent, and could be an example of the parts of the process on which standardisation should focus; see the reuse-check sketch after this list)? What are the rules for re-negotiating a PD if a given failure happens in the exchange? What are the benchmarks for failure of a PD? These are all the usual questions that arise when operating a highly concurrent system (like an imperative parallel or message-passing architecture).
> D2. I see some critical points in controlling and updating the semantic context in a potential solution like this one; each LLM has its own semantic representation, but can we assume that the underlying representation is similar for all of them (as for different natural languages, https://arxiv.org/html/2410.09223v1)? I don't know much about this or about how it can affect the long-term operation of LLM-generated protocols at scale;
> D3. The points above relate to data drift and to how to test the operational readiness of a potential protocol on future systems. Maybe a standard should also provide reference tests for the alignment of agents, so that every deployed agent has to pass a battery of tests to see which protocols it generates against a controlled input "exchange request". Obviously this doesn't even partially cover the infinite possibilities of interaction between agents in an open scenario, but it may provide suggestions about which (specialised) agents should use which (specialised) "RFC templates".
> D4. Please see this list of agents inside an MCP, categorised by task: https://gist.github.com/Mec-iS/37d6ab6358e27e9165937c4bda04ac23. Assuming that every industry player will define its own MCP (or any other potential tool in the future), is it worth working at the system level, i.e. defining only the behaviour between these "macro-systems", and avoiding "design-blueprint standards" for each combination of agents that future systems may put together? This may collide with the necessity for core/pivotal classes of agents that will require standardisation (such as the PD-expiry-decision agent mentioned in the questions above).
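> 
> To make the PD-reuse question in D1 a bit more concrete, here is a naive sketch of a reuse check: key a shared PD store on a canonical hash of the exchange description, and only trigger a (costly) negotiation when no PD exists for that key. The canonicalisation (sorted JSON), the in-memory store, and the function names are all assumptions made for illustration, not the mechanism actually used in the paper:
> 
>     # Naive, hypothetical PD-reuse check; everything here is illustrative.
>     import hashlib
>     import json
>     from typing import Callable, Dict
> 
>     pd_store: Dict[str, str] = {}  # canonical hash of exchange description -> PD text
> 
>     def exchange_key(description: dict) -> str:
>         """Canonicalise the exchange description (sorted JSON) and hash it."""
>         canonical = json.dumps(description, sort_keys=True)
>         return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
> 
>     def get_or_negotiate_pd(description: dict, negotiate: Callable[[dict], str]) -> str:
>         """Reuse a stored PD for an identical exchange; otherwise negotiate a new one."""
>         key = exchange_key(description)
>         if key not in pd_store:
>             pd_store[key] = negotiate(description)  # the expensive, LLM-driven step
>         return pd_store[key]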
> 
> In general, I would also be in favour of a focus on emergent behaviour, with agents given a set of rules at the right level, considering the domain of application and the total open-endedness of these systems compared to semi-deterministic machine-to-machine behaviour.
> 
> Regards,
>> 
>> Best,
>> 
>> Samuele Marro
>> Department of Engineering Science
>> University of Oxford
>> 
>> 
>> From: s.mariani@unimore.it <s.mariani@unimore.it>
>> Sent: Thursday, 3 July 2025, 11:19:35 AM
>> To: Joshua Cornejo <josh@marketdata.md>
>> Cc: public-webagents <public-webagents@w3.org>
>> Subject: Re: interesting post - AI Agent Protocols, the HTTP of AI agents—a shared language for coordination
>> 
>> Hello everybody :)
>> 
>> I have not been particularly active in the group, but I'm noticing (also beyond this group) a lot of discussion around these “agent protocols”, and I cannot help but think that a lot of “reinventing the wheel” is happening.
>> 
>> Coordination between agents is a long-standing topic in research on software engineering and distributed artificial intelligence.
>> 
>> These are just a bunch of random references that I have at the top of my mind while at the swimming pool, so bear with me if I am not exhaustive, but I think everybody interested in this new breed of “agentic systems” should at least be aware of them.
>> 
>> FIPA has done much to bring agent technology to production-ready systems:
>>  - communication language and semantics http://www.fipa.org/repository/aclspecs.html
>>  - http specs http://www.fipa.org/specs/fipa00084/PC00084B.html
>>  - coordination protocols http://www.fipa.org/repository/ips.php3
>> 
>> A whole lot of research has also been done on multi-agent system coordination with alternative paradigms, such as tuple-based coordination, although perhaps at a lower TRL.
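>> 
>> As a quick aside for readers unfamiliar with the paradigm, tuple-based coordination (Linda-style) boils down to agents coordinating through a shared tuple space via out/read/take primitives rather than by direct messaging. The toy sketch below is a minimal illustration under that assumption, not the API of any specific system:
>> 
>>     # Toy Linda-style tuple space: agents coordinate via a shared data space.
>>     from typing import List, Optional, Tuple
>> 
>>     class TupleSpace:
>>         def __init__(self) -> None:
>>             self._tuples: List[Tuple] = []
>> 
>>         def out(self, t: Tuple) -> None:
>>             """Publish a tuple into the shared space."""
>>             self._tuples.append(t)
>> 
>>         def rd(self, pattern: Tuple) -> Optional[Tuple]:
>>             """Read (without removing) the first matching tuple; None matches any field."""
>>             return next((t for t in self._tuples if self._match(pattern, t)), None)
>> 
>>         def take(self, pattern: Tuple) -> Optional[Tuple]:
>>             """Read and remove the first matching tuple (Linda's 'in')."""
>>             t = self.rd(pattern)
>>             if t is not None:
>>                 self._tuples.remove(t)
>>             return t
>> 
>>         @staticmethod
>>         def _match(pattern: Tuple, t: Tuple) -> bool:
>>             return len(pattern) == len(t) and all(p is None or p == v for p, v in zip(pattern, t))
>> 
>>     # e.g. space.out(("task", "translate", "doc-42")); space.take(("task", None, None))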
>> 
>> I can provide further refs and info if there is interest; these initial refs were just out of being tired of seeing, over and over, “agentic coordination protocols” sold as something new when in fact there have been at least 30+ years of research and development on the topic.
>> 
>> I go back to the shadows now :)
>> Bye!
>> 
>> Stefano Mariani, PhD
>> Tenure track researcher
>> @ Department of Sciences and Methods for Engineering – University of Modena and Reggio Emilia
>> > stefano.mariani@unimore.it
>> > https://smarianimore.github.io
>> 
>> On 3 Jul 2025, at 10:37, Joshua Cornejo <josh@marketdata.md> wrote:
>> 
>> 
>> https://www.linkedin.com/pulse/ai-agent-protocols-http-agentsa-shared-language-coordination-mtute/
>>  
>> ___________________________________
>> Joshua Cornejo
>> marketdata <https://www.marketdata.md/>
>> smart authorisation management for the AI-era
>> 
> 
> 
> 
> -- 
> ¤ acM ¤
> Lorenzo
> Moriondo
> @lorenzogotuned
> https://www.linkedin.com/in/lorenzomoriondo
> https://github.com/Mec-iS

Received on Friday, 4 July 2025 06:54:59 UTC