Re: Toward Engineering Agentic Systems Based on Interaction Protocols

Hi Lorenzo

Thanks for your observations. You bring up two directions.

One, mapping to Web standards. We have already explored this a little
(some preliminary ideas here:
https://emas.in.tu-clausthal.de/2025/assets/pdfs/emas2025-18.pdf) and we
are extending it in joint work with Andrei's group. But there's clearly a
lot to do, which is where this community comes in.

Two, in general, natural language is a good fit for human-agent
interactions, whereas formal representations are better suited to
interactions between agents. Moreover, formal protocol-based machinery can
guide an agent's communications, which often have a real-world impact
(e.g., booking a flight). We are currently working out an Agentic
architecture that gives us the best of both LLMs and protocols.
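
Purely as an illustration of that division of labour (a toy sketch, not
our architecture; the names below are made up): the protocol layer
determines which messages are currently enabled, and the LLM's freedom is
limited to choosing and filling one of them.

# Toy sketch only: the protocol layer decides what is enabled; the LLM
# merely chooses and fills one of the enabled messages.

ENABLED = {"Accept", "Reject"}   # e.g., what the protocol allows after a quote

def guard(choice: str, payload: dict) -> dict:
    """Let an LLM-drafted message through only if the protocol enables it."""
    if choice not in ENABLED:
        raise ValueError(f"'{choice}' is not enabled at this point")
    return {"message": choice, **payload}

print(guard("Accept", {"ID": 1, "price": 100}))   # allowed
# guard("Ship", {...}) would be rejected: real-world actions stay
# protocol-governed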

It would be great though if you could also test some ideas via prototypes :)

I will be giving a talk on August 1 at the group meeting (you will get
more details soon). I hope you will be able to join us.

Best
Amit

On Mon, Jul 7, 2025 at 12:19 PM Lorenzo Moriondo <tunedconsulting@gmail.com>
wrote:

> Thanks for the links!
> This model of MAS answers most of my questions from the previous thread
> about the fail-safety of the concurrent computation involved in a
> decentralised network. It could be a good starting point (like FIPA) for
> a "layer 0" for LLMs to work on top of.
>
> Some questions:
>
> Have you considered attaching a piece of syntax for semantic referencing
> (links to public ontologies)? An LLM may find it difficult to parse a
> message into a context without semantic linking, or links to NL documents
> from which to assess a context. Also, different markets (in the case of
> the buy-sell example in the papers) may apply different nuances to a
> transaction that are not embeddable in a formal language, hence the value
> of the more expressive hints provided by semantic linking, which the LLM
> can leverage. For example, parameter adornments could use semantic links
> to explain themselves to a context reader, which can dereference the link
> and go acquire the description.
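>
> To make this concrete, a hypothetical sketch of what I mean (the pairing
> of parameters with IRIs is invented here, and the IRIs are only example
> picks):
>
> # Hypothetical: pair each protocol parameter with an ontology IRI so that
> # a context reader (human or LLM) can dereference it and acquire a
> # description. Parameter names follow the buy-sell example.
>
> from urllib.request import urlopen
>
> SEMANTIC_LINKS = {
>     "item":  "https://schema.org/Product",
>     "price": "https://schema.org/price",
>     "ID":    "https://schema.org/orderNumber",
> }
>
> def describe(parameter: str) -> str:
>     """Dereference the parameter's IRI and return the description doc."""
>     with urlopen(SEMANTIC_LINKS[parameter]) as response:
>         return response.read().decode("utf-8", errors="replace")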
>
> Would it be easy to implement a mapping between the protocol and existing
> Web standards for transport (HTTP, RPC) and payload formats (JSON-LD,
> W3C Hydra)? While at the lower levels efficiency is the priority, for a
> shared layer interoperability takes priority.
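>
> For instance, a single message instance of the buy-sell example could
> travel as a JSON-LD payload over HTTP; a hand-written sketch (the
> @context terms are placeholders, not a proposed vocabulary):
>
> import json
>
> # One 'quote' message instance wrapped as JSON-LD; field names follow the
> # buy-sell example, the context terms are illustrative only.
> quote_message = {
>     "@context": {
>         "item":  "https://schema.org/Product",
>         "price": "https://schema.org/price",
>         "ID":    "https://schema.org/orderNumber",
>     },
>     "@type": "Quote",
>     "ID": 42,
>     "item": "widget",
>     "price": 9.99,
> }
>
> payload = json.dumps(quote_message)   # ready to POST as application/ld+json
> print(payload)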
>
> Notes for the sake of the discussion:
> * The protocol described looks like Protocol Buffers at first sight. This
> implies providing an ecosystem of parsers and validators. LLMs are
> usually good at porting natural language to a formal specification (like
> the one the authors provide in the GitLab repo), so one of the first
> things I would try is some context engineering to develop an agent for
> accurate translation from protocols to NL (a rough prompt sketch follows
> after these notes).
> * It is quite compact, but it may require too much overhead to learn and
> too much preparation to be interpreted, which makes it less human-readable
> than an NL description. There is a lot of syntax involved, as required
> for machine-readability, but that doesn't mean an LLM cannot handle it
> correctly and bypass this overhead.
> * The main blocker with formal languages, in my opinion, is that they are
> an arbitrary pick among many possibilities, so they need to be widely
> accepted in usage; that may demand more consensus-building cost than the
> majority of industry players will bear, while NL is already there for
> every player doing the job and can be leveraged for a standard. But
> again, if the process of translation between NL and the protocol is
> effortless... If we accept this point of view, *the best protocol may be
> the one optimised for accurate translation from machine-readable form to
> NL*.
> * The papers already provide a verifiable concurrency model, which is
> very important.
> * My take at first reading: this solution could be a good intermediate
> representation at some layer of the stack that does the translation into
> a machine-readable layer (similar to FIPA).
> * In my experience, causality in human-machine interaction is too wide a
> concept to be covered by a formal implementation. Maybe LLMs will be a
> good pick to let agents infer causality from semantic clues mixed in with
> formally structured data about the flow of information. The paper gives a
> narrow definition of causality, good for machine-readable messages but
> maybe too narrow for the "open-endedness" of NL-based systems and all the
> possible occurrences of meaning (again, this can be improved using
> semantic referencing/dereferencing).
> * Keep in mind that once a standard is in place, it will be possible to
> test whether LLMs themselves can generate client/server implementations
> in a given language/tool; so defining clients, servers, parsers, and
> tooling in general may take less work than doing it via traditional
> software engineering.
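>
> As mentioned in the first note, a rough sketch of the prompt I would
> start from for a protocol-to-NL translation agent; the protocol text
> paraphrases the buy-sell example from memory (details may differ from the
> papers), and call_llm stands in for whatever model API one uses:
>
> PROTOCOL = """
> Purchase {
>   roles B, S
>   parameters out ID key, out item, out price, out decision
>   B -> S: rfq[out ID, out item]
>   S -> B: quote[in ID, in item, out price]
>   B -> S: accept[in ID, in item, in price, out decision]
> }
> """
>
> def protocol_to_nl_prompt(protocol_text: str) -> str:
>     # Ask the model to spell out senders, receivers, and information
>     # dependencies.
>     return (
>         "Below is an interaction protocol in a formal notation. Explain, "
>         "message by message, who sends what to whom, and which parameters "
>         "must already be known before each message can be sent.\n\n"
>         + protocol_text
>     )
>
> # explanation = call_llm(protocol_to_nl_prompt(PROTOCOL))  # placeholder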
>
> In general, adding an "LLM-aware layer 0" to a formal protocol (like BSPL
> in the papers, or FIPA) would require a layer that:
> 1. Maps Natural Language to Protocol Parameters: uses LLMs to extract
> structured information (parameters) from unstructured agent messages or
> user input. This should be quite straightforward for an LLM, but the
> protocol may need semantic annotations so that contexts can be inferred
> from messages.
> 2. Enforces Protocol Semantics: checks that the information dependencies
> and causality constraints of the BSPL protocol are respected, ensuring
> agents only send messages when the required inputs are available. Again,
> this could be difficult to guarantee for every LLM on the market if the
> protocol is too complex (a toy sketch of this check follows below).
> 3. Handles Meaning and Commitments: LLMs can interpret or generate
> explanations of the protocol's business meaning, commitments, or
> intentions, providing transparency and facilitating negotiation or
> clarification between agents. This is the main advantage of NL-based
> systems and what makes applying them to software so convenient.
> 4. Supports Protocol Composition: for more complex interactions, the
> layer can use LLM reasoning to compose or decompose protocols
> dynamically. This may happen at the machine-readable level or at the NL
> level (or one could decide it should happen at one and only one level, to
> avoid mismatches in translation).
>
> No. 2 is the most challenging, in my opinion, considering the alignment
> challenges already evident across the many LLMs on the market.
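>
> A toy sketch of the kind of check point 2 involves, with a hand-written
> message schema for the buy-sell example; this only covers local 'in'/'out'
> bindings, not the full BSPL semantics:
>
> # Enablement, simplified: every 'in' parameter must already be bound and
> # every 'out' parameter must still be unbound before a message is sent.
>
> MESSAGE_SCHEMAS = {
>     "quote":  {"in": {"ID", "item"}, "out": {"price"}},
>     "accept": {"in": {"ID", "item", "price"}, "out": {"decision"}},
> }
>
> def enabled(message: str, bindings: dict) -> bool:
>     schema = MESSAGE_SCHEMAS[message]
>     ins_known = all(p in bindings for p in schema["in"])
>     outs_fresh = all(p not in bindings for p in schema["out"])
>     return ins_known and outs_fresh
>
> state = {"ID": 7, "item": "widget"}
> print(enabled("quote", state))    # True: ID and item known, price unbound
> print(enabled("accept", state))   # False: price is not available yet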
>
> Hope these can help the discussion.
>
> On Sun, Jul 6, 2025, 22:50 Amit Chopra <akchopra.mail@gmail.com> wrote:
>
>> Dear all
>>
>> It's great to see interest in applying approaches developed in the
>> multiagent systems (MAS) community toward building Agentic systems.
>>
>> My colleagues (Munindar Singh, Samuel Christie, and Matteo Baldoni) and I
>> have been working on approaches for specifying interaction protocols
>> between agents.  Our approaches are formal and declarative; support
>> interaction meaning (which facilitates intelligent decision making by
>> agents); and enable realizing loosely-coupled, decentralized systems of
>> asynchronously communicating agents.  Moreover, our approaches support
>> maximally flexible interactions between agents.  These features make our
>> approach, which we call Interaction-Oriented Programming (IOP), ideally
>> suited to the Agentic paradigm.
>>
>> Below are some papers describing our approach.
>>
>> 1. (AAMAS) The BSPL Protocol language:
>> https://www.csc2.ncsu.edu/faculty/mpsingh/papers/mas/AAMAS-11-IBIOP.pdf
>> 2. (AAMAS) BSPL Semantics and Verification:
>> https://www.csc2.ncsu.edu/faculty/mpsingh/papers/mas/AAMAS-12-BSPL.pdf
>> 3. (Services Computing) Methodology for specifying protocols:
>> https://www.csc2.ncsu.edu/faculty/mpsingh/papers/mas/SCC-14-Bliss.pdf
>> 4. (JAIR) Advantage of our approach over alternative protocol
>> specification approaches, both formal and informal:
>> https://www.lancaster.ac.uk/staff/chopraak/pdfs/langeval.pdf
>> 5. (AAMAS) A Python programming model for BSPL:
>> https://www.lancaster.ac.uk/staff/chopraak/pdfs/Kiko.pdf
>> 6. (JAAMAS) A programming model for fault tolerance:
>> https://www.lancaster.ac.uk/staff/chopraak/pdfs/mandrake.pdf
>> 7. (IJCAI) Efficient verification:
>> https://www.csc2.ncsu.edu/faculty/mpsingh/papers/mas/IJCAI-21-Tango.pdf
>> 8. (AAAI) A belief-desire-intention programming model for BSPL:
>> https://www.lancaster.ac.uk/staff/chopraak/pdfs/orpheus.pdf
>> 9. (AAMAS) Meaning-based programming model:
>> https://www.lancaster.ac.uk/staff/chopraak/pdfs/azorus.pdf
>> 10. (IJCAI) Langshaw, an even higher-level protocol language that
>> compiles into BSPL:
>> https://www.lancaster.ac.uk/staff/chopraak/pdfs/langshaw.pdf
>>
>> There's a lot more here: https://www.lancaster.ac.uk/staff/chopraak/ and
>> even more here: https://www.csc2.ncsu.edu/faculty/mpsingh/papers/
>>
>> Our ever-growing software repository is available here:
>> https://gitlab.com/masr
>>
>> FIPA standards and KQML are unsuited to engineering multiagent systems.
>> FIPA ACL's and KQML's semantics are ill-conceived. FIPA Interaction
>> Protocols capture only a handful of interaction patterns via informal
>> notations such as UML.  These shortcomings are well-known in the MAS
>> community. For more details, see
>> https://www.csc2.ncsu.edu/faculty/mpsingh/papers/mas/computer-acl-98.pdf
>> and
>> https://www.lancaster.ac.uk/staff/chopraak/pdfs/ac-directions-2013.pdf
>> (especially Singh's contribution on page 15). IOP addresses the shortcomings of the
>> FIPA standards and KQML.
>>
>> I hope my post leads to a more informed discussion of agent communication
>> approaches.  As we seek to establish the relevance of multiagent systems
>> work for Agentic, let us not repeat the mistakes of the past.
>>
>> Best
>> Amit Chopra
>>
> Lorenzo Moriondo
> ロレンツォ・モリオンドオ
> https://linkedin.com/in/lorenzomoriondo
>

Received on Monday, 7 July 2025 15:56:06 UTC