- From: Lorenzo Moriondo <tunedconsulting@gmail.com>
- Date: Tue, 16 Sep 2025 15:58:57 +0100
- To: Joshua Cornejo <josh@marketdata.md>
- Cc: public-webagents <public-webagents@w3.org>
- Message-ID: <CAKgLLmveZbuy7myXD5+6fUUYoMr7Gq_UY_Ae0eqUpKcnA93UGA@mail.gmail.com>
Sorry, I don't follow your point. I don't see where all these different matters come from; to me they go beyond the technical object we are trying to unroll. The only definition used is the (largely shared) one that agents have *autonomy*, which is a very understandable point: an agent without autonomy is just an automated workflow. The consequence is that, in the presence of autonomy (which is necessary, otherwise we fall back to existing technology without autonomy, i.e. there is nothing new to regulate or architect because the current norms apply), the only variable that matters is the *permeability* the autonomous system has towards the "outside of its boundaries".

From this, it is possible to define whatever levels of boundaries are necessary (from the single node/agent up to multi-agent systems); each boundary should have its own permeability profile (defined by any fit standard). The question is which boundary or boundaries are the most relevant, i.e. which one to take as the reference against which the others are defined. The work of the group seems to consider the MCP server (simplifying: a "multi-agent, single-purpose system") a relevant boundary. Following this reasoning, any boundary considered relevant should be an object of specification.

Also, the concept of a Virtual Agent Economy is quite close to the "virtual environment" discussed in the last call; the only suggested addition is that these virtual environments should have minimal permeability.

For a reference on the capacity of boundaries/interfaces and how this enables emergent behaviour, see my 2022 pre-print
<https://www.techrxiv.org/users/685780/articles/679427-cybernetics-interfaces-and-networks-intuitions-as-a-toolbox>.
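To make the idea of "one permeability profile per boundary" concrete, here is a minimal sketch, purely illustrative and not a proposal for any spec; the `Boundary`/`Permeability` names, the protocol labels and the example boundaries are my own assumptions, not anything defined by the group or by the paper:

```python
# Minimal illustration (hypothetical names, not any existing spec) of the idea
# that every boundary -- from a single agent up to a multi-agent system or a
# whole virtual environment -- carries its own permeability profile.
from dataclasses import dataclass, field
from enum import Enum


class Permeability(Enum):
    """How freely information/transactions cross a boundary."""
    IMPERMEABLE = 0  # sandboxed: no exchange with the outside
    MINIMAL = 1      # exchange only through specified interoperability protocols
    OPEN = 2         # default trajectory: effectively part of the outside economy


@dataclass
class Boundary:
    """A boundary together with its permeability profile.

    `reference` marks the boundary taken as the reference point (e.g. the MCP
    server in the group's current discussion) relative to which the other
    boundaries are defined.
    """
    name: str
    permeability: Permeability
    protocols: list[str] = field(default_factory=list)  # protocols allowed across it
    reference: bool = False


# Example: nested boundaries, from a single agent up to a virtual environment.
agent = Boundary("single-agent", Permeability.MINIMAL, protocols=["MCP"])
system = Boundary("mcp-server", Permeability.MINIMAL, protocols=["MCP"], reference=True)
economy = Boundary("virtual-agent-economy", Permeability.IMPERMEABLE)

for b in (agent, system, economy):
    print(b.name, b.permeability.name, b.protocols)
```

Each "level" of permeability could then have its relative protocol attached to the boundary that exposes it, which is the point made further down in the quoted thread.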
On Tue, 16 Sept 2025 at 15:01, Joshua Cornejo <josh@marketdata.md> wrote:

> IMHO the paper treats “agents” as having “agency” … the agents are only
> “critically permeable” if the entire operating system is running everything
> – otherwise they are just a process that can be controlled like any human:
> do you have ‘credentials to do what you are attempting to do’?
>
> Using your example: MCP is secured/governed by the user’s roles (RBAC) …
>
> It also implies that agents are ‘superior’ – they are “permeable” and need
> reinforcement above human level, and need to have “our societal objectives
> into the very infrastructure of agent-to-agent transactions, we can foster
> an ecosystem where emergent collaboration is a feature, not a bug.”
>
> So, using Greek mythology, is the paper placing them at demi-god level?
>
> ___________________________________
>
> *Joshua Cornejo*
>
> *marketdata <https://www.marketdata.md/>*
>
> smart authorisation management for the AI-era
>
> *From: *Lorenzo Moriondo <tunedconsulting@gmail.com>
> *Date: *Tuesday, 16 September 2025 at 14:50
> *To: *Joshua Cornejo <josh@marketdata.md>
> *Cc: *public-webagents <public-webagents@w3.org>
> *Subject: *Re: Agents Economic Systems
> *Resent-From: *<public-webagents@w3.org>
> *Resent-Date: *Tue, 16 Sep 2025 13:50:06 +0000
>
> The paper states the opposite, imo: which level of permeability should be
> the "default"? And which boundaries should be considered, and where should
> they be set?
>
> If you set the "principal reference boundary" to be the MCP, then the
> resulting design is quite similar to the one we are discussing.
>
> The paper explicitly calls for these virtual agent economies to be
> sandboxes (minimal permeability), hence the necessity of interoperability
> protocols to communicate through these boundaries (each "level" of
> permeability could have its relative protocol, for example).
>
> Lorenzo Moriondo
> ロレンツォ・モリオンドオ
> https://linkedin.com/in/lorenzomoriondo
>
> On Tue, Sep 16, 2025, 14:29 Joshua Cornejo <josh@marketdata.md> wrote:
>
> It depends on the actual external and internal implementations of the
> agent.
>
> - If an agent’s internal boundaries form a closed system (“not sharing”),
>   its external interactions can be managed via roles (i.e. RBAC limits the
>   scope of what the agent could do).
> - If an agent’s internal boundaries form an open system (“sharing”), its
>   interactions need to be limited by those tasking the agent (i.e. “just in
>   time” access control).
>
> If their assumption is that the agents just co-exist as if we were in the
> wild west … then obviously nothing works and we all sound like an episode
> of Black Mirror meeting Westworld 😊
>
> Regards,
>
> ___________________________________
>
> *Joshua Cornejo*
>
> *marketdata <https://www.marketdata.md/>*
>
> smart authorisation management for the AI-era
>
> *From: *Lorenzo Moriondo <tunedconsulting@gmail.com>
> *Date: *Tuesday, 16 September 2025 at 12:04
> *To: *public-webagents <public-webagents@w3.org>
> *Subject: *Agents Economic Systems
> *Resent-From: *<public-webagents@w3.org>
> *Resent-Date: *Tue, 16 Sep 2025 11:03:41 +0000
>
> Hello,
>
> in my opinion it would be useful to get in touch with the authors of this
> paper, or to consider integrating some of its point of view, as far as it
> overlaps with the work of the group:
>
> https://arxiv.org/pdf/2509.10147
>
> "This paper proceeds from the assumption that unless a change is made, our
> current trajectory points toward the accidental emergence of a vast, and
> likely permeable, sandbox economy. Our central challenge, therefore, is not
> whether to create this ecosystem, but how to architect it to ensure it is
> steerable, safe, and aligned with user and community goals. A fully
> permeable and emergent sandbox would be, in practice, functionally
> equivalent to AI agents simply participating in the existing human economy.
> The “sandbox” terminology is useful, however, because it allows us to
> contrast this default trajectory with other possibilities, such as
> intentionally designed and impermeable agent economies created for safe
> experimentation. It also highlights that some degree of impermeability may
> (or may not) emerge naturally (e.g. if there are practical difficulties in
> transacting between humans and AIs). The framework highlights that
> permeability is the critical and controllable design variable."
>
> I think these are the kind of challenges that we are trying to tackle with
> this group's mission.
>
> Best,
>
> Lorenzo Moriondo
> ロレンツォ・モリオンドオ
> https://linkedin.com/in/lorenzomoriondo

--
Lorenzo Moriondo
ロレンツォ・モリオンドオ
https://www.tuned.org.uk
https://linkedin.com/in/lorenzomoriondo
https://github.com/Mec-iS
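As a side note on the distinction drawn in the quoted thread between RBAC scoping for a "closed" agent and "just in time" access control for an "open" agent, here is a minimal sketch; the roles, permission strings and grant structure are hypothetical illustrations only, not anything specified by the group or by any existing authorisation product:

```python
# Rough illustration (hypothetical names) of the two access-control styles
# contrasted above: fixed role-based scoping for a closed-system agent vs.
# narrow, expiring grants issued per task for an open-system agent.
import time
from dataclasses import dataclass

# Closed system: the agent's scope is fixed up front by the role it runs under.
ROLE_PERMISSIONS = {
    "quote-reader": {"read:market-data"},
    "trader": {"read:market-data", "write:orders"},
}


def rbac_allows(role: str, action: str) -> bool:
    """RBAC check: the role decides, regardless of the task at hand."""
    return action in ROLE_PERMISSIONS.get(role, set())


@dataclass
class JitGrant:
    """Open system: a narrow, expiring permission issued by whoever tasks the agent."""
    action: str
    expires_at: float

    def allows(self, action: str) -> bool:
        return action == self.action and time.time() < self.expires_at


# RBAC: denied because the role never included the permission.
print(rbac_allows("quote-reader", "write:orders"))   # False

# Just-in-time: exactly one action is granted, for a short window only.
grant = JitGrant(action="write:orders", expires_at=time.time() + 60)
print(grant.allows("write:orders"))                  # True until it expires
```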
Received on Tuesday, 16 September 2025 14:59:15 UTC