- From: Milton Ponson <rwiciamsd@gmail.com>
- Date: Mon, 10 Nov 2025 10:44:12 -0400
- To: paoladimaio10@googlemail.com
- Cc: Daniel Ramos <capitain_jack@yahoo.com>, W3C AIKR CG <public-aikr@w3.org>, internal-aikr@w3.org
- Message-ID: <CA+L6P4zXg_j4PT8Dgt0sUD0=CWBMiLY8eh6w86SRdqt7fAaEmQ@mail.gmail.com>
My problem with agentic network protocols is summed up by this "AI Review" in the Google search "limitations of agentic AI systematic review": "*Limitations of agentic AI in systematic reviews include a lack of deep causal reasoning, which can lead to identifying correlations instead of true cause-and-effect, and potential for hallucinations inherited from the underlying language models. Other challenges include the "black box" nature of AI, making transparency and accountability difficult, and the limited ability to handle complex, multi-stage tasks that require extensive memory and replanning. Additionally, the rapidly evolving nature of the field, the lack of standardized architectures, and risks from security vulnerabilities and misinformation are significant hurdles. * *Reasoning and reliability* *Lack of causal understanding: AI agents can identify correlations but struggle with understanding the true cause-and-effect relationships in data, potentially leading to flawed conclusions.* *Hallucinations: The large language models (LLMs) that power agentic AI can generate incorrect or fabricated information.* *Limited context and memory: Agentic AI often relies on simple, stateless cycles and has difficulty maintaining long-term context, making complex, multi-stage tasks challenging.* *Methodological heterogeneity: A systematic review of agentic AI studies finds that variations in evaluation metrics and methodologies make direct comparisons difficult.* *Transparency and accountability* *The "black box" problem: The internal decision-making processes of complex AI systems are often opaque, making it hard to understand how a conclusion was reached and who is accountable for errors.* *Difficulty with accountability: When an autonomous AI makes a mistake, it is unclear who is responsible—the developer, the user, or the organization.* *Lack of architectural transparency: Many agentic AI solutions are proprietary, lacking the transparent, standardized designs needed for widespread adoption and reliable review.* *Security and risk* *Vulnerability to manipulation: Attackers can inject false information into an AI agent's memory, causing it to make incorrect decisions.* *Amplified risks: Increased autonomy means AI agents can take actions with more significant implications, and risks like misinformation and errors are amplified without continuous human oversight.* *Security vulnerabilities: Multi-agent systems introduce complex security challenges, such as chained vulnerabilities where an error in one agent can cascade and amplify across the system.* *Implementation and integration* *Lack of standardization: There is a shortage of standardized APIs and interoperable designs for multi-agent coordination, hindering the development and deployment of agentic AI systems.* *Complexity and integration hurdles: Integrating agentic AI with existing systems is a significant challenge, and many projects fail due to high costs and unclear returns on investment.* *Risk and governance gaps: Many projects suffer from a lack of adequate risk and governance frameworks, which is a major barrier to successful implementation*." The European Union has a GDPR, EU AI Act, Digital Markets Act and Digital Services Act in place, all of which put technical constraints in place that will effectively kill such protocols because of how the abovementioned issues impact these four pieces of legislation. And complex adaptive systems (of systems) are process oriented and are a key area of AI in Bio and Life Sciences, chemistry, physics, biology and environmental science. Authentication, transparency, accountability, trustworthiness, safety, explainability and ethical compliance are crucial for agentic AI and IMHO and that of the European Union far from resolved and standardized. Milton Ponson Rainbow Warriors Core Foundation CIAMSD Institute-ICT4D Program +2977459312 PO Box 1154, Oranjestad Aruba, Dutch Caribbean On Mon, Nov 10, 2025, 06:57 Paola Di Maio <paola.dimaio@gmail.com> wrote: > Daniel, on the internal mailing list you shared your AI output > orchestration method, plus a whole load of other material > which looks like food for thought but I have not had the resources to > process yet > > But on a related note, assuing we can automate the orchestration > somethwere along the line > I wonder if you could explain in no more than 10 lines of code > how does your method relates to ANP if at all > https://github.com/agent-network-protocol/AgentNetworkProtocol > > Thank you > > Paola >
Received on Monday, 10 November 2025 14:44:28 UTC