Limitations of today's agentic AI

You may like the following post, which punctures the hype around agentic AI. LLMs are amazing, but they have major weaknesses as a basis for agentic AI.

https://www.forbes.com/councils/forbestechcouncil/2025/01/29/why-2025-wont-be-the-year-of-agentic-ai/
Why 2025 Won't Be The Year Of Agentic AI

Whilst chunks & rules can be used for simple agents, it suffers from the cost of manual programming. It is straightforward to implement reinforcement learning in terms of back propagation of rewards across threads of behaviour, the big challenge is in how model the knowledge needed to guide search through the vast space of such behaviours.  Randomly generating new rules to try out results in extremely slow learning. You need to apply knowledge about how to take the task description and decompose it into subtasks, and match them to a suite of programming patterns. This ensures that reinforcement learning is fast and effective.

LLMs have the potential to help with this, given training on synthetic data, but preparing that data is challenging. Moreover, we would really benefit from work on sentient AI, e.g. learning to learn, so that the agent can exploit just a few examples of what is needed. My hunch is that we can use LLMs to bootstrap sentient AI. A big open question is how to do that.

Dave Raggett <dsr@w3.org>

Received on Friday, 11 July 2025 09:02:50 UTC