- From: Dave Raggett <dsr@w3.org>
- Date: Mon, 30 Jun 2025 11:33:31 +0100
- To: paoladimaio10@googlemail.com
- Cc: public-cogai <public-cogai@w3.org>, W3C AIKR CG <public-aikr@w3.org>
- Message-Id: <EEDF9CD0-57AA-4C86-8CC6-53190821C851@w3.org>
It would be helpful if you provided further information, as it isn’t very clear right now. For instance, an agent that uses facts and rules wouldn’t use MCP, which is a protocol for agents implemented with generative AI. My current work on extending chunks & rules to swarms of agents uses chunks as the medium of communication, hiding the underlying protocols. Chunks & rules isn’t logic-based.

I suspect you are focusing on generative AI based agents, where the agent exploits pre-training and reinforcement learning with human feedback to determine its behaviour. There is a lot more work needed to advance beyond generative AI. I have sketched out some ideas in my slides on sentient AI, e.g. continual learning based upon continual prediction, episodic memory, and the role of type 2 cognition during learning. This will have a big impact on agents.

> On 30 Jun 2025, at 05:32, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> Dave, and everyone
>
> Several years ago, when the AI KR CG and this CG were started,
> Dave hinted in a post that AI would be largely agent based.
>
> I recalled that prediction as I started to work on AI agents from a KR point of view (categorization of AI agents, ontology-driven agent modelling, etc.) and to develop categories of protocols.
>
> In trying to create logical to ontological schemas to help us capture what is going on, I would welcome CogAI input/evaluation
> on the following table:
>
> <image.png>

Dave Raggett <dsr@w3.org>
Received on Monday, 30 June 2025 10:33:46 UTC