Progress on human-inspired cognitive agents

I’ve created a README to describe work in progress on designing, implementing and testing artificial neural networks that mimic human cognition. The goal is to demonstrate neural architectures for memory, deliberative reasoning and continual learning. These networks need to be trained on data with constrained language and semantics to enable effective learning with a modest dataset and model size.

Large Language Models (LLMs) are trained to predict the next word based upon the preceding words. Humans learn from roughly three to five orders of magnitude less data. A key difference is that human language is grounded in the real world. We can mimic that by encouraging cognitive agents to try things out for themselves using reinforcement learning, along with some measure of reflective thinking. Learning can be viewed as a combination of observation, instruction, experience and reflection.
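To give a flavour of what I mean, here is a minimal Python sketch of an agent loop that acts, receives a reward, and then "reflects" on its episode log before updating itself. The class names, the toy environment and the reflect() step are purely illustrative placeholders of my own, not the actual design:

    import random

    class ToyEnvironment:
        """Trivial stand-in for a grounded task: guess a hidden digit."""
        def __init__(self):
            self.target = random.randint(0, 9)

        def step(self, action):
            # reward of 1 for the right digit, 0 otherwise
            return 1.0 if action == self.target else 0.0

    class ReflectiveAgent:
        """Illustrative agent that learns from experience plus reflection."""
        def __init__(self):
            self.values = [0.0] * 10   # estimated value of each action
            self.counts = [0] * 10
            self.episode_log = []      # record of (action, reward) pairs

        def act(self):
            # explore occasionally, otherwise pick the best-looking action
            if random.random() < 0.2:
                return random.randint(0, 9)
            return max(range(10), key=lambda a: self.values[a])

        def observe(self, action, reward):
            self.episode_log.append((action, reward))

        def reflect(self):
            # "reflection" here is just replaying the episode log to
            # revise value estimates, standing in for richer deliberation
            for action, reward in self.episode_log:
                self.counts[action] += 1
                n = self.counts[action]
                self.values[action] += (reward - self.values[action]) / n
            self.episode_log.clear()

    env = ToyEnvironment()
    agent = ReflectiveAgent()
    for episode in range(200):
        action = agent.act()
        reward = env.step(action)
        agent.observe(action, reward)
        agent.reflect()
    print("best guess:", max(range(10), key=lambda a: agent.values[a]))

The point of the sketch is only the shape of the loop: experience feeds a log, and a separate reflection step turns that log into learning, rather than updating blindly on every observation.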

I have settled on elementary mathematics as a tractable domain for these experiments, given that the knowledge is largely self-contained, involving a combination of rote learning and step-by-step reasoning. I’ve introduced digital worksheets in lieu of the pencil and paper that children work with. This work is still at an early stage and I will be updating the repository to track progress.
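As a rough indication of what a digital worksheet item might look like as data, here is a small Python sketch for a column-addition exercise. The field names and structure are my own placeholder assumptions, not a fixed schema from the repository:

    # Illustrative sketch of one worksheet item for column addition.
    worksheet_item = {
        "topic": "addition",
        "prompt": "Add 47 and 36, showing your working.",
        "operands": [47, 36],
        "steps": [
            {"say": "Add the ones: 7 + 6 = 13, write 3, carry 1."},
            {"say": "Add the tens: 4 + 3 + 1 (carried) = 8."},
            {"say": "So 47 + 36 = 83."},
        ],
        "answer": 83,
    }

    def check_answer(item, proposed):
        """Score the agent's final answer against the worksheet item."""
        return 1.0 if proposed == item["answer"] else 0.0

    print(check_answer(worksheet_item, 47 + 36))  # 1.0

Recording the intermediate steps as well as the final answer is what makes the worksheets useful for both rote learning and step-by-step reasoning.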

Longer term, this should lead to a new generation of AI systems well suited to running at the edge rather than in the cloud. There will be plenty of opportunities for agents with good-enough intelligence and knowledge, rather than very powerful systems with superhuman capabilities!

P.S. Feel free to volunteer your time if you want to contribute to this work. I plan to add datasets and Python code to the GitHub repository as they are created.

https://github.com/w3c/cogai/blob/master/agents/README.md


Dave Raggett <dsr@w3.org>
