Re: Progress on human inspired cognitive agents

Generative AI systems struggle with numerical concepts and easily confuse the instructions they are given. That points to the need for work on direct manipulation of latent semantics, something I am trying to address by introducing a reasoner. I am not using an LLM, but rather starting with a novel design for neural networks that is intended to support simple language, rote memory, deliberative reasoning and continual learning. I want to mimic how children learn from a combination of observation, instruction, experience and reflection.

In future, and assuming I succeed with the current study, I anticipate using LLMs to help craft datasets that reflect the general knowledge of young children, as a basis for richer language use. For now, I will make minimal use of language, just enough for the math lessons, as this will allow me to make progress with modest-sized models.
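To make the "minimal language, just enough for the math lessons" idea concrete, here is a purely hypothetical sketch in Python of what a worksheet item with constrained language might look like. The field names (`instruction`, `operands`, `answer`) and the checking function are my own illustration, not the project's actual dataset schema.

```python
# Hypothetical worksheet items using a tiny, constrained vocabulary.
# The schema below is illustrative only; the real dataset format for
# the cogai agents work may differ.
worksheet = [
    {"instruction": "add", "operands": (3, 4), "answer": 7},
    {"instruction": "subtract", "operands": (9, 5), "answer": 4},
]

def check(item):
    """Verify a worksheet answer by applying the named operation."""
    a, b = item["operands"]
    expected = a + b if item["instruction"] == "add" else a - b
    return item["answer"] == expected

print(all(check(item) for item in worksheet))  # → True
```

The point of such a constrained format is that the agent only needs to ground a handful of words (add, subtract, the digits) before it can practise rote recall and step-by-step reasoning on the worksheets.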

> On 19 Mar 2024, at 01:43, Melvin Carvalho <melvincarvalho@gmail.com> wrote:
> 
> 
> 
> On 18 Mar 2024 at 17:23, Dave Raggett <dsr@w3.org> wrote:
>> I’ve created a README to describe work in progress on designing, implementing and testing artificial neural networks that mimic human cognition. The goal is to demonstrate neural architectures for memory, deliberative reasoning and continual learning. This needs to be trained on data with constrained language and semantics to enable effective learning with a modest dataset and model size.  
>> 
>> Large Language Models (LLMs) are trained to predict the next word based upon the preceding words. Humans learn from about 3-5 orders of magnitude less data. A key difference is that human language is grounded in the real world. We can mimic that by encouraging cognitive agents to try things out for themselves using reinforcement learning, along with some measure of reflective thinking. Learning can be viewed as a combination of observation, instruction, experience and reflection.
>> 
>> I have settled on elementary mathematics as a tractable domain for these experiments, given that the knowledge is pretty much standalone, involving a combination of rote learning and step-by-step reasoning. I’ve introduced digital worksheets in lieu of the pencil and paper children get to work with. This work is still at an early stage and I will be updating the repository to track progress.
>> 
>> Long term, this should lead to a new generation of AI systems that are well suited to running at the edge rather than in the cloud. There will be plenty of opportunities for agents with good enough intelligence and knowledge rather than very powerful systems with superhuman capabilities!
>> 
>> p.s. feel free to volunteer your time if you want to contribute to this work. I plan to add datasets and Python code to the GitHub repository as they are created.
> 
> From my experience, LLMs can get pretty confused with numbers. For example, try to get one to create an image with exactly 8 sticks in less than an hour. Not easy.
> 
> Would you use LLMs for the maths, or some kind of python helper?
>  
>> 
>> cogai/agents/README.md at master · w3c/cogai
>> https://github.com/w3c/cogai/blob/master/agents/README.md
>> 
>> 
>> Dave Raggett <dsr@w3.org>
>> 
>> 
>> 
> 

Dave Raggett <dsr@w3.org>

Received on Tuesday, 19 March 2024 09:57:13 UTC