- From: Melvin Carvalho <melvincarvalho@gmail.com>
- Date: Tue, 19 Mar 2024 02:43:56 +0100
- To: Dave Raggett <dsr@w3.org>
- Cc: public-cogai <public-cogai@w3.org>
- Message-ID: <CAKaEYhK1cqoLKMHNU_miydK7dOLLnDBm7UNiyh2DrNnQ6zuaYA@mail.gmail.com>
On Mon, 18 Mar 2024 at 17:23, Dave Raggett <dsr@w3.org> wrote:

> I've created a README to describe work in progress on designing,
> implementing and testing artificial neural networks that mimic human
> cognition. The goal is to demonstrate neural architectures for memory,
> deliberative reasoning and continual learning. This needs to be trained on
> data with constrained language and semantics to enable effective learning
> with a modest dataset and model size.
>
> Large Language Models (LLMs) are trained to predict the next word based
> upon the preceding words. Humans learn from about 3-5 orders of magnitude
> less data. A key difference is that human language is grounded in the real
> world. We can mimic that by encouraging cognitive agents to try things out
> themselves using reinforcement learning, along with some measure of
> reflective thinking. Learning can be viewed as a combination of
> observation, instruction, experience and reflection.
>
> I have settled on elementary mathematics as a tractable domain for these
> experiments, given that the knowledge is pretty much standalone, involving
> a combination of rote learning and step-by-step reasoning. I've introduced
> digital worksheets in lieu of the pencil and paper children get to work
> with. This work is still at an early stage and I will be updating the
> repository to track progress.
>
> Long term this should lead to a new generation of AI systems that are well
> suited to running at the edge rather than in the cloud. There will be
> plenty of opportunities for agents with good-enough intelligence and
> knowledge, rather than very powerful systems with superhuman capabilities!
>
> p.s. feel free to volunteer your time if you want to contribute to this
> work. I plan to add datasets and Python code to the GitHub repository as
> they are created.

From my experience LLMs can get pretty confused with numbers. For example, try to create an image with exactly 8 sticks in less than an hour. Not easy.
Would you use LLMs for the maths, or some kind of Python helper?

> cogai/agents/README.md at master · w3c/cogai
> <https://github.com/w3c/cogai/blob/master/agents/README.md>
>
> Dave Raggett <dsr@w3.org>
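For the "Python helper" route, one common pattern is to let the model produce the arithmetic expression and delegate the actual calculation to a small, restricted evaluator, so the numbers never depend on next-word prediction. A minimal sketch of such a helper, walking the `ast` of an expression rather than calling `eval` (the function names are mine, not anything from the cogai repo):

```python
import ast
import operator

# Binary operators the toy evaluator is willing to apply
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def evaluate(expr: str):
    """Evaluate an elementary arithmetic expression safely by walking
    its syntax tree, instead of asking the language model for the answer."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError(f"unsupported expression: {ast.dump(node)}")
    return walk(ast.parse(expr, mode="eval"))

print(evaluate("3 + 4 * 2"))  # prints 11
```

Anything outside the whitelist (names, calls, attribute access) raises an error, which is the point: the helper is deterministic and auditable, while the model is left to do the step-by-step reasoning about which expression to compute.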
Received on Tuesday, 19 March 2024 01:44:13 UTC