- From: Dave Raggett <dsr@w3.org>
- Date: Tue, 26 Mar 2024 11:52:27 +0200
- To: public-cogai <public-cogai@w3.org>
Received on Tuesday, 26 March 2024 09:52:42 UTC
To explore reasoning, memory and learning, I've settled on elementary mathematics as a domain that is reasonably standalone, and as such suited to a modest budget for training costs. I've implemented a basic language model and next need to develop a script to expand the dataset with randomly generated examples.

However, I am increasingly aware of the need to enable continual learning from as little as one example. This calls for a different approach than back propagation. There is so much about AI we have yet to understand! I am scouring the literature in search of work that is relevant to language models.

I've written up some reflections on the challenges, see:

https://github.com/w3c/cogai/blob/master/agents/Reflections.md

Dave Raggett <dsr@w3.org>
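The message does not include the dataset-expansion script itself; as a minimal sketch of what such a generator might look like, assuming Python, simple integer arithmetic, and a seeded RNG for reproducibility (the function names here are hypothetical, not from the original):

```python
import random


def make_example(rng: random.Random) -> tuple[str, str]:
    """Generate one elementary arithmetic question/answer pair."""
    op = rng.choice(["+", "-", "*"])
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    if op == "-" and b > a:  # keep subtraction answers non-negative
        a, b = b, a
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return f"{a} {op} {b} = ?", str(answer)


def expand_dataset(n: int, seed: int = 0) -> list[tuple[str, str]]:
    """Produce n unique question/answer pairs, reproducible from a seed."""
    rng = random.Random(seed)
    seen: set[str] = set()
    examples: list[tuple[str, str]] = []
    while len(examples) < n:
        q, ans = make_example(rng)
        if q not in seen:  # deduplicate on the question string
            seen.add(q)
            examples.append((q, ans))
    return examples
```

Seeding the generator keeps runs reproducible, and deduplication avoids inflating the dataset with repeats, both of which matter when the training budget is modest.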