- From: Dave Raggett <dsr@w3.org>
- Date: Thu, 8 Aug 2024 11:29:14 +0100
- To: Milton Ponson <rwiciamsd@gmail.com>
- Cc: W3C AIKR CG <public-aikr@w3.org>, public-cogai <public-cogai@w3.org>
- Message-Id: <2FCBFB8B-54B2-4667-8D82-4EF6B93B06CA@w3.org>
LLMs are designed to make statistical predictions for text continuations, and I am amazed by how well they do that. It is unsurprising that they are weak on semantic consistency, just as they are at learning from limited data. LLMs are good for summarisation, but not for deep insights. That's fine for some applications, but not for others. LLMs can certainly help with web search, but given their limitations you still need to think for yourself. I very much doubt that today's LLMs will provide the hoped-for return on investment. That said, LLMs are useful and will remain commonplace.

I am not sure that Buddhism can help much when it comes to the details of how neurons compute. Next-generation AI will depend on a much better understanding of human memory. How is it that we can remember and learn from single episodes? That includes speech and music. One of the challenges is how neurons can learn on the scale of seconds and much longer, rather than milliseconds.

I am trying to figure out which questions are vital to developing a model that I can implement in code. Progress depends on identifying the "right" questions.

> On 7 Aug 2024, at 17:24, Milton Ponson <rwiciamsd@gmail.com> wrote:
>
> https://www.theguardian.com/technology/article/2024/aug/06/ai-llms
>
> Interesting article that stresses the point that relationships between facts differentiate humans (for now) from AI that uses stochastic information about tokens for text generation and problem solving.
>
> Now here is the mind-boggling aspect: Buddhists talk about dependent arising in knowledge; the free energy principle, causal cognition, and the way the brain processes, assimilates, and stores sensory input all hint at complex sequential processing across multiple areas of the brain, with particular wave activities surging back and forth between them.
>
> This seems to make LLM generative AI not fit for modeling AGI.
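The opening point, that LLMs are statistical predictors of text continuations, can be illustrated with a toy autoregressive sampler. This is a minimal sketch with a made-up bigram table, not any real LLM: at each step the model assigns a probability distribution over possible next tokens given the context, samples one, appends it, and repeats.

```python
import random

# Hypothetical toy bigram "model": P(next token | previous token).
BIGRAM = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def continue_text(tokens, steps, rng):
    """Autoregressively sample up to `steps` continuation tokens."""
    tokens = list(tokens)
    for _ in range(steps):
        dist = BIGRAM.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(" ".join(continue_text(["the"], 3, random.Random(0))))
```

A real LLM replaces the lookup table with a neural network conditioned on the whole context, but the generation loop is the same shape, which is why fluency comes more easily than semantic consistency.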
> Milton Ponson
> Rainbow Warriors Core Foundation
> CIAMSD Institute-ICT4D Program
> +2977459312
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
Received on Thursday, 8 August 2024 10:29:30 UTC