Re: Why LLMs are bound to fail

The brain dictionary: https://youtu.be/k61nJkx5aDQ
Jeff Hawkins, Thousand Brains Theory: https://youtu.be/6VQILbDqaI4

I suspect the models have some level of internal coherence. I've
tested this by telling a model to go into 'star-trek' mode, modelling
itself on the corpus of knowledge it has about Star Trek and the
characters defined in those narratives, which is broader than any
single input prompt. I think the same can be done with other popular
media (Hollywood film/TV worlds), where the model appears to change
just enough not to be quite the same (perhaps to mitigate copyright
problems?) while still illustrating something: structuring engagement
this way is a bit like forming an interface based on a 'pointed
graph'-like instruction set.
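
As a rough sketch of what I mean (not my exact setup), here's how that
kind of persona instruction might look using the OpenAI chat
completions API in Python. The persona wording, the model name and the
user message are all assumptions for illustration:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical 'star-trek mode' persona: the system message points
    # the model at its own corpus knowledge of the franchise rather than
    # at details carried in the user prompt, a bit like fixing the
    # distinguished node of a pointed graph over what it already knows.
    persona = (
        "Enter 'star-trek mode': model your behaviour on the Star Trek "
        "corpus you were trained on, its characters, protocols and "
        "narrative conventions, and stay in that frame across turns."
    )

    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption; any chat-capable model would do
        messages=[
            {"role": "system", "content": persona},
            {"role": "user",
             "content": "Assess this away-team plan as the ship's science officer."},
        ],
    )
    print(reply.choices[0].message.content)

The point is that one short system message selects a whole region of
the training distribution, which is why it behaves more like an
instruction set than a one-off prompt.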

At least, that's a theory I've started to work through to some
degree. I haven't yet run the tests based on something like Contagion
or similar, and I haven't set up my lab environment well enough
either.

Nonetheless, FWIW, FYI.

timo.




On Thu, 8 Aug 2024 at 20:29, Dave Raggett <dsr@w3.org> wrote:
>
> LLMs are designed to make statistical predictions for text continuations, and I am amazed by how well they do that. It is unsurprising that they are weak on semantic consistency, just as they are for learning from limited data. LLMs are good for summarisation, but not for deep insights. That’s fine for some applications, but not others. LLMs can certainly help with web search, but you still need to think for yourself given the limitations of LLMs. I very much doubt that today’s LLMs will provide the hoped-for return on investment. That said, LLMs are useful and will remain commonplace.
>
> I am not sure that Buddhism can help much when it comes to the details of how neurons compute.  Next generation AI will depend on a much better understanding of human memory. How is it that we can remember and learn from single episodes?  That includes speech and music.  One of the challenges is how neurons can learn on the scale of seconds and much longer rather than milliseconds.  I am trying to figure out which questions are vital to developing a model that I can implement in code.  Progress is dependent on identifying the “right” questions.
>
>
> On 7 Aug 2024, at 17:24, Milton Ponson <rwiciamsd@gmail.com> wrote:
>
> https://www.theguardian.com/technology/article/2024/aug/06/ai-llms
>
> Interesting article that stresses the point that relationships between facts differentiate humans (for now) from AI, which uses stochastic information about tokens for text generation and problem solving.
>
> Now here is the mind-boggling aspect: Buddhists talk about dependent arising in knowledge; and the free energy principle, causal cognition, and the way the brain processes, assimilates, and stores sensory input all hint at complex sequential processing across multiple areas of the brain, with particular wave activities surging back and forth between areas.
>
> This seems to make LLM generative AI not fit for modeling AGI.
>
>
> Milton Ponson
> Rainbow Warriors Core Foundation
> CIAMSD Institute-ICT4D Program
> +2977459312
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
>
>
> Dave Raggett <dsr@w3.org>
>
>
>

Received on Thursday, 8 August 2024 10:39:23 UTC