Re: Why LLMs are bound to fail

I don't think it's safe to work on consciousness tech.  Good application
development seems to get shut down, whilst the opposite appears to be the
case for exploitative commodification use cases.

I've got a lot of underlying work, but the social problems preventing
materially useful safety protocols presently seem intractable.

I suspect the social web foundations come first; there are a lot of CogAI
use cases of merit there, with LLMs, but also Solid or RWW-like systems.

Then, if a hygienic test environment exists, there's some chance the
signal-to-noise ratio can be made manageable.  Yet some peers suggested
the projection was something like 30 years away before it's going to be
valued by folk.  I don't know.

But I think contributing to work that risks enslavement and abuse, with
serious humanitarian implications, without sufficiently effective safety
protocols, carries an array of moral implications: things might happen to
others that nothing can be done about if (or when) they occur.


There was a lot of talk about privacy, persons' data rights, etc.  LLMs
kind of just harvested all that data, and I don't think specific metadata
references can be removed surgically.  It also takes an enormous amount of
VRAM (or a lot of time) to run full-size models, so they're kind of
inaccessible save via particular "oracles"..
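
To put a rough number on the VRAM point: for a dense model, weight memory
is roughly parameter count times bytes per parameter.  A minimal sketch,
where the model size and precisions are illustrative assumptions, not
figures from this thread:

  # Weight memory only; KV cache and activations add overhead on top.
  def weight_vram_gb(n_params: float, bytes_per_param: float) -> float:
      return n_params * bytes_per_param / 1e9

  # Assumed 70B-parameter model, for illustration:
  print(weight_vram_gb(70e9, 2.0))  # ~140 GB at fp16
  print(weight_vram_gb(70e9, 0.5))  # ~35 GB at 4-bit quantisation

Either way, well beyond a typical consumer GPU, which is the sense in
which access ends up mediated by a few "oracles".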

Anyhow.

Consciousness research requires honesty to support the RD&D environment
for STEM applications.  For psyops and various not-so-good applications I
don't think it matters so much, but I don't think those methods provide
the apparatus needed to better understand, through use of the
instrumentation, how consciousness works, or to learn more about the
notions of spirit, god, etc.

Maybe non-western regions have different cultural circumstances that'll
help them advance the work more quickly.

But it's like the opposite of a space telescope: instrumentation pointed
inward rather than outward.

Exploration of the "innosphere".  Imagine the potential benefits for
mental health, and for humanitarian advancement generally.

Pity it hasn't been a priority; seemingly a lot else has been made a
priority over the past decades, and I don't see that changing anytime
soon...  But the tools to make different kinds of matrix environments are
well developed, and I'm hopeful that, with a bit of art, some nicer ones
can be made as alternatives to the dystopian.

With regard to AI agents, I think it comes down to making remarkable
robots, and perhaps also interop.

Lots there.  I suspect people will be able to run a large organisation of
robots on less than 1000 watts and a Starlink connection; a rough power
budget sketch follows below.
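
As a back-of-envelope check on that figure, every per-unit wattage below
is an assumption for illustration, not a measurement:

  # Rough budget: does a small fleet fit under 1000 W?
  STARLINK_W = 75        # assumed average draw for the dish
  COORDINATOR_W = 150    # assumed edge box coordinating the fleet
  ROBOT_AVG_W = 15       # assumed duty-cycled draw per robot

  def fleet_power_w(n_robots: int) -> float:
      return STARLINK_W + COORDINATOR_W + n_robots * ROBOT_AVG_W

  print(fleet_power_w(50))  # 975 W, just under the claimed budget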

The question is what people will then be best encouraged, by others'
works, to do with it.  Macedonian-style fake-news factories?

Or, means to deliver the SDGs in communities...  I'm not sure which is
easier or more difficult, but I'm also not optimistic about which ends up
providing more resources for those working on whichever they choose, to
provide for their safety and the health and mental wealth of their
families.

Therein, addressing the social issues is an important predicate for AI
safety protocols (IMHO).

Timo.

On Fri, 9 Aug 2024, 2:19 am Milton Ponson, <rwiciamsd@gmail.com> wrote:

> The market already seems to have made up its mind.
>
>
> https://finance.yahoo.com/news/generative-ai-getting-kicked-off-191657116.html
>
> The two trillion dollar question is: where do we go from here?
>
> KRR for AI and unraveling the brain processes key to memory formation,
> storage, recall and cognition seem the way to go.
>
> The latter are subject to issues of causality, nonlocality, quantum
> effects and the free energy principle.
>
> Buddhism sheds some light on causality and quantum effects.
>
> But we have a long way to go in terms of figuring out the inner workings
> of the human brain, the mathematical modeling of such, and the hard
> problem of consciousness; and current clashes between computer scientists,
> mathematicians and philosophers make it very clear that we need to figure
> out the right questions, as Dave mentioned.
>
> That may very well be one of the hardest nuts to crack.
>
> In mathematics we have the Langlands program. We may have to come up with
> something similar for defining and discovering the linkages and
> interactions between different theories and disciplines in cognitive
> science, computational biology, neuroscience, philosophy, AI and
> mathematical modeling (the latter including theoretical physics as related
> to quantum physics and quantum biology).
>
> This program will elucidate the right questions and consequently show the
> path to KRR.
>
>
> Milton Ponson
> Rainbow Warriors Core Foundation
> CIAMSD Institute-ICT4D Program
> +2977459312
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
>
>
> On Thu, Aug 8, 2024 at 6:29 AM Dave Raggett <dsr@w3.org> wrote:
>
>> LLMs are designed to make statistical predictions for text continuations,
>> and I am amazed by how well they do that. It is unsurprising that they are
>> weak on semantic consistency, just as they are for learning from limited
>> data. LLMs are good for summarisation, but not for deep insights. That’s
>> fine for some applications, but not others. LLMs can certainly help with
>> web search, but you still need to think for yourself given the limitations
>> of LLMs. I very much doubt that today’s LLMs will provide the hoped for
>> return on investment. That said, LLMs are useful and will remain
>> commonplace.
>>
>> I am not sure that Buddhism can help much when it comes to the details of
>> how neurons compute.  Next generation AI will depend on a much better
>> understanding of human memory. How is it that we can remember and learn
>> from single episodes?  That includes speech and music.  One of the
>> challenges is how neurons can learn on the scale of seconds and much longer
>> rather than milliseconds.  I am trying to figure out which questions are
>> vital to developing a model that I can implement in code.  Progress is
>> dependent on identifying the “right” questions.
>>
>>
>> On 7 Aug 2024, at 17:24, Milton Ponson <rwiciamsd@gmail.com> wrote:
>>
>> https://www.theguardian.com/technology/article/2024/aug/06/ai-llms.
>>
>> Interesting article that stresses the point that relationships between
>> facts differentiate humans (for now) from AI that uses stochastic
>> information about tokens to come up with text generation and problem
>> solving.
>>
>> Now here is the mind-boggling aspect: what Buddhists call dependent
>> arising in knowledge, the free energy principle, causal cognition, and
>> the way the brain processes, assimilates and stores sensory input all
>> hint at complex sequential processing across multiple areas of the
>> brain, with particular wave activities surging back and forth between
>> areas in the brain.
>>
>> This seems to make LLM generative AI not fit for modeling AGI.
>>
>>
>> Milton Ponson
>> Rainbow Warriors Core Foundation
>> CIAMSD Institute-ICT4D Program
>> +2977459312
>> PO Box 1154, Oranjestad
>> Aruba, Dutch Caribbean
>>
>>
>> Dave Raggett <dsr@w3.org>
>>
>>
>>
>>

Received on Thursday, 8 August 2024 16:49:35 UTC