Re: LLMs are evolving to mimic human cognitive science ...

Thanks for this update. But there is a caveat: mimicking cognitive features
isn't enough. The article about biological computationalism makes a strong
case that the distinction between algorithm and machine is not in line with
experimental observation (
https://www.sciencedirect.com/science/article/pii/S0149763425005251).

This means that observation, perception, memory storage, and cognitive
functions and processes all blend into a "cognitive and consciousness smear".

This is why the Blue Brain Project (Henry Markram, EPFL) failed, and why
newer projects aimed at mimicking the human brain are bound to fail as
well: they try to correlate cognitive functionality with specific brain
cells or brain areas.

New research now points to astrocytes and neurons jointly playing a central
role in this cognitive and consciousness smear.
https://www.quantamagazine.org/once-thought-to-support-neurons-astrocytes-turn-out-to-be-in-charge-20260130/

As Dave points out, promising results have been found, but just as grand
unified theories remain the holy grail of physics, a full description of
brain function may be impossible in cognitive science and neuroscience.

In computer science terms, the brain is dualistic: a myriad of cell types
(agents) acting in ensemble, and a dizzying number of complex adaptive
systems of systems (processes at the level of both individual cells and
areas of cells) performing a symphony with multiple orchestras and
conductors all playing simultaneously.

Extricating brain function from experimental observation is hard.

Nevertheless, I am optimistic that we can achieve progress if we narrow the
focus of the functionality we try to mimic.

On Thu, Feb 5, 2026, 07:32 Dave Raggett <dsr@w3.org> wrote:

> Until recently the way to improve LLMs was to increase their training data
> and increase their context window (the number of tokens permitted in the
> prompt).
>
> That is now changing with a transition to hierarchical architectures that
> separate thinking from knowing and take inspiration from the cognitive
> sciences. Some key recent advances include DeepSeek’s Engrams [1], Google
> Research’s Titans + MIRAS [2], Mosaic Research’s MemAlign [3], hierarchical
> memory like CAMELoT [4], and Larimar which mimics the Hippocampus for
> single shot learning [5].
>
> RAG with vector indexes allows search by semantic similarity, enabling LLMs
> to scan resources that weren't in their training materials. We can go
> further by mimicking how humans use written records and catalogs to
> supplement fallible memory, enabling robust counting and aggregation,
> something that is tough for native LLMs. This involves neurosymbolic
> systems, bridging the worlds of neural AI and the semantic Web.
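A minimal sketch of the vector-index retrieval described above, using a toy
bag-of-words embedding in place of a real embedding model (the vocabulary,
documents, and function names are all illustrative assumptions, not any
particular RAG library's API):

```python
import math

def embed(text):
    """Toy bag-of-words embedding over a small fixed vocabulary.
    A real RAG system would use a learned embedding model instead."""
    vocab = ["memory", "neuron", "astrocyte", "llm", "context", "agent"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by semantic similarity to the query, return top k."""
    q = embed(query)
    scored = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

docs = [
    "astrocyte and neuron interactions",
    "llm context window limits",
    "agent memory notes",
]
print(retrieve("how large is the llm context", docs))
```

The point is only the shape of the mechanism: documents the model never
trained on become searchable by meaning rather than by keyword match.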
>
> If we want personal agents that get to know us over many interactions, one
> approach is for the agent to maintain summary notes that describe you as an
> individual. When you interact with the agent, your information is injected
> into the prompt so that the agent appears to remember you. Personal
> agents can also be given privileges to access your email, social media and
> resources on your personal devices, and to perform certain operations on
> your behalf.
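One way to read the summary-notes idea as code, sketched with made-up note
fields and a plain format string (all names and the truncation budget here
are illustrative assumptions):

```python
def build_prompt(user_notes, user_message, max_note_chars=500):
    """Inject the agent's running summary of the user into the prompt.
    Notes are truncated so the injected memory fits a context budget."""
    notes_text = "\n".join(f"- {k}: {v}" for k, v in user_notes.items())
    notes_text = notes_text[:max_note_chars]
    return (
        "You are a personal assistant. What you know about the user:\n"
        f"{notes_text}\n\n"
        f"User: {user_message}\nAssistant:"
    )

notes = {"name": "Milton", "interest": "neurosymbolic AI", "locale": "Aruba"}
prompt = build_prompt(notes, "Any new papers on astrocytes?")
print(prompt)
```

The agent only "remembers" because the notes travel with every request,
which is exactly why the context-window limit discussed next becomes the
bottleneck.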
>
> Prompt injection is constrained by the size of the context window. This is
> where newer approaches to memory can make a big difference. One challenge
> is how to manage long term personalised semantic and episodic memories with
> plenty of implications for privacy, security and trust. The LLM run-time
> combines your personalised memories with shared knowledge common to all
> users.
>
> My hunch is that much smaller models will be sufficient for many purposes,
> and have the advantage of running locally in your personal devices, thereby
> avoiding the need to transfer personal information to the cloud. Local
> agents could chat with more powerful cloud-based agents when appropriate,
> e.g. to access ecosystems of services, and to access knowledge beyond the
> local agent’s capabilities.
>
> The challenge is to ensure that such local agents are based upon open
> standards and models, rather than being highly proprietary, locking each of
> us in a particular company's embrace. That sounds like a laudable goal for
> the Cognitive AI Community Group to work on!
>
> [1] https://deepseek.ai/blog/deepseek-engram-v4-architecture
> [2] https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/
> [3] https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
> [4] https://arxiv.org/abs/2402.13449
> [5] https://arxiv.org/html/2403.11901v1
>
>
> Dave Raggett <dsr@w3.org>
>
>
>
Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean

Received on Thursday, 5 February 2026 15:52:17 UTC