Re: LLMs are evolving to mimic human cognitive science ...

Hi Dave,

     Thanks for this note — I’m very aligned with the shift you 
describe: moving from “bigger models / bigger windows” toward 
architectures that separate /thinking/ from /knowing/, and that take 
cognitive memory seriously (Engrams, Titans/MIRAS, MemAlign, CAMELoT, 
Larimar, etc.). (The Decoder 
<https://the-decoder.com/google-outlines-miras-and-titans-a-possible-path-toward-continuously-learning-ai/>)

I think K3D lands squarely in the same direction, but with two emphases 
that matter for W3C work:

*1) Open-world memory as /externalized structure/, not just internal caches*
     What you describe (RAG + hierarchical memory) is essentially 
“augment the model with an extendable memory substrate.” The 
Titans/MIRAS line of work is a great example: it explicitly explores 
memory structures and update rules so that systems can retain and use 
long-range information more effectively. (The Decoder 
<https://the-decoder.com/google-outlines-miras-and-titans-a-possible-path-toward-continuously-learning-ai/>)
     K3D takes the next step: memory is a /shared, inspectable, spatial/ 
substrate (a “3D knowledge universe”) that both humans and AI can 
traverse. This is where the Semantic Web bridge becomes practical: the 
knowledge lives outside the model, can be versioned, audited, linked, 
and standardized.
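     To make that concrete, here is a minimal sketch of what an 
externalized, traversable memory node could look like. All names here 
(KnowledgeNode, the ex: IRIs, the link predicates) are illustrative 
assumptions of mine, not K3D's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the memory substrate lives *outside* the model,
# so it can be versioned, audited, and linked like any other web resource.
@dataclass
class KnowledgeNode:
    iri: str                                   # stable, linkable identifier
    position: tuple                            # (x, y, z) in the knowledge universe
    version: int = 1
    links: dict = field(default_factory=dict)  # predicate -> target IRI

store = {}  # shared substrate, inspectable by humans and agents alike

def add_node(node: KnowledgeNode):
    store[node.iri] = node

def traverse(iri: str, predicate: str):
    """Follow a typed link; the same API serves human tooling and agents."""
    return store.get(store[iri].links.get(predicate))

add_node(KnowledgeNode("ex:Titans", (0.0, 1.0, 2.0),
                       links={"ex:extends": "ex:Transformer"}))
add_node(KnowledgeNode("ex:Transformer", (0.0, 0.0, 0.0)))

print(traverse("ex:Titans", "ex:extends").iri)  # ex:Transformer
```

The point of the sketch is only the separation of concerns: the model 
thinks, while the substrate knows, and the substrate is plain data.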

*2) Neurosymbolic = symbolic constraints + executable procedures (not 
only embeddings)*
     I completely agree that we can go beyond “semantic similarity 
retrieval” by bringing in written records, catalogs, 
counting/aggregation, and symbolic constraints — i.e., the Semantic Web 
stack plus neural components. (arxiv.org 
<https://arxiv.org/html/2403.11901v1>)
     In K3D, retrieved items aren’t just text snippets: they can be 
/procedural/ (deterministic programs) plus semantic metadata 
(RDF/OWL-style meaning), so the system can execute transformations and 
verify outcomes instead of improvising via natural-language reasoning.
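     A toy sketch of that idea, under my own assumptions (the predicates 
and the unit-conversion procedure are invented for illustration, not 
taken from K3D):

```python
# A retrieved knowledge item that carries RDF-style semantic metadata
# *and* a deterministic, executable procedure with a machine-checkable
# postcondition -- so the system verifies outcomes rather than improvising.
item = {
    "metadata": [  # (subject, predicate, object) triples
        ("ex:c2f", "rdf:type", "ex:Procedure"),
        ("ex:c2f", "ex:inputUnit", "unit:Celsius"),
        ("ex:c2f", "ex:outputUnit", "unit:Fahrenheit"),
    ],
    "procedure": lambda c: c * 9 / 5 + 32,
    # postcondition: inverting the result must recover the input
    "check": lambda c, f: abs((f - 32) * 5 / 9 - c) < 1e-9,
}

def execute(item, value):
    result = item["procedure"](value)
    assert item["check"](value, result), "postcondition failed"
    return result

print(execute(item, 100.0))  # 212.0
```

The contrast with free-form natural-language reasoning is the `check`: 
a failed postcondition is a hard error, not a plausible-sounding answer.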

*Why I’m bringing this up now (and why it’s relevant to the Cognitive AI 
CG)*
     Your closing point is the one I care about most: local agents and 
memory should not lock people into proprietary runtimes; we need open 
formats, open semantics, and portable interfaces. (arxiv.org 
<https://arxiv.org/html/2403.11901v1>)
     K3D is being built specifically around that: open representation + 
auditable execution + local-first operation, with a standards path (glTF 
as the carrier, plus formal semantics for memory + procedures).
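     For a flavor of the glTF route: glTF 2.0 already has a standard 
`extensions` mechanism, so semantic memory can ride inside an ordinary 
asset that existing tooling still loads. The extension name below 
("K3D_node_memory") is my hypothetical placeholder, not a registered 
glTF extension or K3D's actual format:

```python
import json

# Sketch of a glTF 2.0 asset carrying memory metadata via a (hypothetical)
# vendor extension. Unextended glTF viewers simply ignore the extra data.
asset = {
    "asset": {"version": "2.0"},
    "nodes": [{
        "name": "concept:Titans",
        "translation": [0.0, 1.0, 2.0],   # spatial position doubles as layout
        "extensions": {
            "K3D_node_memory": {
                "iri": "ex:Titans",
                "links": {"ex:extends": "ex:Transformer"},
            }
        },
    }],
    "extensionsUsed": ["K3D_node_memory"],
}

doc = json.dumps(asset, indent=2)
print(doc)
```

That is what makes the standards path attractive: the carrier format, 
the extension registry, and the validation tooling already exist.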

     If useful, I can share a more technical write-up that maps K3D 
directly against the architectures you cited and calls out what I think 
is “standardizable surface area” (file formats, memory protocols, and 
execution semantics).

If you’d like, I’ll follow up with:

  * a compact “K3D in 10 minutes” overview (for the CG audience), and

  * a separate technical appendix (for implementers / standards discussion).

In the meantime, if you'd like to explore what's already public:

• *Quick Overview* (10-minute audio deep-dive generated by NotebookLM):
https://notebooklm.google.com/notebook/1bd10bda-8900-4c41-931e-c9ec67ac865f
   *(This covers the Three-Brain System, sovereignty architecture, and 
ARC-AGI results)*

• *Technical Repository* (18,000+ words of specifications + working code):
https://github.com/danielcamposramos/Knowledge3D
   *(docs/vocabulary/ has the formal specs; TEMP/ has production 
validation reports)*

• *Demonstrations* (NotebookLM-generated videos explaining key concepts):
https://www.youtube.com/@EchoSystemsAIStudios
   *(Visual walkthroughs of spatial memory, procedural knowledge, etc.)*

• *Professional Contact* (LinkedIn, preferred social network for 
follow-ups):
https://www.linkedin.com/in/danielcamposramos/

The NotebookLM overview is probably the fastest way to get a sense of 
the architecture without diving into code.

Best,
Daniel Ramos


On 2/5/26 8:32 AM, Dave Raggett wrote:
> Until recently the way to improve LLMs was to increase their training 
> data and increase their context window (the number of tokens permitted 
> in the prompt).
>
> That is now changing with a transition to hierarchical architectures 
> that separate thinking from knowing and take inspiration from the 
> cognitive sciences. Some key recent advances include DeepSeek’s 
> Engrams [1], Google Research’s Titans + MIRAS [2], Mosaic Research’s 
> MemAlign [3], hierarchical memory like CAMELoT [4], and Larimar which 
> mimics the Hippocampus for single shot learning [5].
>
> RAG with vector indexes allows search by semantic similarity, enabling 
> LLMs to scan resources that weren’t in their training materials. We 
> can go further by mimicking how humans use written records and 
> catalogs to supplement fallible memory, enabling robust counting and 
> aggregation, something that is tough for native LLMs. This involves 
> neurosymbolic systems, bridging the worlds of neural AI and the 
> semantic Web.
>
> If we want personal agents that get to know us over many interactions, 
> one approach is for the agent to maintain summary notes that describe 
> you as an individual. When you interact with the agent, your 
> information is injected into the prompt so that the agent appears to 
> remember you.   Personal agents can also be given privileges to access 
> your email, social media and resources on your personal devices, and 
> to perform certain operations on your behalf.
>
> Prompt injection is constrained by the size of the context window. 
> This is where newer approaches to memory can make a big difference. One 
> challenge is how to manage long term personalised semantic and 
> episodic memories with plenty of implications for privacy, security 
> and trust. The LLM run-time combines your personalised memories with 
> shared knowledge common to all users.
>
> My hunch is that much smaller models will be sufficient for many 
> purposes, and have the advantage of running locally in your personal 
> devices, thereby avoiding the need to transfer personal information to 
> the cloud. Local agents could chat with more powerful cloud-based 
> agents when appropriate, e.g. to access ecosystems of services, and to 
> access knowledge beyond the local agent’s capabilities.
>
> The challenge is to ensure that such local agents are based upon open 
> standards and models, rather than being highly proprietary, locking 
> each of us in a particular company's embrace. That sounds like a 
> laudable goal for the Cognitive AI Community Group to work on!
>
> [1] https://deepseek.ai/blog/deepseek-engram-v4-architecture
> [2] 
> https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/ 
>
> [3] 
> https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
> [4] https://arxiv.org/abs/2402.13449
> [5] https://arxiv.org/html/2403.11901v1
>
>
> Dave Raggett <dsr@w3.org>

Received on Thursday, 5 February 2026 12:58:18 UTC