Re: LLMs are evolving to mimic human cognitive science ...

Hi Daniel,

I need to upload my paper “AI for Office Work: closing the gap between language and deterministic processing”, which gives a more detailed explanation of how semantic orchestration can be exploited, as I think we need to discuss the relationship between LLMs and symbolic approaches further.

Best regards,
Dave

> On 5 Feb 2026, at 12:58, Daniel Ramos <capitain_jack@yahoo.com> wrote:
> 
> Hi Dave,
> 
>     Thanks for this note — I’m very aligned with the shift you describe: moving from “bigger models / bigger windows” toward architectures that separate thinking from knowing, and that take cognitive memory seriously (Engrams, Titans/MIRAS, MemAlign, CAMELoT, Larimar, etc.). (The Decoder <https://the-decoder.com/google-outlines-miras-and-titans-a-possible-path-toward-continuously-learning-ai/>)
> 
> I think K3D lands squarely in the same direction, but with two emphases that matter for W3C work:
> 
> 1) Open-world memory as externalized structure, not just internal caches
> 
> 2) Neurosymbolic = symbolic constraints + executable procedures (not only embeddings)
> 

Dave Raggett <dsr@w3.org>

Received on Sunday, 8 February 2026 14:40:41 UTC