- From: Milton Ponson <rwiciamsd@gmail.com>
- Date: Mon, 9 Feb 2026 10:01:00 -0400
- To: Dave Raggett <dsr@w3.org>
- Cc: public-cogai <public-cogai@w3.org>
- Message-ID: <CA+L6P4y53UpSafRyuEGsKwPAkxOH75tjm4yFoDcSziPrngbQaA@mail.gmail.com>
On Mon, Feb 9, 2026, 07:25 Dave Raggett <dsr@w3.org> wrote:

> Some more on memory ...
>
> On 8 Feb 2026, at 15:04, Dave Raggett <dsr@w3.org> wrote:
>
> … the current technical research focus is on *memory and reasoning*, and
> so far to a lesser extent on continual learning. Findings in the cognitive
> sciences are providing valuable research insights for novel neural AI
> architectures. The goal is not to reproduce people, but rather to provide
> useful tools.
>
> Sam Altman, OpenAI CEO, recently spoke out on how he sees AI evolving
> [1]. He emphasised improvements in agent memory which he expects to arrive
> in the next few years:

On a personal note, I don't care much about the musings or utterances of Sam Altman.

> What perfect memory looks like:
>
> Remembers every word you've ever said to it.
> Has read every email you've written, every document you've created.
> Knows your small preferences you didn't even think to indicate.
> Personalized across every detail of your entire life.
> Tracks changes in your preferences over time.
> Understands context from years ago without you having to remind it.

Humans have a memory that is constantly being revised, updated and edited; there are many reasons why nature has provided us with the biological machinery to do so. People with hyperthymesia, or highly superior autobiographical memory, remember everything, and this is classified as a disorder. An agent that mimics it could cause psychosocial problems and disrupt the normal interchange customary between humans. Humans and their brains are highly selective in prioritizing what to remember and what to forget, and attuning this "perfect memory" to that characteristic may well be the biggest challenge (a rough sketch of what such selective retention could look like follows further below).

> AI will offer perfect, unlimited, consistent memory. No degradation over
> time. No confusion between similar events. Complete recall of every
> interaction.

Again, this may not be ideal in certain cases or circumstances, or for certain people (politicians and diplomats, for instance).

> Altman also talks about all of us getting used to describing our intents
> and delegating agency to AI to figure out and then execute the actions
> needed to accomplish those intents, given what the AI knows about us.

That can be done using StratML.

> This will require fastidious attention to trust, privacy and security.
> Where should such highly personal information be held, and how can this be
> kept from attackers and malicious employers and autocratic governments?
> This gives businesses operating such agents huge power over our lives. How
> can we ensure open and fair ecosystems and avoid abusive controlling
> behaviour?

The General Data Protection Regulation of the European Union provides an idea of how to accomplish this: the GDPR requires a dedicated data protection officer to deal with exactly these issues.

I am working on an idea, called Resilient Infrastructure Mains Computing, in which AI functionality primarily addresses domains of discourse (which include so-called world models), to allow execution of very specific tasks requiring very specific skills in well-defined application environments. The mains for the required infrastructures are thus highly localized, as are the data repositories, computing and networking facilities. This resolves data and digital ecosystem sovereignty issues, as well as national legal requirements for AI, and it also addresses energy and water infrastructure issues and environmental and socioeconomic impacts.
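As a purely illustrative sketch (the class, function names, scoring formula and threshold below are hypothetical, not taken from any existing system or API), selective retention for an agent memory could look roughly like this in Python:

    import math
    import time
    from dataclasses import dataclass, field

    @dataclass
    class MemoryItem:
        text: str
        importance: float                  # 0..1, how much the user appears to care
        last_used: float = field(default_factory=time.time)

    def retention_score(item: MemoryItem, now: float, half_life_days: float = 30.0) -> float:
        # Importance weighted by exponential recency decay; the formula is arbitrary.
        age_days = (now - item.last_used) / 86400.0
        recency = math.exp(-math.log(2.0) * age_days / half_life_days)
        return item.importance * recency

    def prune(memories: list[MemoryItem], keep: int) -> list[MemoryItem]:
        # Retain only the `keep` best-scoring items; everything else is deliberately forgotten.
        now = time.time()
        ranked = sorted(memories, key=lambda m: retention_score(m, now), reverse=True)
        return ranked[:keep]

The point is not this particular formula, which is arbitrary, but that forgetting becomes an explicit, tunable policy rather than a side effect of storage limits.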
This also means that the current hype and the massive investments in hyperscale datacenters have obsolescence built in, which will render the return on those investments moot. We have also largely overlooked new developments in semiconductors in which digital and analog processes can be combined; these may very soon render the current paradigm of hyperscale datacenters with tens or hundreds of thousands of GPUs obsolete.

> My hunch is that we will find better ways to decentralise AI and move it
> to the edge. Rather than an all powerful AI, we should aim for a society
> of AIs with different skills and different roles. Continual learning will
> allow AIs to acquire new skills over time and as needed.

A society of AIs with different skills and roles is exactly what the domains of discourse paradigm is all about.

> [1]
> https://www.theneuron.ai/explainer-articles/openais-vision-for-2026-sam-altman-lays-out-the-roadmap/
>
> Dave Raggett <dsr@w3.org>

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Received on Monday, 9 February 2026 14:01:18 UTC