- From: Dave Raggett <dsr@w3.org>
- Date: Mon, 13 Oct 2025 13:31:14 +0100
- To: Matthew Stewart <mpstewart94@gmail.com>
- Cc: public-cogai <public-cogai@w3.org>
- Message-Id: <7639B10C-A513-4DB7-BBA7-31B3FA82A445@w3.org>
Hi Matthew,

Welcome to the Cognitive AI CG. The CG [1] has developed an abstraction for facts and rules above the level of RDF's triples and applied it to a suite of demos. This was inspired by John Anderson's work on ACT-R as a predictive basis for cognitive science experiments. Our work (chunks & rules [2]) can be applied to control digital twins, e.g. a cognitive agent that controls factory equipment [3] or a smart home [4]. Recent work extended this to support a swarm of cognitive agents with the means to delegate and synchronise tasks across agents.

Our other work looked at extending reasoning from logic to argumentation, given the limitations of logic in everyday situations. The Plausible Knowledge Notation was inspired by work by Allan Collins in the 1980s. It supports reasoning with imperfect knowledge, that is, everyday knowledge subject to uncertainty, imprecision, incompleteness and inconsistency. See the draft specification for the Plausible Knowledge Notation [5] and the web-based PKN demonstrator. This builds on guidelines for effective argumentation developed by a long line of philosophers since the days of Ancient Greece. In place of logical proof, we have multiple lines of argument for and against the premise in question, just as in courtrooms and everyday reasoning.

I am currently working on an implementation of a vision for the Immersive Web [6], which embraces virtual and augmented reality, building upon existing Web standards such as WebGPU, WebXR, WebNN, WebSockets and WebRTC. It features intent-based behaviours as a basis for accessibility. The Immersive Web includes avatars for people, cognitive agents and digital twins. I am hoping to combine large and small AI models as a basis for recognising and generating facial expressions and body-language gestures in real time, going well beyond what Meta and Microsoft support in their metaverses.

Further out, I want to work on Sentient AI featuring continual reasoning and learning based upon episodic memory. Agents based upon Sentient AI are aware of their environment, along with their goals and performance. This calls for research on new AI architectures inspired by the human brain and how we learn and reason.

Please email public-cogai@w3.org with an introduction to yourself and your interest in Cognitive AI.

[1] https://github.com/w3c/cogai
[2] https://w3c.github.io/cogai/chunks-and-rules.html
[3] https://www.w3.org/Data/demos/chunks/robot/
[4] https://www.w3.org/Data/demos/chunks/home/
[5] https://w3c.github.io/cogai/pkn.html
[6] https://www.w3.org/2024/06-Raggett-immersive-web.pdf

Best regards,

Dave Raggett <dsr@w3.org>
Received on Monday, 13 October 2025 12:31:26 UTC