Poll on adopting the chunks spec as a draft CG report

This is on behalf of the chairs of the W3C Cognitive AI Community Group with respect to formally adopting the chunks graph data and rules specification as a draft Community Group Report. Please respond to this email by Monday 2nd November 2020 if you have any objections. The aim is to mature this specification with your help, and then to publish it as an official Community Group Report following a further one-week poll of the Community Group.

The draft Chunks graph data and rules specification can be found at:

 https://w3c.github.io/cogai/

An informal introduction can be found at:

 https://github.com/w3c/cogai/blob/master/chunks-and-rules.md

The chunks graph data and rules serialisation format is designed for mimicking human cognition in terms of the cortical-basal ganglia circuit, i.e. reproducing the observed characteristics of human memory, reasoning and learning. A chunk is a typed collection of properties whose values either name other chunks or are literals, e.g. booleans, numbers and strings. Cognition is modelled in terms of a set of cortical modules together with a sequential rule engine (the basal ganglia).
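
For illustration, here is roughly what a pair of chunks looks like in the serialisation format, where "dog" and "cat" are chunk types, "dog1" and "cat1" are chunk identifiers, and the property names are made up purely for this example:

 dog dog1 {
   name "Fido"
   age 4
   friend cat1
 }
 cat cat1 {
   name "Felix"
 }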

Rule conditions match module buffers, each of which holds a single chunk. Each buffer corresponds to the concurrent firing patterns of the bundle of neurons connecting to a given cortical region. Rule actions either update the buffers directly or invoke asynchronous cortical operations, e.g. to save or recall chunks. Applications can define additional operations, e.g. to operate a robot arm, turn lights on/off, speak text and so forth.
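
As a rough sketch of the rule syntax, loosely adapted from the counting example in the informal introduction linked above: conditions appear before "=>" and actions after it, "?name" binds a variable, "@module" names the module whose buffer a chunk refers to, and "@do" invokes an operation such as recall:

 count {state start; from ?num1; to ?num2}
    => count {state counting},
       increment {@module facts; @do recall; number ?num1}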

Perception is modelled in two ways: sensory systems can a) dynamically update models in the cortex, and b) update cortical buffers to trigger rule execution (corresponding to event handlers). Cognition can influence perception, determining what's important in the current context and what can be safely ignored, e.g. what to look for when driving a car, as well as directing attention to specific aspects, e.g. a road sign or a pedestrian stepping out into the road.
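
As a purely hypothetical sketch of the second case, a sensory system could place an event chunk into a module buffer, firing a rule whose condition matches it; here "vision" as a module name and "brake" as an application-defined operation are invented for illustration:

 pedestrian {@module vision; distance ?d}
    => action {@do brake}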

Actions are modelled in terms of the cortico-cerebellar circuit independently executing commands delegated to it by the cortical-basal ganglia circuit. The cerebellum acts like a flight controller, providing real-time concurrent control over actuators (muscles) using perceptual data from the cortex. Examples include walking, talking, and playing a musical instrument.

This architecture is based upon decades of work in the cognitive sciences, and comes with an implementation in JavaScript and a suite of web-based demos. The chunks serialisation format is easy to parse and easy to understand (simpler than JSON-LD), and includes support for mapping to/from RDF for integration with Linked Data.
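
As an indicative sketch of the RDF integration (see the specification for the normative syntax), the draft allows you to declare mappings from chunk names to RDF IRIs along the following lines, with the example IRIs invented here:

 @rdfmap {
   dog http://example.com/ns/dog
   cat http://example.com/ns/cat
 }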

If you have any questions, please don’t hesitate to ask!

P.S. Additional specifications are planned for a chunk module API, and for a rule language for mapping natural language syntax and semantics to support conversational interfaces.

Best regards,

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of Things
