Re: and two more thoughts

> On 22 Sep 2020, at 05:40, Paola Di Maio <paola.dimaio@gmail.com> wrote:
> 
> - would be nice to see a more explicit explanation of how the  chunks (are you proposing that chunks become a specification?) fit in the
> ACT-R architecture, and how ACT-R fits/reflects the cortex/cognitive function 

You can see an introduction to chunks and rules at:

	https://github.com/w3c/cogai/blob/master/chunks-and-rules.md
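For a quick flavour of the notation, here is a minimal sketch of a couple of chunks (the type and property names are invented for illustration; please see the linked document for the exact syntax):

```
dog dog1 {
  name "Fido"
  age 4
  friend cat1
}
cat cat1 {
  name "Tiddles"
}
```

Here "dog" is the chunk type, "dog1" an identifier, and the body holds name/value properties; "friend cat1" names another chunk, so a collection of chunks forms a graph.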


The sandbox is the first step towards a suite of tutorials that people can try out in their web browser. It would be great to get some help with developing those tutorials!

	https://www.w3.org/Data/demos/chunks/sandbox/

> - since ACT- R is a very abstract model, would be nice to see an analysis of how the demos implement and validate the model

We could perhaps devote upcoming telecons to a detailed look at individual demos.

> (sorry if this is obvious)   can ACT-R   valid in real/useful in the real world and can the demos help to identify also its limitations?

ACT-R is designed to support cognitive science experiments. Chunks and Rules, by contrast, are aimed at a wider audience, using a syntax that should be easier to work with (simpler than JSON-LD), along with a web page library for easy integration into web applications. Moreover, the aim is to support remote access to scalable cognitive databases suitable for large applications.
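As an informal illustration of that comparison (a hand-written sketch, not taken from either specification), the same information expressed as a chunk and as minimal JSON-LD:

```
chunk:    dog dog1 {name "Fido"; age 4}

JSON-LD:  {"@id": "dog1", "@type": "dog", "name": "Fido", "age": 4}
```

In practice JSON-LD also needs an @context to map terms to IRIs, which is part of what makes the chunk notation lighter to read and write.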

There are many potential applications as the technologies mature, e.g.:

- Helping non-programmers to work with data (worth $21B by 2022 according to Forrester)
- Cognitive agents in support of customer services (worth $5.6B by 2023)
- Smart chatbots for personal healthcare
- Assistants for detecting and responding to cyberattacks
- Teaching assistants for self-paced online learning
- Autonomous vehicles
- Smart manufacturing

Volunteers are needed to help with analysing each of these.

Sequential rule execution over cognitive buffers calls for a particular programming style. In the short term, rule sets need to be developed by hand, but the long-term aim is for rule sets to be created and extended through machine learning. Work on ACT-R has shown how this can be realised in terms of heuristics for proposing candidate rules together with reinforcement learning to evolve effective rule sets. Metacognition, case-based reasoning and hierarchical reinforcement learning allow this to be scaled to more complex scenarios.
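As a rough illustration of that style (the chunk and state names here are invented; the chunks and rules document linked above gives the authoritative syntax), each rule matches the chunk held in a buffer and updates it to trigger the next rule:

```
make-tea {state start}
   => make-tea {state boiling}

make-tea {state boiled}
   => make-tea {state brewing}
```

Each firing rewrites the goal chunk's state, so rules execute one after another, which is why operations over more than a single chunk are delegated to graph algorithms or application-defined operations.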

The JavaScript library makes it easy to implement graph algorithms and to integrate with sensors and actuators, as illustrated by the Robot and smart home demos.  Graph algorithms escape the constraints of the rule language, in which operations are performed on buffers limited to single chunks.  The ongoing work on natural language understanding and generation may lead to a separate rule language for mapping between syntax and semantics.  That work is likely to take some months yet, and it would be great to get some technical help with the analysis.
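To make that concrete, here is a hedged sketch in plain JavaScript (it deliberately does not use the chunks library API; chunks are modelled as ordinary objects, and all names are invented) showing a breadth-first search over links between chunks, something a rule working on one chunk per buffer cannot easily express:

```javascript
// Chunks modelled as plain objects: each has a type and a "doorTo"
// property listing the ids of chunks it links to.
const graph = {
  hall:    { type: "room", doorTo: ["kitchen", "lounge"] },
  kitchen: { type: "room", doorTo: ["garden"] },
  lounge:  { type: "room", doorTo: [] },
  garden:  { type: "room", doorTo: [] }
};

// Breadth-first search over the links, returning the first path
// from start to goal as an array of chunk ids, or null if none.
function findPath(start, goal) {
  const queue = [[start]];
  const seen = new Set([start]);
  while (queue.length > 0) {
    const path = queue.shift();
    const id = path[path.length - 1];
    if (id === goal) return path;
    for (const next of graph[id].doorTo) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(path.concat(next));
      }
    }
  }
  return null;
}

console.log(findPath("hall", "garden")); // [ 'hall', 'kitchen', 'garden' ]
```

The algorithm needs to track a frontier of paths and a set of visited chunks, i.e. state spanning many chunks at once, which is exactly what falls outside the single-chunk buffer model.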

The paper you cited makes broad claims about knowledge limits to cognitive architectures. I completely agree that direct manual development of declarative and procedural knowledge has scaling problems. This is why work on machine learning and natural language processing is so important, e.g. to allow the use of natural language and simulated environments as a framework for teaching and assessing skills. Incremental progress can be exploited to address business needs. This is a journey with many interesting places to explore along the way. An analogy is island hopping rather than a single voyage across a vast ocean.

Best regards,
Dave


> 
> Related article
> 
> Representational Limits in Cognitive Architectures
> http://ceur-ws.org/Vol-1855/EUCognition_2016_Part4.pdf
> 

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Tuesday, 22 September 2020 10:19:04 UTC