Re: AI works best when humans are there to hold its hand.

See also:

> An understanding of AI’s limitations is starting to sink in
> After years of hype, many people feel AI has failed to deliver, says Tim Cross

https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in

Including:

> Real managers in real companies are finding AI hard to implement and that enthusiasm is cooling

and this:

> They are powerful pattern-recognition tools, but lack many cognitive abilities that biological brains take for granted. They struggle with reasoning, generalising from the rules they discover, and with the general-purpose savoir faire that researchers, for want of a more precise description, dub “common sense”. The result is an artificial idiot savant that can excel at well-bounded tasks, but can get things very wrong if faced with unexpected input.

That’s why the W3C Cognitive AI CG is focusing on mimicking the human brain at a functional level, and benefiting from hundreds of millions of years of evolution. This has involved a shift in mindset from logic and formal semantics to a more cognitive approach.

Manual development of symbolic AI doesn’t scale either, but a combination of symbolic and statistical approaches paves the way to cognitive agents that can learn from experience, guided by human collaborators.

The immediate challenge is to open up the use of natural language through incremental, concurrent processing of syntax and semantics. This provides a basis for addressing the abundant ambiguity in natural language and paves the way for teaching cognitive agents everyday skills.
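
To make that concrete, here is a rough sketch in TypeScript of what incremental processing looks like: as each word arrives, candidate attachments to earlier words are proposed by the syntax and vetoed by a semantic plausibility check, so ambiguity is pruned word by word rather than after a full parse. The names and the nearest-head heuristic are purely illustrative and not taken from the demo’s code.

    type Word = { form: string; pos: string };
    type Edge = { head: number; dep: number; label: string };

    interface Lexicon {
      // semantic check: can `dep` plausibly attach to `head` with this label?
      plausible(head: Word, dep: Word, label: string): boolean;
    }

    // Build a word dependency graph incrementally, one word at a time.
    function parseIncrementally(words: Word[], lexicon: Lexicon): Edge[] {
      const edges: Edge[] = [];
      words.forEach((word, i) => {
        if (i === 0) return;                       // nothing earlier to attach to
        for (let h = i - 1; h >= 0; h--) {         // syntax proposes earlier words as heads
          const label = word.pos === "adj" ? "mod" : "arg";   // toy label choice
          if (lexicon.plausible(words[h], word, label)) {     // semantics prunes the rest
            edges.push({ head: h, dep: i, label });
            break;                                 // nearest plausible head wins (toy heuristic)
          }
        }
      });
      return edges;
    }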

This is much easier to arrange in a cognitive architecture, as it is trivial to launch cognitive processes by setting goals that trigger reasoning. You can get a first glimpse of a very simple demo at

 https://www.w3.org/Data/demos/chunks/nlp/toh/
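
To give a flavour of how little machinery is needed to launch reasoning from a goal, here is a minimal sketch in TypeScript (my own illustration, not the demo’s source): a goal buffer holds a chunk, and rules whose conditions match the buffer fire and rewrite it until nothing more applies.

    type Chunk = { type: string; [prop: string]: string | number };

    interface Rule {
      name: string;
      matches(goal: Chunk): boolean;
      apply(goal: Chunk): Chunk | null;   // next goal, or null to stop
    }

    // Setting a goal is all it takes to trigger a chain of rule firings.
    function runGoal(initial: Chunk, rules: Rule[], log: (msg: string) => void): void {
      let goal: Chunk | null = initial;
      while (goal) {
        log(`goal: ${JSON.stringify(goal)}`);
        const current = goal;
        const rule = rules.find(r => r.matches(current));
        if (!rule) break;                 // no rule applies, so cognition halts
        log(`firing: ${rule.name}`);
        goal = rule.apply(current);
      }
    }

    // Example: a trivial counting goal that a single rule advances to completion.
    runGoal(
      { type: "count", from: 1, to: 3 },
      [{
        name: "increment",
        matches: g => g.type === "count" && (g.from as number) < (g.to as number),
        apply: g => ({ ...g, from: (g.from as number) + 1 }),
      }],
      console.log
    );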

On Chrome it also supports speech recognition: click the microphone, then hit enter if the text that appears after a second or two looks okay. This demo invokes cognition after generating the word dependency graph. The next demo will use fully concurrent processing of syntax and semantics.
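
For what it’s worth, the browser side of the speech input can be wired up with the Web Speech API along the following lines; this is a simplified sketch with illustrative names rather than the demo’s actual code.

    // Chrome exposes the Web Speech API under a webkit prefix.
    const Recognition =
      (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

    function listen(input: HTMLInputElement, onConfirm: (text: string) => void): void {
      const recognizer = new Recognition();
      recognizer.lang = "en-US";
      recognizer.interimResults = false;

      recognizer.onresult = (event: any) => {
        // show the transcript so the user can check it before it is parsed
        input.value = event.results[0][0].transcript;
      };

      input.addEventListener("keydown", (e) => {
        if (e.key === "Enter") onConfirm(input.value);  // user accepts the transcript
      });

      recognizer.start();
    }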

Whilst Google’s speech recognition is pretty good, today’s neural network based speech recognition lacks context, as well as the real-time integration with semantics that would make it much more effective. In the longer term, integration with emotional processing will allow for more natural human-machine interaction.

Here is a demo that shows how modelling the cortico-basal ganglia circuit can support real-time control of factory machinery:

 https://www.w3.org/Data/demos/chunks/robot/

The log shows a trace of goals and rule execution.
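
The control loop behind such a demo can be pictured roughly as follows; this TypeScript sketch uses my own names and a crude stand-in for basal ganglia action selection, so read it as an outline of the idea rather than the demo’s implementation. Each cycle, the rules matching the current goal are gathered, one is selected, its actions are sent to the machinery, and the goal and rule are logged, which is what produces a trace like the one above.

    type Goal = { task: string; [slot: string]: string | number };

    interface Plant {
      command(device: string, action: string): void;   // e.g. start or stop a conveyor belt
    }

    interface ControlRule {
      name: string;
      condition(goal: Goal): boolean;
      actions(goal: Goal, plant: Plant): Goal;          // side effects on machinery, next goal
    }

    // One cognitive cycle: match, select, act, and log the trace.
    function cycle(goal: Goal, rules: ControlRule[], plant: Plant): Goal {
      console.log(`goal: ${JSON.stringify(goal)}`);
      const candidates = rules.filter(r => r.condition(goal));
      if (candidates.length === 0) return goal;         // nothing applies this cycle
      const chosen = candidates[0];                     // stand-in for basal ganglia selection
      console.log(`rule: ${chosen.name}`);
      return chosen.actions(goal, plant);
    }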

These are just a few tiny steps along the road to strong AI, and I am hoping to complete a number of demos on NLP and various forms of machine learning over the rest of this year.

A formal spec is in preparation.

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 
