Re: AI works best when humans are there to hold its hand.

Hi Paola,

You seem to have misunderstood: the aim of the demo is to show how a functional simulation of the cortico-basal ganglia can control behaviour through goal-driven rules that delegate real-time control of the robot to a simulation of the cortico-cerebellar circuit.

In other words, the robot is aware of the various items of machinery and their current state, and consciously initiates appropriate actions when needed. You can imagine yourself playing the role of the robot. You think about what you need to do, but once you decide to move your arm, the details of the movement are unconscious until it completes, allowing you to think of other things whilst the arm is moving.
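
To make that division of labour concrete, here is a rough JavaScript sketch - purely illustrative, not the demo's code, and all of the names are invented. The deliberative layer issues a goal and is immediately free to reason about other things, whilst a separate controller runs the movement to completion:

  // Sketch (invented names): deliberation decides WHAT to do,
  // a real-time controller handles HOW, outside "conscious" reasoning.
  const arm = { angle: 0 };  // a single joint, for simplicity

  // Stand-in for the cortico-cerebellar circuit: nudge the joint toward
  // the target on every tick, then report back when the move completes.
  function moveArm(target, onDone) {
    const timer = setInterval(() => {
      arm.angle += Math.sign(target - arm.angle) * 0.5;  // small step
      if (Math.abs(target - arm.angle) < 0.5) {
        clearInterval(timer);
        onDone();  // movement complete: hand control back to cognition
      }
    }, 16);  // roughly 60 updates a second
  }

  // Stand-in for the cortico-basal ganglia: set the goal, then carry on.
  moveArm(90, () => console.log("arm in position - queue the next goal"));
  console.log("deliberation continues whilst the arm moves");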

Best regards,
Dave

> On 28 Jun 2020, at 23:28, Paola Di Maio <paoladimaio10@gmail.com> wrote:
> 
> Thank you Dave
> 
> I now better understand the workings behind the model of cognition of a mechanical routine.
> 
> I don't consider robotic mechanical movement cognition, in the sense that in robots it can be automated
> without reasoning. It can be based on signal processing - the condition is a signal,
> at least that's the way it can be simplified. That's not the kind of cognition I think is challenging
> (in humans motor coordination requires the brain, but in the robot it does not).
> 
> Featured snippet from the web:
> In addition to containing networks of neurons related to the initiation of movement and to sensation from the body and the special sensory organs, the cortex is the substrate for functions that include comprehension, cognition, communication, reasoning, problem-solving, abstraction, imagining, and planning.
> https://neurology.mhmedical.com/content.aspx?bookid=1969&sectionid=147037783#:~:text=In%20addition%20to%20containing%20networks,abstraction%2C%20imagining%2C%20and%20planning.
> 
> Regarding the creative function, well, I am investigating the possibility of an intelligent agent devising new engineering designs based on prior ones, and without using ANNs - but I may have to reconsider that; maybe ANNs would help, I don't know, I may have to go that way.
> Let me know when you need some higher cognitive function developed; I can try to chip in. For now I am still waiting to join the CG -
> I distinctly remember clicking the "join this CG" button some time ago, maybe even twice.
> 
> On Mon, Jun 29, 2020 at 12:25 AM Dave Raggett <dsr@w3.org> wrote:
> 
> 
>> On 28 Jun 2020, at 11:22, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>> 
>> David 
>> 
>> just seen this - tried to join your CG a couple of times but it's not happening.
>> Pinged the sysadmin today. So swamped -
>> 
>> Now, to that diagram, how fun!! Where did you get the sound at https://www.w3.org/Data/demos/chunks/robot/ from????
> 
> I found free-to-use sound clips via web search and modified them to suit use in a web browser.
> 
>> its the sound effect that does the trick
> 
> Thanks.
> 
>> However, I must admit I don't see the cognitive level. Perhaps you could tell us more about the cognitive aspect of this robot?
>> Where is the cognitive modelling?
> 
> Cognition encompasses declarative and procedural knowledge. In this demo I focused on modelling the behaviour, but I also sketched the associated declarative knowledge - try expanding the facts graph to view it.  This could be used for validating rules as well as for synthesising rules to fulfil new requirements when reconfiguring the factory.
> 
> Note that the demo includes external functions that essentially correspond to things that would be handled by the cortico-cerebellar circuit. The movement of the robot arm involves real-time coordination of 3 joints as well as the gripper.  You wouldn’t be able to play the piano if you had to consciously think about the position of each finger. The cortico-basal ganglia circuit devolves responsibility for actions to the cortico-cerebellar circuit, which handles real-time control based upon access to sensory input in the cortex, independent of conscious thought.
> 
> The cognitive model includes the means to concurrently wait for conditions to become true whilst reasoning about other things.  As an example, the robot arm needs to wait for a bottle to reach the end of the belt before grasping it. The bottle may already be at the end of the belt or this may happen at some time in the future.
> 
> That is like being able to handle an event that may have already happened or may happen in the future. The cognitive agent signals that it is waiting, and when the condition becomes true, a chunk is pushed to the goal queue to trigger the appropriate follow-on behaviour. The robot arm is treated similarly, in that the cognitive agent signals the desired location and orientation of the gripper, and the subgoal to be queued when that has been realised.
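>
> In rough JavaScript the pattern looks something like this (a simplified sketch of the idea rather than the demo's actual code; all the names are invented):
>
>   // Sketch: wait for a condition without blocking cognition.
>   const goalQueue = [];
>   const waiters = [];  // pending conditions and their follow-on goals
>
>   function waitFor(condition, goal) {
>     if (condition()) goalQueue.push(goal);   // already true: queue at once
>     else waiters.push({ condition, goal });  // otherwise remember it
>   }
>
>   // Called whenever the simulated world changes, e.g. the belt advances.
>   function onWorldUpdate() {
>     for (let i = waiters.length - 1; i >= 0; i--) {
>       if (waiters[i].condition()) {
>         goalQueue.push(waiters[i].goal);  // trigger follow-on behaviour
>         waiters.splice(i, 1);
>       }
>     }
>   }
>
>   // Example: grasp the bottle once it reaches the end of the belt,
>   // whether that is already true or only becomes true later.
>   const BELT_END = 100;   // stand-ins for the simulated machinery
>   let bottlePosition = 0;
>   waitFor(() => bottlePosition >= BELT_END, { action: "grasp-bottle" });
>   bottlePosition = 100; onWorldUpdate();  // the goal is now queued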
> 
> There are more details at:
>  https://github.com/w3c/cogai/blob/master/demos/robot/README.md
> 
> A good question is how to acquire procedural knowledge in the form of rules. There has been plenty of research. In some cases, people start by creating and refining a declarative model and, when that has been found to work well, compiling it into rules. See:
> 
>  https://www.w3.org/Data/demos/chunks/chunks.html#compilation
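>
> As a toy illustration of the idea (my own JavaScript sketch, not the chunks notation from that page): an ordered list of steps can be compiled mechanically into one condition-action rule per transition, each advancing the task to its next state:
>
>   function compilePlan(task, steps) {
>     return steps.map((step, i) => ({
>       condition: { task, state: i },    // match the current state
>       action: step,                     // what to do
>       effect: { task, state: i + 1 },   // then advance the state
>     }));
>   }
>
>   const rules = compilePlan("fill-bottle",
>     ["wait-for-bottle", "grasp", "move-under-tap", "fill", "release"]);
>   console.log(rules[1]);
>   // { condition: { task: 'fill-bottle', state: 1 }, action: 'grasp', ... }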
> 
> And take a look at the diagram for the theory of skill retention, with its distinction between declarative and procedural knowledge and the importance of practice for reinforcing skills.
> 
>> To me, this is pure mechanical automation; I do not see any bit of intelligence or any creativity in such a process. Mechanical automation has become very sophisticated these days, and very fast!!!
> 
> Try teaching a robot to dance and not fall over, or take a close look at very young infants learning to move, grab things and keep their balance!  This involves a wide range of systems, including cognition, proprioception, learning declarative and procedural knowledge, and the acquisition of “muscle memory” through repetition.
> 
>> 
>> https://www.youtube.com/watch?v=4DKrcpa8Z_E
>> 
>> Your simulation is fun, but it is nowhere near the state of the art in the real world AFAIK - but maybe you can say a bit more...
> 
> It is only a simple demo, but it shows the potential for a general-purpose cognitive agent. How many RDF systems are used for real-time control? The longer term technical aims for Cognitive AI are listed at:
> 
>  https://github.com/w3c/cogai/blob/master/README.md#technical-aims
> 
> Each demo is a small step along the path.  
> 
>> 
>> I am interested in automating higher cognitive functions; for example, one of the challenges would be to create new designs. And no, I don't think ANNs can do that - they only spit out a probabilistic remodelling of some input.
> 
> You would be very welcome to help with work on skill acquisition. However, I would defer work on artistic creativity until we have first mastered other areas, including the role of emotions for controlling cognition, as noted by Marvin Minsky, given the importance of emotion for creativity.
> 
> I have an outline of a framework for emotions, involving a feedforward network and back propagation, and plan to work on it after progressing the work on natural language understanding and machine learning. The main reason for doing things in that order is that human emotions are usually related to social interactions, so we need to model those first.
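>
> For readers unfamiliar with the machinery, the smallest possible example of a feedforward network trained by back propagation looks like this - a generic JavaScript toy learning XOR, nothing to do with the emotions framework itself:
>
>   const sig = x => 1 / (1 + Math.exp(-x));
>   let w1 = [[0.5, -0.4], [0.3, 0.8]], b1 = [0.1, -0.2];  // hidden layer
>   let w2 = [0.7, -0.6], b2 = 0.05;                       // output neuron
>   const data = [[0,0,0], [0,1,1], [1,0,1], [1,1,0]];     // x1, x2, target
>
>   for (let epoch = 0; epoch < 20000; epoch++) {
>     for (const [x1, x2, t] of data) {
>       // forward pass
>       const h0 = sig(w1[0][0]*x1 + w1[0][1]*x2 + b1[0]);
>       const h1 = sig(w1[1][0]*x1 + w1[1][1]*x2 + b1[1]);
>       const y  = sig(w2[0]*h0 + w2[1]*h1 + b2);
>       // backward pass: gradients of the squared error
>       const dy = (y - t) * y * (1 - y);
>       const d0 = dy * w2[0] * h0 * (1 - h0);
>       const d1 = dy * w2[1] * h1 * (1 - h1);
>       const lr = 0.5;  // learning rate
>       w2[0] -= lr*dy*h0; w2[1] -= lr*dy*h1; b2 -= lr*dy;
>       w1[0][0] -= lr*d0*x1; w1[0][1] -= lr*d0*x2; b1[0] -= lr*d0;
>       w1[1][0] -= lr*d1*x1; w1[1][1] -= lr*d1*x2; b1[1] -= lr*d1;
>     }
>   }
>   // after training the net's output approximates XOR on all four inputs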
> 
>> 
>> Tell us more about the cognitive model behind your wine-filling robotic arm.
>> 
>> Feature request: a robot that can fold origami following the algo
>> 
>> p
>> 
>> On Tue, Jun 16, 2020 at 4:22 AM Dave Raggett <dsr@w3.org> wrote:
>> See also:
>> 
>>> An understanding of AI’s limitations is starting to sink in
>>> After years of hype, many people feel AI has failed to deliver, says Tim Cross
>> 
>> https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in
>> 
>> Including:
>> 
>>> Real managers in real companies are finding AI hard to implement and that enthusiasm is cooling
>> 
>> and this:
>> 
>>> They are powerful pattern-recognition tools, but lack many cognitive abilities that biological brains take for granted. They struggle with reasoning, generalising from the rules they discover, and with the general-purpose savoir faire that researchers, for want of a more precise description, dub “common sense”. The result is an artificial idiot savant that can excel at well-bounded tasks, but can get things very wrong if faced with unexpected input.
>> 
>> That’s why the W3C Cognitive AI CG is focusing on mimicking the human brain at a functional level, and benefiting from hundreds of millions of years of evolution. This has involved a shift in mindset from logic and formal semantics to a more cognitive approach.
>> 
>> Manual development of symbolic AI doesn’t scale either, but a combination of symbolic and statistical approaches paves the way to cognitive agents that can learn from experience guided by human collaborators.
>> 
>> The immediate challenge is to open up the use of natural language through incremental, concurrent processing of syntax and semantics, as a basis for addressing the abundant ambiguity in natural language and paving the way for teaching cognitive agents everyday skills.
>> 
>> This is a lot easier to arrange in a cognitive architecture, as it is trivial to launch cognitive processes by setting goals that trigger reasoning. You can get a first glimpse of a very simple demo at
>> 
>>  https://www.w3.org/Data/demos/chunks/nlp/toh/
>> 
>> On Chrome it also supports speech recognition - click the microphone then hit enter if the text that appears after a second or two looks okay. This demo invokes cognition after generating the word dependency graph. The next demo will use fully concurrent processing of syntax and semantics. 
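>>
>> For the curious, Chrome exposes this through the Web Speech API, so the wiring looks roughly like the following (a sketch only - the element id is made up and the demo's own code may differ):
>>
>>   const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
>>   const rec = new Recognition();
>>   rec.lang = "en-US";
>>   rec.interimResults = false;  // only deliver the final transcript
>>   rec.onresult = (event) => {
>>     // show the transcript so the user can check it before hitting enter
>>     document.querySelector("#utterance").value = event.results[0][0].transcript;
>>   };
>>   rec.start();  // e.g. from the microphone button's click handler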
>> 
>> Whilst Google’s speech recognition is pretty good, today’s neural-network-based speech recognition lacks context and the real-time integration with semantics that would make it much more effective. In the longer term, integration with emotional processing will allow for more natural human-machine interaction.
>> 
>> Here is a demo that shows how modelling the cortico-basal ganglia circuit can support real-time control of factory machinery:
>> 
>>  https://www.w3.org/Data/demos/chunks/robot/
>> 
>> The log shows a trace of goals and rule execution.
>> 
>> This is just a few tiny steps along the road to strong AI, and I am hoping to complete a number of demos on NLP and various forms of machine learning over the rest of this year.
>> 
>> A formal spec is in preparation.
>> 
>> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
>> W3C Data Activity Lead & W3C champion for the Web of things 
>> 
> 
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things 
> 

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Monday, 29 June 2020 08:45:42 UTC