Re: Intelligence without representation

Hi Paola,

I disagree with your assumptions: I believe there is a great deal we can learn from how infants learn, combined with other sources of information, including neuroscience. Infant learning is a mix of observation, questioning, experimentation and evolution. Consciousness and emotions are not unknowable mysteries, but rather are open to scientific study, and I would say that we already know a lot more about them than most people realise; it is a matter of synthesising ideas from different lines of work.

For instance, the anterior cingulate cortex has been shown to play a key role in how we appraise future reward or penalty, and in how we resolve conflicting emotions, e.g. when we are torn between immediate self-interest and our desire to help those close to us. This points to the relationship between reinforcement learning, goals and plans, and the role of self-appraisal in evaluating plans.
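
To make this concrete, here is a minimal Python sketch of what such appraisal might look like computationally: a candidate plan is scored as a weighted blend of expected self-interest and expected social reward, with the weight acting as a learned appraisal parameter. The Plan class, the appraise() function and all of the numbers are invented for illustration; this is not a model from the ACC literature.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    self_reward: float    # expected immediate payoff to the agent
    social_reward: float  # expected payoff to those the agent cares about

def appraise(plan: Plan, empathy: float = 0.6) -> float:
    # Score a plan as a weighted blend of self and social reward;
    # `empathy` trades off self-interest against helping others.
    return (1.0 - empathy) * plan.self_reward + empathy * plan.social_reward

plans = [
    Plan("keep the last seat", self_reward=1.0, social_reward=-0.5),
    Plan("offer the seat to a friend", self_reward=-0.2, social_reward=1.0),
]
best = max(plans, key=appraise)
print(best.name)  # with empathy=0.6 the prosocial plan wins

Raising or lowering the empathy weight flips which plan wins, which is one simple way to think about the kind of conflict resolution described above.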

This in turn suggests practical ideas for designing cognitive agents that incorporate theories of emotion as proposed by people like Ekman, Russell and Bradley. The rules that compute and act upon the emotional state can be regarded as heuristics for guiding appraisal and decision making, and can be preprogrammed or learned just like other rules.
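
As an illustration of what such rules might look like, here is a toy Python sketch, assuming a two-dimensional emotional state in the spirit of Russell's circumplex model (valence and arousal). The rule names, thresholds and decay constant are hypothetical; in a real agent they could be preprogrammed or learned like any other rules.

from typing import Callable

# Toy emotional state, e.g. {"valence": 0.3, "arousal": 0.8, "goal_progress": -0.4}
State = dict

def rule_frustration(s: State) -> State:
    # Blocked goals lower valence and raise arousal.
    if s["goal_progress"] < 0:
        s["valence"] -= 0.2
        s["arousal"] += 0.3
    return s

def rule_calm_down(s: State) -> State:
    # Arousal decays toward baseline on every cycle.
    s["arousal"] *= 0.9
    return s

RULES: list[Callable[[State], State]] = [rule_frustration, rule_calm_down]

def appraisal_cycle(s: State) -> State:
    # Apply each appraisal rule in turn; rules act as decision heuristics.
    for rule in RULES:
        s = rule(s)
    return s

state = {"valence": 0.3, "arousal": 0.8, "goal_progress": -0.4}
print(appraisal_cycle(state))  # frustration fires, then arousal decays

Downstream decision making could then condition on the resulting state, e.g. preferring cautious plans when arousal is high.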

The challenge is to provide concrete use cases and demonstrators as proof of concept; a starting point is to focus on applying cognitive architectures to machine learning, causal reasoning, planning, appraisal, and so forth. Is this Community Group a forum for practical discussion of such details of knowledge representation and processing, or if not, what is it for?

> On 24 Nov 2019, at 00:21, Paola Di Maio <paoladimaio10@gmail.com> wrote:
> 
> Dave, and all
> 
> Instead of focusing on manual development of knowledge representations, it would be advantageous to look at how these can be learned through interactions in the real world or simulated virtual worlds, drawing inspiration from the cognitive and linguistic stages of development of young human infants.
> 
> Glad this is of interest to you too. In Edinburgh I once gave a talk on biologically inspired systems,
> and more recently one of the projects I collaborated on (not as a PI, so I cannot change the project scope, etc.) was indeed designed to learn how knowledge emerges in infants. However, there are fundamental design flaws in the research, and data collection is difficult and pointless if the research design is not sound.
> A lot of issues - too many to discuss in depth here - but in brief:
> - although intelligent systems are, or can be, inspired by humans and nature, we have limited capability to engineer natural intelligence. I argue that this is because we still do not understand what intelligence is and how it develops, not only as a mechanism, but also as consciousness
> - when we design AI systems, the process of learning has to be designed. If you want to
> produce an intelligent agent without having to engineer it, then you have to make a baby :-)
> For everything else, standard systems design is necessary (or be ready to generate an artificial monster)
> - if you want to generate some kind of intelligent agent, say a neural network, and do away with good
> system design practices of planning what it does, and how and why it is going to be deployed,
> you are mixing (or trying to mix) natural intelligence with artificial intelligence, and should really not let it go outside the lab too soon. Apart from the fact that there are scientific and technical challenges to be overcome, there are also a lot of bigger questions. Human intelligence (which is still not well understood) evolves as part of something bigger, which is human nature in all its facets.
> Humans feel pain, have bad dreams, have a consciousness, a heart, feelings, emotions, and discernment.
> Intelligence is generally constrained by these other human factors.
> - recent science using fMRI shows that there is knowledge representation in the brain;
> we just don't know how to recognize it yet. It also shows that infants use learning as a way of
> forming concepts and language, so learning cannot be extricated from KR
> (so knowledge without representation is interesting to study, but it clearly
> only strengthens the argument for KR)
> - That KR can be inferred from observations of how the world works, rather than imposed on
> how the world works, is the work I am doing
> - That KR is necessary for explainability, learning and verifiability is what I have observed so far
> 
> PDM
> 
>> On 23 Nov 2019, at 02:24, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>> 
>> I think I found the culprit, at least one of the papers responsible for this madness of doing
>> AI without KR
>> https://web.stanford.edu/class/cs331b/2016/presentations/paper17.pdf
>> I find the paper very interesting, although I disagree with it.
>> 
>> Do people know of other papers that advance a similar hypothesis (that KR is not indispensable for AI, for whatever reason)?
>> thanks a lot
>> PDM
>> 
> 
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things
> 
> 
> 

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Sunday, 24 November 2019 12:02:35 UTC