Re: Intelligence without representation

Hi Dave-

>
> I disagree with your assumptions
>

I don't think I made any assumptions. Would you care to be more precise
about which of the things I wrote you disagree with?

> in that I believe that there is a great deal we can learn from how infants
> learn in combination with other sources of information, including
> neuroscience.
>
Yes, I believe that too. I don't think I said anything to the contrary, did I?


> This is a mix of observation, questioning, experimentation and evolution.
> Consciousness and emotions are not unknowable mysteries, but rather open to
> scientific study, and I would say that we already know a lot more about
> this than most people realise,
>
Yes, we know a lot, but from different fields where, to my knowledge,
methods and findings are not in line.

> and it is a matter of synthesising ideas from different work.
Yes, I think we all work on synthesizing, but perhaps using different sets
of references.


>
> For instance, one intriguing aspect is the role of the anterior cingulate
> cortex which has been shown to play a key role in how we appraise future
> reward or penalty, and how we resolve conflicting emotions, e.g. when we
> are torn between immediate self-interest and our desire to help those close
> to us. This points to the relationship between reinforcement learning,
> goals, plans and the role of self appraisal in evaluating plans.
>
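Before I ask for the source: if I read you right, the mechanism you have in
mind amounts to comparing competing value estimates, as in reinforcement
learning. A toy sketch in Python, where every name and number is my own
invention rather than anything from your message:

    # Toy sketch: resolving a conflict between an immediate selfish payoff
    # and a delayed social one by comparing discounted expected values.
    def appraise(immediate_reward, future_reward, delay, discount=0.9):
        """Net value of waiting: discounted delayed outcome minus the
        immediate one. Positive means the delayed option wins."""
        return future_reward * discount ** delay - immediate_reward

    # e.g. a payoff of 10 now vs. helping a friend, worth 15 but only
    # paying off after 3 steps: 15 * 0.9**3 - 10 = 0.935 > 0
    net = appraise(immediate_reward=10, future_reward=15, delay=3)
    print("help friend" if net > 0 else "act selfishly")  # -> help friend
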
Reference? There is a lot of research in this space; it may help to know
exactly which source you are using in your example.

>
> This in turn suggests practical ideas for designing cognitive agents
> that incorporate theories of emotion as proposed by people like Ekman,
> Russell and Bradley. The rules that compute and act upon the emotional
> state can be regarded as heuristics for guiding appraisal and decision
> making, and can be preprogrammed or learned just like other rules.
>
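To make sure I understand what you are proposing, I take it you mean
something like the toy Python sketch below. The state dimensions loosely
follow Russell's valence/arousal model, but every rule, name and threshold
here is invented by me:

    # Toy sketch of "rules that compute and act upon the emotional state"
    # used as appraisal and decision heuristics. All rules are invented.
    state = {"valence": 0.0, "arousal": 0.0}

    def on_event(goal_progress, unexpected):
        """Appraisal rules: update the emotional state from an event."""
        if goal_progress < 0:
            state["valence"] -= 0.2   # setbacks lower valence
        if unexpected:
            state["arousal"] += 0.3   # surprise raises arousal

    def decide(options):
        """Decision heuristic: negative valence plus high arousal
        (roughly, anxiety) steers the agent to the safer option."""
        if state["valence"] < 0 and state["arousal"] > 0.5:
            return min(options, key=lambda o: o["risk"])
        return max(options, key=lambda o: o["reward"])

Such rules could indeed be preprogrammed or learned like any others, and
that is exactly where my worry starts.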
Very fiddly. Emotions are not an area I would want to compute, nor to base
any intelligent architecture on. Emotions are the most fiddly part of human
nature; they depend on psychological inputs that can be
falsified/fabricated/induced using false inputs. For example, some
under-the-radar experiments which I came across (and found unethical and
disturbing) took place in student labs and were designed to reward
participants with whatever made them happy, and then punish them at the
same time to see which of the two emotions would prevail.
Another one was to mislabel things to see how the students would react.

The only thing that I observed is that the students suffered emotional
damage, were confused and psychologically fragile, and could not tell fact
from lie very soon after being in that lab, which made them vulnerable and
thus easily manipulated by the PI.
A bit of a research horror story, that was.

> The challenge is to provide concrete use cases and demonstrators as proof
> of concept,
>
For what?


> and a starting point is to focus on applying cognitive architectures to
> machine learning, causal reasoning, planning, appraisal, and so forth.
>
Sure, do you have any examples that we can look at?
When it comes to working with people's emotions by creating artificial
situations and conditions, though, there is no ethical research protocol in
place. Do you know what code of ethics this research would have to follow?
I don't see the immediate relevance to AI, but we may have different
disciplinary perspectives on this.


>  Is this Community Group a forum for practical discussion of such details
> of knowledge representation and processing, or if not, what it is for?
>
This Community Group is for whatever its members wish it to be, within the
declared scope, but even the scope can be revised if members feel the need
to reword it :-)
Looking forward to learning more about what you are working on, Dave. Why
don't you give us a presentation sometime?
PDM


> On 24 Nov 2019, at 00:21, Paola Di Maio <paoladimaio10@gmail.com> wrote:
>
> Dave, and all
>
> Instead of focusing on manual development of knowledge representations,
> it would be advantageous to look at how these can be learned through
> interactions in the real world or simulated virtual worlds, drawing
> inspiration from the cognitive and linguistic stages of development of
> young human infants.
>
>
> Glad this is of interest to you too. In Edinburgh I gave a talk once on
> biologically inspired systems,
> and more recently one of the projects I collaborated with (not as a PI,
> so I do not have the ability to change the project scope etc.) was indeed
> designed to learn how knowledge emerges in infants. However, there are
> fundamental design flaws in the research, and data collection is difficult
> and pointless if the research design is not sound.
> A lot of issues - too many to discuss in depth here - but in brief:
> - although intelligent systems are/can be inspired by humans and nature,
> we have limited capability of engineering natural intelligence. I argue
> that this is because we still do not understand what intelligence is and
> how it develops, not only as a mechanism, but also as consciousness
> - when we design AI systems, the process of learning has to be designed.
> If you want to produce an intelligent agent without having to engineer
> it, then you have to make a baby :-)
> For everything else, standard systems design is necessary (or be ready to
> generate an artificial monster)
> - if you want to generate some kind of intelligent agent, say a NN, and
> do away with good system design practices of planning what it does, and
> how and why it is going to be deployed, etc.,
> you are mixing (or trying to mix) natural intelligence with artificial,
> and should really not let it go outside the lab too soon. Apart from the
> fact that there are scientific and technical challenges to be overcome,
> there are also a lot of bigger questions. Human intelligence (which is
> still not well understood) evolves as part of something bigger, which is
> human nature in all its facets.
> Humans feel pain, have bad dreams, have a consciousness, a heart,
> feelings, emotions, discernment.
> Intelligence is generally constrained by the other human factors.
> - recent science using fMRI shows that there is knowledge representation
> in the brain (we just don't know how to recognize it yet), and that
> infants use learning as a way of forming concepts and language, so
> learning cannot be extricated from KR
> (so that knowledge without representation is interesting to study, but it
> clearly only strengthens the argument for KR)
> - That KR can be inferred from observations of how the world works,
> rather than imposed on how the world works, is the work I am doing
> - That KR is necessary for explainability, learning and verifiability
> is what I have observed so far
>
> PDM
>
>>
>> On 23 Nov 2019, at 02:24, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>>
>> I think I found the culprit, at least one of the papers responsible for
>> this madness of doing AI without KR:
>> https://web.stanford.edu/class/cs331b/2016/presentations/paper17.pdf
>> I find the paper very interesting, although I disagree.
>>
>> Do people know of other papers that propose a similar hypothesis (that
>> KR is not indispensable in AI, for whatever reason)?
>> thanks a lot
>> PDM
>>
>>
>> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
>> W3C Data Activity Lead & W3C champion for the Web of things
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things

Received on Sunday, 24 November 2019 12:30:00 UTC