Re: Intelligence without representation

Dave, and all

> Instead of focusing on manual development of knowledge representations,
> it would be advantageous to look at how these can be learned through
> interactions in the real world or simulated virtual worlds, drawing
> inspiration from the cognitive and linguistic stages of development of
> young human infants.


Glad this is of interest to you too. In Edinburgh I once gave a talk on
biologically inspired systems, and more recently one of the projects I
collaborated on (not as a PI, so I do not have the ability to change the
project scope etc.) was indeed designed to learn how knowledge emerges in
infants. However, there are fundamental design flaws in the research, and
data collection is difficult, and pointless, if the research design is not
sound.
A lot of issues - too many to discuss in depth here - but in brief:
- although intelligent systems are/can be inspired by humans and nature,
we have limited capability to engineer natural intelligence. I argue that
this is because we still do not understand what intelligence is and how it
develops, not only as a mechanism, but also as consciousness
- when we design AI systems, the process of learning has to be designed.
If you want to produce an intelligent agent without having to engineer it,
then you have to make a baby :-) For everything else, standard systems
design is necessary (or be ready to generate an artificial monster)
- if you want to generate some kind of intelligent agent, say a neural
network, and do away with good system design practices of planning what it
does, how and why it is going to be deployed, etc., you are mixing (or
trying to mix) natural intelligence with artificial, and should really not
let it go outside the lab too soon. Apart from the fact that there are
scientific and technical challenges to be overcome, there are also a lot of
bigger questions. Human intelligence (which is still not well understood)
evolves as part of something bigger, which is human nature in all its
facets.
Humans feel pain, have bad dreams, have a consciousness, a heart, feelings,
emotions, discernment. Intelligence is generally constrained by these other
human factors.
- recent science using fMRI shows that there is knowledge representation in
the brain, we just don't know how to recognize it yet, and that infants use
learning as a way of forming concepts and language, so learning cannot be
extricated from KR (so knowledge without representation is interesting to
study, but it clearly only strengthens the argument for KR)
- That KR can be inferred from observations of how the world works, rather
than imposed on how the world works, is the work I am doing (a rough sketch
of what I mean follows this list)
- That KR is necessary for explainability, learning and verifiability is
what I have observed so far
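
For anyone wondering what inferring KR from observations could look like in
practice, here is a minimal, hypothetical Python sketch (illustrative only,
not the actual project code): it keeps the subject-relation-object triples
that recur across observed events, producing an explicit, inspectable set of
facts that can then be queried to explain a conclusion.

# Hypothetical sketch: inferring a small knowledge representation (KR)
# from observed events, rather than hand-authoring it up front.
# All names and data here are illustrative.
from collections import Counter

# Each observation is a (subject, relation, object) event noticed in the world.
observations = [
    ("ball", "falls_toward", "ground"),
    ("apple", "falls_toward", "ground"),
    ("ball", "falls_toward", "ground"),
    ("balloon", "rises_above", "ground"),
    ("apple", "falls_toward", "ground"),
]

def induce_kr(obs, min_support=2):
    """Keep only triples observed at least min_support times.
    The result is an explicit, inspectable set of facts (a tiny KR)
    inferred from the data rather than imposed on it."""
    counts = Counter(obs)
    return {triple for triple, n in counts.items() if n >= min_support}

def explain(kr, subject):
    """Explainability: answer 'why do we believe this?' by pointing at facts."""
    facts = [t for t in kr if t[0] == subject]
    return "; ".join(f"{s} {r} {o}" for s, r, o in facts) or f"no supported facts about {subject}"

kr = induce_kr(observations)
print(sorted(kr))            # [('apple', 'falls_toward', 'ground'), ('ball', 'falls_toward', 'ground')]
print(explain(kr, "apple"))  # apple falls_toward ground

The point of the toy example is only that the representation is explicit, so
it can be inspected, explained and verified, which is much harder to do with
a learned model that has no explicit KR.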

PDM

>
> On 23 Nov 2019, at 02:24, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>
> I think I found the culprit, at least one of the papers responsible for
> this madness of doing
> AI without KR
> https://web.stanford.edu/class/cs331b/2016/presentations/paper17.pdf
> I find the paper very interesting although I disagree
>
> Do people know of other papers that purport a similar hypothesis (that KR
> is not indispensable in AI for whatever reason?)
> thanks a lot
> PDM
>
>
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things
>
>
>
>
