- From: Dave Raggett <dsr@w3.org>
- Date: Sat, 27 Jul 2019 11:28:36 +0100
- To: Paola Di Maio <paoladimaio10@gmail.com>
- Cc: Agnieszka Ławrynowicz <agnieszka.lawrynowicz@cs.put.poznan.pl>, Diogo FC Patrao <djogopatrao@gmail.com>, W3C AIKR CG <public-aikr@w3.org>
- Message-Id: <16410154-B0A2-4F0D-B50D-79A3A4A9E951@w3.org>
It is enlightening to consider a young baby’s view of the world. As a baby, you learn to recognise regularities in the visual field as objects with three-dimensional shapes and behaviours. You learn the relationships between different objects, for instance, the distinction between animate objects, which move by themselves, and inanimate objects, which stay where they are until something or someone moves them. This includes part-whole relationships, e.g. that a cat has a head, a body, four legs and a tail.

This presents the need to bridge representations from pixels, to two-dimensional images, to three-dimensional models, to spatial and temporal reasoning, to induction from examples, and the means to induce abstract concepts and relationships. The graph models used in symbolic knowledge representation are themselves an abstraction that sits above complex spiking neural networks. Individual neurons are noisy and unreliable; the brain compensates by using collections of neurons to ensure robust operation. Unfortunately, despite decades of research, we are only at a very early stage of understanding how this works.

Work on deep learning with artificial neural networks has made amazing progress, but remains far from a baby’s ability to learn by itself from observing the environment around it. Gary Marcus has a nice critique of the limitations of current artificial neural networks in respect to deep learning: https://arxiv.org/pdf/1801.00631.pdf

Recent work has made a start at addressing these limitations, but we need new insights that go well beyond incremental extensions to existing approaches. This is a matter of figuring out the right questions.
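The part-whole relationships mentioned above (a cat has a head, a body, four legs and a tail) can be sketched as a small labelled graph. This is a minimal illustration only, assuming a plain Python dictionary of typed edges rather than any particular knowledge-representation formalism; the class and relation names are invented for the example:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal sketch of symbolic knowledge as a labelled graph.
    Nodes are concepts; each edge carries a relation label such as 'part-of'."""

    def __init__(self):
        # relation -> subject -> set of objects
        self.edges = defaultdict(lambda: defaultdict(set))

    def add(self, subject, relation, obj):
        self.edges[relation][subject].add(obj)

    def parts_of(self, whole):
        """Return the direct parts of a whole, following 'part-of' edges."""
        return {s for s, objs in self.edges["part-of"].items() if whole in objs}

kg = KnowledgeGraph()
for part in ["head", "body", "tail"] + [f"leg-{i}" for i in range(1, 5)]:
    kg.add(part, "part-of", "cat")
kg.add("cat", "is-a", "animate-object")

print(sorted(kg.parts_of("cat")))
# → ['body', 'head', 'leg-1', 'leg-2', 'leg-3', 'leg-4', 'tail']
```

The point of the sketch is only that such a graph is an abstraction: nothing in it says how its symbols could be grounded in, or learned from, the underlying neural activity.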
Some of the more obvious questions include:

- how to support continuous learning and avoid learning from scratch
- how to support symbols and graphs above spiking models of neural networks
- how to support weakly supervised and unsupervised learning at multiple levels of representation
- how we can use evolutionary algorithms to explore the potential of different architectures

I worry that research funding priorities are misplaced and stuck in yesterday’s mindsets, and as such blind to new opportunities that don’t fit well with those mindsets. Perhaps we need more mavericks who are willing to take a risk rather than playing it safe.

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things
Received on Saturday, 27 July 2019 10:28:43 UTC