Re: Geometry of Concept

Thanks for sharing. Finally, proof of few-shot learning. Now if only we can find some computer scientists and quantum field theory experts to connect the dots between this and the functioning of the neocortical columnar structures in the brain.
See: https://academic.oup.com/brain/article/120/4/701/372118
I am working on this in a set of papers that tries to unify knowledge representation across a wide range of scientific domains.
The same things keep popping up: manifolds, some very peculiar groups, and concepts familiar from quantum field theory; and, in terms of memory and learning, things that hint at biological use of quantum computing and quantum mechanical effects.
Recent exciting articles in genomics and quantum biology suggest that quantum computing processes lie at the center of learning, cognition, memory and problem solving in a wide range of organisms.
And this seems to suggest that the days of deep learning on massive datasets with models of billions of parameters are numbered.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Tuesday, March 23, 2021, 2:14:41 AM ADT, Paola Di Maio <paola.dimaio@gmail.com> wrote:  
 
I find this article fascinating and orthogonally relevant to AIKR as I understand it. It provides an interesting direction, imho.
The Geometry of Concept Learning  
https://www.biorxiv.org/content/10.1101/2021.03.21.436284v1.full.pdf

Abstract

Understanding the neural basis of our remarkable cognitive capacity to accurately learn novel high-dimensional naturalistic concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts we can learn given few examples are defined by tightly circumscribed manifolds in the neural firing rate space of higher order sensory areas. We further posit that a single plastic downstream neuron can learn such concepts from few examples using a simple plasticity rule. We demonstrate the computational power of our simple proposal by showing it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network models of these representations, and can even learn novel visual concepts specified only through language descriptions. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to behavior by delineating several fundamental and measurable geometric properties of high-dimensional neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our experiments. We discuss several implications of our theory for past and future studies in neuroscience, psychology and machine learning.
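To make the proposed mechanism a little more concrete, below is a minimal sketch in Python (not the authors' code). The feature vectors are synthetic stand-ins for the macaque IT or deep-network representations the paper uses, and the "plasticity rule" is plain prototype averaging, one simple reading of the rule the abstract describes; the dimensionality, shot count and noise level are arbitrary choices for illustration only.

# Minimal illustrative sketch (not the paper's code): few-shot concept
# learning by a single linear readout whose weights are set by averaging
# the representations of a few example stimuli (prototype learning).
import numpy as np

rng = np.random.default_rng(0)

D = 512       # dimensionality of the (stand-in) representation space
K = 5         # training examples per concept ("K-shot")
N_TEST = 200  # test stimuli per concept
NOISE = 3.0   # spread of each concept "manifold" around its center

# Stand-in for high-level sensory representations of two novel concepts:
# points scattered around two concept centers.
center_a, center_b = rng.normal(size=D), rng.normal(size=D)

def sample(center, n):
    # Draw n noisy representation vectors around a concept center.
    return center + NOISE * rng.normal(size=(n, D))

train_a, train_b = sample(center_a, K), sample(center_b, K)
test_a, test_b = sample(center_a, N_TEST), sample(center_b, N_TEST)

# "Plasticity rule": the readout weight vector points from the prototype
# (mean) of concept B toward the prototype of concept A; the bias places
# the decision boundary midway between the two prototypes.
proto_a, proto_b = train_a.mean(axis=0), train_b.mean(axis=0)
w = proto_a - proto_b
b = -0.5 * (proto_a + proto_b) @ w

def predict_is_a(x):
    # Single linear "neuron": signals concept A when w.x + b > 0.
    return x @ w + b > 0

acc = 0.5 * (predict_is_a(test_a).mean() + (~predict_is_a(test_b)).mean())
print(f"{K}-shot accuracy on the synthetic concepts: {acc:.2%}")

With real high-level representations in place of the synthetic ones, the abstract's claim is that measurable geometric properties of the concept manifolds (their size, dimensionality and separation) predict how accurate this kind of simple readout can be after only a few examples.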
  

Received on Tuesday, 23 March 2021 18:05:10 UTC