- From: ProjectParadigm-ICT-Program <metadataportals@yahoo.com>
- Date: Fri, 16 Jul 2021 19:19:22 +0000 (UTC)
- To: W3C AIKR CG <public-aikr@w3.org>, "paoladimaio10@googlemail.com" <paoladimaio10@googlemail.com>
- Message-ID: <70072046.158774.1626463162736@mail.yahoo.com>
The way out of the "mess" of representation is to look at what linguists and philosophers say about natural language representation and "formal" representation. Six things here are key, (1) observation and coupled with it sense perception, (2) neural processing of perceived data, (3) encoding for storage,(4) retrieval of stored information for recognition, (5) adaptation of stored information and (6) cognition. Neuroscience, in particular neural circuits and systems, cognitive and behavioral neuroscience and computational neuroscience have made great strides in the study of the biological underpinnings of cognition and related processes. The emerging picture is of a highly complex functionality where the brain can create structures of up to 11 dimensions for storage (Source: Blue Brain project). At the cellular level quantum effects can come into play, and for short term memory, long term memory and memory search, biological processes involving genes activated tor trigger release of biochemical compounds, even triggered snapping of DNA residing in the nuclei or other parts of brain cells at precise breakpoints, all create a very complex system of interacting foci in the brain each contributing to network activity that can lead to recognition/comparison with stored data, (adapted) storage, cognitive processes leading to action etc. Our (western) concepts of knowledge representation are based on formal mathematical systems and logical systems. Godel, Turing, Church and Chaitin have pointed out the limitations of mathematics, logic, information science and computability and according paradigms for knowledge representation. The dichotomy of mathematical abstractions (which may represent abstracted internalized perceived real world objects and their properties), and real world sense perceived objects is subject to quantum effects and a fiercely debated field of philosophical inquiry. It should be clear that knowledge representation based on purely mathematical and logical formal systems will not do. Graph theory, knowledge graphs, category theory and the theory of complex adaptive systems are useful tools in describing some aspects of the complexity of processes, where the emphasis is less on the objects and their properties and more on the interrelated processes. I recommend we take the point of view of seeing learning (to adapt) as the key ingredient. Sentience, sapience and consciousness, and perception and cognition are currently not formally describable. The "learning" viewpoint makes the most sense, both biologically but also in the artificial intelligence domain. If we can somehow add free will, i.e. (the axiom of) choice, causal reasoning to the now existing mix, we may be able to advance the field of artificial intelligence. There are however three major hurdles to overcome, (1) to create explainability, (2) bias, (3) and ethical issues.' Milton Ponson GSM: +297 747 8280 PO Box 1154, Oranjestad Aruba, Dutch Caribbean Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development On Wednesday, July 14, 2021, 11:04:57 PM ADT, Paola Di Maio <paola.dimaio@gmail.com> wrote: This is an interesting talk relevant to this CG as cognitionm AI KR and Neuroscience are converginghttps://www.youtube.com/watch?v=OAmB5SOS2LQ Neural representation is a key neuroscientific concept meant to bridge brain and mind, or brain and behavior. 
But what is meant exactly by a “neural representation”? Conventionally, a neural representation is a correspondence between something in the brain and something in the world, a “code”. The encoding view of representations faces two critical issues, empirical and theoretical. Empirically, I will show that neural codes do not have the properties required to naturalize mental representations. Theoretically, it raises the problem of “system-detectable error” (Bickhard): if the brain sits at the receiving end of the code, then how can it know if the representation is wrong? As John Eccles has concluded, the logical implication is dualism – there must be a “decoder” that translates brain properties to world properties. Consequently, a number of authors have argued that representations are not only homuncular but also unnecessary: adapted behavior results not from calculations on an internal copy of the world, but from coupling between body and world – “the world is its own best model” (Brooks). Anti-representationalism introduces crucial concepts missing from the conventional view (embodiment, autonomy, dynamicism) but it struggles to explain some aspects of anticipation and abstraction. I argue that the problem with representation is to think of it as a “thing” that can be manipulated and observed (like a painting), which collides with the dynamical nature of brain activity. I suggest shifting the focus from the encoding properties of brain states, a dualistic concept, to the representational properties of brain (and body) processes, such as anticipation and abstraction.
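A minimal sketch of the process-centric view mentioned in the message above, written in Python. The six stage names are taken from the numbered list in the message; the `ProcessNode` class, the input/output labels and the derived edges are purely illustrative assumptions, not anything proposed in the original discussion.

```python
# Hypothetical sketch: a small graph in which the *processes* linking stages of
# cognition are first-class nodes, so the emphasis falls on the interrelations
# between processes rather than on stored objects and their properties.
# Stage names follow the six-point list in the message; all labels are assumed.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class ProcessNode:
    """A process (not an object) treated as a node in its own right."""
    name: str
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)


# The six interacting stages, modelled as interrelated processes.
stages = [
    ProcessNode("observation/sense perception", outputs=["percept"]),
    ProcessNode("neural processing", inputs=["percept"], outputs=["features"]),
    ProcessNode("encoding for storage", inputs=["features"], outputs=["trace"]),
    ProcessNode("retrieval for recognition", inputs=["trace"], outputs=["match"]),
    ProcessNode("adaptation of stored information",
                inputs=["match", "trace"], outputs=["updated trace"]),
    ProcessNode("cognition", inputs=["match", "updated trace"], outputs=["action"]),
]

# An edge connects two processes whenever one produces data the other consumes.
edges = [
    (a.name, b.name)
    for a in stages
    for b in stages
    if set(a.outputs) & set(b.inputs)
]

if __name__ == "__main__":
    for src, dst in edges:
        print(f"{src} -> {dst}")
```

Running the sketch prints the chain from observation through adaptation to cognition, with the adaptation stage feeding back into cognition, which is one simple way to make the relations between processes, rather than the stored objects, the primary structure.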
Received on Friday, 16 July 2021 19:21:42 UTC