Re: the intersection between AIKR and COGAI

The mechanistic reductionist perspective is what got the physicists and astronomers into trouble, not to mention the elementary particle physicists.
As a mathematician I agree that any knowledge representation is reductionist, it's the mechanistic part I have problems with.
When we factor in spatiotemporal parameters, all of the physicists' current problems come into play. And since cognition, reasoning and consciousness deal with qualia in time-sequenced processes, we will face the same problems.
For practical purposes we will want to focus on conceptual frameworks aimed at limited types of AI, leaving the AGI discussion out altogether.
You mentioned three definitions of consciousness, all of which are debatable. Consciousness comes in many flavors, as hinted at by the terms sentient, sapient and intelligent.
As human beings we are intelligent enough to study and try to conceptualize consciousness, but that makes the entire exercise anthropocentric by default.
Since AI can also be expected to be used in dealing with animals in controlled settings, e.g. in agriculture, husbandry and fisheries, but also in environments with domesticated animals and even in wildlife reserves, the problem becomes clearer.
You said: "For a system to be aware of something, it is sufficient to have access to a model that describes that thing, e.g. that the traffic lights are now green, so that it is okay to drive forward. Likewise, understanding is essentially being able to reason about a model of a thing, i.e. to draw inferences or conclusions."

That is an anthropocentric line of reasoning; most conscious beings do not use models. Only a few categories of species, such as primates, corvids, octopuses and some other invertebrates, along with humans, actually use models consciously.
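To make the notion under discussion concrete, the "awareness as access to a model" definition can be reduced to a toy sketch. This is purely illustrative; the class and method names are my own invention, not anything from an actual AI system:

```python
# Toy sketch of "awareness as access to a model" (illustrative only).
# An agent is "aware" of a fact if its internal model contains it,
# and "understands" by drawing a simple inference over that model.

class ModelBasedAgent:
    def __init__(self):
        self.model = {}  # the agent's internal world model

    def observe(self, fact, value):
        self.model[fact] = value  # update the model from perception

    def aware_of(self, fact):
        return fact in self.model  # awareness = model access

    def may_drive(self):
        # understanding = inference over the model
        return self.model.get("traffic_light") == "green"

agent = ModelBasedAgent()
agent.observe("traffic_light", "green")
print(agent.aware_of("traffic_light"))  # True
print(agent.may_drive())                # True
```

Note how little this sketch demands: any lookup table would qualify as "awareness" under such a definition, which is precisely why it reads as anthropocentric when applied beyond model-using species.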
The really strange thing happening right now is that machine learning is being used to interact with other species to figure out the communication systems, language if you will, that they use; but, as with many advanced ML systems, the AI will not be able to divulge the actual models on which its findings and results are based.
So any reductionist system we devise must be open and explainable as well.
The concept of a Bayesian brain was mentioned, which brings us to the heated philosophical debate about presentism versus eternalism, in which Bayesian concepts are central. Since quantum physics, the interpretation of the quantum wave function, and how we perceive reality present us to this day with seemingly irreconcilable issues, we must solve the current problems of interpretation and perception of physical reality before we can create any reductionist models for AI knowledge representation.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Thursday, October 27, 2022 at 11:32:27 AM AST, Dave Raggett <dsr@w3.org> wrote:  
 
 

On 27 Oct 2022, at 15:29, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
In my humble opinion we must minimally incorporate the bottom three levels, namely CONSCIOUSNESS, MIND and BRAIN, into a minimal set of conceptual frameworks for knowledge representation for AI.
This forces us to tackle the consciousness level, for which we need to engage philosophers and psychologists to eliminate the notion of the HARD PROBLEM of consciousness.


Is consciousness really that difficult to define?  The dictionary definition is pretty straightforward, e.g.
Oxford: the state of being aware of and responsive to one's surroundings
Cambridge: the state of understanding and realising something, a person's awareness or perception of something.
Merriam Webster: the quality or state of being aware especially of something within oneself, the state or fact of being conscious of an external object, state, or fact, the state of being characterised by sensation, emotion, volition, and thought
You might ask what it means to be aware of something or to understand something.  I don’t see anything difficult there. For a system to be aware of something, it is sufficient to have access to a model that describes that thing, e.g. that the traffic lights are now green, so that it is okay to drive forward. Likewise, understanding is essentially being able to reason about a model of a thing, i.e. to draw inferences or conclusions.  Free will is the capacity of minds to choose, i.e. to reason. Moreover, reasoning is non-deterministic when based upon stochastic processes.
None of these definitions present huge challenges for creating artificial minds.  I consider consciousness to be a characteristic of mind, which in turn is the function of a computational system, whether biological or electronic.
However, this is taking a mechanistic reductionist perspective, and I accept that some people aren’t interested in science or engineering oriented explanations, and may want to use their own definition of English words. I suspect that that is related to mind-body dualism, and presumptions about people having souls, but not animals nor machines.  This is likely to fuel prejudices about AI rather than positive steps to ensure AI serves humanity as a whole.
Dave Raggett <dsr@w3.org>



Received on Thursday, 27 October 2022 21:04:35 UTC