Re: the intersection between AIKR and COGAI

Dear Timothy and all,
My suggestion for knowledge representation is actually limited to a small subset of what is considered AI. It allows any conceptual framework to be used, whether biologically inspired cognitive architectures (BICA), neural networks, architectures inspired by the human brain, or many others.
I think there is more or less a consensus within this AIKR CG that we aim to come up with conceptual frameworks for KR for open, explainable and ethical AI.
By doing so we automatically define the boundaries of such AI. Anyone who wishes to venture outside those boundaries will be well aware (hopefully) that additional CONTAINMENT ALGORITHMS will be needed. But recently published articles prove Stephen Hawking, Nick Bostrom and others right: current state-of-the-art ML and neural-network-based AI cannot be contained.
We use many models of neural architectures in the human brain to create algorithms, and some provide very impressive results, but we have absolutely no clue HOW this is done.
We also have absolutely no clue how and where the information is stored, or in what type of encoding.
This is what makes current ML and AI algorithms unpredictable and in the end uncontrollable.
Empathy and ethics can be argued to be essential to creating controllable AI, but we have no idea how to incorporate them into current state-of-the-art AI.
My proposal eliminates this problem by making knowledge representation the central focus, which lets us define interaction systems, modeled as AI, that are bounded in their types of transformational mappings and interaction processes.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Friday, October 28, 2022 at 09:34:11 AM AST, Timothy Holborn <timothy.holborn@gmail.com> wrote:  
 
 FWIW: I think the idea of artificial minds being rendered conscious is an "ungodly" concept. Artificial minds being rendered in relation to property-rights laws / asset-related considerations: entirely plausible.
I therefore think it's too dangerous to try to support people's extension of self (digital twins), as it's likely to be something companies want to "own". Whereas with the democratised ownership of AI agents - or robots - whether they're in a phone or some other sort of physical object doesn't really matter.
https://twitter.com/WebCivics/status/1585976653867405312 
If humanity is under attack by dangerous robots, I'd like to have one that I own fighting for me, kinda like r2d2 but different.
Timh. 
On Fri, 28 Oct 2022, 11:25 pm ProjectParadigm-ICT-Program, <metadataportals@yahoo.com> wrote:

There may be a relatively easy way out of this confusion. But it starts with disentangling knowledge representation completely from AI.
Following Dave Raggett's line of reasoning, we posit knowledge representation to be a class of semiotic (input) structured descriptions that lend themselves to analysis through logical, computational, mathematical and computability processes, in order to create computable (output) algorithms. This is done given a certain set of objects in an object system in physical reality (a spatiotemporally defined set of confined spaces and the objects therein), which, together with a set of relevant interaction processes, defines an interaction system.
This way we eliminate the problem of distinguishing between structured data, information and knowledge.
For this interaction system we now define five classes of transformational mappings: (1) dealing with sensory input through observation; (2) converting the observation datasets into formats that can be compared with existing instances in the structured descriptions; (3) exchanging or passing observed datasets to another structured description; (4) adding, deleting, editing or deprecating instances in the structured description; and (5) triggering actions in the interaction system.
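To make this concrete, here is a minimal sketch in Python of how the two basic components and the five mapping classes might be typed. All names here (Instance, StructuredDescription, InteractionSystem, and the five methods) are hypothetical illustrations of the proposal above, not an agreed vocabulary:

    from dataclasses import dataclass, field
    from typing import Any, Callable

    @dataclass
    class Instance:
        """One entry in a structured description."""
        identifier: str
        content: Any

    @dataclass
    class StructuredDescription:
        """A semiotic (input) structure amenable to logical and
        computational analysis."""
        instances: dict[str, Instance] = field(default_factory=dict)

    @dataclass
    class InteractionSystem:
        """Objects in a confined spatiotemporal region plus the
        processes acting on them."""
        description: StructuredDescription
        actions: dict[str, Callable[..., None]] = field(default_factory=dict)

        def observe(self, sensor_reading: Any) -> Any:
            """(1) Deal with sensory input through observation."""
            return sensor_reading  # placeholder: raw capture

        def normalise(self, observation: Any) -> Instance:
            """(2) Convert an observation into a comparable format."""
            return Instance(identifier=str(hash(repr(observation))),
                            content=observation)

        def exchange(self, instance: Instance,
                     other: StructuredDescription) -> None:
            """(3) Pass an observed dataset to another structured
            description."""
            other.instances[instance.identifier] = instance

        def update(self, instance: Instance) -> None:
            """(4) Add, edit, or (by overwriting) deprecate an instance."""
            self.description.instances[instance.identifier] = instance

        def trigger(self, action_name: str, *args: Any) -> None:
            """(5) Trigger an action in the interaction system."""
            if action_name in self.actions:
                self.actions[action_name](*args)

The boundedness is the point: every behaviour of such a system must be expressible in terms of these five mappings, which is what keeps it open and explainable.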
We can now use tools from mathematics, computer science, computability theory, theoretical physics, representation theory, and category theory to produce generalizations of the basic components, namely structured descriptions and interaction systems, and to build increasingly complex sets from them.
Note that the concepts of mind, consciousness and self-awareness are avoided, but openness and explainability become embedded.
Mind and consciousness come into play if we contemplate artificial general intelligence.
And in doing so we avoid any ontological and epistemological discussions with philosophers, because those only arise at the AGI level.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Thursday, October 27, 2022 at 09:05:10 PM AST, Paola Di Maio <paoladimaio10@gmail.com> wrote:  
 
 Thank you all for contributing to the discussion.
The topic is too vast - Dave, I am not worried whether we agree or not; the universe is big enough.

To start with, I am concerned whether we are talking about the same thing at all. The expression "human level intelligence" is often used to describe neural networks, but that is quite a ridiculous comparison. If the neural network is supposed to mimic human level intelligence, then we should be able to ask it: how many fingers do humans have? But this machine is not designed to answer questions, nor to have this level of knowledge about human anatomy. A neural network is not AI in that sense: it fetches some images and mixes them without any understanding of what they are, and the process of which images it has used, why, and what rationale was followed for the mixing is not even described; it is probabilistic. Go figure.
Hey, I am not trying to diminish the greatness of the creative neural network; it is great work and it is great fun. But (a) it is not an artist: it does not create something from scratch; and (b) it is not intelligent, really, honestly. Try to have a conversation with a NN.
This is what KR does: it helps us to understand what things are and how they work. It also helps us to understand if something is being passed off as what it is not (evaluation). This is why even neural networks require KR: without it, we don't know what a system is supposed to do, why and how, and whether it does what it is supposed to do.
They still have a role to play in some computations.
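As a toy illustration of the evaluation point above: with an explicit KR, even the "how many fingers do humans have?" question becomes answerable and checkable. The micro knowledge base below is made up for illustration in plain Python; it does not follow any particular KR standard:

    # A toy knowledge base of part-whole facts with counts.
    knowledge_base = {
        ("human", "has_part", "hand"): {"count": 2},
        ("hand", "has_part", "finger"): {"count": 5},
    }

    def count_parts(whole: str, part: str) -> int | None:
        """Multiply counts along a has_part chain, if one exists."""
        direct = knowledge_base.get((whole, "has_part", part))
        if direct:
            return direct["count"]
        # One level of transitivity suffices for this toy example.
        for (w, _, intermediate), facts in knowledge_base.items():
            if w == whole:
                sub = knowledge_base.get((intermediate, "has_part", part))
                if sub:
                    return facts["count"] * sub["count"]
        return None

    print(count_parts("human", "finger"))  # 10

Because the facts are explicit, we can inspect what the system knows, why an answer was produced, and whether it does what it is supposed to do.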



DR: Knowledge representation in neural networks is not transparent.
PDM: I'd say that it is either lacking or completely random.

DR: Neural networks definitely capture knowledge, as is evidenced by their capabilities, so I would disagree with you there.


PDM: Capturing knowledge is not knowledge representation. In AI, capturing knowledge is only one step; the categorization of knowledge is necessary for reasoning.

DR: We are used to assessing human knowledge via examinations, and I don't see why we can't adapt this to assessing artificial minds.
PDM: Because assessments are very expensive, have varying degrees of effectiveness, require skills and a process, and may not be feasible for testing/evaluating AI when it is embedded.

We will develop the assessment framework as we evolve and depend upon AI systems. For instance, we would want to test a vision system to see if it can robustly perceive its target environment in a wide variety of conditions. We aren’t there yet for the vision systems in self-driving cars!
Where I think we agree is that a level of transparency of reasoning is needed for systems that make decisions we want to rely on. Cognitive agents should be able to explain themselves in ways that make sense to their users; for instance, a self-driving car braked suddenly when it perceived a child running out from behind a parked car. We are less interested in the pixel processing involved, and more interested in whether the perception is robust, i.e. whether the car can reliably distinguish a real child from a piece of newspaper blowing across the road when the newspaper shows a picture of a child.
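As a purely hypothetical sketch of what such a user-facing explanation could look like as data (the field names are illustrative, not taken from any standard or from the text above):

    from dataclasses import dataclass

    @dataclass
    class DecisionExplanation:
        percept: str        # what the agent believed it saw
        confidence: float   # how robust the perception was judged to be
        action: str         # what the agent did in response
        rationale: str      # user-facing justification

    explanation = DecisionExplanation(
        percept="child running out from behind a parked car",
        confidence=0.97,
        action="emergency braking",
        rationale="object classified as a real child, not a newspaper "
                  "photo blowing across the road",
    )
    print(f"{explanation.action}: {explanation.rationale} "
          f"(confidence {explanation.confidence:.0%})")

The record stays at the level users care about, not at the level of pixel processing.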
It would be a huge mistake to deploy AI when the assessment framework isn’t sufficiently mature.
Best regards,
Dave Raggett <dsr@w3.org>




Received on Friday, 28 October 2022 16:01:19 UTC