What an AI can tell us about human-like AI ...

With the advances in AI, perhaps we can turn to AI itself for advice on how to realise human-like AI!  I therefore asked OpenAI’s ChatGPT [1], which is like a conversational version of Wikipedia with an astonishingly wide range of knowledge. Its knowledge is, however, frozen in time, limited to what was in its training datasets (a snapshot of the web).

> What is plausible reasoning?
> 
> Plausible reasoning is a type of reasoning that is based on what is possible or likely, rather than on certain knowledge or strict logical deduction. This type of reasoning is often used in everyday life, as well as in fields such as law and medicine, where information may be incomplete or uncertain. In AI, plausible reasoning can be used to make inferences or predictions based on incomplete or uncertain data, which can be useful in tasks such as natural language processing and decision-making.


Which isn’t bad, and aligns well with what I have been working on; see [2].

> What is human-like AI?
> 
> Human-like AI, also known as artificial general intelligence, is a type of artificial intelligence that has the ability to understand or learn any intellectual task that a human being can. This type of AI is often portrayed in science fiction and is not yet a reality. Currently, most AI systems are designed to perform specific tasks, rather than having general intelligence.
> 
> How can we design human-like AI?
> 
> It is difficult to say exactly how human-like AI can be designed, as it is a very complex and challenging problem. In order to create AI that has human-like intelligence, researchers would need to understand and recreate the processes that the human brain uses to think, learn, and understand the world. This would likely require advances in many fields, including computer science, neuroscience, and psychology. Additionally, creating human-like AI would likely require a huge amount of computing power and data, as well as careful testing and evaluation.


Which isn’t much help for my research!  :-)

On the plus side, it shows that large language models, and large image models [3], are quite effective at capturing semantics, albeit in an opaque form.  The challenge is how to enable cognitive agents to introspect over such knowledge and carry out deliberative cognition. In other words, how to design neural networks that can operate on latent semantics, e.g. to insert, remove, access, update and count features in the latent semantics of an image.
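
To make this a little more concrete, here is a rough Python sketch of the kind of interface that implies, modelling an image's latent semantics as a set of feature slots (key/value vectors) that can be matched by similarity. The class, the cosine-similarity lookup and the 0.8 matching threshold are purely illustrative assumptions, not an existing architecture or library:

import numpy as np

class LatentSlots:
    """Feature slots standing in for the latent semantics of an image."""

    def __init__(self, dim=16):
        self.keys = np.empty((0, dim))     # one row per feature slot
        self.values = np.empty((0, dim))

    def _similarity(self, query):
        # cosine similarity between the query vector and every slot key
        q = query / (np.linalg.norm(query) + 1e-9)
        k = self.keys / (np.linalg.norm(self.keys, axis=1, keepdims=True) + 1e-9)
        return k @ q

    def insert(self, key, value):
        # add a new feature slot
        self.keys = np.vstack([self.keys, key])
        self.values = np.vstack([self.values, value])

    def access(self, query):
        # soft read: similarity-weighted mixture of slot values
        w = np.exp(self._similarity(query))
        return (w / w.sum()) @ self.values

    def update(self, query, new_value, threshold=0.8):
        # overwrite the value of every slot whose key matches the query
        self.values[self._similarity(query) > threshold] = new_value

    def remove(self, query, threshold=0.8):
        # drop every slot whose key matches the query
        keep = self._similarity(query) <= threshold
        self.keys, self.values = self.keys[keep], self.values[keep]

    def count(self, query, threshold=0.8):
        # how many slots match the query, e.g. "how many red objects?"
        return int((self._similarity(query) > threshold).sum())

In a trained network these operations would need to be differentiable and learned end to end rather than hard coded like this; the sketch is only intended to pin down the interface.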

This will then allow the integration of System 2 cognition in the form of sequential rule execution, along with support for plausible reasoning with causal relationships, metacognition and continuous learning.  I am confident that this can be demonstrated on modest everyday computers using smaller network models, given that we only need to show the new capabilities of novel network architectures in a limited domain, in particular one where the text and image examples needed for training and evaluation can be generated. Anyone care to help?
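
To hint at what sequential rule execution with plausible reasoning could look like, here is a toy Python sketch in which facts carry a strength between 0 and 1, and a rule's conclusion inherits the weakest premise scaled by the plausibility of the causal link. The particular facts, rules and combination scheme are illustrative assumptions only, far simpler than the chunks and rules approach described in [2]:

# facts with graded strengths rather than strict truth values
facts = {"wet-grass": 0.9, "cloudy-sky": 0.7}

# each rule: (premises, conclusion, plausibility of the causal link)
rules = [
    (["cloudy-sky"], "rain-likely", 0.6),
    (["wet-grass", "rain-likely"], "it-rained", 0.8),
]

changed = True
while changed:                       # deliberative, sequential cycles
    changed = False
    for premises, conclusion, plausibility in rules:
        if all(p in facts for p in premises):
            strength = min(facts[p] for p in premises) * plausibility
            if strength > facts.get(conclusion, 0.0):
                facts[conclusion] = strength
                changed = True

print(facts)  # 'rain-likely' and 'it-rained' are now held with graded strengths

In the architecture I have in mind, such rules would operate over the latent semantics produced by the network rather than over symbolic strings like these.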

[1] https://openai.com/blog/chatgpt/
[2] https://www.w3.org/Data/demos/chunks/reasoning/
[3] https://stablediffusionweb.com/

Dave Raggett <dsr@w3.org>
