Re: [ontolog-forum] The immense power of human intelligence

There are some points that need to be made.
First and foremost, the groundwork for constructing AGI needs to take into account all possible modes of thought, perception, formal variants of discourse, and sequences of observed states.
The current literature of neuroscience, cognitive science, psychology, philosophy, and even genomics points to inconsistencies in our non-quantum, deterministic or statistical approaches to modeling intelligence (and consciousness), which leave the resulting formal and conceptual frameworks incomplete.
And the current clash between computer scientists and other scientists, set off by Chalmers and Koch, shows that we are a long way from a conceptual basis for describing A(G)I, let alone consciousness.
  
Semiotic Roots and Buddhist Routes in Phenomenology and Intercultural Philosophy: A Peircean Study of Abhidharma Buddhist Theories of Consciousness and Perception
https://link.springer.com/chapter/10.1007/978-981-15-7122-0_4

I have had the pleasure and honor of gaining access to some materials, not easily shared, from Buddhist sources offering original interpretations of Buddhist logic, particularly the Madhyamaka (Middle Way) school of philosophy, which in its conceptualization comes closest to modern quantum physics.
And the human brain at its most fundamental level seems to work according to quantum principles.
No single general system of formal description is possible, because there will always be aspects that cannot (yet) be tested experimentally.
The best approach is formalization using category theory, in which forgetful functors play a pivotal role by eliminating certain structures and properties from the formal descriptions and discussions.
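To make the role of a forgetful functor concrete, here is a minimal sketch in Haskell (my own illustration, not drawn from any of the cited sources; the names MonoidOn and forgetMonoid are hypothetical). The textbook example is the functor U : Mon -> Set, which sends a monoid to its underlying set and forgets the unit and the operation, which is exactly the kind of elimination of structure and properties referred to above.

-- Minimal sketch: the object part of the forgetful functor U : Mon -> Set.
module ForgetfulFunctor where

-- A monoid presented as data: a carrier (its listed elements),
-- a distinguished unit, and an associative binary operation.
data MonoidOn a = MonoidOn
  { carrier :: [a]
  , unit    :: a
  , op      :: a -> a -> a
  }

-- The forgetful step: keep only the underlying set, discard unit and operation.
forgetMonoid :: MonoidOn a -> [a]
forgetMonoid = carrier

-- Example object: a small additive monoid on integers.
additive :: MonoidOn Int
additive = MonoidOn { carrier = [0, 1, 2, 3], unit = 0, op = (+) }

main :: IO ()
main = print (forgetMonoid additive)   -- prints [0,1,2,3]; the structure has been forgotten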


Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Tuesday, January 30, 2024 at 08:39:48 AM AST, Paola Di Maio <paola.dimaio@gmail.com> wrote:  
 
 Slides attached, very useful references on many topics relevant to this group. John, in cc, we take the opportunity to say thank you and hello and Happy New Year from all of us.

PDM

---------- Forwarded message ---------
From: John F Sowa <sowa@bestweb.net>
Date: Tue, Jan 30, 2024 at 12:36 AM
Subject: [ontolog-forum] The immense power of human intelligence
To: ontolog-forum <ontolog-forum@googlegroups.com>, ontology-summit@googlegroups.com <ontology-summit@googlegroups.com>


The article on phaneroscopy, which I have finally finished, shows the immense power of human intelligence, of which LLMs can simulate only a tiny aspect.  Compared to earlier work on machine translation, that aspect is important.  But compared with what the human brain can do, it's pathetically weak.  In fact, it's pathetically weak compared to a rat brain.
The concluding Section 7, which is attached below, shows an illustration of an intelligent human system (Figure 18), a design for an intelligent AI system (Figure 19), and a design for AI systems that implement aspects of intelligence for practical computer systems.  They aren't AGI systems, but they would have a chance of supporting systems that are more powerful than anything done with LLMs by themselves.  In fact, they could use LLMs to support a language interface.
The preceding Section 6 surveyed aspects of human intelligence.  But the references that follow Section 7 include two slide sets for talks that survey aspects of human intelligence and contain many references to the original research reports.  To begin, see https://jfsowa.com/talks/natlog.pdf.  For more, see https://jfsowa.com/talks/vrmind.pdf.
As I have said many times, I fully appreciate the value of LLM technology.  But ongoing research in neuroscience shows that animal brains, from the rat on up, are vastly more powerful.  AGI is far in the future; I would bet not in the 21st century.
John
 

