- From: ProjectParadigm-ICT-Program <metadataportals@yahoo.com>
- Date: Tue, 9 Feb 2021 15:30:27 +0000 (UTC)
- To: W3C AIKR CG <public-aikr@w3.org>, public-cogai <public-cogai@w3.org>, Paola Di Maio <paoladimaio10@gmail.com>
- Message-ID: <1334395103.2985716.1612884627334@mail.yahoo.com>
The Connes embedding conjecture is needed to help simulate extremely large matrices for modeling artificial brains, since it would mean that infinite-dimensional operator models can be approximated by finite-dimensional matrices. But the groundbreaking paper with the succinct title MIP* = RE shatters any hope of using simplified, lower-dimensional matrix modeling. https://www.quantamagazine.org/landmark-computer-science-proof-cascades-through-physics-and-math-20200304/

And it gets even weirder. Buddhist philosophy provides a means of getting from nothingness to infinite-dimensional matrix modeling, which is quantum-physics based. Recent research in neuroscience shows that certain processes governing the storage of knowledge, the recall of knowledge and learning all make use of quantum processes. Consciousness and cognition cannot be captured by formal logic, by algorithms, or by very high-dimensional matrix models of the brain as interconnected neurons.

As the Buddhists like to say, our sensory apparatus fools us into believing we can truly understand "reality" and capture it in models. The same goes for intelligent machines and programs. Our brains are marvels of evolution, and in the foreseeable future we may create AI that can out-compute and outperform us in many ways, but we will have a hard time beating evolution, which has endowed us with a consciousness that is at once logical, irrational, quantum, energy-efficient and genetically hardwired to feel emotions and empathy.

Our brains are hardwired to take shortcuts in making sense of outer reality. We can create AI that does not have these built-in shortcuts, so that sensory input is not filtered to fit existing internalized models, but the sensory apparatus of such an AI would still be subject to the quirkiness and "spookiness" of quantum physics and reality. So I am not overly optimistic about the level of human-like AI we can reach.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development

On Tuesday, February 9, 2021, 12:51:05 AM AST, Paola Di Maio <paoladimaio10@gmail.com> wrote:

Let me add a few more interesting facts in relation to the example provided below:
- the person mentioned in the case is not literate, never went to school and never sat an exam
- it is very difficult to get any common sense out of her in general conversation
- she does not remember half of the things she says
- there are many other people like her, capable at times of 'supercognition', a special ability that arises in some people under certain circumstances, while not having a clue about much else
- such abilities cannot be tested under lab conditions

Implications for science and technology? We may not yet have the capability to understand the complexity and the far-reaching possibilities of human cognition, and putting things in boxes like school grades, or trying to carry out lab tests on certain people, is a very superficial and possibly inadequate way to evaluate intelligence and understanding of reality and of what lies beyond the ordinary.

P-

On Tue, Feb 9, 2021 at 6:24 AM Paola Di Maio <paoladimaio10@gmail.com> wrote:

Picking up on something Dave said in response to the thread COGAI vs AIKR.

On Fri, Feb 5, 2021 at 11:52 PM Dave Raggett <dsr@w3.org> wrote:

If we can successfully reproduce how the best people reason, we will be in a strong position to improve on that by going beyond the limits of the human brain.
Dave also pointed out that he would consider the best people to be those who score well in school exams. There are clear arguments showing that scoring well at exams is often the result of good training and many other conditions, including physical fitness, lifestyle and emotional environment, and furthermore that the best reasoning ability often cannot be captured by passing tests (as in the case of people who can catch a snake, or navigate without compass or GPS, etc.); that is, reasoning is not always related to good exam results.

But those arguments aside, I'd like to bring up a well-known and documented example of a woman who was very sick and left for dead. Without going too close to her, for fear of catching a disease, people asked her from a distance whether she had any dying wish, any last-minute wish. She left a message of farewell to be delivered to her family and also requested that her urine be taken in a bottle and handed over to the first person who would cross the gate at a certain given place. This was agreed and done:

"So ... I asked them to take my urine in a bottle and give it to whomever they met first at the Boudhanath Stupa entrance. By now I was semi-conscious, but they were kind enough to do this favor for me. The person who took my urine met a man at the gate who turned out to be a Tibetan physician. He tested my urine and diagnosed that I had been poisoned with meat, prescribed some medicine and even sent me some blessing pills. My health improved dramatically and I had many good dreams."

Now, I know this is not your typical reasoning, and we cannot expect it from everyone, nor from our future AI systems, but we should keep such examples in mind when considering what is possible for an enlightened mind and beyond the ordinary. She is now alive and well in Kathmandu, if anyone wants to look her up sometime and learn more about beyond-ordinary reasoning: https://nalanda-monastery.eu/index.php/en/teachers-of-nalanda/khadro-la?start=1

PDM

On Fri, Feb 5, 2021 at 11:52 PM Dave Raggett <dsr@w3.org> wrote:

On 5 Feb 2021, at 13:11, Paola Di Maio <paoladimaio10@gmail.com> wrote:

an afterthought in respect to mimicking how humans reason and communicate well: each human is different, we can generalize up to a point, and mimicking may result in some kind of parrot engineering ... useful to start with, but nowhere near intelligence at its best

You're missing the big picture. If we can successfully reproduce how the best people reason, we will be in a strong position to improve on that by going beyond the limits of the human brain. The more we understand, the further and faster we can go. This is an evolutionary path that will proceed very much faster than biological evolution. At the same time we can make AI safe by ensuring that it is transparent, collaborative and embodies the best of human values. Human-like AI will succeed where logic-based approaches have struggled. 500 million years of evolution is not to be dismissed so easily.

I remember the enthusiastic claims around "5th generation computer systems" and logic programming at the start of the 1980s, and had plenty of fun with the Prolog language. However, the promise of logic programming fizzled out. Today, 40 years on, much of the focus of work on knowledge representation is still closely coupled to the mathematical model of logic, and this is holding us all back. We need to step away and exploit the progress in the cognitive sciences.
I am especially impressed by how young children effortlessly learn language, given the complexity of language and the difficulties that adult learners face when learning second languages. Another amazing opportunity is to understand how some children are so much better than others when it comes to demanding subjects like science and mathematics. Moreover, warm, empathic AI will depend on understanding how children acquire social skills. Let's lift up our eyes to the big picture for human-like AI.

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things
Received on Tuesday, 9 February 2021 15:31:54 UTC