- From: John F Sowa <sowa@bestweb.net>
- Date: Wed, 01 Oct 2025 13:59:48 -0400
- To: ontolog-forum@googlegroups.com, Milton Ponson <rwiciamsd@gmail.com>
- Cc: ontolog-forum@googlegroups.com, public-lod <public-lod@w3.org>, semantic-web@w3.org
- Message-Id: <14bb3912000a477f820480fbdd414cbd@af128af903a246abbaa42dc2aef387a1>
Alex,

I totally agree with Milton.

MP: The problem here is the implicit discussion about knowledge, knowledge representation and formal knowledge representation. These are three distinct layers and because we still do not have a firm grip on the first, which is inextricably linked to consciousness . . .

I have been saying something very similar to this point again, and again, and again. I'll repeat it once more, starting with Milton's point above.

For any kind of knowledge representation, there is a continuous infinity of possible starting points and levels of detail or scope. Every attempt at formalization must make a choice among an infinity of options. Therefore, the probability that your choice of what to formalize is correct for anybody else is 1 divided by the total number of options -- in other words, 1 divided by infinity. That value is very, very close to ZERO.

Therefore, your project of formalization is WORTHLESS. So DON'T do it.

John

----------------------------------------
From: "Alex Shkotin" <alex.shkotin@gmail.com>

Hi Milton,

What do you think about representation of our theoretical knowledge as axiomatic theories?

Alex

Wed, 1 Oct 2025 at 18:10, Milton Ponson <rwiciamsd@gmail.com>:

As a mathematician I cannot suppress a chuckle here.

The problem here is the implicit discussion about knowledge, knowledge representation and formal knowledge representation. These are three distinct layers, and because we still do not have a firm grip on the first, which is inextricably linked to consciousness, knowledge representation remains a difficult task to accomplish; consequently, formal knowledge representation, which we are seeking, will remain elusive.

Large language models ignore the first layer and assume we can use token-based systems to create knowledge representation emulation systems that can capture all formal knowledge representation systems.

If one looks at the groundbreaking paper MIP*=RE, https://arxiv.org/abs/2001.04383, and what it states about the Connes embedding conjecture being false, this should ring a bell. We cannot in all cases assume that a finite matrix in a very high-dimensional space can approximate a simulation of an infinite-dimensional space. This means that no matter how high we make the dimension, and consequently the number of parameters used, in some cases the simulations will never even come close to an accurate finite model of an infinite-dimensional space. Generative LLMs are therefore a mathematical dead end, and this will be the reason why the AI bubble riding on generative LLMs will burst.

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
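A minimal worked form of the "1 divided by infinity" argument above, under the (simplifying) assumption that there are N equally likely candidate formalizations and only one of them matches any other reader's intended conceptualization:

\[
P(\text{your choice matches someone else's}) \;=\; \frac{1}{N},
\qquad
\lim_{N \to \infty} \frac{1}{N} \;=\; 0 .
\]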
Received on Monday, 6 October 2025 06:33:39 UTC