Re: definitions, problem spaces, methods

Here is a sketched path of equivalences from generalized KR&R to computability.
Start with first-order logic (or any other logic with similar properties) and its proof systems, and go from there to models. Use category theory to move to specific categories of models, apply model theory to obtain the appropriate algebraic structures, and then apply representation theory to convert those structures into linear algebra.
Now use the spaces from functional analysis that underpin quantum physics, i.e. Hilbert spaces and operator algebras, to describe the infinite-dimensional case. The groundbreaking proof of MIP* = RE shows the Connes Embedding Conjecture to be false.
This shows that hypergraphs, or any mathematically equivalent conceptualization, cannot be used as approximations of n-dimensional vector spaces for arbitrarily high n, a result with implications for large ML models.
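
For precision, here is the statement behind that last step, sketched in the standard notation of the nonlocal-games literature (none of the symbols below come from earlier in this thread). Via Tsirelson's problem, the Connes Embedding Conjecture is equivalent to asking whether every commuting-operator correlation can be approximated by finite-dimensional tensor-product strategies, and MIP* = RE answers that in the negative:

    \mathrm{MIP}^{*} = \mathrm{RE} \;\Longrightarrow\; C_{qa} \neq C_{qc},
    \qquad
    p(a,b \mid x,y) = \langle \psi \,|\, A^{x}_{a} B^{y}_{b} \,|\, \psi \rangle

That is, there exist correlations p(a,b|x,y) realisable with commuting operators on an infinite-dimensional Hilbert space that no sequence of finite n-dimensional strategies approximates, no matter how large n is taken.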

Jump from RE (the recursively enumerable languages) to Turing machines and their equivalents, and it becomes apparent that KR&R for explainable AI is bound by both mathematical and computability restrictions.
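
To make the RE-to-Turing-machine jump concrete, here is a minimal, purely illustrative Python sketch; the names has_witness and semi_decide are hypothetical stand-ins, not taken from any system mentioned above. A set is recursively enumerable exactly when some procedure halts on its members and may run forever on non-members, and that asymmetry is where the computability restriction on any KR&R-based explanation engine enters.

    # A recursively enumerable (RE) set is one whose membership is *semi*-decidable:
    # the search below halts iff a witness exists, and may loop forever otherwise.
    from itertools import count

    def has_witness(x: int, p: int) -> bool:
        # Hypothetical decidable check, e.g. "p encodes a valid proof of x".
        # Here it is a toy predicate so the example actually runs.
        return p * p == x

    def semi_decide(x: int) -> int:
        # Halts and returns a witness iff x is in the RE set; diverges otherwise.
        for p in count():  # unbounded search over candidate witnesses
            if has_witness(x, p):
                return p

    print(semi_decide(49))   # halts: witness 7 exists
    # semi_decide(50)        # would loop forever: no witness exists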

And the mind-blowing aspect of this comes from the MIP* = RE proof itself, which hints at the possibility of incorporating the quantum-physics concepts of probability, entanglement and uncertainty into KR&R. That seems to be in line with recent findings in neuroscience research on the biological architectures in the brain associated with cognition, which suggest that some quantum processes, including entanglement, may play a role.
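
As an illustration of the quantum concepts of probability, entanglement and uncertainty being alluded to, here is the textbook Bell-state arithmetic in Python/NumPy; it is not a claim about brain architecture or about the MIP* = RE proof itself.

    import numpy as np

    # Bell state |Phi+> = (|00> + |11>)/sqrt(2), the canonical example of entanglement.
    phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # basis order: |00>, |01>, |10>, |11>

    # Born rule: outcome probabilities are squared amplitudes.
    print(np.abs(phi_plus) ** 2)   # [0.5 0. 0. 0.5] -> outcomes 00 and 11, each with probability 1/2

    # The two qubits are perfectly correlated, yet each one alone is maximally uncertain:
    # tracing out qubit B leaves qubit A in the maximally mixed state I/2.
    rho = np.outer(phi_plus, phi_plus.conj())                # full 4x4 density matrix
    rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace over qubit B
    print(rho_A)                   # [[0.5 0.] [0. 0.5]]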

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

On Tuesday, November 8, 2022 at 05:12:58 AM AST, Dave Raggett <dsr@w3.org> wrote:

Hi Mike,


On 7 Nov 2022, at 17:39, Mike Bergman <mike@mkbergman.com> wrote:
When we do AI using something like GPT-3 we are making an active choice of how we will represent our knowledge to the computer. For GPT-3 and all massive data statistical models, that choice limits us to indexes.

That is not true, as artificial neural networks are equivalent to Turing machines in the sense of being able to do whatever computations we design them to do, including the ability to store, recall and transform information in a vast variety of ways.
A more interesting question is whether vector space representations are better suited to dealing with imprecise and imperfect knowledge than conventional symbolic logic. This is very likely to be the case for systems designed to devise their own knowledge representations as they learn from training materials. Emergent knowledge will often be far from crisp until it matures, with the need to cast aside half-baked ideas in favour of ideas that fare better against the training tasks.
It has long been recognised that intuition often precedes analytical progress in mathematics; see, e.g., Henri Poincaré's "Intuition and Logic in Mathematics" from 1905. It makes sense to work on techniques to mimic human intuition and System 1 thinking as complementary to deliberative, analytical System 2 thinking. You could think of logic as the tip of a very large iceberg, most of which is submerged below the surface of the sea.
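
A toy sketch of the vector-space point, using hand-picked three-dimensional "embeddings" rather than anything produced by a real model: similarity in a vector space comes in degrees, whereas a symbolic predicate is simply true or false.

    import numpy as np

    # Hypothetical toy embeddings (hand-picked numbers, not from any trained model).
    concepts = {
        "penguin": np.array([0.9, 0.1, 0.6]),
        "sparrow": np.array([0.9, 0.9, 0.1]),
        "trout":   np.array([0.1, 0.0, 0.9]),
    }

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Graded, imprecise similarity in the vector space...
    print(cosine(concepts["penguin"], concepts["sparrow"]))  # a degree of similarity, not a truth value
    print(cosine(concepts["penguin"], concepts["trout"]))    # another graded value

    # ...versus a crisp symbolic assertion, which is all-or-nothing:
    flies = {"sparrow"}
    print("penguin" in flies)  # False, with no notion of "almost"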

Dave Raggett <dsr@w3.org>


  

Received on Tuesday, 8 November 2022 17:48:41 UTC