Re: [ontolog-forum] RDF finally has its long awaited Generic Client!

My answer to whether it is static is a resounding no! Nature isn't static,
nor is the progression of science, nor are physics and related disciplines.
Precisely because of this, capturing the dynamics, and the statistical or
probabilistic aspects that often accompany them, is quite hard.
My guess is that if I can get my hands on an up-to-date 3D mockup of the
brain with identified clusters and a comprehensive listing of all brain
cells with their corresponding functionality, properties and links to
clusters and other types of brain cells, I can at least tweak the mandala
graph model.
This model would also be the most relevant for KR&R for AI.
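
As a minimal sketch (assuming Python with the networkx library; the cell
names, properties, and cluster labels below are made-up placeholders, not
real connectome data), such a listing could be loaded into a labeled graph
along these lines:

   import networkx as nx

   # Hypothetical example: a tiny labeled graph of brain cells and clusters.
   # Node and edge attributes stand in for functionality, properties, and links.
   G = nx.Graph()

   # Cells carry their (made-up) functional properties as node attributes.
   G.add_node("cell_A", kind="pyramidal", cluster="visual_cortex",
              function="feature detection")
   G.add_node("cell_B", kind="interneuron", cluster="visual_cortex",
              function="inhibition")
   G.add_node("cell_C", kind="pyramidal", cluster="hippocampus",
              function="memory encoding")

   # Links between cells, labeled with a connection type.
   G.add_edge("cell_A", "cell_B", link="synapse")
   G.add_edge("cell_B", "cell_C", link="projection")

   # Cluster membership can then be read back off the node labels.
   clusters = {}
   for node, data in G.nodes(data=True):
       clusters.setdefault(data["cluster"], []).append(node)
   print(clusters)  # {'visual_cortex': ['cell_A', 'cell_B'], 'hippocampus': ['cell_C']}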

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean

On Sat, Oct 4, 2025, 06:09 Alex Shkotin <alex.shkotin@gmail.com> wrote:

> Dear Milton,
>
> Your approach to graph theory is reminiscent of S. Wolfram's Ruliad.
> Moreover, he has rules for the dynamics of his hypergraph. A related
> question for you: do you consider your structure statically (this has
> become fashionable since the advent of four-dimensional Minkowski-Poincaré
> spacetime), or is your graph dynamic?
>
> An important question for my approach is: will a theory ever emerge that
> describes the graph-mandala statically, or, if it is dynamic, its dynamics
> as well?
>
> Then we can discuss its axioms, definitions, theorems, inference rules,
> and proofs.
>
> Of course, it's better to start with a specific object of study, for which
> there may be several theories, as with figures in Euclidean space. Hilbert
> took points, lines, and planes as his starting points. Some might object
> that infinite objects are intuitively unacceptable.
>
> Can we expect you to publish a treatise on "The Theory of the
> Graph-Mandala"?
>
> Alex
>
> I asked Gemini 2.5 Flash to review the work in your document, and if it
> didn't miss anything, I wholeheartedly agree with the idea that graphs are
> a powerful mathematical object, especially if the nodes, arcs, and
> subgraphs are labeled (which is usually implied).
>
> It would be great if you could rate the review for accuracy—after all,
> that's the only way to work reliably with LLMs: by conducting expert reviews!
>
> Gemini 2.5 Flash
>
> The document "Networks References" contains an extensive bibliography
> dedicated to Network Science, its fundamental concepts, models, and
> applications across various scientific fields.
>
> The main ideas covered in the referenced texts can be grouped into several
> key themes:
> 1. Network Structure and Models
>
>    - Small-World Networks Theory: The idea that even in very large
>    networks, such as social networks, the path between any two nodes (e.g.,
>    people) is very short ("six degrees of separation"). The works of D. J.
>    Watts and S. H. Strogatz (1998) describe the "small-world" model, which
>    explains how real networks combine high clustering (like regular networks)
>    with short path lengths (like random networks).
>
>    - Scale-Free Networks: The work of A. Barabási and R. Albert (1999)
>    demonstrates that many real-world networks (e.g., the Internet, protein
>    networks) have a scale-free structure where most nodes have few
>    connections, but a small number of highly connected nodes, called
>    "hubs," exist. This phenomenon is explained by the mechanism of
>    "preferential attachment."
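>
> As a minimal sketch (assuming Python with the networkx library; the sizes
> and parameters below are arbitrary), the two models above can be generated
> and their characteristic quantities compared as follows:
>
>    import networkx as nx
>
>    n = 1000  # number of nodes in both toy networks
>
>    # Watts-Strogatz small-world model: each node wired to 10 neighbours,
>    # with 10% of the edges rewired at random (kept connected so that the
>    # path length computation below is well defined).
>    ws = nx.connected_watts_strogatz_graph(n, k=10, p=0.1)
>
>    # Barabási-Albert scale-free model: each new node attaches to 5 existing
>    # nodes with probability proportional to their degree ("preferential
>    # attachment").
>    ba = nx.barabasi_albert_graph(n, m=5)
>
>    # High clustering together with short paths is the small-world signature.
>    print("WS clustering:", nx.average_clustering(ws))
>    print("WS avg path  :", nx.average_shortest_path_length(ws))
>
>    # Hubs dominate the scale-free network: compare maximum and median degree.
>    degrees = sorted(d for _, d in ba.degree())
>    print("BA max degree:", degrees[-1], "median degree:", degrees[n // 2])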
>
> 2. Network Applications in Science
>
>    - Biological Networks:
>       - Nervous Systems (Connectomics): Studies on the complete nervous
>       system structure of the model organism Caenorhabditis elegans, as
>       well as work on "The Human Connectome Project," which maps the
>       connections in the human brain.
>       - Molecular and Metabolic Networks: Analysis of the structure of
>       protein and metabolic networks, including their robustness and role in
>       diseases (Network Medicine).
>    - Transportation and Technological Networks:
>       - Studies of the global air transport network, its anomalous
>       centrality, and vulnerability to cascading failures.
>       - Research on the diameter and navigability of the World Wide Web.
>    - Social Networks and Epidemics:
>       - Analysis of the spread of influence in social networks, including
>       the concept of "Three Degrees of Influence."
>       - Studying the dynamics of epidemic spread, for instance, Thailand's
>       successful response to the HIV epidemic.
>
> 3. Game Theory and Cooperation
>
>    - Evolution of Cooperation: The works of R. Axelrod and M. A. Nowak
>    explore how selfish agents can develop cooperative behavior, mainly using
>    the "Prisoner's Dilemma" game as an example. These studies show how
>    network structure and repeated interactions affect the dissemination of
>    cooperation.
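>
> As a minimal sketch (assuming Python; the payoff values and the two
> strategies below are the standard textbook ones, not taken from the works
> cited above), repeated Prisoner's Dilemma interactions can be simulated
> like this:
>
>    # Payoff matrix for the Prisoner's Dilemma: (my payoff, opponent's payoff),
>    # with C = cooperate and D = defect.
>    PAYOFF = {
>        ("C", "C"): (3, 3),
>        ("C", "D"): (0, 5),
>        ("D", "C"): (5, 0),
>        ("D", "D"): (1, 1),
>    }
>
>    def tit_for_tat(history):
>        # Cooperate first, then copy the opponent's previous move.
>        return "C" if not history else history[-1][1]
>
>    def always_defect(history):
>        return "D"
>
>    def play(strategy_a, strategy_b, rounds=10):
>        history_a, history_b = [], []   # each entry: (my move, their move)
>        score_a = score_b = 0
>        for _ in range(rounds):
>            move_a = strategy_a(history_a)
>            move_b = strategy_b(history_b)
>            pa, pb = PAYOFF[(move_a, move_b)]
>            score_a += pa
>            score_b += pb
>            history_a.append((move_a, move_b))
>            history_b.append((move_b, move_a))
>        return score_a, score_b
>
>    # Repeated interaction lets conditional cooperation hold its own:
>    print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
>    print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mutual defection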
>
>
> On Fri, Oct 3, 2025 at 21:30, Milton Ponson <rwiciamsd@gmail.com> wrote:
>
>> Dear Alex,
>>
>> What you want is for me to explain how to formalize knowledge in the
>> best possible way.
>>
>> As a mathematician fascinated by quantum physics, string theory,
>> astronomy and cosmogony/cosmology, and the Buddhist Madhyamaka Middle Way
>> philosophy, which introduces the much misunderstood concept of sunyata, I
>> have pondered how to overcome the limitations imposed by Gödel-Skolem
>> and Turing on formal systems and (recursively enumerable) algorithms.
>>
>> Skimming through literally hundreds of articles about cognitive
>> architectures, neuroscience, cognition, computer science, computational
>> biology, algebraic geometry, bootstrapping in physics, and the theoretical
>> physics describing the origin of the universe, I noticed that similarities
>> were starting to become visible.
>>
>> Using sunyata as a starting point, we can easily see that the origin of
>> the universe and an empty void both exhibit quantum fluctuations, and that
>> these are also at work in nature and in our brains.
>>
>> The article MIP*=RE, with its corollary proving the Connes embedding
>> conjecture to be false, was the final piece of the puzzle.
>>
>> It is not mathematically possible to create a formal theory of
>> everything, not even an approximation of it.
>>
>> But what we CAN DO is create "confined domains of discourse" in which we
>> can introduce formalization and consistency.
>>
>> Now how can we link all of these up into a patchwork, or network, of
>> formalizations that together form a formalized whole of all knowledge?
>>
>> The brain itself provided the answer, along with some free thinking about
>> networks growing randomly and the clustering that appears as if by a
>> stroke of magic.
>>
>> Obviously the most intuitive visual conceptualization is by use of
>> graphs, but there is a catch. So I looked into quantum graphs, and ended
>> up combining the concepts of quantum graphs and co-line graphs.
>>
>> True to the Buddhist philosophy, the newly emerging concept is a mandala
>> graph.
>>
>> To keep it simple, each vertex constitutes a mathematical object with
>> equations, a special manifold, additional qualities, and mathematical
>> superimposition attached.
>>
>> This construction does not distinguish between physical, virtual, or
>> abstract objects.
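>>
>> As a purely hypothetical sketch (assuming Python; the names MandalaVertex,
>> equations, manifold, and qualities below are illustrative placeholders, not
>> definitions from the work in progress), one labeled vertex of such a graph
>> might be encoded as:
>>
>>    from dataclasses import dataclass, field
>>
>>    @dataclass
>>    class MandalaVertex:
>>        # One node of the graph: a mathematical object with attached
>>        # structure. The field names are illustrative only.
>>        name: str                    # e.g. "harmonic oscillator"
>>        equations: list = field(default_factory=list)  # symbolic equations as text
>>        manifold: str = ""           # description of the attached manifold
>>        qualities: dict = field(default_factory=dict)  # additional labels
>>        kind: str = "abstract"       # "physical", "virtual", or "abstract"
>>
>>    # The same container serves physical, virtual, and abstract objects alike.
>>    v = MandalaVertex(
>>        name="harmonic oscillator",
>>        equations=["x'' + (k/m) x = 0"],
>>        manifold="phase space R^2",
>>        qualities={"domain": "classical mechanics"},
>>        kind="physical",
>>    )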
>>
>> Consequently we can use graph theory, algebraic geometry, category theory
>> and constructibility theory to create descriptions of quantum physics,
>> neuroscience, and the field of all mathematical knowledge described by a
>> single graph network.
>>
>> It is THIS NETWORK that can be formalized. The beauty of it being
>> mathematical is that it can sidestep the sticky concepts of time and
>> causality, which can be introduced if necessary.
>>
>> In actuality, the current state of the art in neuroscience comes closest
>> to describing a special case of this mandala graph concept.
>>
>> Now if we look at network theory and how clusters appear naturally, we can
>> intuitively see that the mathematical underpinnings of this network concept
>> completely do away with the mathematics that defines generative large
>> language models, namely ever larger, ever higher-dimensional graphs and
>> networks of associated tokens.
>>
>> Scientists work as collaborators in small worlds, and thus knowledge
>> exists in clusters that are nonetheless highly connected.
>>
>> So if I may be so bold as to say so, I suspect the trillions of dollars
>> thrown at the Stargate AI project to be a waste of money, because the
>> mathematics behind generative LLMs is flawed.
>>
>> I recommend you look at:
>> https://ve42.co/networksRefs
>>
>> My article describing this generalized mandala graph concept and its use
>> is a work in progress.
>>
>>
>> Milton Ponson
>> Rainbow Warriors Core Foundation
>> CIAMSD Institute-ICT4D Program
>> +2977459312
>> PO Box 1154, Oranjestad
>> Aruba, Dutch Caribbean
>>
>> On Thu, Oct 2, 2025, 04:50 Alex Shkotin <alex.shkotin@gmail.com> wrote:
>>
>>> John,
>>>
>>> I can try to figure out what Milton means if he answers questions (It's
>>> always interesting to be Socrates). You've chosen the one closest to yours
>>> from "a continuous infinity of possible starting points."
>>>
>>> I write here from time to time that the most commonly used knowledge,
>>> presented as theories in various textbooks, articles, and lectures, is
>>> selected for formalization. You can say: don't formalize the Geometry of
>>> Hilbert, Euclid, or Tarski. And so on for physical theories, and then say
>>> the same to every computer scientist and ontologist formalizing in RDF,
>>> OWL2, CL(🤝), Isabelle, Coq, or Lean.
>>>
>>> A strange proposition for our community of practice.
>>>
>>> Formalization is not the creation of new knowledge. It is the
>>> formalization of existing, human-verified knowledge for reliable processing
>>> by computers.
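>>>
>>> For example, a minimal sketch in Lean 4 (the toy incidence fragment below
>>> is illustrative only, not a faithful rendering of Hilbert's axioms) of
>>> existing geometric knowledge put into machine-checkable form:
>>>
>>>    -- A toy fragment of an incidence geometry, stated axiomatically.
>>>    axiom Point : Type
>>>    axiom Line : Type
>>>    axiom Incident : Point → Line → Prop
>>>
>>>    -- Two Hilbert-style incidence axioms (illustrative formulations only).
>>>    axiom line_through : ∀ p q : Point, p ≠ q → ∃ l : Line, Incident p l ∧ Incident q l
>>>    axiom two_points : ∀ l : Line, ∃ p q : Point, p ≠ q ∧ Incident p l ∧ Incident q l
>>>
>>>    -- A trivial theorem proved from the axioms: every line passes through some point.
>>>    theorem line_nonempty (l : Line) : ∃ p : Point, Incident p l :=
>>>      (two_points l).elim fun p h => h.elim fun q hpq => ⟨p, hpq.2.1⟩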
>>>
>>> It should be added that we formalize (some would say, crudely, "cram"
>>> them into a computer) not only theories, but also their models and methods
>>> for solving problems about the properties of these models [1]. We spend our
>>> entire lives constructing theories and their models, and testing them in
>>> practice by solving various problems: Close your eyes and solve the problem
>>> of taking a sip from your cup of tea.
>>>
>>> LLMs show that knowledge can be concentrated, but who better than you to
>>> know that it can be concentrated in a much more compact and reliable way,
>>> without any brute force.
>>>
>>> Alex
>>>
>>> [1] Specific tasks of Ugraphia on a particular structure (formulations,
>>> solutions, placement in the framework)
>>> <https://www.researchgate.net/publication/380576198_Specific_tasks_of_Ugraphia_on_a_particular_structure_formulations_solutions_placement_in_the_framework>
>>>
>>> "This document describes a specific framework of specific tasks about a
>>> particular structure posed and solved GNaA Fig.1.1 within the framework of
>>> a specific theory, namely Ugraphia, the theory of undirected graphs, with
>>> little involvement of the theory of binary relations, Binria. The task
>>> framework stores the formulation and solution of tasks in a structured form
>>> and is intended for use by everyone in the world (be it the world of a
>>> research group or Humanity): having set a task on the structure before
>>> solving it on their own, a person can look into the task framework and see:
>>> perhaps it has already been solved. The structure and tasks about it are
>>> described in the first paragraph of the first chapter of [GSiA]."
>>>
>>>
>>> On Wed, Oct 1, 2025 at 21:00, John F Sowa <sowa@bestweb.net> wrote:
>>>
>>>> Alex,
>>>>
>>>> I totally agree with Milton.
>>>>
>>>> MP:  The problem here is the implicit discussion about knowledge,
>>>>  knowledge representation and formal knowledge representation.  These are
>>>> three distinct layers and because we still do not have a firm grip on the
>>>> first, which is inextricably linked to consciousness . . . . .
>>>>
>>>> I have been saying something very similar to this point again, and
>>>> again, and again.
>>>>
>>>> I'll repeat once more, starting with Milton's point above. For any kind
>>>> of knowledge representation, there is a continuous infinity of possible
>>>> starting points and levels of detail or scope. Every attempt at
>>>> formalization must make a choice among an infinity of options.
>>>>
>>>> Therefore, the probability that your choice of what to formalize is
>>>> correct for anybody else is 1 divided by the total number of options -- in
>>>> other words, 1 divided by infinity.
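>>>>
>>>> In symbols, writing N for the total number of options, that probability
>>>> is 1/N, and
>>>>
>>>>    \lim_{N \to \infty} \frac{1}{N} = 0 .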
>>>>
>>>> That value is very, very close to *ZERO*.   Therefore, your project of
>>>> formalization is *WORTHLESS*.
>>>>
>>>> So *DON'T *do it.
>>>>
>>>> John
>>>>
>>>> ------------------------------
>>>> *From*: "Alex Shkotin" <alex.shkotin@gmail.com>
>>>>
>>>> Hi Milton,
>>>>
>>>> What do you think about representation of our theoretical knowledge as
>>>> axiomatic theories?
>>>>
>>>> Alex
>>>>
>>>> On Wed, Oct 1, 2025 at 18:10, Milton Ponson <rwiciamsd@gmail.com> wrote:
>>>>
>>>> As a mathematician I cannot suppress a chuckle here. The problem here
>>>> is the implicit discussion about knowledge, knowledge representation, and
>>>> formal knowledge representation.
>>>> These are three distinct layers, and because we still do not have a firm
>>>> grip on the first, which is inextricably linked to consciousness,
>>>> knowledge representation remains a difficult task to accomplish, and
>>>> consequently formal knowledge representation, which we are seeking, will
>>>> remain elusive.
>>>> Large language models ignore the first layer and assume we can use
>>>> token-based systems to create knowledge representation emulation systems
>>>> that can capture all formal knowledge representation systems.
>>>> If one looks at the groundbreaking paper MIP*=RE,
>>>> https://arxiv.org/abs/2001.04383, and what it states about the Connes
>>>> embedding conjecture being false, this should ring a bell.
>>>> This is because we cannot in all cases assume that a finite matrix in a
>>>> very high-dimensional space can approximate a simulation of an
>>>> infinite-dimensional space.
>>>> This means that no matter how high we make the dimension, and consequently
>>>> the number of parameters used, in some cases the simulations will never
>>>> even come close to an accurate finite model of an infinite space.
>>>> It also means that generative LLMs are a mathematical dead end, and will
>>>> be the reason why the AI bubble riding on generative LLMs will burst.
>>>>
>>>> Milton Ponson
>>>> Rainbow Warriors Core Foundation
>>>> CIAMSD Institute-ICT4D Program
>>>> +2977459312
>>>> PO Box 1154, Oranjestad
>>>> Aruba, Dutch Caribbean
>>>>
>>>>
>>>

Received on Saturday, 4 October 2025 17:04:37 UTC