Re: Misconceptions about what knowledge representation truly is

Thank you, Dave, for your eloquent reply and for laying out the numerous
interpretations of knowledge representation.

You are preaching to the choir here.
And all of my work is aimed at pinning down EXACTLY the limits of
explainability so we can proceed to maximum adequacy.
Adequacy here is defined by the context-sensitive application, an issue
that IMHO is best tackled through domains of discourse, which are
accompanied by vocabularies, directories and ontologies, the use of which
librarians can explain in detail.

In computation we use ontologies and semantic ordering tools.
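To make the point concrete, here is a minimal sketch of what I mean by a
semantic ordering tool: a controlled vocabulary for a domain of discourse,
with an is-a ordering over its terms. The vocabulary names are purely
illustrative, not drawn from any real library classification.

```python
# A toy "domain of discourse": a controlled vocabulary whose terms
# are ordered by an is-a (subclass) relation, the kind of structure
# a librarian's directory or a formal ontology makes explicit.
SUBCLASS_OF = {
    "Monograph": "Publication",   # illustrative terms only
    "Journal": "Publication",
    "Publication": "Work",
}

def ancestors(term):
    """Walk the is-a chain upward, yielding ever-broader terms."""
    out = []
    while term in SUBCLASS_OF:
        term = SUBCLASS_OF[term]
        out.append(term)
    return out

print(ancestors("Monograph"))  # ['Publication', 'Work']
```

The ordering is what lets two parties check that they are using a term at
the same level of generality before they start reasoning with it.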

The hubris in current research and development of artificial intelligence,
artificial general intelligence and superintelligence is exemplified by the
dominant paradigm that ever-increasing scaling of generative large
language models will eventually lead to the latter two forms.

Unfortunately, the MIP*=RE result disproved the Connes embedding
conjecture, which in essence means that scaling cannot be proven to
approximate or improve adequate description with increasing dimensionality.
The irony is that the MIP*=RE paper was written by computer scientists,
yet it has mainly been mathematicians and philosophers who have been
reading it, trying to understand it and figure out its implications.

The cash-craving Wall Street investors, investment funds, venture capital
fund managers and, unfortunately, many millions of American citizens are
betting big on an AI promise that cannot be met.

Investments in AI and hyperscale data centers have already eclipsed global
investments in electric power generation, distribution and renewable
energy combined, which is insane.

Knowledge representation is best used to describe the hard mathematical and
computability limitations of formal systems, and to maximize adequacy.

The irony is that generative LLMs have been fed tokenized garbage, and
with AI slop now filling up crawlable data we need to improve the
feedstock.

That's why K3D, with its ensemble of generative LLM agents and its simple
3D representation of information or data, is an indication of where we
need to look for future directions.

As for why I have used the term mandala for my mathematical framework: a
mandala can visually represent, in 2D and 3D, recursivity, fractal
properties and containment within a bounded space of elements that
describe reality, where each point is a vertex that embeds another
mandala.

Matryoshka doll layering and embedding through recursivity visualized.
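A minimal sketch of that containment structure, purely as an illustration
of the recursion and not as the framework itself:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Mandala:
    """A bounded region whose vertices each embed a sub-mandala,
    giving the matryoshka-style recursive containment described above.
    Illustrative sketch only; names are hypothetical."""
    label: str
    vertices: List["Mandala"] = field(default_factory=list)

    def depth(self) -> int:
        # Recursion bottoms out at a mandala with no embedded vertices.
        return 1 + max((v.depth() for v in self.vertices), default=0)

inner = Mandala("inner")
outer = Mandala("outer", vertices=[Mandala("mid", vertices=[inner])])
print(outer.depth())  # 3
```

Each vertex is itself a full mandala, so the same description applies at
every layer of nesting.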

To end on a philosophical note, in line with both the International Science
Council and UNESCO recommendations for the future role of science and
artificial intelligence in achieving sustainable development: I posted on
LinkedIn that we need to replace the STEM paradigm (Science, Technology,
Engineering and Mathematics) with STICK (Science, Technology, Innovation,
Culture and Knowledge), as the first two already imply the use of
mathematics and engineering.

By doing so we emphasize the role of knowledge in its rightful context.

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean

On Fri, Nov 14, 2025, 06:50 Dave Raggett <dsr@w3.org> wrote:

> Thanks for bringing this up.
>
> According to Randall Davis, knowledge representation provides a language
> for describing the world and a computational model for consequences, albeit
> a fragmentary theory of reasoning. Another definition is that knowledge
> representation provides the ability to understand, reason and make informed
> decisions. In a similar vein: knowledge representation refers to encoding
> information about the world into formats that can be utilised to solve
> complex tasks. Another perspective is that knowledge representation refers
> to the way information is mentally represented using symbols or mental
> images. Cognitive Psychologists emphasise the importance of mental
> representations in problem solving and communication. However, these
> representations are just a convenient fiction when talking about the
> operation of the brain rather than dealing with the complex waves of neural
> activation across the brain.
>
> What about explainability versus adequacy?
>
> Formal logic provides explanations in terms of mathematical proof from the
> stated axioms. However, this comes at the cost of adopting a distorted and
> oversimplified view of the world. As such, formal logic is rarely useful in
> real world contexts despite the aspirations of AI researchers over many
> decades.
>
> Neural networks are highly effective for dealing with the complexity of
> everyday knowledge, but lack formal explainability due to the opaque
> statistical models derived from machine learning. It is more productive to
> ask for human understandable explanations for a line of reasoning. This
> replaces mathematical proof by rational argument and rhetoric, harkening
> back to Aristotle, and more recently, the Age of Enlightenment that brought
> us industrialisation.
>
> So what are the benefits from explicit knowledge representation in the era
> of strong AI?
>
> I think this relates to the mundane need to avoid misunderstanding both
> within and between businesses, along with the need for persistent records
> in support of taxation and legal actions. There is an opportunity to apply
> knowledge representation as a technical argot for *de jure* standards.
>
>
> Dave Raggett <dsr@w3.org>
>
>
>
>

Received on Friday, 14 November 2025 14:25:19 UTC