Why the blue bubbles do not suffice to capture KR for AI

I performed linguistics research in Aruba in 1992 to investigate how to
apply computational linguistics in selected domains to enhance the creole
language Papiamento as a written language.

For this project I chose the approach of capturing natural-language-based
knowledge and creating tools to enhance the use of a creole language in
its written form.

I visited the International Federation of Library Associations headquarters
in The Hague, the Institute for Dutch Lexicology, the Max Planck Institute
for Psycholinguistics and the Institute for Language Technology and
Artificial Intelligence at Tilburg University, all of this in 1994,
before the Internet as we know it and generative LLMs existed.

Research at that time was still at a fundamental level and not contaminated
by a dominating paradigm (in casu, generative AI based on LLMs).

I had around a dozen interviews with the Tilburg institute, and the
principal takeaway was that capturing knowledge formulated or expressible
in natural language is a hard problem. This insight was further
strengthened by my visit to the Max Planck Institute for
Psycholinguistics, co-located with the University of Nijmegen.

Let's cut to the chase.

First the historical perspective:
https://www.dataversity.net/articles/a-brief-history-of-large-language-models/

When we Google "the flaws of LLMs", a slew of articles appears, some of
which argue that LLMs are intrinsically flawed and cannot be improved in
ways that correct these flaws.

This means that generative AI built on LLMs and transformers suffers from
a partial GIGO (garbage in, garbage out) flaw, which cannot be removed.

So, IMHO, the obvious move is to go back to the drawing board and focus on
texts or visuals, rather than on the current paradigm.

Which brings us ("There's a hole in the bucket, dear Liza, dear Liza",
the song by Harry Belafonte and Odetta) back to how to process language
computationally.

Which brings us back to Noam Chomsky and his foundational work on
computational linguistics and the research that followed it.

When we Google "the flaws of computational linguistics", essentially the
same list of problems inherent in LLMs appears.

So how should we proceed then?

The following article shows an avenue of research to pursue:
https://phys.org/news/2025-11-patterns-world-languages-grammatical-universals.html

It shows that a new paradigm is in order, one that could include PKN and
other notational systems.

Somehow the complexity of knowledge, knowledge representation and language
can and should be captured in a framework that is constructible.
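
To make "constructible" concrete, here is a minimal sketch in Python of one
way such a framework could start: knowledge stated as explicit, inspectable
triples, with new knowledge derived by a named inference rule rather than by
an opaque statistical model. This is purely illustrative; it is not PKN, and
every name and data item in it is hypothetical.

    # Illustrative only: facts as typed triples, grown by an explicit,
    # auditable inference rule. All names and data here are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Fact:
        subject: str
        relation: str
        obj: str

    def transitive_closure(facts: set, relation: str) -> set:
        """Chain a transitive relation (e.g. 'is_a') until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for a in list(derived):
                for b in list(derived):
                    if a.relation == relation and b.relation == relation \
                            and a.obj == b.subject:
                        new = Fact(a.subject, relation, b.obj)
                        if new not in derived:
                            derived.add(new)
                            changed = True
        return derived

    base = {
        Fact("Papiamento", "is_a", "creole language"),
        Fact("creole language", "is_a", "natural language"),
    }

    for fact in sorted(transitive_closure(base, "is_a"),
                       key=lambda f: (f.subject, f.obj)):
        print(f"{fact.subject} --{fact.relation}--> {fact.obj}")

The point of such a framework is that every derived fact is traceable to
stated facts and a named rule, which is exactly the property that
statistically trained generative models cannot offer.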

The blue bubbles diagram fails in this respect.

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
