Re: A mathematical ceiling limits generative AI to amateur-level creativity

I think that novel ideas and creativity stem from a combination of
processes in both System 1 and System 2 thinking, as defined in Daniel
Kahneman's book "Thinking, Fast and Slow".

While System 2 thinking can be observed in controlled experimental settings
using fMRI, System 1 thinking is harder to capture. Scientists, artists,
engineers and people in general can ponder something consciously and then
get a sudden insight, a Eureka moment or Aha-Erlebnis (A). Then there are
the gradual processes where novel ideas emerge through a not necessarily
ordered set of steps (B). And then there are the team thinking processes
that lead to novel ideas (C).

In terms of AI, knowledge representation and algorithms, case (B) can be
described as using defined domains of knowledge and reasoning procedures
that lend themselves to heuristic and even algorithmic iteration. At some
point an outlier data/information set is identified as a match. This
implies that the reasoning procedures have been tweaked not through
learning but through rational processes, as in the sketch below.
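
A minimal Python sketch of that reading of case (B), assuming a bounded
knowledge domain, a hand-written scoring heuristic and a hand-tuned outlier
threshold (all names and thresholds here are illustrative assumptions, not
an established method):

from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Candidate:
    idea: str
    features: dict  # domain-specific attributes of the idea

def heuristic_search(
    domain: Iterable[Candidate],
    score: Callable[[Candidate], float],
    outlier_threshold: float,
) -> Optional[Candidate]:
    """Iterate over a bounded knowledge domain and return the first
    candidate whose score stands out from the running baseline."""
    seen_scores = []
    for candidate in domain:
        s = score(candidate)
        if seen_scores:
            baseline = sum(seen_scores) / len(seen_scores)
            # An "outlier" is simply a candidate far above the baseline;
            # the threshold is tuned by hand, a rational tweak rather
            # than something learned from data.
            if s - baseline > outlier_threshold:
                return candidate
        seen_scores.append(s)
    return None

Adjusting outlier_threshold or the score function by hand is what I mean by
a rational tweak of the procedure rather than learning.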

Case (A) is harder to pin down, because insights can even pop up in dreams.
In this case it isn't the procedures and rational processes that iterate
heuristically or algorithmically; rather, domains of knowledge are
switched, compared or substituted. In terms of brain activity this can
combine fast and slow processes, and it involves thinking outside the box.
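
As a speculative Python sketch of case (A), the variation happens across
domains rather than within a single procedure: concepts are substituted
from one domain into the framing of another and only then evaluated. The
domain contents below are made up purely for illustration.

import itertools
import random

# Toy knowledge domains; the entries are placeholders, not real data.
domains = {
    "biology": ["swarming", "camouflage", "symbiosis"],
    "engineering": ["load balancing", "fault tolerance", "modularity"],
    "music": ["counterpoint", "improvisation", "dissonance"],
}

def cross_domain_candidates(domains, rng=random.Random(0)):
    """Yield blended ideas formed by substituting a concept from one
    domain into the framing of another."""
    for source, target in itertools.permutations(domains, 2):
        concept = rng.choice(domains[source])
        yield f"apply {concept} ({source}) to {target}"

for idea in cross_domain_candidates(domains):
    print(idea)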

Case (C) can be categorized as ensemble thinking, where cases (A) and (B)
can emerge through processes that are either agent-based or
complex-system-based.

Cases (B) and (C) can be captured in knowledge representation, algorithms
and AI, and are well suited to combining AI agents and human agents.
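
A rough sketch of case (C) as ensemble thinking in Python: several agents,
any of which could be LLM-backed or a human in the loop, propose ideas and
peer-rate each other's proposals, and a coordinator keeps the top-rated
ones. The Agent interface and the rating scheme are assumptions made for
illustration only.

from typing import Callable, List

class Agent:
    """Wraps any idea source: an AI agent, another model, or a person."""
    def __init__(self, name: str,
                 propose: Callable[[str], str],
                 rate: Callable[[str], float]):
        self.name = name
        self.propose = propose  # task description -> proposed idea
        self.rate = rate        # idea -> score between 0 and 1

def ensemble_round(agents: List[Agent], task: str, keep: int = 2) -> List[str]:
    """One round of ensemble ideation: gather a proposal from every agent,
    rank each proposal by the average rating of the other agents, and
    return the best ones."""
    proposals = [(a.name, a.propose(task)) for a in agents]
    ranked = sorted(
        proposals,
        key=lambda p: sum(a.rate(p[1]) for a in agents if a.name != p[0])
                      / max(len(agents) - 1, 1),
        reverse=True,
    )
    return [idea for _, idea in ranked[:keep]]

Cases (A) and (B) then correspond to different propose functions plugged
into the same ensemble.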

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean

On Thu, Nov 27, 2025, 10:33 Dave Raggett <dsr@w3.org> wrote:

> Hi Milton,
>
> Thanks for the pointer. It is indeed unsurprising that LLMs are limited to
> what’s likely based upon their training data. That just begs the question
> of what would be needed to match the creativity of professionals. I suspect
> that involves System 2 thinking focused on criteria other than likelihood,
> i.e. ideas that are novel and unexpected, yet effective in the context
> under consideration. Could a human guide an LLM Agent in this way? Or
> perhaps one agent could guide another in a meta reasoning process. I also
> wonder how well LLM Agents understand human feelings, given their lack of
> direct experience. Humans broaden their understanding by reading novels,
> watching plays, or even banal TV soaps. You ask yourself what you would
> feel if you were in the position outlined by the script writer, who is
> often an acute observer of the human condition.
>
> Would it be immoral to develop AI agents that feel like us, or would it
> make them more useful to us?
>
> Best regards,
>    Dave
>
> On 26 Nov 2025, at 15:39, Milton Ponson <rwiciamsd@gmail.com> wrote:
>
>
> https://www.psypost.org/a-mathematical-ceiling-limits-generative-ai-to-amateur-level-creativity/
>
>
> Dave Raggett <dsr@w3.org>

Received on Thursday, 27 November 2025 15:50:59 UTC