Re: The Slopification of the CCG

On Fri, Apr 24, 2026 at 2:01 AM Marcus Engvall <marcus@engvall.email> wrote:
> The point of considered writing is to structure and formulate your ideas and intent well enough so that they can be effectively received, comprehended, and potentially acted on by your counterparty.
...
> It seems to me that authors who expect their audience to use an AI to understand their original prose or their LLM-generated treatises have either abdicated responsibility of properly formulating and structuring their ideas for wider distribution

Yes, exactly this ^^^. I've been trying to think of a way to say this
in the current thread and Marcus has absolutely nailed it above.

We spend *months to years* teasing out the right architecture, and
then finding the words to clearly articulate those concepts and that
guidance in these specifications. We argue, with respect toward one
another, A LOT, to get there.

We end up taking that much time because settling for "the stochastic
norm" is exactly the wrong thing to do in many cases; we're trying to
bring something cohesive into existence that has not existed before.

Well done, Marcus -- IMHO, you've identified the core of the social
norm that is broken when LLMs are used to generate reams of content to
make an unworkable idea look legitimate by placing window dressing
around it.

We're here for considered ideas and writing, not to hear stochastic
parrots regurgitate old ideas.

-- manu

PS: I do think these stochastic parrots will evolve and overtake most,
if not all, of us eventually... but that time is not now given what we
seem to be collectively experiencing. Like any tool, we'll learn to
use it better over time, norms will be established, and it might
simultaneously provide great benefit and have the ability to destroy
us all.

-- 
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
https://www.digitalbazaar.com/

Received on Friday, 24 April 2026 12:46:36 UTC