Re: The Slopification of the CCG

> On 23 Apr 2026, at 19:59, Eduardo C. <e.chongkan@gmail.com> wrote:
> 
> That is what I think/expect most people would do when they see a long text, or thread. I prefer copy pasting into the CLI, than clicking the tab on my right. That is what I do, and what I expect people to do in 2026, especially when introducing a concept, code, idea etc. 

The problem with this approach is information loss. The point of considered writing is to structure and formulate your ideas and intent well enough that they can be effectively received, comprehended, and potentially acted on by your counterparty. If your counterparty is using an LLM to fuzzily approximate prose which may or may not reflect their precise intent, and you are using an LLM to summarise that same prose, there are at least two steps at which information can be permanently lost or, worse, corrupted. Editorial judgement, whether a quick check of an email before sending or a formal document review, is supposed to prevent that loss: it refines written text and thought to a standard that encourages effective communication and trims the deadweight that can confuse, mislead, exhaust, or otherwise adversely affect readers.

It seems to me that authors who expect their audience to use an AI to understand their original prose, or their LLM-generated treatises, have done one of two things. Either they have abdicated the responsibility of properly formulating and structuring their ideas for wider distribution, or they have decided to outsource the thinking itself to a probabilistic model while expecting other humans to do the actual thinking necessary for evaluation. Both scenarios are, as others have pointed out, quite disrespectful and inappropriate for a forum like this.

-- 
Marcus Engvall

Principal—M. Engvall & Co.
mengvall.com

Received on Friday, 24 April 2026 06:02:05 UTC